  • This Week in Ed Zitron

    Ed Zitron is the CEO of EZPR, a public relations firm based in Las Vegas. He's best known to me as the most entertaining critic when it comes to technology, particularly Silicon Valley technology companies. His newsletter, Where's Your Ed At, and podcast, Better Offline, are both worth subscribing to.

    He made two memorable podcast appearances this week. The first was as a fill-in for Leo Laporte, who is on holiday, on This Week in Google on the TWiT podcast network. The second was as a guest on the Tech Won't Save Us podcast, hosted by Paris Marx.

    Ed can be too caustic for some people. He can be insulting about people he doesn't like, and he is confrontational when products don't live up to their hype. The danger in this approach is that you can be too dismissive of new technology that simply isn't ready for prime time. However, he seems to have been right about cryptocurrencies and the metaverse so far, and I'm not sure he's wrong about AI yet.

    One of the topics covered was an interesting story from The Information about Amazon and Google quietly trying to lower expectations for generative AI.

    → 11:36 PM, Mar 16
  • Meredith Whittaker on AI Hype

    Meredith Whittaker, the President of Signal and chief advisor to the AI Now Institute, appeared on the Big Technology Podcast, and she had some interesting things to say about OpenAI, Microsoft, and the hype that has built up around AI since the release of ChatGPT.

    ChatGPT itself is not an innovation. It's an advertisement that was very, very expensive that was placed by Microsoft to advertise the capacities of generative AI and to advertise their Azure GPT APIs that they were selling after effectively absorbing OpenAI as a Microsoft subsidiary. But the technology and frameworks on which ChatGPT is based date from 2017.

    So, Microsoft puts up this ad, everyone gets a little experience of communicating with something that seems strikingly like a sentient interlocutor. You have a supercharged chat bot that everyone can experience and have a kind of story about. It's a bit like those viral "upload your face and we'll tell you what kind of person you are" data collection schemes that we saw across Facebook in the 2010s and then an entire narrative of innovation or a narrative of scientific progress gets built around this sort of ChatGPT moment.

    Suddenly generative AI is the new kind of AI. Suddenly there are claims about sentience, about superintelligence, about AI being on the cusp of breaking into full consciousness and perhaps endangering human life. All of this almost religious rhetoric builds up in response to ChatGPT.

    I'm not a champion of Google but I think we need to be very careful about how are we defining innovation and how are we defining progress in AI because what I'm seeing is a reflexive narrative building around what is a very impressive ad for a large, generative language model but not anything we should understand as constitutionally innovative.

    Meredith Whittaker on ChatGPT

    She also talks about the dangers of trusting the models to return factual information.

    I didn't say useless. I said not that useful in most serious contexts or that's what I think. If it's a low stakes lit review, a scan of these docs could point you in the right direction. It also might not. It also might miss certain things because you're looking for certain terms but actually, there's an entire field of the literature that uses different terms and actually if you want to research this and understand it, you should do the reading.

    Not maybe trust a proxy that is only as good as the data it's trained on and the data it's trained on is the internet plus whatever fine-tuning data you're using.

    I'm not saying it's useless, I'm saying it is vastly over-hyped, and the claims that are being made around it are, I think, leading to a regulatory environment that is a bit disconnected from reality and to a popular understanding of these technologies that is far too credulous about their capabilities.

    Any serious context where factuality matters is not somewhere where you can trust one of these systems.

    Meredith Whittaker on AI Hype and Doing the Reading

    I remember Ezra Klein talking about the importance of doing the reading and the connections that can be formed in your mind as the material becomes more familiar to you. That depth of knowledge can provoke insights to create something new or to improve an existing service. Loading all your books into an expert system does not help this type of thinking if you never read them yourself.

    Productivity in knowledge work is still incentivized toward volume rather than quality. There's a great story about Bill Atkinson from when Apple decided to track productivity by the number of lines of code each developer wrote in a week. According to Folklore.org:

    Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementer, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code.

    He recently was working on optimizing Quickdraw's region calculation machinery, and had completely rewritten the region engine using a simpler, more general algorithm which, after some tweaking, made region operations almost six times faster. As a by-product, the rewrite also saved around 2,000 lines of code.

    -2000 Lines Of Code (Andy Hertzfeld/Folklore.org)

    I'm afraid that the diligence and craft displayed by Bill Atkinson would not be rewarded today, when developers are encouraged to crank out as much code as possible using GitHub Copilot or some other AI assistant.

    → 11:24 PM, Jan 17
  • Yes, Google Results Have Gotten Worse

    404 Media reported on a study published by German researchers from Leipzig University, Bauhaus-University Weimar, and the Center for Scalable Data Analytics and Artificial Intelligence titled "Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search Engines".

    Google isn't the only search engine dealing with this issue. Jason Koebler writes:

    Notably, Google, Bing, and DuckDuckGo all have the same problems, and in many cases, Google performed better than Bing and DuckDuckGo by the researchers' measures.

    Google Search Really Has Gotten Worse, Researchers Find (Jason Koebler/404 Media)

    The research highlights how much damage search engine optimization (SEO) has done to the ecosystem of the internet, and the release of generative AI is only going to make the problem worse. Amazon is already dealing with product titles and reviews generated using ChatGPT.

    David Roth had a good piece on Defector about the promises made by the developers and boosters of AI and its actual use in the present day.

    One reason it is not very interesting is that everything they have touted as the future of some essential human thing or other—the future of art, or money—has mostly crashed out in ways that left behind very little useful residue. Another is that the ways in which AI is used in the present, by your lower-effort plagiarists and scammers, are so manifestly not the future of anything that works, but rather both the present and the future of shitting-up web search results, which is roughly analogous to saying that robocalls about homeowners insurance are the future of human communication.

    The Future Of E-Commerce Is A Product Whose Name Is A Boilerplate AI-Generated Apology (David Roth/Defector)
    → 11:09 PM, Jan 16