  • Meredith Whittaker on AI Hype

    Meredith Whittaker, the President of Signal and chief advisor to the AI Now Institute, appeared on the Big Technology Podcast, and she had some interesting things to say about OpenAI, Microsoft, and the hype that has built up around AI since the release of ChatGPT.

    ChatGPT itself is not an innovation. It's an advertisement that was very, very expensive that was placed by Microsoft to advertise the capacities of generative AI and to advertise their Azure GPT APIs that they were selling after effectively absorbing OpenAI as a Microsoft subsidiary. But the technology or frameworks on which ChatGPT is based date from 2017.

    So, Microsoft puts up this ad, and everyone gets a little experience of communicating with something that seems strikingly like a sentient interlocutor. You have a supercharged chat bot that everyone can experience and have a kind of story about. It's a bit like those viral "upload your face and we'll tell you what kind of person you are" data collection schemes that we saw across Facebook in the 2010s. And then an entire narrative of innovation, or a narrative of scientific progress, gets built around this sort of ChatGPT moment.

    Suddenly generative AI is the new kind of AI. Suddenly there are claims about sentience, about superintelligence, about AI being on the cusp of breaking into full consciousness and perhaps endangering human life. All of this almost religious rhetoric builds up in response to ChatGPT.

    I'm not a champion of Google, but I think we need to be very careful about how we are defining innovation and how we are defining progress in AI, because what I'm seeing is a reflexive narrative building around what is a very impressive ad for a large generative language model but not anything we should understand as constitutionally innovative.

    Meredith Whittaker on ChatGPT

    She also talks about the dangers of trusting the models to return factual information.

    I didn't say useless. I said not that useful in most serious contexts, or that's what I think. If it's a low-stakes lit review, a scan of these docs could point you in the right direction. It also might not. It also might miss certain things because you're looking for certain terms, but actually there's an entire field of the literature that uses different terms, and actually, if you want to research this and understand it, you should do the reading.

    Maybe not trust a proxy that is only as good as the data it's trained on, and the data it's trained on is the internet plus whatever fine-tuning data you're using.

    I'm not saying it's useless. I'm saying it is vastly over-hyped, and the claims being made around it are, I think, leading to a regulatory environment that is a bit disconnected from reality and to a popular understanding of these technologies that is far too credulous about their capabilities.

    Any serious context where factuality matters is not somewhere where you can trust one of these systems.

    Meredith Whittaker on AI Hype and Doing the Reading

    I remember Ezra Klein talking about the importance of doing the reading and the connections that form in your mind as the material becomes more familiar to you. That depth of knowledge can provoke insights that lead to something new or improve an existing service. Loading all your books into an expert system does not help this type of thinking if you never read them yourself.

    Productivity in knowledge work is still incentivized toward more volume rather than more quality. There's a great story about Bill Atkinson from when Apple decided to track productivity by the number of lines of code engineers wrote in a week. According to Folklore.org:

    Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementer, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code.

    He recently was working on optimizing Quickdraw's region calculation machinery, and had completely rewritten the region engine using a simpler, more general algorithm which, after some tweaking, made region operations almost six times faster. As a by-product, the rewrite also saved around 2,000 lines of code.

    -2000 Lines Of Code (Andy Hertzfeld/Folklore.org)

    I'm afraid that the diligence and craft displayed by Bill Atkinson would not be rewarded today when developers are encouraged to crank out as much code as possible using GitHub Copilot or some other AI assistant.

    → 11:24 PM, Jan 17
  • Kashmir Hill on Life Without the Tech Giants

    While reading Kashmir Hill's profile of Mike Masnick, I was reminded of the series she did on "Life Without the Tech Giants" while she was working for Gizmodo in 2019.

    It was eye-opening to see how much of the digital infrastructure runs through such a small number of companies. Sometimes there is no alternative, as their services have been embedded into business and government systems and can't be avoided.

    I remember being surprised at how many services ran through AWS. I thought a company the size of Netflix would be running its own infrastructure.

    I'd be interested to see how many services run through the three largest cloud providers today: Amazon Web Services, Microsoft Azure and Google Cloud Platform.

    The series is still worth reading and viewing today.

    • Life Without the Tech Giants
    • I Tried to Block Amazon From My Life. It Was Impossible
    • I Cut Facebook Out of My Life. Surprisingly, I Missed It
    • I Cut Google Out Of My Life. It Screwed Up Everything
    • I Cut Microsoft Out of My Life—or So I Thought
    • I Cut Apple Out of My Life. It Was Devastating
    • I Cut the 'Big Five' Tech Giants From My Life. It Was Hell
    → 3:01 PM, Aug 7