I feel [it] might be much too complicated, unless somebody *wants* to explore using AI because their job description says “Look for actual useful AI uses”. In today’s tech world, I assume such job descriptions do exist. Sigh…
Linus Torvalds, in an email, “Re: [GIT PULL] io_uring fix for 6.17-rc5” [lore.kernel.org], on the Linux Kernel mailing list
Linus being Linus.
I don’t have any survey or stats to back it up, but I feel that we are turning the corner on AI hype. As I browse through my news feeds every morning, the number of negative AI articles seems to be on the rise. Not negative in the “Gen AI is theft” or “Gen AI is just a way to fire people” sense. That is there, but it’s been there since the beginning. The AI companies, and the businesses adopting AI, are determined to push right through those issues.
No, this negativity is the disillusionment of the people, and companies, doing the pushing. There have been articles about companies that laid off large swaths of people, only to have to hire them back. And about companies that said they were using AI but were, in fact, using cheap overseas labor. Then there was that study from MIT finding that 95% of all AI initiatives at companies fail.
I think this negativity goes back to an idea that Cory Doctorow expressed back in 2023:
The universe of low-stakes, high-dollar applications for AI is so small that I can’t think of anything that belongs in it.
Cory Doctorow, in Cory Doctorow: What Kind of Bubble is AI [locusmag.com]
I posted about that back in January 2024, in The A.I. Bubble and Life-Changing Use Cases [confusion.cc], and I don’t think anything has changed. There is a lot of AI out there, much of it bloating software and services both useful and useless. Some small bit of it is useful to some fraction of people. But I don’t think any of it comes close to justifying the hype.
Microsoft is among the most avid pushers of AI. After their early and ludicrous investment in AI (I guess memories of “missing” the internet last a long time…), they have to push AI into every nook and cranny of their vast empire of software and services. Adobe is another company whose products I use, and it too has baked AI in all over the place.
I just spent two paragraphs disparaging AI, but I do find some of it useful. Still, I think Doctorow’s statement is accurate. Let me give you an example of what I mean.
In Adobe Express and Lightroom, the two Adobe products I use, there are a number of AI tools. Setting aside Firefly, their image- and video-generating AI, there are tools in Lightroom to automatically mask an image: they can select the subject, the sky, the background, or a person/people, and even specific parts of a person (facial skin, all skin, hair, eyes, etc.). Once a mask is selected, you can apply edits to it, or to everything outside it. The AI masking is pretty good; it manages to select what I want most of the time. This is a great timesaver, and in my case it lets me edit my photos in ways I previously wouldn’t have. Manual masking takes too long, so I only ever did very basic masking, but the AI tools speed things up a lot.
I find the useful set of tools in Microsoft’s software, their “Copilots”, to be similar. The auto-summary tool in Outlook, which lets you include a summary of a long email thread in a meeting invite or when you forward the email to some poor unsuspecting colleague, is great. It gives most people the ability to do something they never did: too often people forward long email threads with no summary or context, the dreaded “adding so-and-so” or “+someone” or, even worse, the cursed “++”. People are lazy, and I’ve not seen many using this auto-summarization yet. But at my job we only just got access to it, so I remain hopeful despite the complete lack of evidence.
One more example: Apple Intelligence, oft maligned, has turned out to be useful on at least one occasion for me. My daughter sent me a recipe and asked me to get the ingredients she needed. The recipe was in ‘Mercian Freedom Units, and quaint as cups and tsps are, they don’t sell shit in freedom units here. So I copied the list into a note and asked Siri to convert it to metric. To my utter surprise, it did it in one go, and correctly.
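To be fair to Siri, the conversion it did amounts to multiplication by standard US-to-metric factors. A minimal Python sketch of the same task — the ingredient list, factor table, and function name are my own invention for illustration, not from the recipe or from Apple’s tooling:

```python
# Standard US customary volume units in millilitres.
# (A US legal cup is 240 ml; the customary cup is ~236.59 ml.)
ML_PER_UNIT = {
    "cup": 236.588,
    "tbsp": 14.787,
    "tsp": 4.929,
}

def to_ml(amount: float, unit: str) -> float:
    """Convert a US volume amount to millilitres, rounded to 0.1 ml."""
    return round(amount * ML_PER_UNIT[unit], 1)

# Hypothetical ingredient list, not the actual recipe.
ingredients = [("flour", 2, "cup"), ("vanilla", 1, "tsp"), ("butter", 3, "tbsp")]
for name, amount, unit in ingredients:
    print(f"{name}: {amount} {unit} ≈ {to_ml(amount, unit)} ml")
```

The lookup-and-multiply is trivial; the part the AI actually earned its keep on was parsing a free-form ingredient list without me cleaning it up first.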
But that’s it. This is the sum total of useful AI I have. Well, except for AI turning search engines into answer engines and killing the internet [confusion.cc]. All of this is automating low-value, time-consuming tasks. At best these tasks are menial, and at worst they are left undone because their value is below the bullshit-job threshold.
What about Vibe Coding?
I remain skeptical. If it’s just replacing interns and new-graduate coders with AI, then it’s replacing menial work, and it’s just part of the corporate push to reduce jobs through automation. Nothing new about that. But I can’t see vibe coding as a good thing. The core idea, using a probabilistic generative AI to write code in a world where we have been pushing for more deterministic, secure code, seems to be going in the wrong direction.
I read an article a week or so ago somewhere (maybe in The Economist, but I can’t find it again) about how businesses that want to build or use AI should look to Victorian civil engineering for guidance. The story was that in the early days of modern engineering, when people were building structures like bridges out of steel, they didn’t have a full understanding of the capabilities and properties of the material. The quality of steel was also highly variable, so excessive caution and over-engineering were needed to ensure that bridges didn’t fall into rivers.
Unfortunately, given the way modern companies work, I cannot imagine any of them approving the kind of investment in over-engineering that a Victorian bridge got. It would not be financially responsible to do more than the bare minimum. And I fear that without costly over-engineering, high-stakes use cases are out of reach. So we are left with low-stakes use cases. Are there high-value versions of those?
So far the people with “look for actual AI use cases” in their job descriptions have come up with low-stakes, low-value use cases. In aggregate these might be enough to justify AI: by improving performance and laying off vast swaths of the workforce, companies might be able to generate some return on investment.
But eliminating jobs through efficiency and productivity has a downside. The Economist ran a guest article on this last week. In “Two scholars ask whether democracy can survive if AI does all the jobs”, the authors make a chilling point:
[L]abour automation isn’t just an economic problem; it’s also a political one. Right now, democratic governments depend on their citizens financially. But in a world of AI-powered UBI, the opposite would be true. Imagine a world in which citizens are burdensome dependants of a state that no longer needs them for anything.
Raymond Douglas and David Duvenaud in “Two scholars ask whether democracy can survive if AI does all the jobs”, published in The Economist, September 27th, 2025.
It is a chilling thought. And it reminds me that the only positive view of the future I know of across Sci-Fi is Star Trek, where Earth is some sort of Marxist utopia where scarcity has been “solved” and humans have all devoted themselves to “the betterment of humanity”.
In conclusion: companies are going to spend trillions to make AI automate low-value work, ending bullshit jobs and making us all dependent on our governments to take care of our needs.