The AI future is better and worse than we think
The dual problem is that artefacts are now cheap and easy to generate, while the bases of our systems have stagnated.
When I wrote about AI for the Daily Maverick in June 2022, I struggled to get my piece published. The feedback was that AI was not relevant to Daily Maverick’s readers, and while I feel somewhat vindicated, I wish I hadn’t been quite so right. AI has had far more than just an impact, yet I think we’re missing some of the most profound challenges, and opportunities, it’s creating.
This is the first in a series of articles I’m writing about AI, and about the problems and the potential that everyone is missing.
2025: Humanity’s last year
Humans are successful largely because of our ability to adapt, learn, and build on the knowledge of those before us. AI might be touted as a massive leap forward in our collective technological advance, but there’s a threat that the AI bros haven’t acknowledged: Large Language Models (LLMs) are themselves stuck in time, unable to advance. The point at which an LLM is trained is the point at which it stops learning. As we rely on LLMs more and more, this becomes an incredibly dangerous trap.
I will use my own world, software development, as an example. Typically, we developers work in a few specific languages, on platforms we’re familiar with, and in frameworks we know. Using this base of knowledge, I can craft many useful applications for businesses and individuals, and AI is great at helping me do that. It knows those languages, platforms and frameworks really well.
But those underlying technologies keep developing: more slowly than we churn out apps, but continually, incrementally improving. When I compare the products I built 10 or 20 years ago with what I build now, I’m amazed by how far those incremental improvements have brought us.
The same slow improvements, the real basic building blocks of our knowledge society, can be seen in any industry, from practising law (evolving legislation and new case law) to building skyscrapers (new methods; new materials).
I’m the annoying guy in the team who always wants to try the new thing: I tend to use the bleeding edge of my tools, and occasionally I cut myself, but it’s still a lot more fun than playing it safe. (And this is why I can never work for a corporation; that’s not how they build their software.) This is where I’ve seen the limits of AI. The models simply don’t know about the latest changes to the base of our knowledge. Until the next version is trained, and assuming the new ways of doing things are in the training data, an LLM simply refuses to acknowledge the new reality.
(Without getting too technical, there are ways around this, such as adding the latest data to our prompts, but this burns tokens, makes everything slower, and the AI still gravitates back to its defaults over time.)
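For the technically curious, the workaround looks something like the sketch below: a minimal illustration of stuffing fresh documentation into the prompt, not a definitive implementation. It assumes the OpenAI Python client purely for familiarity (any chat-style API works the same way), and the changelog file and model name are placeholders. The extra text rides along with every single request, which is exactly where the token cost and the slowdown come from.

```python
# Sketch: prepend up-to-date docs to every request so the model can
# answer questions about changes made after its training cut-off.
# Assumptions: the OpenAI Python client stands in for any chat-style API;
# "framework-changelog.md" and the model name are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# The model's training data predates this changelog, so we have to ship
# it with every prompt, paying for those tokens every time.
latest_docs = Path("framework-changelog.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Prefer the documentation below over your training data.\n\n"
                + latest_docs
            ),
        },
        {"role": "user", "content": "Write a login route using the new API."},
    ],
)

print(response.choices[0].message.content)
```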
As we iterate through generations of LLMs, the problem compounds. AIs are no longer trained purely on humanity’s knowledge base; an ever-growing share of their training data was created by other AIs. The previous state gets reinforced.
There’s a real risk here that we dramatically slow, or even halt, our species’ impressive march forward. It won’t seem like it because we will still be productive. We will churn out apps, legal contracts, new medicines, architectural plans and AI music at an ever-greater pace. But these don’t drive our progress. They are the results of our base knowledge, not the cause.
Systems that are dominant or monopolistic now will tend to become even more so over time. What the AI recommends we use is what more of us will use, and newcomers with new ideas will find it significantly harder to break into the mainstream.
Slop, slop everywhere, and not a drop to drink
LLMs’ inability to learn is only one of the knowledge walls we are running headlong into. The second, and possibly more damaging, is AI-generated content. 2025 will be remembered as the cut-off point after which we can no longer trust new knowledge.
It’s already at the point where I automatically check the date on any amazing video I see on YouTube. Kangaroo Attacks Drone? It’s eight years old, so it’s safe. Kangaroo Jumps on Trampoline is from 2025: definitely bullshit.
Viral kangaroo videos might not be too important, but all knowledge from every industry will have a 2025 cutoff. Between January and September of 2025, 82% of new books on herbal remedies were “likely” written by AI. I’m sorry for the few genuine authors of this period, but there’s not going to be a way to separate your blood, sweat and tears from the work of unscrupulous “authors” giving potentially dangerous advice. No-one relying on herbal remedies should be drinking teas or applying salves that could have been hallucinated by even the best LLMs. All post-2024 herbal remedy books are, ironically, poisoned, forever. Development in herbal medicine has, in effect, been stopped in its tracks.
This problem hit the “softer” industries, those with lower barriers to publishing, first. My wife tells me it’s a massive problem in the sewing and knitting pattern world, for instance. But I’ve also heard from someone in architecture that they’re seeing more and more unbuildable plans; the legal profession is rife with hallucinated case law; and fiction magazines have had to stop accepting submissions.
Welcome to the post-knowledge era
This is the dual problem: artefacts are now cheap and easy to generate, while the bases of our systems have stagnated.
For a species that has always relied on, and prided itself on, adaptability and learning, it’s deeply ironic that our greatest achievement, the collection of all our knowledge into incredibly smart semantic machines, now threatens exactly that.
This isn’t our first double-edged sword, offering both a dramatic leap forward and the end of us. Some are clearly both (nuclear power and gunpowder spring to mind), while with others we only saw the danger in retrospect, like CFCs and leaded petrol. The wisest course would be to proceed Amishly, slowly and cautiously, but that isn’t our nature.
LLMs and AI might be stagnant, but humans aren’t. We will adapt, and you can see from the actions of the world’s CEOs how they think that adaptation will go. Their position is surprisingly non-humanist. I think differently, and in my next instalment I’ll write about why: how AI will end capitalism.