I'm obliged to the anonymous reader who sent me the link to Matt Shumer's latest blog article about the current state of artificial intelligence (AI). It's a remarkable article - so much so that I can't begin to cover all its points in a short post like this. Here's a small sample to whet your appetite.
For years, AI had been improving steadily. Big jumps here and there, but the big jumps were spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter.
. . .
I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.
And here's why this matters to you, even if you don't work in tech.
The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that would unlock everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers... it was just a side effect of where they chose to aim first.
They've now done it. And they're moving on to everything else.
The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.
. . .
The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.
. . .
This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.
. . .
We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.
It's about to.
There's much more at the link.
I can only recommend very strongly that you click over to Mr. Shumer's blog and read the entire article. He knows whereof he speaks, and does so with far more authority and experience than most so-called "experts" in the field. If you wish, compare what he says with Elon Musk's views on the short-term evolution of AI. They're pretty much in step with each other.
This is extraordinarily important. It's going to affect all of us in ways we can hardly foresee or imagine right now. Naysayers who dismiss AI as "just another fad" or "only a large language model" or "only as good as its programmers" are missing the point. AI is becoming a self-perpetuating, self-improving, self-expanding phenomenon whose impact on human society may well be at least as great as that of the Renaissance - and it will arrive in a vastly shorter time.
Go read the whole thing, and talk about these things with your spouse, your children, and those of your friends who are in the workforce. How can we prepare for the "Brave New World" that confronts us? Mr. Shumer offers several very useful suggestions. Which of them can we apply to ourselves?
Peter