Friday, February 13, 2026

The speed with which AI is evolving is startling


I'm obliged to the anonymous reader who sent me the link to Matt Shumer's latest blog article about the current state of artificial intelligence (AI).  It's a remarkable article - so much so that I can't begin to cover all its points in a short post like this.  Here's a small sample to whet your appetite.


For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter.

. . .

I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.

And here's why this matters to you, even if you don't work in tech.

The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers... it was just a side effect of where they chose to aim first.

They've now done it. And they're moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.

. . .

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

. . .

This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.

. . .

We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.

It's about to.


There's much more at the link.

I can only recommend very strongly that you click over to Mr. Shumer's blog and read the entire article.  He knows whereof he speaks, and does so with far more authority and experience than most so-called "experts" in the field.  If you wish, compare what he says with Elon Musk's views on the short-term evolution of AI.  They're pretty much in step with each other.

This is extraordinarily important.  It's going to affect all of us in ways we can hardly foresee or imagine right now.  Naysayers who dismiss AI as "just another fad" or "only a large language model" or "only as good as its programmers" are missing the point.  AI is becoming a self-perpetuating, self-improving, self-expanding phenomenon that may well have a greater impact on human society - in a vastly shorter time - than the Renaissance.  Its impact is likely to be at least as great.

Go read the whole thing, and talk to your spouses, your children and those of your friends who are in the workforce about these things.  How can we prepare for the "Brave New World" that confronts us?  Mr. Shumer offers several very useful suggestions.  Which of them can we apply to ourselves?

Peter


25 comments:

Anonymous said...

I'd like to see a cage match between Pixy Misa from Ace of Spades, Borepatch, and Matt Shumer.

Zarba said...

I'm in charge of the BI/Analytics dept. of a smaller financial institution.

We use a Top-20 (by Gartner's measure) BI system. In the last year, the AI tools have gone from writing SQL queries to the point where I can point the AI at a massive dataset (a billion rows or so) and tell it to build a management dashboard, and it does so in about 45 seconds. It used to take one of my team a few days to get that out.

We may have to do some editing and maybe change some visualizations around, but it's remarkable how good the AI is.

People have no idea how disruptive this is going to be. Much like automation reduced the number of people working in factories, AI will reduce (decimate) the number of people working in offices. Whole swaths of workers will no longer be needed.

I'll be glad to retire in 3 years; I feel like the last buggy-whip maker.

grnadee said...

We are all Captain Dunsel now.

In the Star Trek: The Original Series episode "The Ultimate Computer" (Season 2, Episode 24), Captain Kirk is insulted by Commodore Bob Wesley, who refers to him as "Captain Dunsel". A "dunsel" is defined in the episode as a Starfleet term for a part of a ship that serves no useful purpose, highlighting that the new M-5 AI system had made human captains obsolete.

Texas Dan said...

The government could shed basically all its non-fighting military employees. Ah, the days ahead, when my accountant's computers argue with the IRS computers over my "fair share".

Anonymous said...

There are some interesting discussions elsewhere about the metrics being used to demonstrate AI success. Some are useful, while others are much less so.

Anonymous said...

Who is making sure that the systems are actually better at customer service, troubleshooting, and things where you have the occasional random variable? I watched for the last two weeks as a family member did battle with a "customer service AI" system that could not do what was needed, because the problem was not in its data set, nor had the programmers considered that anyone would have that sort of problem. I had a major fight with a "helpful" generative AI yesterday that tried to write out what the program "thought" I wanted rather than what I intended and needed.

I suspect soon, those who can will pay for premium services in some areas, services that guarantee a real human will be on the line to deal with the odd stuff and real-world problems, not just the statistically most common things.

TXRed

Michael said...

Sooner rather than later, the British "Blade Runners" will be back, along with the knowledge of why the Dutch wooden shoe, the sabot, gave us the word "sabotage".

I don't hear anything about the Rules of Robotics, so I have grave doubts about the "kindness" of any such benevolent AI system.

John Fisher said...

AI won't be ready for accounting until it responds to 'What is 2+2?' with 'what do you need it to be?'.

Charlie said...

As with any massive technological change, it will be a few years before it works its way into common use. Those who guess right will make many more millions; those who don't will go the way of buggy-whip makers and IBM.

Rick T said...

With all the (claimed) improvements in AI, I can see one unexpected job going away: the talking heads on news broadcasts.

Today they are humans reading a script. Why keep the expensive human in the even more expensive studio when a text-to-speech reader (with a script tuned for better delivery) and a video image generator can do the same job without any of the infrastructure? In TMIAHM, Mike created Adam Selene's broadcasts to the Loonies, background noise included. Today we don't need a self-aware computer to do the same thing.

Crotalus said...

Tuhminatuh in 5, 4, 3,…

Anonymous said...

Have they started getting a handle on sycophancy yet?

Dan said...

If you work primarily with your hands, if your job is physical, it's safe. For now. Until they find a way to power a human-shaped robot long enough to work 8-12 hours without having to be plugged in to recharge. Pretty much all office/desk jobs, however, are at risk. Even the ones who are "in charge".

lynn said...

I ain’t buying it.

Francis Turner said...

Just FYI, here is an annotated critique of the Shumer post:

https://www.dropbox.com/scl/fi/qw6k5c3m575cq21p7jjac/Something-Big-Is-Coming-Annotated.pdf?e=3&noscript=1&rlkey=qlr0mgnlpjifo5xkon2crhrhw&dl=0

I cannot say I totally agree with Ed Zitron in the entirety of his critique, but I do think that Shumer is exaggerating.

Old NFO said...

The real question is who can afford to PAY for a full up AI?

LL said...

Musk has said that there needs to be a universal income for those displaced by AI and robotics. Socialism or survival for many?

The Neon Madman said...

Governments.

Anonymous said...

Just last night, on YouTube I saw multiple examples of people rolling out their own AI tools to do deep analysis and find connections in a recent "3 million page document dump" by the USG. Vibe-coding is a thing, definitely. One such tool, apparently, may have discovered a financial crime involving artworks. This tool will be released to the public, for free, in the coming weeks. Users will have to purchase the "tokens" needed to run it. I'm also enjoying how clever some of the folks are at removing redactions. Things should get really fascinating soon.

Anonymous said...

I've been using Grok for less than a year; it's my only AI experience, but I use it frequently every day (I'm retired). Just remember: garbage in, garbage out. There's lots of garbage in Wikipedia, and I personally haven't used it in years, but Grok looks in there. Grok is impressive, but not perfect. I gave it a picture of 4 generic pills with no hints; it correctly identified only one. I actually won an argument with Grok when it first became public; it agreed I was correct. AI writing code that is installed by someone who doesn't understand the code can definitely lead to unintended consequences.

Tree Mike said...

Seems simple enough to me, AI will Skynet us. Not plumbers, HVAC, mechanics, electricians and such, just us useless eaters.

BrentG said...

Businesses, after they lay off 3/4 of their office staff.

BrentG said...

Trades will be just as heavily affected, just a bit more slowly. Look at 3D-printed homes and machine-driven bricklaying. How houses and buildings are built will change to automation, including all the HVAC/mechanical systems. Before cars, there was no gasoline or gasoline distribution system; the infrastructure was redesigned to support the superior and less expensive technology. AI will drive out the most expensive piece of manufacturing processes: human labor.

Rolf said...

Five things: confidentiality, liability, the next generation, economics, and social disruption.

1) Unless you are running it on your own bare metal, your interactions with AI, both inputs and outputs, are not confidential, proprietary, private from governmental or corporate search, or secure. This is a major problem for litigants or corporate researchers, or anyone who deals with private data that needs to remain private, including medical data.

2) The issue of liability for decisions made or actions taken by AI that cause monetary damages or physical injury is not well established. The whole edifice may be one big court case away from collapsing as "too risky."

3) By automating the low-level jobs, it's gutting the "farm team" of the next generation learning new skills and becoming established in the field, on their way to being the top-tier robot-wranglers who sanity-check the AI output. Where will the next generation of experienced industrial leaders come from? I mean, I know who the elites would /like/ to elevate to those positions, but I trust we all see the problem with putting baby-eating satanists in charge.

4) None of the AI companies are profitable. In fact, they are burning cash at a huge rate. They all hope someone figures out how to recover their investment costs, but it's not there yet. The benefits are (potentially) huge and concentrated, but the corporations are not willing to pay what the AI really costs. We don't have an economic model that makes this whole thing work, and continue to work, for the long term.

5) I have yet to see anyone seriously address how the already messed-up marriage market will deal with mass layoffs of women, fewer family-wage jobs for men, and some sort of UBI, which will almost certainly result in further escalating social and marriage-market disruptions.

6) And all that is off the top of my head, even before we get to the possible military applications... not like I wrote a book on the theme or anything.....

Old Surfer said...

His commentary was interesting. I'd pretty much bought Shumer's thesis, but the critique makes me reconsider. I'm going to go ask Grok what a semi-retired designer/boat builder can use an AI for, and whether I am about to be replaced.