The BBC has an interesting article about how artificial intelligence learned from its environment a navigation technique for high-altitude balloons that its creators had never considered.
The gaggle of Google employees peered at their computer screens in bewilderment. They had spent many months honing an algorithm designed to steer an unmanned balloon all the way from Puerto Rico to Peru. But something was wrong. The balloon, controlled by its machine mind, kept veering off course.
Salvatore Candido of Google's now-defunct Project Loon venture, which aimed to bring internet access to remote areas via the balloons, couldn't explain the craft’s trajectory. His colleagues manually took control of the system and put it back on track.
It was only later that they realised what was happening. Unexpectedly, the artificial intelligence (AI) on board the balloon had learned to recreate an ancient sailing technique first developed by humans centuries, if not thousands of years, ago. "Tacking" involves steering a vessel into the wind and then angling outward again so that progress in a zig-zag, roughly in the desired direction, can still be made.
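The geometry of why the zig-zag still makes headway can be shown with a toy calculation. This is a minimal sketch of the tacking idea only, not Loon's actual controller (which steered by changing altitude to catch different wind layers): a craft that cannot head straight upwind alternates legs angled 45 degrees either side of the upwind direction, so the sideways components of successive legs cancel while the upwind components add.

```python
import math

def tack(legs=6, leg_length=1.0, angle_deg=45.0):
    """Return (upwind_progress, sideways_drift) after a zig-zag of legs.

    Each leg contributes cos(angle) of its length as upwind progress;
    the crosswind component alternates sign, so it cancels in pairs.
    """
    angle = math.radians(angle_deg)
    upwind = 0.0
    sideways = 0.0
    for i in range(legs):
        upwind += leg_length * math.cos(angle)
        # alternate the sign of the crosswind component on each leg
        sideways += leg_length * math.sin(angle) * (1 if i % 2 == 0 else -1)
    return upwind, sideways

up, side = tack()
print(f"upwind progress: {up:.2f}, net sideways drift: {side:.2f}")
```

Six one-unit legs at 45 degrees off the wind yield about 4.24 units of upwind progress with zero net drift, which is the whole trick: you never point into the wind, yet you still get where you are going.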
Under unfavourable weather conditions, the self-flying balloons had learned to tack all by themselves. The fact they had done this, unprompted, surprised everyone, not least the researchers working on the project.
"We quickly realised we'd been outsmarted when the first balloon allowed to fully execute this technique set a flight time record from Puerto Rico to Peru," wrote Candido in a blog post about the project. "I had never simultaneously felt smarter and dumber at the same time."
This is just the sort of thing that can happen when AI is left to its own devices. Unlike traditional computer programs, AIs are designed to explore and develop novel approaches to tasks that their human engineers have not explicitly told them about.
But while learning how to do these tasks, sometimes AIs come up with an approach so inventive that it can astonish even the people who work with such systems all the time. That can be a good thing, but it could also make things controlled by AIs dangerously unpredictable – robots and self-driving cars could end up making decisions that put humans in harm's way.
There's more at the link.
The article provides a number of interesting examples of how machine learning has surprised its creators, and those nominally in charge of it. I knew of some of them, but not all.
I was involved in an early implementation of "expert systems" (applied AI) in the computer field, back in the 1980s. We used an expert system to automate the design and programming of commercial computer systems, in an attempt to cut out much of the low-level drudgery and free our programmers and analysts to concentrate on higher-end, more complex problems. It worked, after a fashion, but was primitive in the extreme compared to some of the systems now on the market.
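For readers who never met one: the core of a 1980s-style expert system was usually a rule engine that "forward-chains" from known facts to conclusions. The sketch below is purely illustrative, with hypothetical rules invented for the example (it is not the actual product described above): rules fire when all their conditions are present in working memory, adding new facts until nothing more can be derived.

```python
# Hypothetical rules: (set of required facts, fact to conclude).
RULES = [
    ({"needs_reporting", "batch_volume_high"}, "generate_batch_report_module"),
    ({"interactive_entry"}, "generate_screen_layouts"),
    ({"generate_screen_layouts", "needs_validation"}, "generate_edit_checks"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are met, until stable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"interactive_entry", "needs_validation"}, RULES)
print(sorted(derived))
```

Note the chaining: "interactive_entry" triggers screen layouts, and screen layouts plus "needs_validation" then trigger edit checks, a conclusion no single rule could reach on its own. Scale that up to thousands of rules and you have the drudgery-automation described above.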
That's one reason why programming wages have dropped so much relative to what they were half a century ago. Back then, we were highly skilled, very scarce professionals, paid well because we were the "magicians" who made computers do what their owners wanted. Nowadays, all the basic stuff has been written so many times that it's easier and cheaper to buy a software package than write your own. When it comes to specialized systems, sure, companies still need programmers and analysts, but they're working at a much higher level than they used to, leaving the drudgery to pre-written code modules that they call in when needed to do the donkey-work.
Given our mention yesterday of automation in the farming industry, one wonders just how far AI and expert systems can go. I suspect we ain't seen nothing yet . . .