The BBC has an interesting article about how artificial intelligence learned from its environment a navigation technique for high-altitude balloons that its creators had never considered.
The gaggle of Google employees peered at their computer screens in bewilderment. They had spent many months honing an algorithm designed to steer an unmanned hot air balloon all the way from Puerto Rico to Peru. But something was wrong. The balloon, controlled by its machine mind, kept veering off course.
Salvatore Candido of Google's now-defunct Project Loon venture, which aimed to bring internet access to remote areas via the balloons, couldn't explain the craft’s trajectory. His colleagues manually took control of the system and put it back on track.
It was only later that they realised what was happening. Unexpectedly, the artificial intelligence (AI) on board the balloon had learned to recreate an ancient sailing technique first developed by humans centuries, if not thousands of years, ago. "Tacking" involves steering a vessel into the wind and then angling outward again so that progress in a zig-zag, roughly in the desired direction, can still be made.
Under unfavourable weather conditions, the self-flying balloons had learned to tack all by themselves. The fact they had done this, unprompted, surprised everyone, not least the researchers working on the project.
"We quickly realised we'd been outsmarted when the first balloon allowed to fully execute this technique set a flight time record from Puerto Rico to Peru," wrote Candido in a blog post about the project. "I had never simultaneously felt smarter and dumber at the same time."
This is just the sort of thing that can happen when AI is left to its own devices. Unlike traditional computer programs, AIs are designed to explore and develop novel approaches to tasks that their human engineers have not explicitly told them about.
But while learning how to do these tasks, sometimes AIs come up with an approach so inventive that it can astonish even the people who work with such systems all the time. That can be a good thing, but it could also make things controlled by AIs dangerously unpredictable – robots and self-driving cars could end up making decisions that put humans in harm's way.
There's more at the link.
The article provides a number of interesting examples of how machine learning has surprised its creators, and those nominally in charge of it. I knew of some of them, but not all.
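To give a feel for how that kind of surprise arises: reinforcement-learning systems are rewarded for outcomes (say, staying on course), not told which manoeuvres to use, so they are free to stumble onto tactics nobody wrote down. Here's a deliberately tiny sketch in Python (my own toy illustration; the wind bands, rewards and learning loop are invented for the example and have nothing to do with Loon's actual code):

```python
import random

# Toy world: 3 altitude bands with fixed winds (units eastward),
# and a balloon that can only climb, descend, or hold altitude.
WINDS = {0: -1, 1: +2, 2: -1}    # only the middle band blows toward the target
TARGET = 20
ACTIONS = (-1, 0, +1)            # descend, hold, climb

def step(pos, alt, action):
    """Apply an altitude action, then drift with that band's wind."""
    alt = max(0, min(2, alt + action))
    pos = max(0, min(20, pos + WINDS[alt]))
    # Reward progress toward the target, nothing else.
    reward = 10 if pos == TARGET else -1
    return pos, alt, reward

# Tabular Q-learning: note the agent is never told "use band 1".
Q = {(p, a): [0.0, 0.0, 0.0] for p in range(21) for a in range(3)}
for episode in range(2000):
    pos, alt = 0, 0
    for _ in range(60):
        if random.random() < 0.1:                       # explore
            idx = random.randrange(3)
        else:                                           # exploit
            idx = Q[(pos, alt)].index(max(Q[(pos, alt)]))
        npos, nalt, r = step(pos, alt, ACTIONS[idx])
        best_next = max(Q[(npos, nalt)])
        Q[(pos, alt)][idx] += 0.1 * (r + 0.9 * best_next - Q[(pos, alt)][idx])
        pos, alt = npos, nalt
        if pos == TARGET:
            break

# Greedy rollout after training: the policy climbs into the
# favourable band, a strategy that appears nowhere in the code above.
pos, alt = 0, 0
for _ in range(15):
    idx = Q[(pos, alt)].index(max(Q[(pos, alt)]))
    pos, alt, _ = step(pos, alt, ACTIONS[idx])
    print(f"altitude band {alt}, position {pos}")
```

Scale the same idea up from three invented wind bands to real forecasts, and you get balloons that surprise their makers.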
I was involved in an early implementation of "expert systems" (applied AI) in the computer field, back in the 1980s. We used an expert system to automate the design and programming of commercial computer systems, in an attempt to cut out much of the low-level drudgery and free our programmers and analysts to concentrate on higher-end, more complex problems. It worked, after a fashion, but was primitive in the extreme compared to some of the systems now on the market.
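For readers who haven't met the term: an expert system is, at bottom, a set of if-then rules plus an inference engine that keeps applying them until nothing new can be concluded. A minimal sketch of the idea (purely illustrative; the rules here are made up, and our actual 1980s system was far more elaborate):

```python
# A tiny forward-chaining inference engine: rules fire repeatedly
# until no rule can add a new fact to working memory.
RULES = [
    # (conditions that must all be known, conclusion to add)
    ({"batch job", "updates master file"}, "needs file lock"),
    ({"needs file lock", "runs nightly"},  "schedule after backup"),
    ({"online transaction"},               "needs rollback logic"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                        # keep sweeping the rule base
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # rule fires, new fact asserted
                changed = True
    return facts

print(infer({"batch job", "updates master file", "runs nightly"}))
# -> includes 'needs file lock' and 'schedule after backup'
```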
That's one reason why programming wages, relatively speaking, have dropped so much compared to half a century ago. Back then, we were highly skilled, very scarce professionals, paid well because we were the "magicians" who made computers do what their owners wanted. Nowadays, all the basic stuff has been written so many times that it's easier and cheaper to buy a software package than to write your own. When it comes to specialized systems, sure, companies still need programmers and analysts, but they're working at a much higher level than they used to, leaving the drudgery to pre-written code modules that they call in when needed to do the donkey-work.
Given our mention yesterday of automation in the farming industry, one wonders just how far AI and expert systems can go. I suspect we ain't seen nothing yet . . .
Peter
13 comments:
Tacking works for boats due to the reaction between the approaching wind and the boat's keel in the water. So I do not understand how a balloon could tack, i.e., advance into the wind, as it has no 'keel' to provide a reaction against anything.
Max Tegmark has a very interesting book called "Life 3.0", all about AI into the future.
In my opinion, this isn't true AI but more a basic type of machine learning.
There are numerous reports of unexpected behavior from these types of systems, showing the need to keep a human in the loop.
They also show the need for better directions to the systems. I have read novels in which weapon systems not much more advanced than these were turned on their masters through various means and for various purposes... Any system can be misused, accidentally or intentionally, and must be guarded against such misuse. These unexpected results show that the (human) programmers are not doing a good job of overseeing the systems, which to me is a sign that things WILL turn out badly at some point.
Just like with unmanned cars, these are life-critical systems being designed by people used to the lower levels of certainty and error common in computers and web platforms. They need to be designed to higher levels of certainty, the way autopilots and nuclear power plant systems are.
Heh, reality is smacking the programmers in the face. Machines CAN and do learn.
I'm with Ed P. I want to know exactly how you "tack" a balloon.
The Two Faces Of Tomorrow by James Hogan was an excellent SF treatment of the unintended danger to humans.
Not tacking per se. Wind blows in different directions depending on altitude. Changing altitude changes direction.
Deep Learning / AI has a habit of learning things about ‘Human Biodiversity’ too. Unfortunately this breaks the current prime directive, which is ‘Thou Shalt Not Notice’.
The Usual Suspects really really don’t like this and are busily beavering away to construct this shiny new gate-keeping edifice called ‘Ethical AI’.
In other words they want to censor input raw data *and* the eventual outputs after the black box has been trained.
Great.
The "wind" around a balloon is zero. It travels at the speed of the air it sits in, and there is nothing to tack with or against. Dave above nails it: the only thing a balloon controls is its altitude to choose the airmass with speed and direction it wants. As I sit here today, I see clouds moving in three different directions above me.
Cue self-driving cars, who decide the safest right turn is across the sidewalk full of pedestrians, in 3, 2,...
"Waitwhat? We didn't think of that!?!?"
Those aren't 'hot air' balloons; they are more like weather balloons, filled with a lighter-than-air gas like methane or helium. But you can't expect a balloon head to know that.
As to the programming aspect of lower-level programs being "canned", there is the current "No Code" programming. The "programmer" just gives high-level directives to the system, and the system cranks out the code. There is still a level of programming, but one does not have to write all of the decision statements, loops, etc. The only issue is that this kind of coding can create very large programs compared to the task at hand. This was apparent even two or three decades ago, when operating systems moved from assembly language to something higher-level, such as C. I saw something years ago where someone had created an MS-DOS-compatible system in assembly that was about 10-20% the size of the Microsoft offering.
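As a rough illustration of that trade-off (my own toy example, not any actual no-code product): the "directives" become data, and a generic engine supplies every loop and decision statement, which is exactly why the result is bulkier than a hand-rolled version of the same task.

```python
# "No code" in miniature: the user writes declarative directives,
# and a generic engine supplies all the loops and decisions.
DIRECTIVES = [
    {"field": "age",   "rule": "range",    "min": 0, "max": 120},
    {"field": "email", "rule": "contains", "text": "@"},
]

def run_pipeline(records, directives):
    """Generic engine: applies each directive to each record."""
    valid = []
    for rec in records:
        ok = True
        for d in directives:
            value = rec.get(d["field"])
            if d["rule"] == "range":
                ok = ok and d["min"] <= value <= d["max"]
            elif d["rule"] == "contains":
                ok = ok and d["text"] in value
        if ok:
            valid.append(rec)
    return valid

people = [{"age": 34, "email": "a@b.com"}, {"age": 200, "email": "nope"}]
print(run_pipeline(people, DIRECTIVES))   # only the first record passes

# The hand-written equivalent is one line; the engine's generality
# is the bloat described above:
# [p for p in people if 0 <= p["age"] <= 120 and "@" in p["email"]]
```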
I really hate the term AI. It brings in a lot of expectations that can't be met and ideas we can't define. Heck, I haven't seen a definition of intelligence that is clear and concise. If you can't describe it, you can't program it.
I use a thought experiment to demonstrate this. Assume, for the sake of argument, that humans are intelligent (I've met some of you!). And assume that an amoeba is not intelligent. Where in the remaining spectrum of animals does intelligence start? A dog or a horse certainly display intelligence. Earthworms?
Just my pet peeve.