Courtesy of a link at Joel's place, we find an article in the MIT Technology Review on the ethical and moral implications of self-driving cars. Here's an excerpt.
Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?
One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.
. . .
In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.
This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.
And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
There's more at the link.
Food for thought. I can see real advantages to self-driving or 'autonomous' cars; our recent 4,000-mile road trip had moments where driver tiredness became a safety factor, which it would not have been if an 'autonomous mode' had been available. On the other hand, I'm darned if I'll entrust my safety on the road to an algorithm that may or may not take my best interests into account. That concern extends, of course, to the life of my wife. What if the algorithm senses an imminent collision, and decides that the best - perhaps the only - way to handle the crisis is to take the impact on the passenger side of the car, which would result in the death of my wife? You think I'm going to let a machine make that call, when I'd make precisely the opposite one? Yeah, right!
This is going to take a lot of thought . . . and I don't know that there are any easy or widely acceptable answers.