Friday, October 23, 2015

That's a hell of an ethical dilemma


Courtesy of a link at Joel's place, we find an article in the MIT Technology Review on the ethical and moral implications of self-driving cars.  Here's an excerpt.

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

. . .

In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.

And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.

There's more at the link.

Food for thought.  I can see real advantages to self-driving or 'autonomous' cars;  our recent 4,000-mile road trip had moments where driver tiredness became a safety factor, which it would not have been if an 'autonomous mode' had been available.  On the other hand, I'm darned if I'll entrust my safety on the road to an algorithm that may or may not take my best interests into account.  That, of course, includes the life of my wife.  What if the algorithm senses an imminent collision, and decides that the best - perhaps the only - way to handle the crisis is to take the impact on the passenger side of the car, which would result in the death of my wife?  You think I'm going to let a machine make that call, when I'd make precisely the opposite one?  Yeah, right!
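
To make the debate concrete, the 'utilitarian' programming the article describes amounts to a very simple cost comparison. Here's a minimal sketch in Python, purely illustrative, assuming (generously) that the car could somehow enumerate its options and estimate the casualties of each - the function and the figures are hypothetical, not any manufacturer's actual logic:

    # Hypothetical illustration only - not any real vehicle's control logic.
    # Assumes an upstream system can list maneuvers and estimate casualties,
    # which is precisely the capability many people doubt exists.
    def choose_maneuver(maneuvers):
        """Pick the option with the lowest estimated total death toll."""
        return min(maneuvers, key=lambda m: m[1] + m[2])

    # The article's dilemma: (name, occupant deaths, pedestrian deaths)
    options = [
        ("brake_straight", 0, 10),   # occupants spared, crowd of 10 killed
        ("swerve_into_wall", 1, 0),  # occupant killed, crowd spared
    ]
    print(choose_maneuver(options))  # -> ('swerve_into_wall', 1, 0)

The arithmetic is trivial; the whole controversy is over whose deaths get counted, how they are weighted, and whether the occupant's weight should differ from a stranger's - which is not a programming problem at all.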

This is going to take a lot of thought . . . and I don't know that there are any easy or widely-acceptable answers.

Peter

19 comments:

  1. That it is... Autonomy has a LONG way to go before it's actually going to be trusted... And hopefully there will always be an over-ride capability...

  2. Trying to program a computer to make ethical judgements is foolhardy (is the car going to be able to tell the difference between a crowd of people and dummies??)

    the car will just be programmed to stop as quickly as possible (or, for advanced brains, calculate whether it can avoid the impact). It's not going to try to figure out how many people are on board vs. how many may or may not be in its way.

  3. Or to put it another way, why should my life as the driver/owner be forfeit just because 1 or even 10 other people are too stupid to obey the traffic laws and are crossing on a red light?

    Or why should my car run itself into a wall when the screeching of my brakes might be sufficient to cause those people to get out of the way in time? Humans can't always guess what other humans are going to do, how the heck is a computer going to model that behavior successfully?

    Having said that...honestly, autonomous mode should only be allowed on highways. There are too many variables that cannot be calculated or predicted in stop-and-go city driving with pedestrians involved.

  4. Toss this in the mix. An individual has been using their self-driving car without incident for (insert length of time here). Suddenly, a catastrophic system failure at the same time as a traffic event requires the driver to instantly take control. The only problem---the driver doesn't remember how.

  5. Here's what's going to happen:

    The auto insurance industry will lean on the autonomous-car-makers to program the cars to do whatever will cause the least damage to themselves.

    Hint: 10 pedestrians are a lot softer than a wall.

    It won't be ethics, it'll be the bottom line.


    Don in Oregon

  6. Better to trust in God than technology.

  7. Inconsiderate Bastard, October 24, 2015 at 2:24 AM

    Let's page back a bit - autonomous cars will be programmed to avoid the taking of a life, or lives, in some manner (see: Asimov, Isaac, "the Three Laws of Robotics"); whether that becomes "sacrifice the occupants" or "running over people causes less vehicle damage than swerving into a wall" has yet to be decided.

    Anyway, Brad & family are tooling down the road in their Magnificent Driving Machine (MDM) and Cletus Sr. steps off the curb and stands in the road. The MDM recognizes a human and per its algorithm, slows, then stops. Cletus Jr. steps off the curb and stands in the road behind the MDM. The MDM is now immobilized because its algorithms rank "protect human life" at the top of the list, allowing the Cletii to dispatch Brad, make Mrs. Brad their weekend toy, sell Brad Jr. into slavery, and they get a nifty MDM to boot.

    In today's world the solution to the problem would be to simply drive over Cletus Sr. and be done with it, but thanks to Elon Musk and the brain trust at Google, that option disappeared with software control of our new MDM vehicles.

    I wonder if self driving cars will come with bullet proof windows and a food supply.

  8. Old_NFO: Sure, I fully expect there to be an over-ride capability. For the police. The rest of us, we should just sit on our hands and trust the government.

    How far-fetched does that sound?

  9. We will have self-driving cars that are orders of magnitude safer than human drivers long before the cars can make those ethical decisions. Focusing on whom the algorithm will choose to sacrifice will be like deciding to avoid gun ownership strictly because of the risk of a mechanically caused accidental discharge.

  10. I see it as a matter of individual freedom. Why would I want a self-driven car? Autos are the pinnacle of freedom: they made it possible for people to move freely over vast distances with relative ease. To surrender this to a vehicle that can be remotely controlled is to be enslaved by those who have that control. A second issue: if there is a mix of controlled and free-driven cars on the road, where will the liability issues fall? If my self-driven car is at fault, I am responsible. If a controlled car is in an accident, who is responsible: the owner or the manufacturer/computer producer? Lawyers are drooling over this.

  11. Plot of a future murder mystery: the perpetrator gets a crowd of ten friends, maybe accomplices, maybe uninformed of what's going on, to run into the road in front of the intended victim's car. The car drives the victim into a wall. Who is more guilty: the perpetrator of the "let's wander out in front of an oncoming car" group, or the person who pushed for self-driving cars to sacrifice the driver?

    There are ALWAYS unintended consequences. I could almost see this becoming like a flash-mob (the criminal kind) type of activity somewhere. I prefer to take my chances with driving the car myself.

  12. The self-driving example has already existed for years in the form of self-piloting planes, and in at least one case a "computer bug" overrode the pilots' commands. However, when the investigation was complete, 'pilot error' was blamed. A pilot either has command of the plane or he doesn't; there is no middle ground. The computer can "assist", but ultimately the pilot is in control. Except with some manufacturers, where he isn't. The computer in that case said, "The plane is landing," and the pilot said, "NO!! It's not!" The computer said, "Oh yes it is," and down it went. You can Google for other examples of the same problem, where the computer overrides the pilots' commands, with a crash as the result.

    Now apply that to cars, where there is a much larger landscape to program for and eliminate computer bugs from. And add in a Moral Judgement case on top of that.

    Are the drivers in control of their cars, or aren't they? It's the same question with a different type of transportation.

    -- Steve

  13. There was a recent article about how Tesla had used a wireless software update to enable the self-driving capabilities.

    Why can't some third party forge an update and make a self-crashing car?

    If the driver doesn't have a mechanical override, this will be used to murder people. Even with driver override, one could do something beyond the driver's ability to recover during a moment of inattention.

    With just a manual driver, you have one control system that can go wrong, and the drunk, high or incompetent status is relatively easy to establish. Throw in an individual electronic driver for each vehicle, and you have twice as many things that can malfunction or be damaged. That may be more variation for an electronic brain to anticipate.

    1. Keep vehicles manual
    2. Breathalyzer
    3. Consider extreme measures for drugs that one doesn't have an interlock for

  14. Once the gangs figure out that stopping and carjacking a "self-driving" car and robbing its driver is trivial, people will no longer think they are so great. Put out obstacles, slow it down. Have semi-expendables stand in its path, locking up the "escape routes." Poof, frozen into place. Once bad guys realize that the cops can have them delivered to the door of the Big House by remote control, they won't want them, either. If the cops' cars have RC built in, the gangs will be HIGHLY motivated to hack that system.

    There are more than a few systems design issues that are not technical in nature.

  15. Ah, the Boeing vs. Airbus paradox.
    Let the pilot control the plane or glide gently into the trees?

  16. On the one hand, an automated car might well see problems coming before even the most attentive driver could, and avoid any number of possible dangers.

    On the other, all things made by man's hand are prone to failure, shiny new technologies more so than most.

    Considering that manual transmissions are still around and being fitted to new cars, I expect it'll be a long time - if ever - before these become 'standard'.

  17. I believe that all these great ideas for safety, such as the "airbag" that was originally only safe for tall people (no real data on how many short people were killed by their airbags), should be tested by the federal government. The federal government buys thousands of cars, and all federal cars should have these safety improvements installed first, for a period of 5 years, before the government can mandate their installation on private cars.

  18. This issue doesn't require much thought: the obvious answer is don't build the things. Fatigue issues can be mitigated by better planning more effectively than by a self-driving car. This is just one more assault on personal independence and sovereignty by companies who want to cater to the lowest common denominator, which in this case is essentially people who are too lazy to live their own lives. Keeping such people from having automobiles in the first place would be a much better approach. We need to raise the bar, not lower it.

  19. From a control perspective, why shouldn't you think of a self-driving car the same way you think of a public transportation vehicle, where you trust the driver's skills and ethics? So the driver is an algorithm - what exactly is wrong with that, especially considering some professional drivers I've known? I live in Frankfurt, Germany, and the Frankfurt Airport has had a self-driving shuttle between Terminals 1 and 2 in service for over 15 years now, and it works great. Sure, it's a "controlled environment", but it works, and the lessons learned can be used for the next step - autonomous cars. There are going to be teething problems, but to say absolutely no is a little bit Luddite, don't you think so?

