Can Your Self-Driving Car Kill You to Protect Others?
Many of you have likely read, perhaps with some trepidation, about self-driving cars being the wave of the future. Florida is one of just a few states already testing these “autonomous” cars. While humans are too often guilty of reckless or negligent behavior behind the wheel, the thought of handing that power over to a programmed vehicle can be a frightening one.
As manufacturers develop collision-avoidance programs for these vehicles, ethical decisions have to be made. For example, faced with a dangerous situation, a car can be programmed to make the move that will harm the fewest people, perhaps at the expense of its own occupants. It can instead be programmed to protect its occupants whenever there is even the slightest chance of harm to them — even if that means almost certainly injuring or killing others.
Whether, and to what extent, government entities like the Department of Motor Vehicles will have a say in the safety of these collision-avoidance algorithms remains to be seen. Google, perhaps the best-known maker of self-driving cars, has recommended that manufacturers be allowed to “self-certify” that their vehicles are programmed to operate safely. Further, consumers may insist on being able to override their car’s programming when they consider it necessary.
Obviously, the concept of vehicles that make decisions for us raises all kinds of liability questions. If you are harmed in an accident in your self-driving vehicle because of a “decision” it made, can you sue the manufacturer? If your vehicle causes an accident, whether through a malfunction or because of what its program told it to do, who is responsible? These are just a few of the many issues that will need to be worked out before self-driving cars become ubiquitous on Florida roads.
Source: Los Angeles Times, “Will your driverless car kill you so others may live?,” Eric Schwitzgebel, Dec. 04, 2015