Let’s say a driverless car merges onto a freeway where every lane is occupied by drivers going 10 mph over the legal limit. By sticking to the speed limit, the driverless car would become a dangerous obstacle endangering the occupants of every vehicle on that road. In order to protect its occupants, the normally obedient car could decide to break the speed limit—and the law. The conundrum faced by this car presents the most difficult technological (and cultural) adaptation we’ll face: robot ethics.
We don’t have the Three Laws (or Will Smith) to protect us. (Source: 20th Century Fox)
As Erin Carson of TechRepublic asks: “How conservative should the car be to avoid accidents? Should cars be able to break the law and speed in order to keep passengers safe?” If we decide it is acceptable to break some laws in the interest of passenger safety, the ethical debate will have to settle on just how far a car can go to protect its occupants. Ultimately, it will fall to the engineers and programmers to shape the decision-making skills and priorities of these intelligent machines.
As Koebler writes: “The car will also be able to detect other vehicles’ sizes and will be able to make a value judgment about what’s more of a threat, given two or more reckless drivers. A reckless motorcycle, for instance, might be less of a threat than a reckless semi truck, so the car would stay further away from the truck. (Presumably, one hopes, without further endangering the motorcycle).”
The “one hopes” at the end of that quote is one of the most worrying things I’ve read about driverless cars. If the prime directive (as we’ll call it) of these cars is to safely deliver their passengers to their destination, then what kind of actions will they be allowed to take to achieve it?
Take a four-lane road (two lanes in each direction) with a median divider, a posted 65 mph speed limit, and traffic moving roughly at that speed. Heading in one direction are a driverless car (with passengers) and a motorcyclist riding side by side, each in their own lane; traveling in the opposite direction is a wildly out-of-control truck. With little warning, the truck flies over the median divider on a trajectory that would surely lead to a fatal collision for the occupants of the driverless car. Thanks to its high-speed electronic brain, the car has time to evaluate its options and decide on a course of action.
The first option takes into account the motorcycle blocking the second lane, leaving no choice but to scrub off speed with emergency braking in an attempt to reduce the severity of the impact. The second option is inspired by Koebler’s “one hopes” at the end of his scenario. Let’s assume the car, following its prime directive, values the lives of its occupants above any other life, in this case the motorcyclist’s. The car could then decide that the best course of action is to swerve into the second lane, saving its own occupants from certain catastrophic injury or death at the cost of the rider’s life. Either choice leads to serious injury or death, but it is a choice the car is capable of making, and one it will almost certainly need to make.
The ethical problem with letting the car decide what to do (and whom to sacrifice) is that the car is not, in fact, the one making the decision. Every action the car takes is predetermined by the priorities set during its design and construction—priorities that were decided upon by humans. At some point during the car’s coding this scenario will present itself to the engineers, and it will be up to them to make the decision that will save—or sacrifice—lives in the future. We need to have this conversation and face these tough questions now, before the decisions are made for us.
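To see how concretely the engineers, not the car, make this choice, consider a minimal sketch of priority-driven maneuver selection. Everything here is invented for illustration: the function name, the risk numbers, and the weights are hypothetical, not any manufacturer’s actual logic.

```python
# Hypothetical sketch: the "decision" in the truck scenario is fully
# determined by weights the programmers chose ahead of time.

def choose_maneuver(options, occupant_weight, bystander_weight):
    """Return the option with the lowest weighted expected harm.

    Each option carries an estimated risk (0.0 to 1.0) to the car's
    occupants and to nearby road users (here, the motorcyclist).
    """
    def weighted_harm(option):
        return (occupant_weight * option["occupant_risk"] +
                bystander_weight * option["bystander_risk"])
    return min(options, key=weighted_harm)

# Invented risk estimates for the two maneuvers in the scenario.
options = [
    {"name": "brake in lane", "occupant_risk": 0.9, "bystander_risk": 0.0},
    {"name": "swerve right",  "occupant_risk": 0.1, "bystander_risk": 0.9},
]

# A prime directive that values occupants far above others
# predetermines the swerve; the engineers made the choice, not the car.
print(choose_maneuver(options, occupant_weight=10.0,
                      bystander_weight=1.0)["name"])   # swerve right

# With balanced weights, the very same code brakes instead.
print(choose_maneuver(options, occupant_weight=1.0,
                      bystander_weight=1.0)["name"])   # brake in lane
```

The point of the sketch is that nothing here is decided at the moment of crisis: whoever picks `occupant_weight` and `bystander_weight` has already decided who gets sacrificed.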
Had the out-of-control truck been driven by a computer instead of a person, it would never have created the dangerous situation in the first place. Likewise, had the motorcyclist instead been a passenger in a driverless car, the networking capabilities of the two vehicles could have moved them both out of harm’s way simultaneously. The truth is, the dilemmas raised in these hypotheticals have no good answer as long as unpredictable people and vulnerable vehicles share the road with driverless cars.