Trolley Problems

The New York Times has the latest in a long series of pop think pieces that wonder how driverless cars will deal with variations on the so-called “trolley problem”: is it ethical to make a decision that saves some people’s lives at the expense of another person’s life? This article asks whether your driverless car should hit a pedestrian to save your life. Not surprisingly, most people in the studies chose to save their own lives over the life of a hypothetical pedestrian.

Should driverless cars be programmed to serve the greater good, even at the expense of the passenger?

How would you even know what the greater good is? What if the passenger has children in the car? What if the passenger is the only person in the car, but is the sole breadwinner for a large family and is taking care of disabled relatives? What if the passenger is 70 years old and the pedestrian is 35 years old? What if the passenger is 70 but in excellent health and the pedestrian is 35 with a terminal illness? What if the passenger in the car was on their way to commit a crime? Even if we, as a society, could agree on what the greater good might be, there would not be enough time to gather all the relevant information for anyone – human or machine – to make an ethical decision.

Quite simply, I do not think humanity is about to take the extraordinary step of allowing a fully automated system to decide who dies and who lives.

Instead, driverless cars will be expected to perform the same way we expect almost all other machines to perform: with an extreme deference to preserving human life. Think about the machines you interact with on a daily basis. You have to be negligent in the extreme to get killed by a machine because of something the machine couldn’t avoid doing (as opposed to something idiotic that the machine’s human operator might make it do). And even if you do something incredibly negligent, like step in front of a train or stick your arm inside rotating machinery, the machine’s owner might still end up with significant liability for your injury.

Many people view autonomous vehicles as incremental improvements to automobiles. And it’s true that there will be great improvements in safety from “driver assist” technologies like systems that help keep you in the lane and keep you from hitting the car or pedestrian in front of you. These technologies will save lives without a doubt.

However, full automation is not an incremental improvement. It’s a shift to a much different level of social/cultural expectations and liability. Drivers are held to extremely low levels of liability for damage they cause; in California, the state minimum is $15,000 for death to one person and $30,000 for death to multiple persons. In contrast, Metrolink paid out $4,200,000 for each death in the 2008 Chatsworth crash, and the number was only that low because of federal law that caps railroad liability at $200,000,000 per incident.
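
To put those figures side by side, here is a back-of-the-envelope check using only the numbers cited above:

    # Back-of-the-envelope comparison using only the figures cited above.
    ca_min_per_death = 15_000         # California state-minimum coverage, one death
    chatsworth_per_death = 4_200_000  # Metrolink payout per death, 2008 Chatsworth crash

    ratio = chatsworth_per_death / ca_min_per_death
    print(f"Railroad payout per death is {ratio:.0f}x the state-minimum auto coverage")
    # -> Railroad payout per death is 280x the state-minimum auto coverage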

The reason railroads have much higher liability limits than drivers is that most people in the public identify as or with drivers, while very few people identify as railroads. If the state tried to raise the auto insurance minimums to $4 million per death, insurance premiums would skyrocket and there’d be a political revolt.

In other words, if you maim someone with your car, but you have the state minimum auto insurance and few assets, that person is shit out of luck. Google, on the other hand, is not going to have its liability capped at $15,000 per death. It has the financial wherewithal to pay for insurance that actually covers the damages caused by auto accidents, it has the assets to pay damages in excess of its insurance limits, and it’s not going to get any sympathy from the public if a driverless car runs over someone’s kid, someone’s mom, or someone’s grandpa.

It seems to me, then, that fully autonomous vehicles will by necessity take a very large discrete step towards eliminating deaths from automobiles. They will be programmed to do so by having very conservative software. The large corporations – and their insurers – responsible for the software, and maybe for owning and operating the vehicles as well, will demand it. No one is going to accept an incremental improvement in safety in exchange for a hundredfold increase in liability. Fully autonomous vehicles will only kill someone in cases where the victim is grossly negligent, and even then, there will likely be out of court settlements.
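
For a sense of what “very conservative software” could mean in practice, here is a minimal sketch of a default-to-braking policy. The function names, deceleration figure, and safety margin are assumptions for illustration, not anyone’s actual control code:

    from typing import Optional

    # Illustrative sketch of a conservative motion policy: brake unless the
    # vehicle can prove the path ahead is clear with margin to spare.
    # All names, thresholds, and the deceleration figure are assumptions.

    SAFETY_MARGIN_M = 2.0  # extra clearance required beyond the stopping distance

    def stopping_distance_m(speed_mps: float, decel_mps2: float = 3.0) -> float:
        """Stopping distance at a deliberately pessimistic deceleration."""
        return speed_mps ** 2 / (2 * decel_mps2)

    def choose_action(speed_mps: float, clear_distance_m: Optional[float]) -> str:
        """Default-deny: if clearance is unknown or insufficient, brake."""
        if clear_distance_m is None:
            return "brake"  # sensors cannot confirm the path is clear
        if clear_distance_m < stopping_distance_m(speed_mps) + SAFETY_MARGIN_M:
            return "brake"  # not enough room to stop with margin
        return "proceed"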

The nature of Silicon Valley frequently rewards entrepreneurs for being the first to market with a product, even if it means frequent incremental updates to fix bugs. As long as they don’t compromise the security of private data, software problems usually have minor consequences. Apps freeze and crash; Google Maps has its share of erroneous data; formatting in Word is still frustrating as hell. But as Theranos shows, other industries don’t work that way.

Railroad signaling may offer clues as to what will be expected of fully autonomous vehicles. Braking performance assumptions are very conservative. Automatic train control is not expected to be marginally safer than human drivers; it’s expected to completely eliminate train-to-train collisions. The system is designed to assume it’s not safe to move unless otherwise directed, not to assume that it’s okay to move unless informed otherwise. Railroads were just forced to spend billions on Positive Train Control, one function of which is to help protect railway workers against the trolley problem by ensuring it never comes up in the first place.
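
As a rough analogy (a sketch, not real Positive Train Control or signaling code), the fail-safe default might look like this: any missing or unverified movement authority resolves to “stop.”

    from typing import Optional

    # Rough analogy to fail-safe signaling logic; a sketch, not real PTC code.
    # The default answer is "stop"; movement requires an explicit, verified authority.

    def may_move(authority: Optional[dict]) -> bool:
        if authority is None:
            return False  # no movement authority received: stop
        if not authority.get("verified", False):
            return False  # authority cannot be verified: stop
        if authority.get("limit_m", 0) <= 0:
            return False  # no distance granted: stop
        return True       # explicit, verified authority to move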

It’s not that incremental improvements aren’t good. It’s just that cultural expectations change when we turn a task over to a machine. We don’t expect machines to make ethical decisions; we expect them to be safe enough that they never have to.


3 thoughts on “Trolley Problems”

  1. Boris

    Good points. One thing I thought when reading the NY Times article is that people tend to assume that driverless cars would behave similarly to human operators. Humans speed, run lights, and game other drivers for advantages. No automated system would do any of those things. On a street with a 20 mph speed limit, an automated car would be able to stop, or at the very least, cause minimal damage in a crash. Most likely, a human driver in that situation would be traveling about 10 mph over the speed limit.

    I for one can’t wait to get people out from behind the wheel.

    1. ofsevit

      This assumes humans want to travel in slow-moving AVs. With a top speed of 20 mph, it’s going to take a lot longer to make a 5 mile trip. Some people might bike (since it would take the same amount of time) but others wouldn’t give up a car which can make the trip in 10 minutes for an AV which makes it in 20.

  2. John Alexander Thacker

    An auto-centric comparison, instead of Metrolink, would be to compare various “unintended acceleration” problems or other crashes. When it’s related to driver error, we see the small limits you’re talking about. For manufacturers, lawsuits, fines, and recall costs are enormous, even in cases where the engineering evidence suggests the vehicle was not at fault and the problem was entirely the driver. Juries are simply much more likely to find the faraway corporation at fault than a driver.

    This may not be optimal, in that it may slow down the transition to a safer transportation technology, but I completely agree with you that it’s what’s going to happen and people should be aware.
