I've posted a lot of videos of self-driving cars in action over the past few years.
Many experts believe that most, if not all, cars on the road will be partially or fully automated within the next few decades. Google's self-driving car is already advanced enough to recognize cyclists' hand signals and steer clear of them on the road, and it's street legal in four states: Nevada, Florida, California, and Michigan.
Removing the element of human error from the roads is an attractive idea. Think about it: no more drunk driving, distracted driving, falling asleep at the wheel, or even just plain bad driving. But as these cars grow more capable and move closer to widespread deployment, hard ethical questions are emerging.
Patrick Lin, PhD, wrote a fascinating article for Wired about these questions. He writes:
Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others, a sensible goal, which way would you instruct it to go in this scenario?
The computer driving the car would make this decision in the blink of an eye. Crashing into the Mini Cooper might maximize your own chances of survival, but at the expense of the other driver.
Or what if the car's computer were faced with a choice between swerving into a family of pedestrians and swerving off a cliff, sparing them but sacrificing the passenger?
Computers don't really "think." They follow the rules that programmers lay out for them to their logical conclusions. We don't yet know how programmers will instruct driving computers to "decide" in situations like these.
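To make that point concrete, here is a deliberately oversimplified sketch of what one such hard-coded rule might look like. It uses the SUV-versus-Mini-Cooper scenario quoted above, but the scoring function, risk estimates, and weights are all hypothetical, invented purely to show that the car's "decision" is nothing more than whatever trade-off a programmer wrote down.

```python
# Hypothetical sketch only: a naive, hard-coded "minimize harm" rule of the
# kind a programmer might write. The scenario, risk numbers, and weights are
# invented for illustration; nothing here reflects how any real
# autonomous-vehicle system is actually built.

from dataclasses import dataclass


@dataclass
class CrashOption:
    description: str
    occupant_risk: float   # estimated risk to the car's own passengers (0 to 1)
    external_risk: float   # estimated risk to people outside the car (0 to 1)


def choose_crash_option(options, occupant_weight=1.0, external_weight=1.0):
    """Pick the option with the lowest weighted total risk.

    The weights are where the ethics hide: raising occupant_weight tells the
    car to protect its own passenger at others' expense, and vice versa. The
    computer simply follows whichever trade-off the programmer encoded.
    """
    def total_risk(option):
        return (occupant_weight * option.occupant_risk
                + external_weight * option.external_risk)
    return min(options, key=total_risk)


# The SUV-versus-Mini-Cooper dilemma, with made-up numbers:
options = [
    CrashOption("swerve left into the Volvo SUV", occupant_risk=0.5, external_risk=0.2),
    CrashOption("swerve right into the Mini Cooper", occupant_risk=0.2, external_risk=0.6),
]

# Weighting everyone equally, the car hits the SUV (lowest total harm).
print(choose_crash_option(options).description)

# Weighting its own passenger three times as heavily, it hits the Mini Cooper
# instead, improving the passenger's odds at the other driver's expense.
print(choose_crash_option(options, occupant_weight=3.0).description)
```

Change one number in that weighting and the car "chooses" a different victim. That, not any computational subtlety, is the ethical problem.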
That's why there needs to be a public conversation about how we want our cars to behave on the road in the future. How would you program your car to drive in a crash-optimization scenario?