I've posted a lot of videos of self-driving cars over the past few years. Click here to see some of them in action.
Many experts believe that most, if not all, cars on the road will be partially or fully automated within the next few decades. In fact, Google's self-driving car is so advanced that it can recognize cyclists' hand gestures and keep clear of them on the road. It's even street legal in four states: Nevada, Florida, California and Michigan.
Removing the element of human error from the roads is an attractive idea. Think about it: no more drunk driving, distracted driving, falling asleep at the wheel, or just plain bad driving. But as these cars grow more advanced and move closer to full-scale deployment, hard ethical questions are emerging.
Patrick Lin, PhD, wrote a fascinating article for Wired about these questions. He writes:
Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others, a sensible goal, which way would you instruct it to go in this scenario?
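To make the dilemma concrete, here is a minimal Python sketch of the kind of "minimize harm to others" rule the scenario imagines. The harm scores, the CrashOption class, and the choose_crash_target function are invented purely for illustration; nothing here comes from Google's car or any real autonomous-vehicle system.

from dataclasses import dataclass

@dataclass
class CrashOption:
    target: str           # what the car would hit
    expected_harm: float  # rough, assumed estimate of injury to others

def choose_crash_target(options: list[CrashOption]) -> CrashOption:
    """Pick the option with the lowest estimated harm to others."""
    return min(options, key=lambda o: o.expected_harm)

if __name__ == "__main__":
    # Assumed numbers: the SUV's occupants are better protected, so hitting
    # it "minimizes harm". That is exactly what makes the rule troubling:
    # it systematically steers toward the safer vehicle.
    options = [
        CrashOption("Volvo SUV", expected_harm=0.3),
        CrashOption("Mini Cooper", expected_harm=0.7),
    ]
    print(choose_crash_target(options).target)  # prints "Volvo SUV"

Even this toy version shows the problem Lin is pointing at: a harm-minimizing rule ends up picking its "victim" based on how crash-worthy the other vehicle is.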