Human error accounts for 9 out of 10 vehicle accidents. That alone is a compelling argument for building more autonomy into cars. After all, a robot car won't get moody or distracted, but will remain alert at all times. Moreover, it will respond quickly and consistently to dangerous situations, if programmed correctly. The problem, of course, is that it *will* respond, and you may not always be happy with the decisions it makes.
For instance, what happens if five children playing tag suddenly run in front of your robot car — should it opt for the greater good and avoid them, even if that puts you in mortal danger? Or should it hand over control and let you decide? Some would argue that such questions are moot, for the simple reason that autonomous cars may significantly reduce accidents overall. Nonetheless, these questions go to the heart of how we see ourselves in relation to the machines we use every day. They demand discussion.
Speaking of discussion, I'd love to hear your thoughts on any of these articles. I don't agree with everything they say, but they certainly got me thinking. I think they'll do the same for you.
- The Psychology Of Anthropomorphic Robots (Fast Company) — Convincing people to trust a self-driving car is surprisingly easy: just give it a cute face and a warm voice.
- The Robot Car of Tomorrow May Just Be Programmed to Hit You (WIRED) — In a situation where a robot car must hit either of two vehicles, should it hit the vehicle with the better crash rating? If so, wouldn't that penalize people for buying safer cars? A look at why examining edge cases is important in evaluating crash-avoidance algorithms.
- The Ethics of Autonomous Cars (The Atlantic) — Will your robot car know when to follow the law — and when to break it? And who gets to decide how your car will decide?
- IEET Readers Divided on Robot Cars That Sacrifice Drivers’ Lives (IEET) — In response to the above story, the Institute for Ethics and Emerging Technologies asked its readers whether a robot car should sacrifice the driver's life to save the lives of others. Not everyone was convinced.
- How to Make Driverless Cars Behave (TIME) — Did you know that Stanford’s CARS group has already developed tools to help automakers code morality into their cars? Yeah, I didn’t either. On the other hand, if driverless cars lead to far fewer accidents overall, will they even need embedded morality?
- When 'Driver' Goes the Way of 'Computer' (The Atlantic) — Many of us imagine that autonomous vehicles will look and feel a lot like today’s cars. But guess what: once the human driver is out of the picture, long-standing assumptions about how cars are designed go out the proverbial window.
- The end of driving (as we know it) (Fortune) — In Los Angeles, people drive 300 million miles every day. Now imagine if they could spend some or all of that time doing something else.
- A Path Towards More Sustainable Personal Mobility (Stanford Energy Club) — If you find the Los Angeles statistic startling, consider this: every year in the US, light-duty vehicles travel three trillion passenger miles — that's 3×10¹². Autonomous vehicles could serve as one element in a multi-pronged approach to reduce this number and help the environment.
- How Shared Vehicles Are Changing the Way We Get Around (StreetsBlog USA) — If access is more important than ownership, will fleets of sharable autonomous cars translate into fewer cars on the road? The answer is yes, according to some research.
- Driving revenues: Autonomous cars (EDN) — According to Lux Research, software accounts for a large fraction of the revenue opportunity in autonomous cars. Moreover, the car OS could be a differentiating factor for auto manufacturers.
- Autonomous Vehicles Will Bring the Rise of 'Spam Cars' (Motherboard) — Though it would be a long, long time before this ever happened, the idea isn't as far-fetched as you might think.
You can find my previous top 12 robo-car articles here.