On the surface, self-driving cars sound like a godsend. You could sit back and relax during your commute, secure in the knowledge that your computer-driven vehicle will get you to your destination safe and sound. “Safe” is the key word here, because these vehicles will supposedly be safer than any human driver. The computer will never experience road rage or fatigue, and it will have lightning-fast reflexes. So far, Google’s test vehicles have driven over 700,000 miles without incident.
However, we won’t really know how safe or effective they are until ordinary drivers begin adopting the technology on a wider basis. Many of the test drives may have been restricted to ideal conditions. In California, for instance, Google has painstakingly mapped out 2,000 miles of roadway for its testing, ignoring the other 172,000 miles of public roads. The company has essentially created the perfect lab setting to test the technology, and its self-driving cars may not be prepared for real-world conditions.
Don’t tell that to Google, though. They’re busy patting themselves on the back for driving 4,000,000 miles in a virtual matrix, and they have the gall to lobby the state in favor of virtual testing over real-world experience. According to Google’s safety director:
“Computer simulations are actually more valuable, as they allow manufacturers to test their software under far more conditions and stresses than could possibly be achieved on a test track.”

He added:
“Google wants to ensure that [the regulation] is interpreted to allow manufacturers to satisfy this requirement through computer-generated simulations.”
Unfortunately, lab conditions aren’t always reflective of reality, are they? History is filled with examples of well-meaning people creating terrible disasters, all because what was on paper or in a simulation did not hold up in the real world. Though I doubt this concerns Google very much. If their system fails to prevent an accident, they may not be liable. Car manufacturers may be able to figure out how to shed the financial responsibility for an accident by passing it on to you, the consumer.
One of the biggest problems with self-driving cars is liability: if you get into an accident, who’s to blame? Especially in a future where everyone is using automated vehicles. In that case, the culprit seems obvious: the program failed to prevent the accident, so the programmer is liable for the damages.
However, what if the manufacturers of the car and the programmers who created the software could give you a choice? One recent proposition is to let you choose how your vehicle would behave in an accident:
Car makers will obviously want to manage their risk by allowing the user to choose a policy for how the car will behave in an emergency. The user gets to choose how ethically their vehicle will behave in an emergency.
The options are many. You could be:
- Democratic and specify that everyone has equal value
- Pragmatic, so certain categories of person should take precedence, as with the kids on the crossing, for example
- Self-centered and specify that your life should be preserved above all
- Materialistic and choose the action that involves the least property damage or legal liability.
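In software terms, the proposal above amounts to a configuration setting the buyer selects once and the manufacturer’s code then acts on. As a purely hypothetical sketch (none of these names come from any real manufacturer’s software), the four presets might look like this:

```python
from enum import Enum

# Hypothetical ethics presets mirroring the four options above.
# These names and descriptions are illustrative only; no real
# manufacturer exposes an interface like this.
class CrashPolicy(Enum):
    DEMOCRATIC = "everyone has equal value"
    PRAGMATIC = "certain categories of person (e.g. children) take precedence"
    SELF_CENTERED = "the occupant's life is preserved above all"
    MATERIALISTIC = "property damage and legal liability are minimized"

def describe(policy: CrashPolicy) -> str:
    """Return a plain-language summary of the chosen preset."""
    return f"In an emergency, this vehicle will act so that {policy.value}."

print(describe(CrashPolicy.SELF_CENTERED))
```

Notice what the sketch makes plain: the buyer only picks a label. The decision logic behind each preset would still be written, weighted, and tested by the manufacturer.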
Sounds great at first. You get to choose how your car will respond in an accident, based on your own personal ethics. Remember, though: no matter what you choose, you weren’t the one who programmed the car. The manufacturer gave you several generic presets that it designed. You’re still putting your life in the hands of some unseen programmer working for a major corporation with a bottom line.
Yet somehow you’ll still be the one who’s liable. It’s like handing a gun to an untrained adult and walking away: even though it’s reckless of you to do so, you’re not legally responsible for anyone they shoot. Not so in this case. Unless you choose the most selfless option available, you run the risk of being liable for somebody else’s life or property.
Furthermore, how does insurance play into these choices? Will certain presets cause your premiums to go up? If so, lower-income drivers may be forced into choosing options that don’t really reflect their interests or ethics. And if, say, the insurance companies prefer that you choose the option that causes the least property damage, will that actually amount to fewer deaths? Will your car choose to drive through a crosswalk rather than crash into a Lamborghini?
My advice to you? Stick to using your brain to drive, at least for the first few years after these machines are rolled out. To me, it looks like these “choices” are an underhanded way of handing you all the risk and liability while the manufacturers iron out the kinks in their fancy system. And there will be kinks: never once has a new technology shown up on the scene in perfect working order. Being the first consumer to use this product is tantamount to becoming a guinea pig.
Contributed by Joshua Krause of The Daily Sheeple.
Joshua Krause is a reporter, writer, and researcher at The Daily Sheeple. He was born and raised in the Bay Area and is a freelance writer and author. You can follow Joshua’s reports on Facebook or on his personal Twitter. Joshua’s website is Strange Danger.