The Future Is Now: Self-Driving Cars


Imagine your typical Monday. You wake up, shower, dress, and step out the door to head to work. Instead of walking towards your garage, however, you make your way down to the street and check your watch; it’s 7:29am. As the minute ticks over to 7:30am, your car turns the corner and comes to a stop just in front of you, without anyone driving it. You climb in, select your workplace as your destination, and start buttering your breakfast toast as the car begins to pull away. One newspaper article and cup of coffee later, you’ve arrived at the front door of your building. You step out and head inside as your car pilots itself back towards home, waiting for other family members to need a lift. The entire time, the vehicle never requires you to operate any controls, or even be facing the front of the car for that matter.

As you read that story, you probably found yourself experiencing a conflicting mixture of awe, excitement, concern, and fear. The appealing notions of increased productivity or time to relax during your daily commute are tempered by the understandable concern for your own safety. Can I really trust a computer to get me there safely? What if a hacker takes control of my self-driving car? Who’s responsible for an accident when there’s a microchip behind the wheel?

Let's start with insurance. With self-driving cars being safer overall, you're looking at lower accident liabilities and insurance costs, which benefit both insurer and client. Accidents will occur less frequently, will be less severe, and will thus be less expensive. With many of these cars driving themselves to pick up their owners, it stands to reason that crashes would involve fewer passengers in general, further reducing the number of injury claims. This shiny coin does have a flip side, though: experts expect a drastic change in the way society uses automobiles, and thus expect the number of auto insurance policies to plummet. For example, when your car can drive itself to pick you up, it becomes much easier for multiple people to share a vehicle, and when one family can easily share one car, we're sure to see a decline in the number of vehicles on the road, which means fewer policies to write. Delivery truck companies will need fewer trucks, and thus fewer policies. From private ownership to commercial fleets, there will simply be sharply reduced demand for automobiles. There's even speculation that mandatory auto insurance could be dropped as self-driving cars become more and more integrated. [1]

Driverless cars have ramifications beyond insurance claims, as the same forces that allow shared ownership will also affect delivery vehicles and public transportation. One mining company, the Rio Tinto Group, already uses self-driving trucks in its operations, and one study found that "a fleet of 9,000 driverless taxis could serve all of Manhattan at about 40 cents per mile (compared to about $4-6 per mile now)." [2] Why take a train when you can sleep as your car travels? Why take a bus when a fleet of driverless cars can get everyone there more cheaply and cleanly? Having a robot at the wheel means time and energy are never wasted. In the coming decades we are sure to see a dramatic change in the transportation workforce as drivers become unnecessary (or even a liability), and fewer vehicles are needed to accomplish the same tasks.

To help us understand the effects of this revolution, we must first familiarize ourselves with the technology itself. Many leaders in this field, like Google and Tesla, have been working on this technology for years and are gearing up for launches between 2017 and 2020. With this new tech so close on the horizon, and so much transparency in the field, we fortunately have a lot of information available to help answer our pressing questions.

First, let's look at the safety of a computer-controlled vehicle at a core level. Is the idea itself inherently more dangerous, or safer, than having a human behind the wheel? According to accident reports made public by Google, after more than 1.8 million miles of test driving there were 13 minor fender benders, none of which were caused by the self-driving vehicle. [3] In fact, 90% of accidents today are caused by driver error [4], not by mechanical or software failure. Considering how many vehicles on the road already feature antilock brakes, crash-prevention radar, electronic stability control, and self-parking systems, it seems clear that human mistakes are to blame far more often than computer systems. With over 32,000 car-crash fatalities in the USA each year [5], that's roughly 28,800 fatalities that could be prevented if we removed the possibility of human error. Referring again to Google's accident reports, which cover driving on public roadways, it appears the cars can be programmed to operate safely and reliably, even at this early stage.
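As a back-of-the-envelope check, the 28,800 figure follows directly from the two statistics cited above (the article's own numbers, not independent data). A minimal sketch of the arithmetic:

```python
# Rough estimate of human-error fatalities, using the article's cited figures:
#   - ~32,000 US car-crash fatalities per year [5]
#   - ~90% of accidents attributed to driver error [4]
# This assumes the 90% accident share applies equally to fatal crashes.

annual_fatalities = 32_000
human_error_share = 0.90

preventable = annual_fatalities * human_error_share
print(f"Potentially preventable fatalities per year: {preventable:,.0f}")
# → Potentially preventable fatalities per year: 28,800
```

Of course, this is an upper bound: it assumes self-driving systems eliminate every human-error crash without introducing failures of their own.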

Okay, so we can trust the systems in the vehicles, but can we trust other people, specifically hackers? This is one of the biggest fears surrounding the self-driving car. With a human-piloted vehicle, there's no way to hack someone's brain, or to hack a mechanical lever like the accelerator. If everything is controlled by a computer, the fear is that a hacker could take complete control of your vehicle, turning your car into a missile or a busy highway into utter chaos. Jonathan Petit, a security scientist at Security Innovation, used a simple $60 setup to trick a self-driving car into thinking it was surrounded by obstacles, causing it to stop completely. [6] However, Petit says there are ways to solve this: systems can be put in place to cross-check data and prevent spoofing. Even so, opinions differ wildly on this subject. Some experts think the strict safety analysis these cars will undergo will force manufacturers to build very strong defenses against hacking, while others feel it's impossible to make these cars "hack-proof", or close enough to it to be deemed safe. [7]

Ultimately, with big money on the line for manufacturers it’s in their best interest to make their cars as safe as possible. No company wants the public relations nightmare of having their entire fleet of cars hacked at once, or to be held responsible for accidents caused by hacking. If they will be held accountable financially in the case of hacking or malfunction, then they’ll naturally strive to strengthen their product if only for profit’s sake.

This brings me to the question of liability. As we asked before, who is responsible when a self-driving car causes an accident? The precedent being set here is a reassuringly simple one: the law already sides with consumers when a failure occurs and assumes that the vehicle as a whole should be safe, regardless of which component caused the failure. Companies like Volvo, Google, and Mercedes-Benz are accepting full liability when their vehicles are in autonomous mode. "If we made a mistake in designing the brakes or writing the software, it is not reasonable to put the liability on the customer," says Erik Coelingh, senior technical leader for safety and driver support technologies at Volvo. [8] Just as with brakes or airbags, many manufacturers are taking responsibility for their software, and the current liability framework already holds them accountable. This obviously does not apply when another driver caused the accident; that situation would be handled the same way as if both cars were piloted by humans. As Google's accident reports show, the computer keeps perfectly accurate records of when it was engaged and what actions it took while in control, so it would be very difficult for a drunk driver to blame the autopilot for an accident. Unraveling the cause of a crash would actually become far easier, as a simple download of the data logs would give officials exactly what they needed, without human bias.

Still, the fear of being harmed by a machine, or of not having control in a potentially dangerous situation, nags at the back of your mind when you think of riding the highway in a self-driving car. As humans, we want to be in control, and we are naturally afraid of handing that control to something inanimate. Brad Templeton, Track Chair for Computing at Singularity University, says it well in a video I would highly recommend watching: "The social question is, if you don't like being killed by robots, you'd rather be killed by drunks, because that's what's happening today. 40% of the fatalities on the roads here have drinking involved with them, and robots for better or worse very rarely drink. As a society it's something you're very scared of, but is actually much safer." [9]

With the reality of the self-driving car mere years away, it's important that we assess it honestly as the developers of this technology stay transparent. From what we can see now (and the technology will only improve with time), the self-driving car is inherently safer because there isn't a human in control, and in instances where an accident does happen, there is a legal framework already in place that protects the user and clearly identifies the responsible parties. I'm sure I'll take a deep, steadying breath before I let go of the steering wheel of a self-driving car for the first time, but I imagine people felt the same before they pressed the accelerator of a Model T, and we've come so far since then.



Shawn Steele is a freelance writer for Insured Solutions as well as a musician. He currently lives in Louisville, KY with his wife.

