Self-Driving Cars

Imagine for a moment what your daily commute would be like if your car could drive itself. You’d get in, tell your car where you want to go, and then sit back and let it take you there. You could read, work, eat, talk, text, or even sleep during the journey. The computer driving your car would automatically choose the best available route and pilot the car to maximize fuel economy. And if everyone had a self-driving car, maneuvers that require social negotiation, such as merging or navigating an all-way stop, would be accomplished smoothly and quickly, since the computers controlling the cars would all follow the same rules, or might even communicate with each other.

It sounds utterly utopian, doesn’t it? Of course, visions like these typically revel in the positives while ignoring the possible negative consequences, but that is often necessary in the early stages to capture the imagination. It’s only later that the messy details rise to the surface, and we as a culture have to conduct a frank discussion about decidedly nontechnical things like safety, responsibility, and liability.

A case in point is the promotion of Google’s self-driving car prototype. This week Google released a new video, picked up by a few news outlets, showing a legally blind man using the car to get a taco and pick up his dry cleaning. Here’s the video:

Although Google is famous for their April Fool’s jokes, this isn’t one of them. Google has been testing their self-driving car for a while now, and this latest video is an attempt to show one possible use for such a product: restoring mobility and independence to those who can no longer drive. But this is really only the tip of the iceberg. What the creators of Google’s self-driving car want to do is far more profound. They want to revolutionize transportation for everyone. This video explains:

In many ways, the idea of a driverless transportation system is not really new. Various forms of driverless subways are already in operation in many parts of the world. In the 1970s, the French attempted to build a driverless transportation system that featured individual cars that could join together to form quasi-trains when they reached a major arterial (see Latour’s book Aramis, or the Love of Technology). One can now ride fully automated “pod” cars between terminals at London’s Heathrow Airport. And a few high-end luxury vehicles already feature the ability to parallel park automatically.

While Google’s self-driving car takes this vision much further, there is a basic assumption that underlies all of these projects: humans are fallible, so dangerous things like driving should be given over to computerized automation, which is assumed to be perfect. As the rhetoric goes, computers don’t get tired or distracted, and they always make the logical choice.

But this, of course, assumes that the humans who program those computers and design those automated systems do not make any mistakes either. Computers don’t do things on their own—they follow the explicit instructions given to them by a human programmer. Anyone who has worked in the software industry knows that programmers are just as fallible as anyone else. Programmers get tired, distracted, and make mistakes, just like drivers do. Even when the programmer is concentrating fully, it’s sometimes impossible to see all the ramifications of a small change made to an obscure part of the code. Even if you get all the code right, there’s no guarantee that the connection between the computerized controller and the actual mechanics won’t break down or malfunction. And even if all that is working properly, one still has to worry about purposeful malicious behavior; consider for a minute what would happen if someone managed to hack into a self-driving car’s control system.

When I was in graduate school, I participated in a research network that was investigating ways to make computer-based systems highly dependable. Some researchers reported on ways in which actual systems had failed in practice, helping us learn from our mistakes. Others studied systems that had managed to achieve a remarkable level of dependability, trying to discern what factors in particular led to that achievement. What became obvious rather quickly was that dependability required far more than just good technique and well-engineered artifacts. It also required a highly disciplined social organization to operate that technical infrastructure, keep it in good repair, and make sure it does what it’s supposed to do.

When I apply this to self-driving cars, it raises a number of questions for me. Who will verify that the control systems are correctly designed and implemented? If problems are detected after manufacture, how will they be updated, and how will those updates be tested? When the system starts to fail, whether due to software problems or mechanical issues, will it fail gracefully, and will drivers know how, and be ready, to resume control? And when the first accident occurs involving a self-driven car, who will be found liable? The driver? The manufacturer? The software developers?
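
To make the “fail gracefully” question a bit more concrete, here is a minimal sketch, in Python, of the kind of watchdog-and-handover logic such a controller might contain. Everything in it is an assumption made for illustration: the class, the sensor and alert interfaces, and the timeout values are invented, not taken from any real self-driving system.

```python
import time

# A minimal, purely hypothetical sketch of "graceful failure" in an
# automated driving controller. All class, sensor, and actuator names
# are invented for illustration; no real system is being described.

SENSOR_TIMEOUT = 0.2   # seconds without fresh data before a sensor is distrusted
HANDOVER_GRACE = 10.0  # seconds the human driver gets to take the wheel


class Autopilot:
    def __init__(self, sensors, alerts, vehicle):
        self.sensors = sensors   # objects exposing last_update_time()
        self.alerts = alerts     # driver-alert interface (chime, dash light)
        self.vehicle = vehicle   # actuator interface for fallback maneuvers
        self.engaged = True

    def healthy(self):
        """Treat any sensor that hasn't reported recently as failed."""
        now = time.monotonic()
        return all(now - s.last_update_time() < SENSOR_TIMEOUT
                   for s in self.sensors)

    def drive_normally(self):
        """Placeholder for the usual perception/planning/control cycle."""

    def hold_lane_at_reduced_speed(self):
        """Placeholder 'limp mode': keep the lane while shedding speed."""

    def step(self):
        if self.healthy():
            self.drive_normally()
            return
        # A fault was detected. Rather than silently giving up, alert the
        # driver and keep limping along while waiting for a takeover.
        self.alerts.request_driver_takeover()
        deadline = time.monotonic() + HANDOVER_GRACE
        while time.monotonic() < deadline:
            if self.alerts.driver_has_taken_control():
                self.engaged = False   # the human is driving now
                return
            self.hold_lane_at_reduced_speed()
        # The driver never responded: fall back to a minimal-risk
        # maneuver rather than continuing to drive blind.
        self.vehicle.pull_over_and_stop()
        self.engaged = False
```

Even this toy version exposes the hard part: someone has to pick those timeout values, decide what “limp mode” means at highway speed, and accept liability if the minimal-risk maneuver itself causes a collision.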

I’m not saying that these problems are insurmountable, only that we will be forced to consider them before any kind of widespread revolution in transport can occur. The airline industry has traveled this road before, and the auto industry will no doubt learn from their mistakes and achievements. In the meantime, buckle up, and watch out for those self-driving cars!

Update: Although the self-driving Prius is real, Google’s April Fool’s day joke this year takes it to a whole new level: a self-driving NASCAR.
