By Lambert Strether of Corrente.
Spoiler alert: Pretty badly. Let’s start with the hypester’s hypester, Elon Musk:
Musk starts cranking out TED Talk-ready sound bites. “We know exactly what to do, and we’ll be there in a few years. We’ll take autonomous cars for granted in quite a short period of time,” he says matter-of-factly. “You’ll be able to tell your car, ‘Take me home,’ go here, go there, or anything, and it’ll just do it. It’ll be an order of magnitude safer than a person. In the distant future, people may outlaw driving cars because it’s too dangerous. You can’t have a person driving a two-ton death machine.”
Then there’s John Zimmer, co-founder of Lyft:
Autonomous vehicle fleets will quickly become widespread and will account for the majority of Lyft rides within 5 years. Private car ownership will all-but end in major U.S. cities.
And Travis Kalanick of Uber, interviewed by Biz Carson:
[KALANICK:] I think it starts with understanding that the world is going to go self-driving and autonomous. Because, well, a million fewer people are going to die a year. Traffic in all cities will be gone. Significantly reduced pollution and trillions of hours will be given back to people — quality of life goes way up. Once you go, ‘All right, there’s a lot of upsides there’ and you have folks like the folks in Mountain View, [California,] a few different companies working hard on this problem, this thing is going to happen.
[CARSON]: How soon will self-driving cars realistically be a significant portion of Uber’s fleet?
[KALANICK:] That is the trillion-dollar question, and I wish I had an answer for you on that one, but I don’t. Right? I have to make sure that I’m ready when it’s ready or that I’m making it ready. So, I have to be tied for first at the least.
So, three Silicon Valley innovators talking their books. And the hype isn’t confined to Silicon Valley:
Ford CEO Mark Fields said two weeks ago that Ford will have a fleet of completely autonomous taxis operating in an unnamed city by 2021.
Although GM is slightly more sensible:
[Dan Ammann, GM’s president] sees self-driving cars becoming a reality as “a series of developments, opening up to broader and broader use as we go.” He would not name a specific timeline. (“There’s no single point answer to that question,” he said.)
(We’ll see the framework that Ammann’s thinking fits into in a moment.)
Let’s pause for a moment to get the taste of hype out of our mouths. In the last post on self-driving cars (autonomous vehicles), I asked whether they could even be marketed, and thought not; people want others to have cars whose ethical algorithms might sacrifice passengers for the greater good, but want cars that protect passengers at all costs for themselves. In this post, I’m trying to get a reading on whether the tech is really there. In future posts, I’ll look at government regulations, what the business models for selling self-driving cars might be (if indeed they are to be sold, as opposed to being rented), effects on political economy (income inequality, public works, insurance, jobs), incremental approaches (trucks on highways first), and social benefits (for example, lives saved).
Back to the tech. Everybody loves a taxonomy, and as it turns out, there are two of them for self-driving cars: the National Highway Traffic Safety Administration’s (NHTSA) and the Society of Automotive Engineers’ (SAE). The NHTSA’s levels run 0-4 (five in total) and the SAE’s run 0-5 (six in total). Pleasingly, therefore, if you hear “level 4,” you don’t automatically know which standard is meant. But if you hear “level 5,” you know that the SAE is meant (even though the NHTSA has five levels). In what follows, I’m going to use the SAE’s, because it draws a useful distinction between High Automation (level 4) and Full Automation (level 5). Here’s a handy chart of the levels:
And here’s a walk through the levels from Steven Shladover of Berkeley’s Partners for Advanced Transportation Technology. I’ve added the SAE level names in bold:
[Driver Assistance]: So systems at Level 1 are widely available on cars today.
[Partial Automation]: Level 2 systems are available on a few high-end cars; they’ll do automatic steering in the lane on a well-marked limited access highway and they’ll do car following. So they’ll follow the speed of the car in front of them. The driver still has to be monitoring the environment for any hazards or for any failures of the system and be prepared to take over immediately.
[Conditional Automation]: Level 3 is now where the technology builds in a few seconds of margin so that the driver may not have to take over immediately but maybe within, say, 5 seconds after a failure has occurred.
So in theory you could be reading a book or you could be playing a video game or surfing the web on your tablet or doing something else like that while driving and then the system has a problem and it beeps you and it says “You need to take over” and you have to now turn your attention back to driving.
That level is somewhat controversial in the industry because there’s real doubt about whether it’s practical for a driver to shift their attention from the other thing that they’re doing to come back to the driving task under what’s potentially an emergency condition.
[High Automation]: Level 4 gets more interesting because this is now where the system has enough internal redundancy that it can take over from itself when it has a fault.
So it has multiple layers of capability, and it could allow the driver to, for example, fall asleep while driving on the highway for a long distance trip.
So you’re going up and down I-5 from one end of a state to the other, you could potentially catch up on your sleep as long as you’re still on I-5.
But if you’re going to get off I-5 then you would have to get re-engaged as you get towards your destination.
That could also be a low-speed shuttle that would operate within a confined area, like a retirement community or a resort or shopping complex, where the interactions with other vehicles might be limited so that that helps keep it safe.
[Full Automation]: Level 5 is where you get to the automated taxi that can pick you up from any origin or take you to any destination or they could reposition a shared vehicle. If you’re in a car sharing mode of operation, you want to reposition a vehicle to where somebody needs it. That needs Level 5.
Note that Musk, Zimmer, and Kalanick all treat Level 5 (Full Automation) as a done deal. That’s what they’re selling: Musk “in a few years,” Zimmer “within 5 years,” Kalanick “What I know is that I can’t be wrong,” and Ford CEO Mark Fields “by 2021.” Shladover disagrees:
Contrary to Musk and many of the most prominent advocates of autonomous cars, Shladover insists that so-called Level 5 vehicles—robocars that require no human input—are not on the horizon; he says he tells his students the same thing. “Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting. Nobody even has the ability to verify and validate the software. I estimate that the challenge of fully automated cars is 10 orders of magnitude more complicated than [fully automated] commercial aviation.”
Martial Hebert, director of the Carnegie Mellon University Robotics Institute, concurs:
With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It’s one of those problems where it’s easy to get to the first 80 percent, but it’s incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it’s actually not that hard. But guess what? To solve the real problem, for you or me to buy a car that can drive autonomously from point A to point B—[…]. There are fundamental problems that need to be solved.
Finally, I advise readers to read Mary Cummings’s Senate testimony of March 15, 2016 in full (like Shladover and Hebert, Cummings has devoted her career to tackling significant problems in robotics). Here’s Cummings on some of the technical issues:
While I enthusiastically support the research, development, and testing of self-driving cars, as human limitations and the propensity for distraction are real threats on the road, I am decidedly less optimistic about what I perceive to be a rush to field systems that are absolutely not ready for widespread deployment, and certainly not ready for humans to be completely taken out of the driver’s seat.
Here are a few scenarios that highlight limitations of current self-driving car technologies: The first is operation in bad weather including standing water on roadways, drizzling rain, sudden downpours, and snow. These limitations will be especially problematic when coupled with the inability of self-driving cars to follow a traffic policeman’s gestures.
Another major problem with self-driving cars is their vulnerability to malevolent or even prankster intent. Self-driving car cyberphysical security issues are real, and will have to be addressed before any widespread deployment of this technology occurs. For example, it is relatively easy to spoof the GPS (Global Positioning System) of self-driving vehicles, which involves hacking into their systems and guiding them off course. Without proper security systems in place, it is feasible that people could commandeer self-driving vehicles (both in the air and on the ground) to do their bidding, which could be malicious or simply just for the thrill and sport of it.
And while such hacking represents a worst-case scenario, there are many other potentially disruptive problems to be considered. It is not uncommon in many parts of the country for people to drive with GPS jammers in their trunks to make sure no one knows where they are, which is very disruptive to other nearby cars relying on GPS. Additionally, recent research has shown that a $60 laser device can trick self-driving cars into seeing objects that aren’t there. Moreover, we know that people, including bicyclists, pedestrians and other drivers, could and will attempt to game self-driving cars, in effect trying to elicit or prevent various behaviors in attempts to get ahead of the cars or simply to have fun. Lastly, privacy and control of personal data is also going to be a major point of contention. These cars carry cameras that look both in and outside the car, and will transmit these images and telemetry data in real time, including where you are going and your driving habits. Who has access to this data, whether it is secure, and whether it can be used for other commercial or government purposes has yet to be addressed.
(I really like the way Cummings’s mind works; several nasty twists of thought.) And then there’s the reality of insufficient testing beneath the hype:
In my opinion, the self-driving car community is woefully deficient in its testing and evaluation programs (or at least in the dissemination of their test plans and data), with no leadership that notionally should be provided by NHTSA (National Highway Traffic Safety Administration). Google X has advertised that its cars have driven 2 million miles accident free, and while I applaud this achievement, New York taxi cabs drive two million miles in a day and a half. This 2 million mile assertion is indicative of a larger problem in robotics, especially in self-driving cars and drones, where demonstrations are substituted for rigorous testing.
And the reality of testing results that are not disclosed:
But there are many known knowns in self-driving cars that we are absolutely aware of that are not being addressed or tested (or test results published) in a principled and rigorous manner that would be expected in similar transportation settings. For example, the FAA (Federal Aviation Administration) has clear certification processes for aircraft software, and we would never let commercial aircraft execute automatic landings without verifiable test evidence, approved by the FAA. To this end, any certification of self-driving cars should not be possible until manufacturers provide greater transparency and disclose how they are testing their cars. Moreover, they should make such data publicly available for expert validation.
Verdict: Self-driving cars are not ready. Nowhere near ready, despite what Musk, Zimmer, and Kalanick are saying.
Returning, then, to the question of the NHTSA/SAE levels. One critique:
Numbered levels strongly suggest an ordering or hierarchy to a technology that almost surely will not evolve in the manner laid out. The levels create an expectation of evolution in this direction and also an expectation that each level is a superset of the one below it. Regulators, press and the public are led to expect this progression by the levels, and may even write rules that demand it.
The levels, in other words, imply a teleology, with inevitable progress to Level 5. But as we’ve seen, it ain’t so. What is far more likely is incremental progress in automotive computing (which is not at all the same as building an autonomous vehicle). For example:
As self-driving technologies mature, they are gradually assimilated into auto production. Adaptive cruise control and parking assist are commercially available now. In the 2020s, cars will have traffic jam assistance and virtual valet, a feature that enables cars to park themselves via a smartphone app.
“Features of automated self-driving cars will appear incrementally, with vehicles eventually driving themselves. This will make the cars affordable and encourage public adoption,” says Raj Rajkumar, a professor of electrical and computer engineering and the co-director of the GM Collaborative Research Lab at Carnegie Mellon.
And the incremental approach, as we have seen, is the one GM, as opposed to Ford, is taking. One last point, from Mary Cummings and Jason Ryan:
The majority of the promises and benefits will likely only be realized when all cars are equipped with these advanced technologies, enabling NHTSA’s Level 4 [SAE’s Level 5] of fully autonomous driving.
In other words, Kalanick’s “millions of lives” won’t get saved in our lifetimes, because Level 5 isn’t happening any time soon; Shladover’s Level 4 “low-speed shuttles” sound like a solid little product, but not exactly what Kalanick has in mind, eh? What that implies is that self-driving car technology companies should be valued a lot more like GM or Delco than at the stratospheric prices of Silicon Valley unicorns.
NOTE The nice thing about standards is that there are so many of them!