When I rode in an autonomous vehicle four years ago, I was struck by two things.
The first was the pit-of-your-stomach gut flutter I felt when the packed sedan propelled itself onto a busy San Jose, Calif., thoroughfare. It was the same out-of-control feeling I get when a roller-coaster I’d just been locked into jerks into motion, lurching skyward.
And the second thing was just how quickly we passengers went from petrified of the tech to bored with it, in search of conversations and other more interesting things to occupy our attention.
We were literally driven to distraction. Which is a distinctly human shortcoming.
Computers, on the other hand, aren’t cursed with that affliction. They are uniquely qualified to scan the road unendingly, with the same piercingly efficient intensity. No matter how long and uneventful the ride, no matter how infrequent the required response might be, computers are always ready. More important, they’re always the same amount of ready. They don’t get tired. They don’t get distracted.
And yet, for all their unwavering capacity for focus, self-driving vehicles mischaracterize oncoming traffic conditions with a regularity concerning enough to prompt a federal investigation. In August, an office of the National Highway Traffic Safety Administration decided to investigate 11 crashes where Tesla vehicles drove themselves into first-responder scenes, piling a new emergency onto one already being cleaned up.
Had humans been driving during any of those crashes, they would have instantly characterized the situation, and responded accordingly. Provided, of course, that they were paying attention.
This week the NHTSA demanded the same data Tesla must provide from the U.S. offices of a dozen more automakers with vehicles that “control both steering and braking/accelerating simultaneously under some circumstances.” The automakers: BMW, General Motors, Honda, Hyundai, Kia, Mercedes-Benz, Nissan, Stellantis, Toyota and Volkswagen.
On the surface, the data would help the feds gauge how Tesla’s Autopilot compares with the rest of the industry. Or the requests could signal the start of a broader investigation into the state of self-driving technology.
I hope it’s the latter. Self-driving technology continues to improve. But it will never be as good as we are at some things. Because the models are built on flawed goals. So the technology needs a reset before companies drive AI into new areas, like the potentially uber-profitable robo-taxi business.
Never hire a human to do the work of a computer — or vice versa
At first blush, combining AI’s boundless attention spans with our ability to quickly and accurately assess new situations has the makings of a better, safer next-generation driving experience.
Except that the industry is headed in the opposite direction. Rather than marry our strengths, developers are determined to teach AI how to better spot hazards — and demand that we pay more attention.
See, developers are approaching autonomy with a brute-force mindset that if enough vehicles packed with mini deep-learning datacenters drive enough miles on enough roads, eventually the technology will be ready to recognize anything. Adding to this impossibly Sisyphean task is the fact that everyone isn’t working together on it. Rather, there are multiple duplicative efforts underway, each driven by a competitive desire to dominate road smarts.
But alone or together, they won’t succeed — can’t succeed — for the painfully obvious reason that there will always be some previously unlogged combination of flashlight beam, police tape reflection, siren and moon phase that AI might mistake for an all-clear intersection. And those in the autonomous driving world know that. Because their goal isn’t to eliminate accidents, but to avoid them at least as often as humans do.
That’s no way to run a disruption.
Perhaps the most ludicrous cog in the whole autonomy gearbox is that the law of last clear chance still rests on our shoulders. Why don’t you give us responsibility for activating the airbags on impact while you’re at it? We’re not very good at keeping our eyes on the road when we’re actually operating the vehicle. Take that away and you’ve effectively plopped our brains into the passenger seat.
We have this head-scratching arrangement because, by industry standards, Tesla’s Autopilot is not advanced enough to assume full responsibility. On the spectrum between manual navigation and Level 5 — industry parlance for full-on autonomy — Autopilot is only Level 2 technology, which is defined as a system with more than one advanced driver assistance feature, like adaptive cruise control, brake assist and lane centering.
Given its place in the hierarchy, Level 2 implies that Autopilot is a long way from autonomous. But when you get in a car that can speed up, slow down and stay in its lane, it sure feels like it’s doing the driving. Plus, it’s called Autopilot, right? Right?
Rethinking autonomous vehicles
With a few tweaks to the decision-making hierarchy, autonomous-vehicle technology and humans have the potential to make for a potent driving combination. Cede control to the AI for long, boring straightaways. Don’t force us to keep our hands on the wheel. Let us chat, laugh, even watch videos to keep us alert and entertained.
AI has the potential to outperform us on boring tasks where the constellation of possible hazards is far more manageable, like stop-and-go traffic and parallel parking. With the autos directly in front and behind serving as situational guardrails, it’s hard to imagine, for example, that AI would ever take us through a team of first responders.
All of this is to say that developers should let the autonomy manage the monotony. And engage us at the slightest change in the landscape. Slow down or even pull over until the human has focused and taken control — which, by the way, shouldn’t take long because we’re typically at our best when something kicks us into self-preservation mode.
We’ll drive until we’ve safely navigated past the uncertainty. Then the AI can take over again. And we can get back to our Twitter feed.
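For the technically inclined, here is a minimal sketch of that handoff loop in Python. Everything in it is hypothetical and invented purely to illustrate the arrangement described above: the Driver states, the choose_driver function and the scene_is_routine flag are not any automaker's actual software or API.

```python
# A minimal, hypothetical sketch of the proposed handoff loop.
# The AI keeps the wheel only while the scene looks routine; the
# moment anything in the landscape changes, it hands off to the
# human, and it takes over again once the uncertainty has passed.

from enum import Enum, auto


class Driver(Enum):
    AI = auto()      # autonomy manages the monotony
    HUMAN = auto()   # human handles anything the AI is unsure about


def choose_driver(current: Driver, scene_is_routine: bool) -> Driver:
    """Decide who should be driving for the next moment."""
    if current is Driver.AI and not scene_is_routine:
        return Driver.HUMAN   # slow down, pull over if needed, wake the human
    if current is Driver.HUMAN and scene_is_routine:
        return Driver.AI      # the uncertainty is behind us; autonomy resumes
    return current            # otherwise, no handoff


# Example: a routine stretch, then a first-responder scene, then routine again.
who = Driver.AI
for routine in (True, True, False, False, True):
    who = choose_driver(who, routine)
    print(who.name)   # prints: AI, AI, HUMAN, HUMAN, AI
```

The point of the sketch is the division of labor, not the code itself: the machine never has to recognize a first-responder scene correctly, only to notice that the scene is no longer routine and get out of the way.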
With this arrangement, AI has the potential to make motoring for long stretches far more palatable, because we’d only be called on to drive for short intervals. Expect a lot of false positives — that is, situations in which the AI engaged us over something that turned out to be nothing, like a crumpled sign that vaguely resembles a tall, thin, bent-over pedestrian. That’s fine, because at the same time we’d be closing the door on false negatives.
Such a construct would obliterate many robo-taxi business models, which must eliminate human drivers to slash costs. Companies and municipalities could potentially limit the risk of solo AI drivers by restricting robo-taxi service to, say, specific streets and routes on sunny days. But while you can limit the potential for hazard, you can’t eliminate it. Because as long as AI is assessing driving hazards, it will miss some things that we would’ve caught.
AI or no, there always will be accidents. But that’s no reason to accept technology that introduces a whole new class of crashes as it cuts down on others.
And this isn’t some unproven theory. There’s precedent for leveraging AI and human strengths to advance the state of the art in myriad other areas, from radiology to the Forest Service.
Hopefully, developers of autonomous driving technology are paying attention. But it wouldn’t surprise me if they’re not. They are human, after all. And attention spans are not our best attribute. Not by a long shot.
Mike Feibus is president and principal analyst of FeibusTech, a market research and consulting firm. Reach him at [email protected]. Follow him on Twitter @MikeFeibus. He does not directly own shares of any companies mentioned in this column.