Deadly Hand-off to Human Drivers in AI Self-Driving Cars, Gonna Hurt
Dr. Lance B. Eliot, AI Insider
For AI self-driving cars that are less than fully autonomous, the human driver is considered to be co-sharing the driving task with the AI system. Some human drivers misunderstand this co-sharing arrangement. You’ve likely seen human drivers who take their hands and feet away from the driving controls, maybe even sticking their head out the window, all of which means that if the AI suddenly opts to hand the driving back over to the human, it could be curtains for all.
Why would it be deadly? Because the time needed for you to put your head back into the game, along with positioning your hands and feet to drive the car, and then figure out what is happening, including discovering on your own why the AI has surprisingly tossed the controls back over to you, well, you won’t likely have any time left to decide what to do, let alone take the required driving action to avert disaster.
The odds are that those precious few seconds spent getting your body and noggin into the traffic situation will use up whatever time you would have had to do something useful to avoid a pending accident. Instead, the self-driving car will be proceeding along like a bullet or missile, heading toward an untoward situation that the AI cannot handle. And if the AI cannot handle the matter, it is probably a scary moment, one that requires great driving prowess by a human driver.
Except that the human driver wasn’t likely paying attention to the roadway, the human driver wasn’t in a posture to immediately take over the controls, the human driver might not be informed by the AI of what the predicament consists of, and the human driver has now gotten the proverbial hot potato.
A real hot potato. A multi-ton car, barreling into trouble, with the chances of injuring or killing the human driver, injuring or killing the passengers in the car, and injuring or killing any innocent bystanders such as pedestrians or drivers or occupants of other cars.
We are just now seeing the emergence of Level 3 self-driving cars. Tesla and its Autopilot capability are considered Level 2. Level 3 offers conditional automation, a step beyond the Advanced Driver Assistance Systems (ADAS) of Levels 1 and 2, and yet it still requires that a licensed human driver be in the driving position, meaning that a Level 3 car is not truly a self-driving car. Only when you get to Level 5 can you say that a self-driving car is really self-driving (the Society of Automotive Engineers, SAE, has established a scale delineating the various levels).
The increased capabilities of a Level 3 over a Level 2 are perhaps to be heralded, except that it is almost like quicksand. With a Level 2, the human driver tends to realize that the car is not truly self-driving, and therefore pretty much keeps themselves in the loop. With a Level 3, human drivers are going to be teased into believing that the car is a true self-driving car. This means those human drivers will let their guard further down, become complacent, and get caught in the pickle I’ve mentioned above.
Example of a Child Darting Into The Street
Let’s consider a driving example to illustrate the matter.
You are driving your car and suddenly a child darts into the street from the sidewalk. You see the child out of the corner of your eye, your mental processes calculate that the car could hit the child, and you realize you should make an evasive move.
Your mind races as you try to decide whether you should slam on the brakes, or swerve away, or both, or maybe instead try to speed up and get past the child before your car intersects with him. As your mind weighs each option, your hands grab the steering wheel with a death-like grip and your foot hovers above the accelerator and brake pedals, awaiting a command from your mind. Finally, after what seems like an eternity, you push mightily on the brakes and come to a halt within inches of the child. Everyone is okay, but it was scary for both driver and child.
How long did the above scenario take to play out?
Though it took several sentences to describe and thus might seem like it took forever, the reality is that the whole situation took just a few seconds of time. Terrifying time. Crucial time.
If you had been distracted, perhaps holding your cellphone in your hand and trying to text a message to order a pizza for dinner, you would have had even less time to react. Driving a car involves lots of relatively boring time, such as cruising on the freeway when there is no other traffic, but it also involves moments of sheer terror, split-second decision making, and hand-foot coordination.
This ability of a human to react to a driving situation is an essential element of AI-based self-driving cars that are not fully autonomous, i.e., self-driving cars that rely upon or co-share the driving with human drivers. For self-driving cars that expect the human driver to be ready to take over the controls, the developers of such cars had better be thinking clearly about the Human-Computer Interaction (HCI) or Human-Machine Interface (HMI) factors involved at the boundary between human drivers and the AI automation driving the car.
Suppose that an AI-automation was driving the car in the above child-darts-into-street scenario.
Perhaps the AI-automation is “smart” enough to make a decision and avoid hitting the child. But suppose the AI-automation determines that it is unable to find a solution that avoids hitting the child, and so it opts to hand over the controls to the human driver. Depending upon how much time the AI-automation has already consumed, the time left over for the human driver to comprehend the situation and then react might be below, maybe even far below, the amount of time needed for the human mental calculations and hand-foot processes to be performed.
An informative study by Alexander Eriksson and Neville Stanton at the University of Southampton sheds light on the kinds of reaction times we’re talking about (their study was published in Human Factors: The Journal of the Human Factors and Ergonomics Society). They used a car simulator and had 26 participants (10 female, 16 male; ranging in age from 20 to 52, with an average of 10.57 years of driving experience) try to serve as the human driver for a self-driving car.
In this capacity, the experiment’s subjects sat awaiting the self-driving car to hand over control to them, and they then had to react accordingly. The simulation pretended that the car was going 70 miles per hour, meaning that for every second of reaction time the car would move ahead by about 102 feet.
They set up the scenario with two situations: in one, the human driver was focused on the self-driving car and the roadway; in the second, they asked the human driver to read passages from National Geographic (now that’s rather dry reading!).
In the non-distracted situation, the humans had a median reaction time of 4.56 seconds, while in the distracted situation it was 6.06 seconds. Though it is expected that the reaction time for the distracted situation would be longer, it is also somewhat misleading to focus solely on the reaction times. I say this because the reaction time was how long it took for them to take back control of the car. Meanwhile, the time it took for them to take some kind of action ranged from 1.9 seconds to 25.7 seconds.
Let me repeat that last important point.
Taking back control of a self-driving car might be relatively quick, but taking the right action might take a lot longer. Setting aside the right action, though, notice that it took about 5–6 seconds just to take over manual control of the car. Those are precious seconds that could spell life-or-death (and a distance of roughly 500–600 feet at the 70 mph speed), since a collision or incident might happen within that time frame (or distance), or the time remaining before a collision might be too short for you to avert the danger. We should also keep in mind that this was only in a simulated car.
The participants were likely much more attentive than they would be in a real car. They knew they were there for a driving test of some kind, and so they were on-alert in a manner that the everyday driver is likely not. All in all, I’d be willing to bet that any similar study of driving on real roads would find much longer reaction times.
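To put those reaction times into perspective as distances, here is a quick back-of-the-envelope sketch (in Python, using the study’s simulated 70 mph speed; the function name is mine, purely for illustration):

```python
# Distance a car covers while the human is still regaining control,
# assuming constant speed (no braking) during the takeover.
MPH_TO_FPS = 5280 / 3600  # feet-per-second per mile-per-hour

def distance_traveled_ft(speed_mph: float, seconds: float) -> float:
    """Feet covered at a constant speed over the given number of seconds."""
    return speed_mph * MPH_TO_FPS * seconds

speed = 70.0  # mph, as in the simulator study
print(f"Per second:          {distance_traveled_ft(speed, 1.0):.1f} ft")  # 102.7 ft
print(f"Attentive (4.56 s):  {distance_traveled_ft(speed, 4.56):.0f} ft")  # 468 ft
print(f"Distracted (6.06 s): {distance_traveled_ft(speed, 6.06):.0f} ft")  # 622 ft
print(f"Slowest action (25.7 s): {distance_traveled_ft(speed, 25.7):.0f} ft")
```

Roughly a football field and a half travels under the car before the attentive driver even has the wheel, and notably more for the distracted driver.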
Let’s consider some of the salient aspects of the HCI and HMI involved with a self-driving car and a human driver:
No Viable Solution. If the AI-based system of the self-driving car cannot arrive at a solution to the driving problem, it could mean that there just isn’t any viable solution at all. Thus, handing the driving over to the human is like saying, here, have at it, good luck pal. This is a no-win circumstance. The human driver is not really being given an option and is instead simply being passed the buck.
Hidden Problem. The AI-based system might “know” that a child is darting from the sidewalk, but when it hands control over to the human, the question arises as to how the human will know this. Yes, the human driver is supposed to be paying attention, but it could be that the human driver cannot see the child at all (suppose the AI-based system detected the child via radar, but the child is not visible to the human eye). In essence, these self-driving cars are not giving any hints or clues to the human driver about what has caused the urgency, and it is up to the human driver to be omniscient and figure it out.
Cognitive Dissonance. This is similar to the “Hidden Problem,” in that the context of the problem is not known to the human. Suppose the human assumes the self-driving car is handing over control because there is a trash truck up ahead that needs to be avoided, when actually it is because the car is about to hit the child. There is a gap, or dissonance, between what the human is aware of and what the AI-based system is aware of.
Reaction Time. We’ve covered this one already: the amount of time needed for the human to regain control of the car, plus the amount of time needed for the human to then take proper action. The AI-based system has to hand over control with some realistic sense of how much time a human might take to figure out what is going on, while still leaving enough time for the human to take the needed action.
Controls Access. A human driver might have their feet away from the brake and accelerator, or might have their hands reaching behind the passenger seat to grab a candy bar. Thus, even if they are mentally aware that the self-driving car is telling them to take the controls, their physical appendages are not able to readily do so. This is a controls-access issue, and one that should be considered in the design of self-driving cars in terms of the steering wheel and the pedals.
False Reaction. This is one aspect that not many researchers have considered, and seemingly none of the self-driving car makers have been contemplating. Here’s the case.
You are a human driver, you get comfortable with a self-driving car, but you also know that at some random moment, often when you least expect it, the AI-based system is going to shove the controls back to you. As such, for some drivers, they will potentially be on the edge of their seat and anxious for that moment to arise.
This could also cause eager-beaver drivers to take back control when the AI-based system has not alerted them, and the human might make a sudden maneuver because they think the car is headed toward danger. The human is falsely reacting to a non-issue that was never announced. The human could dangerously swerve off the road or flip the car, doing so because they thought it was time to take sudden action.
Overall, the rush toward self-driving cars is focused more on getting the self-driving car to drive, rather than also focusing on the balance between the human driver and the AI-based system. There needs to be a carefully thought-through and choreographed interplay between the two.
When a takeover request is lobbed over to the human (this is called a TOR, for Take-Over Request, in self-driving parlance), there needs to be a proper allocation of TORLT (TOR Lead Time).
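One way to picture the TORLT allocation is as a timing-budget check: before lobbing a TOR at the human, the AI should ask whether the time remaining to the hazard exceeds the time a human plausibly needs to retake control and act. The sketch below is purely illustrative; the function names and the 8-second budget are my assumptions, not any automaker’s actual logic:

```python
# Hypothetical TOR lead-time check (names and thresholds are illustrative
# assumptions, not a real self-driving car's decision logic).
MPH_TO_FPS = 5280 / 3600  # feet-per-second per mile-per-hour

def seconds_to_hazard(distance_ft: float, speed_mph: float) -> float:
    """Time until the car reaches the hazard at its current speed."""
    return distance_ft / (speed_mph * MPH_TO_FPS)

def should_issue_tor(distance_ft: float, speed_mph: float,
                     torlt_needed_s: float = 8.0) -> bool:
    """Issue a Take-Over Request only if the human plausibly has enough
    lead time to regain control AND act; otherwise the AI should fall
    back on its own emergency maneuver (e.g., maximal braking) rather
    than tossing the human a hot potato.

    torlt_needed_s is an assumed budget: ~6 s to retake control (the
    distracted median in the Southampton study) plus ~2 s to act.
    """
    return seconds_to_hazard(distance_ft, speed_mph) >= torlt_needed_s

# At 70 mph, a hazard 400 ft ahead leaves under 4 seconds of lead time.
print(should_issue_tor(400, 70))   # False: too late to hand off
print(should_issue_tor(1200, 70))  # True: the human has time to take over
```

The key design point is the fallback branch: when the check fails, handing over control is worse than the AI attempting its own mitigation, however imperfect.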
Without getting the whole human-computer equation appropriately developed, we’re going to have self-driving cars that slam into people, and the accusatory finger will be pointed at the human driver. That might be unfair, in that the human might have actually been attentive and willing to help, but the self-driving car provided no reasonable way to immerse the human in helping out.
We can’t let the AI toss a live hand grenade to a human. Humans and their alignment with the AI-based computer factors will be vital for our joint success. Think about this the next time you are the human driver in a self-driving car, and be a keen co-sharing driving buddy.
For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website
The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.
For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru
To follow Lance Eliot on Twitter: @LanceEliot
Copyright © 2019 Dr. Lance B. Eliot.