Resolving the risks of human-to-machine handover in semi-autonomous vehicles, investigated by Susan Kuchinskas.
Some think it's completely unrealistic to expect a human driver to be ready and able to take control from an autonomously driving vehicle when the system encounters something it can't handle. There is growing evidence that the human might need twenty or thirty seconds – or even longer – to get up to speed.
As we've seen in recent autonomous-mode crashes, even the most intrusive alerts might not be enough to jolt a sleepy brain into drive mode. That's why Waymo is going for completely autonomous, purpose-designed vehicles with no steering wheels, and why designers of semi-autonomous systems are looking at real-time and/or aggregated data to figure out who should be in charge at any moment.
Not a handoff but a dance
The latest thinking positions semi-autonomous systems and the human brain as partners in what Matt Johnson, a research scientist at the Florida Institute for Human & Machine Cognition, calls an "interdependent system". Johnson doesn't think it makes sense to attempt to provide fully automated driving on an on/off basis. Instead, he sees driver assistance systems as the right approach. However, to really be effective, they need to work in sync with each other and with the driver. "Unfortunately," he says, "we tend to build technology in siloed solutions, as though lane changing didn't take place along with all other aspects of driving."
Take lane changing as an example. An ADAS may provide one or more warnings that a change of lanes is unsafe but, in most cases, the driver can still ignore the warnings and change lanes.
"The human still has the ability to make inputs but automation is providing a second opinion," he says. "Balancing these is quite challenging. That’s why we have to do some work on lane changing; there is automatic and manual lane changing. Driver assistance has to work in parallel with the human who is still involved in driving."
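The "second opinion" relationship Johnson describes can be sketched in a few lines. This is an illustrative toy, not any vendor's actual ADAS logic; the gap threshold and function names are hypothetical.

```python
# Hypothetical sketch of warning-without-veto lane-change assistance.
# The system flags an unsafe gap to the adjacent vehicle, but the
# driver's input still determines whether the manoeuvre happens.

def lane_change_decision(gap_seconds: float, driver_requests_change: bool,
                         min_safe_gap: float = 2.0) -> tuple[bool, bool]:
    """Return (warn, execute).

    warn:    True when the time gap to the adjacent vehicle is below
             the assumed safe minimum.
    execute: follows the driver's request regardless -- the warning is
             a second opinion, not a veto.
    """
    warn = gap_seconds < min_safe_gap
    execute = driver_requests_change
    return warn, execute

# Unsafe gap, but the driver insists: the system warns yet complies.
assert lane_change_decision(1.0, True) == (True, True)
# Comfortable gap: no warning, manoeuvre proceeds.
assert lane_change_decision(3.0, True) == (False, True)
```

The design choice worth noting is that `warn` and `execute` are computed independently: automation and human each contribute an input, and balancing them (as Johnson says) is the hard part.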
Moment to moment
However, exactly how involved the driver is changes constantly. Some suppliers already provide computer-vision systems to gauge driver attention. These multi-sensor aftermarket devices, deployed in fleet vehicles, simultaneously observe both the driver and the outside environment, providing alerts and safety information.
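One plausible way such a dual-camera device could combine its two views is to weigh driver distraction against what is happening on the road ahead. The scores, threshold, and function below are assumptions for illustration, not a description of any supplier's product.

```python
# Hypothetical sketch: fuse a driver-facing distraction score with a
# forward-facing hazard score, each assumed to be in [0, 1].

def should_alert(distraction: float, forward_hazard: float,
                 threshold: float = 0.5) -> bool:
    """Alert only when the driver looks away while a hazard is present.

    Multiplying the scores means a glance at the radio on an empty road
    is ignored, while the same glance near a slowing vehicle triggers
    an in-cab cue.
    """
    combined_risk = distraction * forward_hazard
    return combined_risk >= threshold

# Attentive driver on a busy road: no alert.
assert should_alert(0.2, 0.9) is False
# Distracted driver approaching a hazard: alert.
assert should_alert(0.8, 0.9) is True
```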
When it comes to ADAS, the systems could detect whether the driver has enough situational awareness to take over if necessary, according to Jennifer Haroon, vice-president of strategy and business operations at Nauto. "There are technologies like forward collision warning that are super important but that's a handful of seconds before a collision might happen. Distraction may be a very early sign of potential danger like a collision," she notes. The company is building a database of driving data across its network of fleets that can reveal how real people drive. Analysis of this data could uncover trends, challenges and edge cases across that network. Moreover, Haroon says, the data can be used to model and train autonomous systems.
While autonomous driving systems already are trained on road data, Haroon adds: "There's likely to be a long transition period where the roads will include cars that are 100% human-driven, cars with all the assistive driving technologies and fully self-driving cars. So, the technology will also have to understand human driving behaviour."
The company’s research suggests it could take up to 200Bn miles of driving experience to train autonomous systems to interact with all those variables and vehicles. Its data resides in an open platform but Haroon thinks no single company will be able to get there alone.
"There needs to be a database of real-life driving situations that companies can test and train their autonomous technology against," she says.
Driver attention isn't the only element of the handoff equation. There will, inevitably, be some driving situations that humans are better equipped to handle, while machines will do better in others, and which should take control may be influenced by a plethora of variables. A machine may do a better job of keeping in-lane on long stretches of boring highway – until there's a snowstorm.
IBM thinks there needs to be an arbitrator – a "third intelligence", that is, an additional AI that constantly decides whether the controls of the car should be handled by human or machine. Its researchers have patented a machine-learning system that can dynamically shift control of an autonomous vehicle between a human driver and a vehicle control processor.
According to James Kozloski, manager of computational neuroscience and multiscale brain modelling, IBM Research, and co-inventor of the patent, this arbitrator will use a variety of inputs to derive a level of confidence about the self-driving artificial intelligence's ability to make decisions in the current situation. "When confidence goes below the threshold, the intelligence can ask whether there is an opportunity to augment safety by turning control over to the human," Kozloski says.
At the same time, this arbitrator needs to evaluate the state of attention of the human. When the driver is distracted, the situation may be worsened if he or she is asked to take control.
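The two checks Kozloski describes – the driving AI's confidence against a threshold, and the human's readiness to take over – can be sketched as a simple decision rule. This is an illustration of the general idea, not IBM's patented system; the thresholds, mode names, and function are invented for the example.

```python
# Illustrative arbitrator: hand control to the human only when the
# driving AI's confidence drops below a threshold AND the driver is
# attentive enough to take over. All values are hypothetical.

def choose_controller(ai_confidence: float, driver_attention: float,
                      conf_threshold: float = 0.7,
                      attention_threshold: float = 0.6) -> str:
    if ai_confidence >= conf_threshold:
        return "machine"           # AI is confident; stay autonomous
    if driver_attention >= attention_threshold:
        return "human"             # AI unsure, but the human is ready
    # Worst case: AI unsure and driver distracted. Handing over now
    # could worsen the situation, so fall back to a safe manoeuvre
    # (e.g. slow down, pull over).
    return "machine_minimal_risk"

assert choose_controller(0.9, 0.2) == "machine"
assert choose_controller(0.5, 0.8) == "human"
assert choose_controller(0.5, 0.2) == "machine_minimal_risk"
```

The third branch captures the point in the paragraph above: a distracted human is not a safe fallback, so the arbitrator must have an option other than forcing a handover.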
These decisions will be based on a mix of current information, such as road conditions and driver attention, as well as historical data, such as how well the self-driving system has performed in the past during torrential rain on similar roads.
"Both humans and autonomous systems can fail, and they seem to fail at different things," he says. "So, we'll train a cognitive computing system to estimate and quantify the risk of errors based on the current state of driver or machine, and transition to a less risky mode of driving gradually based on alerting and a handoff." Kozloski isn't talking about a last-minute handoff; the idea is to use all available data to make a decision before an accident is imminent.
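The gradual, alert-first transition Kozloski describes – rather than a last-minute switch – might look something like the sketch below, assuming per-mode risk estimates have already been produced from current conditions and historical performance. The data structure, margin, and step names are assumptions for illustration.

```python
# Toy sketch of a gradual handoff: given estimated error risks for
# machine and human control, return an ordered sequence of steps
# instead of an abrupt switch. All names/thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class RiskEstimate:
    machine: float  # estimated error risk if the machine keeps driving
    human: float    # estimated error risk if the human takes over

def handoff_plan(risk: RiskEstimate, margin: float = 0.1) -> list[str]:
    """Prefer the status quo unless the machine is clearly riskier."""
    if risk.machine <= risk.human + margin:
        return ["machine_continues"]
    # Machine is clearly riskier: alert first, confirm the driver is
    # ready, and only then transfer control.
    return ["alert_driver", "confirm_attention", "transfer_control"]

assert handoff_plan(RiskEstimate(machine=0.2, human=0.5)) == \
    ["machine_continues"]
assert handoff_plan(RiskEstimate(machine=0.6, human=0.3)) == \
    ["alert_driver", "confirm_attention", "transfer_control"]
```

The `margin` term builds in hysteresis: control only moves when the risk difference is decisive, avoiding rapid back-and-forth handovers when the two estimates are close.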
While the State of California requires the makers of autonomous test vehicles to turn over driving data, it's less likely that carmakers would be willing to share data from production vehicles. Kozloski thinks that an entity like the National Transportation Safety Board in the US, which already collects information about accidents, could gather enough information from actual events and their contexts to create statistical models that could inform IBM's arbitrator AI.
There are plenty of other initiatives to create open data pools for driving. In addition to Nauto, Lyft, Baidu and Udacity provide or plan to offer open data platforms, while BMW and IBM have piloted CarData, which aims to eventually include data from other automakers, available for license.