Driverless tech will solve mobility’s biggest problems, Karl Iagnemma of nuTonomy tells Louis Bedigian.
Driving is one of the most complex tasks a human can perform, yet it is often reduced to a simple act that a computer will eventually replicate. One thing is certain: driverless cars are coming. But even if they become fully capable of navigating the road and the dangers it can bring, how will they handle the many nuances of driving? How will autonomous vehicles interpret a human driver’s signals and subtle gestures of intent?
“The ability for driverless cars to detect subtle cues is very different from humans,” said Karl Iagnemma, co-founder and CEO of nuTonomy. “What humans do, we’re able to detect the gestures and make eye contact. Driverless cars can’t do that but what they can do is measure what other cars are doing on the road – their positions and their velocities.”
Iagnemma thinks that autonomous cars could replace a human’s traditional visual cues with indications that are introduced by the vehicle instead of a driver. He explained: “In the absence of having a driver behind the wheel, with their eyes and ears and other senses understanding what other people are going to do, you have to rely a little bit more on what you can measure about the surroundings. So when you want to merge in a zone where maybe there’s no space available, you’ll have a motion of the car that’s going to suggest you’ll move in and that might create space in a crowded traffic situation – and vice versa. You might understand by the motion of another car that they have some intention.”
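The idea Iagnemma describes – reading intent from what can be measured about other cars rather than from eye contact – can be sketched in a few lines. This is a hypothetical illustration, not nuTonomy’s method: the class, function names and thresholds are all assumptions.

```python
# Hypothetical sketch: inferring another car's merge intent purely from
# measured kinematics (position and lateral velocity), in the spirit of
# Iagnemma's remarks. Names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class TrackedCar:
    lateral_offset_m: float     # distance from our lane centre, in metres
    lateral_velocity_ms: float  # negative = drifting toward our lane

def infers_merge_intent(car: TrackedCar,
                        offset_threshold_m: float = 2.0,
                        velocity_threshold_ms: float = -0.3) -> bool:
    """A car close to our lane and drifting toward it is likely merging."""
    return (abs(car.lateral_offset_m) < offset_threshold_m
            and car.lateral_velocity_ms < velocity_threshold_ms)

# A car 1.5 m away and drifting toward us at 0.5 m/s suggests intent:
print(infers_merge_intent(TrackedCar(1.5, -0.5)))  # True
# The same car holding its lane does not:
print(infers_merge_intent(TrackedCar(1.5, 0.0)))   # False
```

The point of the sketch is that both signals are things a sensor suite can measure directly, with no gesture or gaze interpretation required.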
Better than humans
Driverless cars might face an initial (if not long-term) challenge in how vehicles express intent to each other, but Iagnemma thinks they could surpass a human driver’s ability in some circumstances. “When you have a combination of these various sensors they’re going to have on these cars – radars, cameras, LiDAR – in some situations their ability to perceive the environment might be better than a human,” he said. “In particular, they will always be capturing a really rich 360-degree view of their surroundings. We humans tend to look at what’s in front of us. Occasionally we can glance off to the side but the 360-degree view is very powerful when it comes to moving in traffic.”
Human drivers are also plagued by being overly cautious in some situations while driving recklessly in others. Short of assuming that every intersection is a death trap, it is impossible to predict every possible road hazard. For example, most drivers know they shouldn’t cut each other off but some do it anyway. Taking this one step further, what about the two innocent drivers who collide while attempting to merge into the same lane at the same time? How can technology prevent this from happening? Iagnemma said that’s where AVs come in.
“I would say a smart driverless car probably wouldn’t have gotten into that situation in the first place,” he said. “This is because A) the car would have had that surrounding view and might have seen something that the driver didn’t or B) it would have reasoned about the likelihood that there may have been another car out there and decided it’s too unsafe to change lanes.”
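Iagnemma’s second point – reasoning that an unseen car may be present and declining the manoeuvre – amounts to treating an unobserved zone as occupied. A minimal sketch of that logic, with all names and values assumed for illustration:

```python
# Hypothetical sketch of the conservative lane-change reasoning Iagnemma
# describes: refuse the manoeuvre unless the target zone is both observed
# and clear. The function name and gap threshold are assumptions.

def safe_to_change_lane(zone_observed: bool,
                        gap_length_m: float,
                        min_gap_m: float = 8.0) -> bool:
    # If sensors could not observe the zone (e.g. occlusion), assume a
    # car may be there and treat the lane change as unsafe.
    if not zone_observed:
        return False
    return gap_length_m >= min_gap_m

print(safe_to_change_lane(True, 12.0))   # True: zone seen and clear
print(safe_to_change_lane(False, 12.0))  # False: occluded, assumed unsafe
```

The asymmetry is deliberate: a missing observation is treated as a hazard, which is exactly the “too unsafe to change lanes” judgement in the quote above.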
From a technological perspective, audio and visual displays could play an important role in helping autonomous vehicles understand potential obstacles. Iagnemma noted that there has been a little bit of work in this area but it has yet to become a primary focus for most AV developers. “It’s a really important topic, this notion of conveying intent,” he said. “The jury is still out on what’s the best way to do that.”
Without a doubt, self-driving cars show a lot of promise but before they can achieve their full potential, those who are developing them must recognise the challenges ahead. Said Iagnemma: “When I look industry-wide, I think there’s an under-appreciation of the complexity of scaling technology from one city to the next. I think there’s a tendency to adjust your software to work well in a particular set of driving conditions. That’s human nature. It’s what engineers do, right? The challenge and reality is that if your software is not sufficiently robust, flexible and configurable to adapt to different driving conditions, you may realise you’ve gone down a technical path that, inherently, is not very scalable.”
Iagnemma warned that developing technology to work in one city does not guarantee it will perform equally well in another environment. There could be an adaptation period as the technology learns the differences between cities. If the cities are very similar in structure, traffic and other key elements, this process could allow an AV solution to scale quickly. When entering a city that is vastly different from the place in which it was developed, the car could need more time to adapt to its surroundings.