The influence of AI, machine learning and deep learning on mobility, explored by Louis Bedigian. [Mob.Bedigian.2016.12.19]
Automakers, start-ups and suppliers are eager to discuss the importance of technology, but they aren’t always in agreement with regard to how it should be used. Artificial intelligence is one such area, encompassing a vast array of elements (including deep learning and machine learning) that stand to revolutionise the cars of the future. It is believed to be an essential component of autonomous development. However, unlike other, more easily understood technologies, the exact role of AI is still being determined.
“I think machine learning and AI are being more or less used in the same context,” said Sachin Lulla, IBM’s global automotive lead for Watson IoT. “You have predefined statistical models and algorithms. With deep learning and neural networks, the idea is software rewriting. It’s all software, meaning once you find the initial model, the more data you feed it, the more it grows as it continues to learn based on that data set.”
Kal Mos, vice-president of Mercedes-Benz R&D North America, said that machine learning is preferred to the dated model of hand-written programs. “In the old days, when we wanted to make the computer do something for us, we sat down and wrote a program,” said Mos.
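Mos's distinction can be sketched in a few lines: a hand-written rule fixes its decision boundary up front, while a learned rule derives the boundary from example data, so feeding it more data changes its behaviour. A minimal, hypothetical illustration (the distances, labels and thresholds are invented for the sketch):

```python
# Hand-written rule: the programmer fixes the decision boundary up front.
def rule_based_is_obstacle(distance_m):
    return distance_m < 10.0  # threshold chosen by a human

# Learned rule: the boundary is derived from labelled examples instead.
def learn_threshold(examples):
    """examples: list of (distance_m, is_obstacle) pairs."""
    obstacles = [d for d, label in examples if label]
    clear = [d for d, label in examples if not label]
    # Place the boundary midway between the two classes.
    return (max(obstacles) + min(clear)) / 2.0

data = [(2.0, True), (5.0, True), (8.0, True),
        (15.0, False), (20.0, False), (30.0, False)]
threshold = learn_threshold(data)  # 11.5 for this data set

def learned_is_obstacle(distance_m):
    return distance_m < threshold
```

Adding new labelled examples to `data` shifts the learned threshold without anyone rewriting the rule, which is the behaviour Lulla describes: the more data the model is fed, the more it adapts to that data set.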
Others compared machine learning to the way a robot, which is at least partially pre-programmed, behaves. “A typical robot, it gets some rules from the programming guys,” said Malgorzata Wiklinska, manager of ZF Denkfabrik. “Like when I say ‘A,’ you turn right. And when I say ‘B,’ you turn left. I focus more on deep learning, which is usually used when you start looking for anomalies – something that has some kind of a rule in the back, that’s why we call it a self-learning algorithm. It starts, it is already trained, it identifies who you are and where you go. It can predict a road and, therefore, it can predict a potential collision.”
Holger Weiss, CEO of German Autolabs, defined deep learning as a “self-learning type of system” that “does things you haven’t taught it.” “That’s something that I think many people haven’t really understood,” said Weiss. “AI is a little bit of an overstretched term at the moment. Not everything of a learning platform is immediately AI. To develop certain self-learning elements in a machine, you need a lot of data points. That is a huge challenge to an industry like the car industry.”
Most cars still lack proper connectivity features. Consequently, Weiss said that automakers are often forced to rely on third-party suppliers to acquire the data they need. He believes this is another issue that needs to be addressed.
A whole other layer
Frank Lubischer, senior vice-president and CTO of Nexteer, characterises deep learning as another layer of processing information and comparing it with other info over time. Said Lubischer: “For us, AI means that we understand driver behaviour but drivers today differ significantly. You have the 80-year-old man that’s driving in one fashion and the 16-year-old girl that’s driving in another. If you compare them, they are all drivers in the loop and they all work differently, have their deficiencies and, I guess, advantages.”
By exploiting the power of deep learning, Lubischer said it is possible to gain greater insight into who’s behind the wheel and it’s not just about driver patterns – their level of engagement also comes into play. Are they focused on the road or distracted by something else?
To that end, Clarion president Paul Lachner said that one of the biggest challenges facing AI development is that drivers still make most of the decisions within the vehicle. But that is starting to change with the arrival of AEB, lane keep assist and other safety features. “We’re now nudging people back into the lane, so the vehicle is taking more and more control,” said Lachner. “The challenge is, how do we make a vehicle think like a person?”
Lyden Foust, CEO of Spatial.ai, is searching for a similar answer. He wants AI that can think more like a human to help process data in parallel. Said Foust: “The other challenge is, you can have great data science but, if it’s just the purest data and it doesn’t give you relevant information, the purity doesn’t matter. What matters is that people are able to take action on the information that comes out of deep learning. I think people forget how much human logic is actually behind these things.”
While the exact definitions may differ depending on who you talk to, everyone agreed that all of these technologies offer tremendous potential for the auto industry. “You can have a deep neural network that is trained to recognise objects which are basically just a group of pixels,” said Danny Shapiro, senior director of automotive at NVIDIA. “And through a series of different ways of analysing and grouping those pixels, you can detect edges, curves, object elements and put them all together to realise, ‘Okay, that’s a face’ – or a car or a dog.”
With even more data the system is then trained to understand all the different types of dogs or models of a particular vehicle, or the difference between a truck and an ambulance. Shapiro added: “Other sensors might not be able to discern that. A radar of a delivery truck will look just like the radar signature of an ambulance but when you combine information from a camera and deep learning, you can then know exactly what type of vehicle it is.”
This element of deep learning may be less significant once all vehicles are able to flawlessly communicate with and identify each other but it could be several years before that occurs. John Waraniak, vice-president of vehicle technology for the Specialty Equipment Market Association (SEMA), explained: “When you think about where vehicles are, there’s about 15 to 16 million new vehicles a year. When you think about the critical mass of vehicles you need that can communicate with each other, from a connected car perspective, it’ll take probably seven to maybe 20 years to where there’s enough vehicles that can talk to one another.”
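Waraniak's seven-to-twenty-year range follows from simple fleet arithmetic. Taking a rough figure of around 260 million registered US vehicles (an assumption for this sketch, not a number from the article) and his 16 million new vehicles a year:

```python
US_FLEET = 260_000_000        # approximate US registered vehicles (assumption)
NEW_PER_YEAR = 16_000_000     # per Waraniak's figure

years_full = US_FLEET / NEW_PER_YEAR        # ~16 years for a full turnover
years_half = (US_FLEET / 2) / NEW_PER_YEAR  # ~8 years for half the fleet
```

If half the fleet being connected is enough for critical mass, that lands near the low end of his estimate; a full turnover lands near the high end.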
Voice recognition has come a long way since it first appeared on Windows PCs. Using new technology from Amazon (Alexa), Apple (Siri) and a handful of others, consumers are able to communicate with their devices in new and unprecedented ways. However, these tools have been built with very specific commands, limiting their use.
IBM is attempting to change that with its AI-powered Watson, which was brought into the Olli autonomous concept from Local Motors. Instead of having to recite a familiar command, passengers are able to speak to Watson as they would another human. Said Lulla: “It’s not just speech or voice recognition, it’s actually an intelligent conversation that you can have with Watson once you are in Olli.”
With traditional voice recognition technology, a consumer is required to call out a specific request. One might say, “Siri, take me to the London Eye.” With Watson, the request could be much simpler: “What is the name of the famous wheel in the UK?” Watson will then ask whether you are looking for the London Eye.
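The difference can be caricatured in code: a command interface only matches an exact memorised phrase, while an intent-style interface scores free-form speech against what it knows and confirms its best guess. The phrases, intent names and keyword sets below are invented for this sketch; Watson's actual language pipeline is far richer than keyword matching.

```python
# Command-style: only an exact, memorised phrase works.
COMMANDS = {"take me to the london eye": "navigate:london_eye"}

def command_lookup(utterance):
    return COMMANDS.get(utterance.lower())

# Intent-style: score free-form speech against keyword sets and
# return the best guess for the system to confirm with the user.
INTENTS = {
    "navigate:london_eye": {"famous", "wheel", "uk", "london", "eye"},
}

def intent_lookup(utterance):
    words = set(utterance.lower().replace("?", "").split())
    best, score = None, 0
    for intent, keywords in INTENTS.items():
        hits = len(words & keywords)
        if hits > score:
            best, score = intent, hits
    return best
```

The command lookup fails on “What is the famous wheel in the UK?” because the phrase was never memorised, while the intent lookup recovers the right destination from a handful of overlapping words.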
This level of interactivity is going to fundamentally change the way consumers communicate with their devices inside their cars and homes. However, it also presents new challenges in protecting the privacy and security of end users. While IBM said that it would not store or use any of the data it collects from any of its customers, all of these devices must constantly listen for commands in order to function.
“From a consumer standpoint, that is a major risk,” said Lulla, broadly referring to all voice recognition devices. “At least give me the choice of what data I’d like to share and what data I’d like not to share. Today that option doesn’t exist. All that data is being captured and shared.”
Teaching AI new tricks
Autonomous vehicles would have to learn to recognise an endless list of objects in order to match the vision of a human driver. While this isn’t necessary from day one, as evidenced by all the road tests conducted worldwide, autonomous vehicles will need to improve over time.
Suppose a car is really good at identifying a ball that is bouncing in the street. What happens if another round object, such as a tumbleweed, rolls in its path? Will it confuse the object for a ball or learn to identify it as something else? “The identifying will not be the big problem,” said Wiklinska. “We have LiDAR and radar. At the end of the day, the critical point is, is there something coming after the ball – like someone running after it? Because if it’s just a ball and the car recognises it as a ball, the vehicle stops. This is what you have already. The cameras are so good and the LiDAR and radar are so good and trained already that it potentially identifies a risky situation and stops.”
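Wiklinska's point is that the hard part is not identifying the ball but reasoning about what may follow it. That extra step can be sketched as a simple decision rule on fused detections (the detection format, object kinds and hold times here are invented for illustration; production logic is far more involved):

```python
def should_stop(detections):
    """Return (stop?, hold_seconds) for a list of fused sensor detections.

    Stop for any object in the lane; if the object is a rolling one
    (ball, tumbleweed), hold the stop longer in case a person follows it.
    """
    in_lane = [d for d in detections if d["in_lane"]]
    if not in_lane:
        return False, 0.0
    rolling = any(d["kind"] == "rolling_object" for d in in_lane)
    return True, (5.0 if rolling else 1.0)
```

A stationary obstacle gets a short hold, while a ball rolling across the lane extends it, encoding the anticipation of “someone running after it”.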
Instead of learning through experience after the fact, Shapiro expects autonomous vehicles to learn in advance of their deployment. “That’s something that we spend a lot of time on, training these systems,” he said. “We’re training on not just physically what something looks like but also the behaviour of it. What are the physics of it? And a person moves differently from a bicycle, which moves differently from a motorcycle or a car.”
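Shapiro's point about physics can be made concrete with class-specific motion models: the same observed speed implies very different worst-case predictions depending on whether the object is a person or a motorcycle. The top speeds below are invented for the sketch; real systems learn motion behaviour from training data.

```python
# Illustrative top speeds in m/s (assumptions, not learned values).
MAX_SPEED = {"person": 3.0, "bicycle": 8.0, "motorcycle": 40.0, "car": 40.0}

def predict_position(kind, position_m, speed_mps, horizon_s):
    """Constant-velocity prediction, capped by a class-specific top speed."""
    speed = min(speed_mps, MAX_SPEED[kind])
    return position_m + speed * horizon_s
```

Given a noisy 10 m/s estimate, a person is predicted at most 6 m away after two seconds, a motorcycle 20 m, which is why knowing *what* an object is changes how the vehicle plans around it.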
Other advancements may also be on the horizon. Said Foust: “I feel like we are on the forefront of what we call human-driven machine-assisted analysis, so it’s not all machine doing what we’re doing. We have to train it on human logic first, then allow it to start making decisions as a human would. I think as much as we can make AI a human-driven thing that supports humans, the better it’s going to be and of course the better it’s going to be for our business.”
Once a vehicle is able to identify all the potential hazards on the road, there will still be one challenge it needs to overcome: how to start and stop safely. “One of the things they’re going to have to work on at the American Centre for Mobility is, how do we make computer-driven cars behave more naturally, like they were driven by humans?” said Richard Wallace, director of transportation systems analysis at the Centre for Automotive Research. The American Centre for Mobility is a new autonomous test track being built at Willow Run in Ypsilanti Township, Michigan.
Wallace added: “I think it’s critical not just for that interactive effect but for the passengers as well. Imagine if these things drove like roller coaster rides – accelerate, stop hard! People would be getting sick. It would be very uncomfortable, so they have to behave smoothly, naturally.” To improve the safety of those around the autonomous vehicle, Wiklinska wants V2V communications that alert other vehicles of an impending obstacle. “It doesn’t make sense that I stop and then the car behind me doesn’t stop,” she said. “Then we will have an accident.”
In looking ahead to when AI will bring significant changes or advancements to automobiles, Foust said that it will happen “faster than you think.” He said: “It’s on an exponential growth curve, and right now AI is a buzzword. But it’s not going away like, say, 3D printing. That was a big buzzword and it moved really slow because it was atoms. AI is not confined to a physical atom. I think the symbiosis of how AI helps humans will change the way we think and move around the world.”
Steve Carlisle, president and managing director of General Motors Canada, has high hopes for the auto industry’s ongoing evolution. “It takes a while to commercialize these things and ready them for the consumer,” said Carlisle. “I don’t know that the first idea the industry is working on will be the idea we have 10 years from now. Deep learning itself is still relatively new. I would expect it to continue to evolve and get better and more effective solutions as time goes on. It’s the very beginning. It’s like we’re changing from horses to cars.”