Sensor technology is simple compared with machine decision-making on the road, reports Shamik Ghosh.

It is no longer appropriate to call vehicle autonomy futuristic, especially as we see the automotive industry and the corresponding IT industry rapidly approach Level 4 autonomous capabilities.

Most of what we see today as ‘assisted’ will constitute the ‘autonomous’ of tomorrow. The technology is already there, and costs will come down as economies of scale kick in; at the very least, they will not stay as high as the $70,000 (£56,300) LiDAR used in Google’s driverless prototype. But cost is not the only question: are the available technologies universally applicable to both developed and emerging regions?

The general notion is that, owing to the better availability of sensor technology and its extensive testing through field trials, autonomous driving could be deployed sooner in developed economies. “The differences will be very obvious in software, since the infrastructure and behaviour can vary significantly in developed versus emerging economies,” said Michael Paulin, director of product management at LeddarTech, a Canadian supplier of solid-state LiDAR sensor technology. “Regarding the sensors, it is highly desirable to have the same hardware to maximise volume and reduce part costs, and also to optimise the investment in engineering and qualification of these parts.”

However, Paulin believes countries with a large local manufacturing base could be exceptions to this. For example, vehicles in China and India may feature a different sensor set, based on a different cost-versus-performance trade-off, than vehicles commercialised in developed economies.

Fusion is the future

General Motors’ technical fellow and manager of automated driving, Bakhtiar Litkouhi, said in a 2012 media release: “No sensor working alone provides all the needed information. That’s why multiple sensors and positioning technologies need to work together synergistically and seamlessly and sensor fusion will help facilitate that.”

A few years down the line, the industry seems to have realised the importance of fusion. No matter how capable each sensor is, the failure of one or all of them can have devastating results, such as those seen in Tesla Autopilot road fatalities. Yet simply adding more sensors will not solve the problem.

This brings us to an important question: how should an ideal ‘fused’ sensor system function? Siamak Akhlaghi, business development manager for autonomous systems at the Alberta Centre for Advanced MNT Products (ACAMP), listed seven important yardsticks for a ‘fused’ sensor suite: availability, precision, accuracy, reliability, robustness, redundancy and security. “It is essential to have all the parameters to ensure continuous situational awareness in all conditions,” Akhlaghi added.

Another important factor, according to Paulin, is “the dynamic weighting of the sensors regarding decision making and functional safety with redundancy. Sensor fusion must provide sufficient redundancy in decision making, which is likely to be the most challenging part, especially in extreme weather conditions.” For example, greater weight could be given to the LiDAR than to the camera in lighting conditions that challenge cameras.
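To make the idea concrete, the toy Python sketch below shows one way such dynamic weighting could work: the camera’s vote on an obstacle is scaled down as ambient light falls, so the LiDAR dominates the fused decision exactly when cameras are least trustworthy. The weights, light threshold and figures are illustrative assumptions, not LeddarTech’s or any production fusion scheme.

```python
# Minimal sketch of dynamic sensor weighting (illustrative only; all names
# and numbers are hypothetical, not any production fusion scheme).

from dataclasses import dataclass

@dataclass
class Detection:
    obstacle_confidence: float  # 0.0-1.0, per-sensor belief that an obstacle is present

def fuse(lidar: Detection, camera: Detection, ambient_lux: float) -> float:
    """Weighted fusion: the camera's vote is down-weighted in poor lighting,
    so the LiDAR dominates the decision when cameras are least reliable."""
    # Hypothetical weighting: camera weight falls linearly below ~400 lux.
    camera_weight = max(0.1, min(1.0, ambient_lux / 400.0))
    lidar_weight = 1.0  # LiDAR is largely unaffected by ambient light

    total = lidar_weight + camera_weight
    return (lidar_weight * lidar.obstacle_confidence +
            camera_weight * camera.obstacle_confidence) / total

# Night-time example: the camera misses the obstacle, but fusion still flags it.
print(fuse(Detection(0.9), Detection(0.2), ambient_lux=20.0))   # ~0.84, LiDAR-led
print(fuse(Detection(0.9), Detection(0.2), ambient_lux=800.0))  # 0.55, balanced
```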

Sensor-based localisation

A vehicle traversing a route autonomously must know its exact position relative to the lanes, streets, traffic signals and nearby vehicles, and that is where localisation comes in. Traditionally, manufacturers have relied on GPS data for localisation, but the dependency is now shifting to on-board sensors.

“The on-board sensors in a vehicle, such as the odometer, camera, radar, ultrasonic and LiDAR, can provide information about the relative motion of the vehicle compared to its surroundings,” said Akhlaghi. “If the aforementioned sensor suite is correctly linked with the GPS information, every moving vehicle could become a mobile mapper.”

With full connectivity between the vehicle and the manufacturers’ back-end servers, live-updating maps can be produced and vehicles can use the information in real time. This could be a logical step, as GPS data alone may not be sufficient in dynamic and unpredictable urban environments.
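As a rough illustration of the principle, the sketch below blends dead-reckoned position from on-board relative-motion sensors with periodic GPS fixes using a simple complementary filter. Real localisation stacks use far more sophisticated estimators (typically Kalman-filter variants); the blend factor and figures here are purely hypothetical.

```python
# Toy blend of dead-reckoned odometry with GPS fixes (a simple complementary
# filter; the blend factor and data are made up, not any manufacturer's stack).

def dead_reckon(position, velocity, dt):
    """Propagate the last estimate using on-board relative motion (odometer/IMU)."""
    return (position[0] + velocity[0] * dt,
            position[1] + velocity[1] * dt)

def gps_correct(predicted, gps_fix, blend=0.2):
    """Pull the prediction part-way toward the noisier, but absolute, GPS fix."""
    return (predicted[0] + blend * (gps_fix[0] - predicted[0]),
            predicted[1] + blend * (gps_fix[1] - predicted[1]))

# One update cycle: 0.1 s of motion at 15 m/s east, then a GPS correction.
estimate = (0.0, 0.0)
estimate = dead_reckon(estimate, velocity=(15.0, 0.0), dt=0.1)
estimate = gps_correct(estimate, gps_fix=(1.6, 0.05))
print(estimate)  # (1.52, 0.01): mostly odometry, nudged by GPS
```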

Paulin offers an interesting perspective on this: “Vehicles need to be as capable as possible to navigate using on-board sensors and minimise reliance on maps. However, the data collected by these sensors should be used to improve mapping data and use these eyes on the roads to improve navigation.” For example, Mobileye’s Road Experience Management (REM) system uses the front camera to create user-generated maps that augment crowd-sourced probe data coming from on-board car sensors. GM, Volkswagen, Nissan and BMW are already using this strategy.

Making AI and machine learning work

In its latest report, Automotive Electronics Roadmap, IHS Automotive estimated that total unit shipments of AI systems in the automotive sector are expected to reach 122 million by 2025, up from seven million today. So, where can this have the maximum impact?

Akhlaghi explained that, currently, automakers are taking a dual approach to machine learning: a) sense and understand; b) store and align. “In the first approach, the vehicle relies on the information received from the sensor suite to perceive its surroundings and, using the on-board computer, comprehends the information and provides a projection to the vehicle. In the second approach, the vehicle has already travelled through a designated route and is familiar with the environment.”

The latter is particularly important for handling changes in the environment, such as updated lane markings or a newly built road: the system can compare the fresh perception coming from the former approach against the map held in memory and update it accordingly. “Depending upon the complexity of the driving environment, either or both methods can be used,” concluded Akhlaghi.
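A toy sketch of that ‘store and align’ idea might look like the following: freshly perceived road features are aligned against the stored map, and a persistent mismatch, such as repainted lane markings, triggers a map update. The data structure, segment names and tolerance are invented for illustration and do not represent any automaker’s implementation.

```python
# Toy illustration of 'store and align': perceived features are aligned
# against a stored map, and disagreement triggers a map update.
# All structures and thresholds here are hypothetical.

stored_map = {"segment_42": {"lane_marking_offset_m": 1.75}}

def align_and_update(segment_id, perceived_offset_m, tolerance_m=0.3):
    stored = stored_map[segment_id]["lane_marking_offset_m"]
    if abs(perceived_offset_m - stored) <= tolerance_m:
        return "map confirmed"          # environment matches memory
    # A real system would require repeated observations before rewriting the map.
    stored_map[segment_id]["lane_marking_offset_m"] = perceived_offset_m
    return "map updated"                # e.g. lanes were repainted

print(align_and_update("segment_42", 1.80))  # map confirmed
print(align_and_update("segment_42", 2.60))  # map updated
```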

Paulin believes understanding the behaviour of dynamic objects, i.e. moving vehicles, could be another area of improvement. “Since object types and behaviour can vary geographically, machine learning can improve adaptation to local realities,” he said.

Machine learning can dramatically improve the accuracy of pinpointing objects, but a core problem is the classification of an object: is it another vehicle, a pedestrian, a bicycle, an animal? The solution could be an AI-based pattern-recognition algorithm that is fed many labelled images of such objects until the system classifies them reliably.
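In practice, that training loop can be sketched in a few lines. The example below uses PyTorch to show the shape of such a supervised classifier: a small convolutional network is repeatedly shown labelled image crops and nudged towards the correct class. Random tensors stand in for a real labelled dataset, and the architecture, class list and hyper-parameters are illustrative assumptions only.

```python
# Minimal sketch of a supervised object classifier: a small convolutional
# network is shown many labelled images until it classifies them reliably.
# Random tensors stand in for real data; nothing here is a production model.

import torch
import torch.nn as nn

CLASSES = ["vehicle", "pedestrian", "bicycle", "animal"]

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch: 32 RGB crops of 64x64 pixels with random labels.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, len(CLASSES), (32,))

for epoch in range(5):  # a real system would iterate over millions of images
    optimiser.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimiser.step()

predicted = CLASSES[logits.argmax(dim=1)[0].item()]
print(f"predicted class for first crop: {predicted}")
```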
