Exploring ways to combine the strengths of all sensor technologies, investigated by Simmi Sinha.

At the moment, most vehicle autonomy and ADAS features depend on a combination of three key sensor technologies: radar, LiDAR and cameras. Given the growing importance of perception sensors, the industry view is that only these sensors fused together can provide truly robust automation.

Combination of sensors needed for robust automation

As sensing technologies continue to increase in resolution, they move beyond simple detection and ranging and take on true “vision” functions such as classification and mapping. Omer David Keilaf, CEO and co-founder of Innoviz Technologies, answers a crucial question: how does sensor fusion, by combining views of objects from different sensor types, enable the vehicle to progress from sensing to ‘perception’?

Keilaf explains: “It is clearly evident that sensor fusion of radar, camera and LiDAR is essential to guarantee that a vehicle has full awareness of its surroundings and to provide for Levels 3 to 5 of autonomous driving. Each of these sensors has unique strengths and weaknesses. Radar provides range at lower resolution but is resilient to weather conditions; the camera has better resolution and the ability to sense colours but is limited by its 2D vision and by lighting conditions. LiDAR compensates for the latter shortcomings: it is able to generate high-resolution 3D mapping of the vehicle's surroundings and is resilient to different ambient conditions. Only the combination of these three sensors generates mapping of the vehicle's surroundings accurate enough to meet the safety standards required by the OEMs. Currently, the lack of a high-performance LiDAR solution at a mass-market price remains the main obstacle in the way of sensor fusion, and thus of fully autonomous driving.”
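To make that complementarity concrete, the sketch below shows a toy, object-level fusion step: semantics from the camera, 3D geometry from the LiDAR and dynamics from the radar are merged into one fused object. The data structures and field names are assumptions for illustration only, not Innoviz's design.

```python
# Illustrative object-level fusion of complementary detections of the same object.
from dataclasses import dataclass

@dataclass
class RadarDetection:
    range_m: float          # robust in rain/fog, but coarse angular resolution
    radial_speed_mps: float

@dataclass
class CameraDetection:
    label: str              # rich semantics and colour, but 2D and light-dependent
    confidence: float

@dataclass
class LidarDetection:
    x: float                # dense 3D geometry, works across lighting conditions
    y: float
    z: float

@dataclass
class FusedObject:
    label: str
    position_3d: tuple
    speed_mps: float
    confidence: float

def fuse(radar: RadarDetection, camera: CameraDetection, lidar: LidarDetection) -> FusedObject:
    """Toy fusion rule: semantics from the camera, geometry from the LiDAR,
    dynamics from the radar, keeping the camera's confidence score."""
    return FusedObject(
        label=camera.label,
        position_3d=(lidar.x, lidar.y, lidar.z),
        speed_mps=radar.radial_speed_mps,
        confidence=camera.confidence,
    )

if __name__ == "__main__":
    obj = fuse(RadarDetection(42.0, -3.1),
               CameraDetection("pedestrian", 0.87),
               LidarDetection(41.6, 1.2, 0.9))
    print(obj)
```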

Improving car safety

The ability of a vehicle to process complex visual data will yield far more accurate detection, classification and localisation abilities, Keilaf reasons. He said: “Processing complex visual data causes latency in driving decisions. Being able to reduce that latency is another major part of improving car safety, and adding orthogonal data from different sources can reduce the computational power needed to make a driving decision. High-resolution 3D data from a LiDAR can simplify tasks that are computationally demanding when relying only on cameras and radars.” An example is the case where road markings and shadows cast by trees are sometimes mistaken for objects on the road.
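A minimal sketch of that point, assuming a simple height check over the LiDAR returns associated with a camera detection: flat returns such as paint or shadows are rejected cheaply, while raised objects pass. The threshold and data layout are illustrative assumptions, not a production check.

```python
# Toy check: does a camera detection correspond to something raised above the road?
from typing import List, Tuple

Point3D = Tuple[float, float, float]   # (x, y, z) in the vehicle frame, z up

def is_raised_object(points_on_detection: List[Point3D],
                     road_z: float = 0.0,
                     min_height_m: float = 0.15) -> bool:
    """Return True if the LiDAR points backing a camera detection rise
    meaningfully above the road plane; flat returns (paint, shadows) fail."""
    if not points_on_detection:
        return False   # no 3D evidence - leave the decision to other sensors
    max_height = max(z - road_z for _, _, z in points_on_detection)
    return max_height > min_height_m

# A painted lane marking: all returns lie on the road surface.
print(is_raised_object([(10.0, 0.5, 0.01), (10.2, 0.5, 0.02)]))   # False
# A pedestrian: returns extend well above the road.
print(is_raised_object([(10.0, 0.5, 0.3), (10.0, 0.5, 1.6)]))     # True
```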

Sensor fusion: A critical step on the road to autonomous vehicles

Dr Alain Dunoyer, head of autonomous car at SBD, said: “In recent years OEMs have introduced several sensor fusion solutions, mainly radar or LiDAR combined with a camera, to improve the robustness of SAE Level 1 and 2 features. To move towards Level 3, this trend is only going to increase, with more sensors feeding sensor fusion ECUs. Whether a single centralised ECU or a distributed approach is going to prevail is still being debated, as each solution offers distinct advantages and disadvantages.

“New sensors are going to be needed as the current ones will not meet the performance requirements, especially in terms of range at higher speeds. Object detection and classification also need to be improved to reach higher levels of automation. Pedestrians, cyclists and debris on the road all pose challenges that are far from being fully resolved yet. Software solutions to improve detection rates while maintaining very low rates of missed or false detections are being developed. Recently, AI approaches have made the headlines and certainly have their place in the overall software strategy, but it is worth remembering that such techniques are typically ‘black box’ and, therefore, their robustness and stability cannot be readily demonstrated.

“Finally, another critical aspect is going to be the management of changing weather and lighting conditions that will affect sensors’ ability to sense and understand the environment. Combining different types of sensor technologies (LiDAR/radar/camera/ultrasonic) can help, as they will not be affected in the same way, thus providing the ability to manage graceful degradation strategies,” he concludes.
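One way to picture the graceful-degradation idea is a fusion stage that re-weights modalities according to conditions, as in the sketch below. The weights and condition factors are assumed values for illustration, not figures from SBD.

```python
# Hedged sketch: down-weight the modality a given condition degrades, rather
# than losing perception outright. All numbers are illustrative assumptions.
NOMINAL_WEIGHTS = {"camera": 0.4, "lidar": 0.4, "radar": 0.2}

# How much trust each modality keeps under a given condition (assumed values).
CONDITION_FACTORS = {
    "clear":      {"camera": 1.0, "lidar": 1.0, "radar": 1.0},
    "heavy_rain": {"camera": 0.4, "lidar": 0.6, "radar": 1.0},  # radar copes best
    "night":      {"camera": 0.3, "lidar": 1.0, "radar": 1.0},  # passive camera suffers
}

def degraded_weights(condition: str) -> dict:
    """Rescale the nominal fusion weights for the current condition and
    renormalise so they still sum to one."""
    factors = CONDITION_FACTORS[condition]
    raw = {s: NOMINAL_WEIGHTS[s] * factors[s] for s in NOMINAL_WEIGHTS}
    total = sum(raw.values())
    return {s: w / total for s, w in raw.items()}

print(degraded_weights("heavy_rain"))  # radar's share rises, camera's falls
```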

Sensors driving autonomy

Advances in the resolution of automotive sensors, especially radar, could broaden the notion of what ‘vision’ will mean for automation.

Keilaf said: “Sensors and, more specifically, LiDAR play a pivotal role in providing the necessary infrastructure which then enables the development of advanced software, computer vision applications and AI. The stronger this baseline is, the more advanced the software capabilities will be. Therefore, enabling vision has a broader definition than just generating a raw output of the vehicle's surroundings. It encompasses the ability to generate high-resolution 3D mapping which is interfaced with advanced software capabilities to enable object detection and classification, accurate mapping and localisation, and more.”

Keilaf concluded that sensor fusion is a growing necessity in driving forward autonomous vehicle technology. He said: “Once the baseline is determined, there are many ways the algorithmic software stack can be designed, either by low-level fusion of the different sensors or by generating insights from each sensor and fusing at the driving-decision layer. Each architecture has different trade-offs, but the need for redundancy across different types of sensor is critical.”
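The two architectures Keilaf contrasts can be sketched roughly as follows; the functions are toy placeholders standing in for a real perception stack and do not reflect Innoviz's actual software.

```python
# Low-level (early) fusion pools data before a single model; decision-level
# (late) fusion runs per-sensor detectors and merges only their conclusions.
from typing import Dict, List

def run_single_model(joint_features: List[float]) -> List[str]:
    # Stand-in for one perception model over the combined representation.
    return ["object"] if joint_features else []

def run_detector(sensor: str, data: List[float]) -> List[str]:
    # Stand-in for a per-sensor detector.
    return [f"object_from_{sensor}"] if data else []

def merge_object_lists(lists: List[List[str]]) -> List[str]:
    # Stand-in for reconciling detections at the driving-decision layer.
    return [obj for objects in lists for obj in objects]

def early_fusion(raw: Dict[str, List[float]]) -> List[str]:
    """Low-level fusion: pool measurements from every sensor, then run one model."""
    joint = [sample for sensor in sorted(raw) for sample in raw[sensor]]
    return run_single_model(joint)

def late_fusion(raw: Dict[str, List[float]]) -> List[str]:
    """Decision-level fusion: detect per sensor, fuse only the conclusions."""
    return merge_object_lists([run_detector(s, d) for s, d in raw.items()])

frames = {"camera": [0.9], "lidar": [1.2], "radar": [0.7]}
print(early_fusion(frames))   # one fused decision from the joint data
print(late_fusion(frames))    # redundant per-sensor decisions, merged late
```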

