Vehicle simulator start-up wants to drive autonomous car development without hitting the road, Danny Atsmon tells Louis Bedigian.

Automakers are expected to test their autonomous vehicles extensively before they can be deployed to the public. The RAND Corporation estimates that it could take 11 billion miles of driving to prove that a driverless vehicle is ready to hit the road. At that rate, few companies are likely to release a self-driving automobile within the proposed time frame of 2020 or 2021.

Cognata, a start-up specialising in autonomous vehicle simulation, hopes to change that. The company has built a deep learning-based system that lets manufacturers cut short the expensive and lengthy process of real-world road testing. “In California, the DMV has released a report stating that all the autonomous companies together drove 700,000 miles last year,” said Danny Atsmon, founder and CEO of Cognata. “So it would take them hundreds of years to get to the market.”
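Taking those figures at face value, the shortfall is easy to quantify. The back-of-the-envelope Python sketch below simply divides the RAND target by the DMV-reported annual total; both numbers are the ones cited above, not independent estimates.

    # Rough arithmetic using the figures cited in this article.
    TARGET_MILES = 11_000_000_000   # RAND estimate of miles needed to prove safety
    MILES_PER_YEAR = 700_000        # all autonomous companies combined (California DMV)

    years_needed = TARGET_MILES / MILES_PER_YEAR
    print(f"Years of real-world testing at the current rate: {years_needed:,.0f}")
    # Prints roughly 15,714; if anything, 'hundreds of years' is conservative.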

The company is focused on re-creating a number of detailed and highly realistic environments, including some of the world’s largest cities. Said Atsmon: “We built all the lanes, the buildings and trees – everything. That’s the static layer. We add to that a dynamic layer, which is the cars unique to each place in the world. Drivers in San Francisco drive completely differently from drivers in Bangalore, or those in New York or Tel Aviv.”
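Cognata has not published its internals, but the hypothetical Python sketch below illustrates the layered structure Atsmon describes: a fixed static layer of geometry paired with a per-city dynamic layer of driver behaviour. All class names and parameter values here are invented for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StaticLayer:
        """Fixed geometry of a real place: lanes, buildings, trees."""
        city: str
        lanes: tuple
        buildings: tuple
        trees: tuple

    @dataclass
    class DynamicLayer:
        """Location-specific traffic behaviour (invented parameters)."""
        aggression: float       # 0 = timid, 1 = aggressive
        lane_discipline: float  # how closely drivers keep to lanes

    # Illustrative presets only; the values are not Cognata's.
    DRIVER_MODELS = {
        "San Francisco": DynamicLayer(aggression=0.4, lane_discipline=0.9),
        "Bangalore":     DynamicLayer(aggression=0.8, lane_discipline=0.3),
        "Tel Aviv":      DynamicLayer(aggression=0.7, lane_discipline=0.6),
    }

    def build_scene(static: StaticLayer):
        """Pair the static world with the traffic model for that city."""
        return static, DRIVER_MODELS[static.city]

    scene = build_scene(StaticLayer("Tel Aviv", lanes=(), buildings=(), trees=()))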

Weather conditions are another element that Atsmon considered. Snow, in particular, has been one of the biggest hurdles faced by autonomous car developers. If a stop sign is covered in a white, sticky mess, how is a vehicle supposed to respond? Its sensors may not be able to overcome these and other obstacles in all situations. “We can re-create each edge case,” Atsmon added. “We’re building a virtual car with all the virtual sensors that you need for autonomous driving: 10 cameras, four LiDARs, five radars, GPS, gyroscope, all of this. In reality those sensors are not perfect. Radar gives false positives. The camera has situational problems. Sometimes it just can’t see for some reason. We added deep learning algorithms to learn the errors and we put it all together.”
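The minimal Python sketch below shows the general idea of injecting sensor imperfections into a simulation. The error rates here are invented constants, whereas Atsmon describes learning these characteristics with deep-learning models.

    import random

    # Invented error rates for illustration; in Cognata's system these
    # characteristics are learned from data, not hand-coded.
    RADAR_FALSE_POSITIVE_RATE = 0.02   # ghost detections
    CAMERA_DROPOUT_RATE = 0.01         # objects the camera misses

    def simulate_radar(true_objects):
        """Radar sees every real object but sometimes adds a ghost."""
        detections = list(true_objects)
        if random.random() < RADAR_FALSE_POSITIVE_RATE:
            detections.append("ghost_object")
        return detections

    def simulate_camera(true_objects):
        """The camera occasionally fails to see an object at all."""
        return [obj for obj in true_objects
                if random.random() > CAMERA_DROPOUT_RATE]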

This allows automakers to drive millions of simulated miles in just a few hours. “The experience is really close to real life,” he said.

Unexpected results

Atsmon wasn’t exaggerating when he spoke about the errors associated with real autonomous driving technology. He gave an example of a surprising scenario in which a consumer passenger vehicle could not correctly identify the large truck driving in front of it. The truck’s back end was covered in an advertisement showing a large rock formation against a bright blue sky. The autonomous car interpreted those images as a building and a street sign, and took the metal sides of the truck for streetlights. “It becomes a big problem,” said Atsmon. “The camera is now dysfunctional.”

With current technology, automakers cannot rely on a single camera or sensor; they must combine multiple technologies to produce a safe autonomous vehicle. Said Atsmon: “When everything is okay 99% of the time, you’re okay. Nothing extreme will happen but, in a lot of cases, something pops up and if I only had a camera-based solution I would have an accident.”

If you think that’s bad, just wait until you hear what happens when an autonomous car comes face-to-face with rain. “The hardest edge case I found was driving at night on a wet road,” said Atsmon. “The car in front of you has a reflection on the road and then the system is confused. It sees two cars – which is the right one?”

These are some of the many challenges that all companies, including tech giants like Google, are trying to overcome with their vehicles. “How do you react when you have a pedestrian run into the street?” asked Atsmon. “Recently, when I was in the middle of a junction, the traffic light went dead. I didn’t know what to do and I stopped but what would an autonomous car do?”

Required technology

Many tech companies are pouring their resources into LiDAR, but that may not be the best approach. At the very least, LiDAR should be viewed as a complement to other autonomous driving technologies. Said Atsmon: “Cameras and radars are a must. This is the beginning. You can add a GPS, you can add to that LiDAR. It depends on each company how they would like to teach each system, how they would like to do sensor fusion, which is one of the most important things in autonomous driving.”
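As a toy illustration of why fusion matters, the Python sketch below accepts an object only when at least two independent sensors confirm it. Production systems use probabilistic filters rather than simple voting, so treat this purely as a sketch of the cross-checking principle.

    def fuse_detections(*sensor_reports):
        """Toy voting fusion: keep an object only if two or more
        independent sensors report it."""
        votes = {}
        for report in sensor_reports:
            for obj in report:
                votes[obj] = votes.get(obj, 0) + 1
        return {obj for obj, n in votes.items() if n >= 2}

    # The camera-only 'rock' from the truck advertisement is rejected
    # because radar never confirmed it.
    camera = {"car_ahead", "rock_on_truck_ad"}
    radar = {"car_ahead"}
    print(fuse_detections(camera, radar))  # {'car_ahead'}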

Unfortunately, some driverless car developers may be tempted to stick with technology they’re familiar with, either owing to the expense of switching or fear of the unknown. “It really depends on historical reasons,” said Atsmon. “I don’t think LiDAR is better, I think they all add up together but the beginning of each autonomous solution is cameras and radars.”

