Monday, January 17

Guiding the autonomous car with cameras alone? Tesla’s controversial choice

On how to guide the autonomous car of the future, Tesla has a strong opinion: it is now betting everything on cameras, leaving doubtful some specialists in driving-assistance systems who also rely on radar and laser lidar sensors.

At CES, the big tech show in Las Vegas, the lidar manufacturer Luminar set up a full demonstration in a parking lot to prove the superiority of its product: two cars drive side by side at about 50 km/h before the silhouette of a child drops onto the track.

The vehicle equipped with its product brakes in time, while the other car, a Tesla, knocks the dummy over.

The conditions of the experiment were not validated by an independent party. “But we didn’t just want to show a PowerPoint or a nice video,” Aaron Jefferson, head of product development at Luminar, told AFP.

“In perfect driving conditions, on a sunny day, the cameras can do a lot,” he said. “The problem is atypical situations,” he admits: blind spots, fog, a plastic bag, the particular light at sunset, and so on.

Most manufacturers of autonomous driving systems have chosen to combine cameras with radars and/or lidars, instruments that measure distance via radio waves and lasers respectively.

Tesla chose last year to drop radars and rely solely on cameras for its driver-assistance system. According to Elon Musk, with technological advances, an “artificial brain” running on cameras can match the capabilities of a human brain analyzing its environment with two eyes.

“It’s a pretty reasonable strategy,” says Kilian Weinberger, a professor at Cornell University who has worked on object detection in autonomous driving systems.

Officially, Tesla offers only driver-assistance systems for the moment, but hopes ultimately to deliver a fully autonomous driving system.

– Predicting is complicated –

The manufacturer chose, several years ago, to install cameras and radars by default on all its cars, and was thus able to collect a significant amount of information on how motorists drive in real-world conditions.

“Tesla made the bet that by collecting a lot of data, it can train an algorithm as effective as one that uses much more expensive sensors with less data,” Weinberger explains.

The robo-taxis of Waymo, Google’s autonomous driving subsidiary, are by contrast bristling with sensors, but only operate under specific conditions.

Autonomous driving systems have four main functions, notes Sam Abuelsamid of Guidehouse Insights: perceive the environment, predict what’s going to happen, plan what the car is going to do, and execute.
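The four functions above can be sketched as a simple pipeline. This is an illustrative toy only, assuming made-up data structures and thresholds; every name here is hypothetical and none of it comes from Tesla, Waymo, or any real system:

```python
# Illustrative sketch of the four-stage pipeline described above:
# perceive -> predict -> plan -> execute. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Obstacle:
    position_m: float   # distance ahead of the car, in meters
    speed_mps: float    # speed along the lane, in m/s (negative = approaching)


@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)


def perceive(sensor_frames):
    """Fuse raw sensor readings (camera, radar, lidar) into a world model."""
    return WorldModel(obstacles=[Obstacle(*f) for f in sensor_frames])


def predict(world, horizon_s=1.0):
    """Extrapolate each obstacle's position over a short time horizon."""
    return [o.position_m + o.speed_mps * horizon_s for o in world.obstacles]


def plan(predicted_positions, safe_gap_m=10.0):
    """Brake if any predicted position falls inside the safety gap."""
    return "brake" if any(p < safe_gap_m for p in predicted_positions) else "cruise"


def execute(action):
    """Hand the decision to the actuators (stubbed out here)."""
    return action


# A dummy 12 m ahead, closing at 5 m/s: predicted to be 7 m away in 1 s.
frames = [(12.0, -5.0)]
print(execute(plan(predict(perceive(frames)))))  # → brake
```

As the article notes, the prediction stage is the hard part in practice; here it is a one-line extrapolation, which is precisely the simplification real systems cannot afford with pedestrians and cyclists.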

“Predicting turned out to be a lot more complicated than engineers thought, especially with pedestrians and cyclists,” he says.

And the progress engineers thought they could make on camera-only software through artificial intelligence and machine learning has plateaued.

– Regulators’ requirements –

The problem is that “Elon Musk dangled the promise of his autonomous driving system, assuring customers that the equipment already installed on the cars would suffice,” says Mr. Abuelsamid. “Tesla can no longer go back, as hundreds of thousands of people have already paid money” to access it.

For the head of the French equipment maker Valeo, which is presenting its third-generation lidar at CES, “cameras alone, whatever the amount of data stored, are not enough.”

“Understanding, analyzing what is happening around the car, what we see and what we do not see, day and night, is absolutely key,” Jacques Aschenbroich told AFP. And the environment is dynamic, he adds, referring to the traffic on Place de l’Etoile in Paris.

“Our absolute conviction is that Lidars” are needed to achieve more advanced levels of autonomy, concludes Aschenbroich.

“All sensors have their advantages and their drawbacks,” says Marko Bertogna, a professor at the Italian university Unimore and head of a team fielding an unmanned vehicle in a self-driving car race in Las Vegas on Friday.

“In the current state of knowledge,” cameras alone still make too many mistakes, he believes.

For now, “the more systems you have operating in parallel, the more you manage to fuse different types of sensors, the more likely you are to be among the first to meet the safety requirements that regulators will demand,” the specialist predicts.

Source: www.rtl.be
