Monday, January 17

Presidential election: deepfakes, deepvoices, leaks … the threats hanging over the campaign

Could cyberattacks weaken candidates and tip the ballot in France's 2022 presidential election? The interference of hacker groups in the 2016 US elections brought these risks into public view; what should we fear between now and the first round?

Easier deepfakes

Created entirely with artificial-intelligence software, deepfakes can put words, and even a demeanor, into a candidate's mouth. In February 2020, a candidate in a local election in Delhi, India, circulated a faked video in which he spoke a language he does not actually speak. The deepfake quickly went viral, spreading through WhatsApp groups. The goal was to promote the candidate to the part of the Indian electorate that speaks not English but a more local language.

From there to a video hijacked for less avowed ends is only a small step, Dominique Ango, general manager for southern Europe at Pindrop, a company specializing in cybersecurity and deepfake analysis, tells CNEWS. “Deepfakes are no longer imaginary; they are a reality. Nicolas Canteloup’s show on TF1, for example, uses deepfakes every evening. The technology itself is not illegal; its use can be, when it falls into dishonest hands. And we must guard against any chance of the elections being rigged,” he warns. All the more so since the software needed to create a faked video is increasingly affordable, putting it within anyone’s reach. For now, though, this threat seems less credible than the one posed by deepvoices.

More credible deepvoices

Last October, the theft of $35 million brought “deepvoices” to light: the sum was stolen via a phone call in which an AI imitated the voice of a company director to authorize a transfer. Like faked videos, these tools rely on artificial intelligence and deep learning to mimic a voice after isolating audio recordings of the targeted person. Attackers can then type text into the software, and the AI reproduces the voice to deceive listeners. “By ear, you are completely unable to tell real from fake, because the machine operates at sound frequencies beyond what the human ear can perceive,” explains Dominique Ango.
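The frequency argument Ango makes can be illustrated with a toy experiment. The sketch below (purely illustrative, not Pindrop's actual method; the signals, cutoff frequency, and function name are all assumptions made up for this example) measures how much of a signal's spectral energy sits in a high band where synthesis artifacts might hide, invisible to the ear but visible to a machine:

```python
import numpy as np

def high_band_energy_ratio(signal, sample_rate, cutoff_hz=8000):
    """Fraction of a signal's spectral energy above cutoff_hz.

    Illustrative only: real deepfake-audio detectors use far richer
    features, but the principle of inspecting frequency bands the ear
    barely registers is the same one the article describes.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs >= cutoff_hz].sum() / total

# Made-up signals: a band-limited "natural" tone, and the same tone with
# a faint high-frequency component standing in for synthesis residue.
rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
natural = np.sin(2 * np.pi * 440 * t)
synthetic = natural + 0.05 * np.sin(2 * np.pi * 15000 * t)

print(round(high_band_energy_ratio(natural, rate), 4))    # → 0.0
print(round(high_band_energy_ratio(synthetic, rate), 4))  # → 0.0025
```

The added component is 26 dB quieter than the main tone and sits at 15 kHz, where adult hearing is weak, yet the ratio separates the two signals cleanly; a machine comparing such statistics against a speaker's known voice profile has information a listener simply does not.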

A technology that, once again, is perfectly legal, and that features notably in Val, a documentary about the actor Val Kilmer, who suffers from throat cancer. “Deepfake tools were used to make the actor speak in certain sequences with a synthetic voice,” notes Dominique Ango, whose company Pindrop was the only one able to pinpoint, to the second, the three sequences in the film produced this way. Such tools could be used to attribute remarks to a candidate and spread them on social networks. Pindrop also warns against other spoofing tools, some of them downloadable free from smartphone app stores, with which “it is possible to call someone while displaying another person’s number. By placing a call with this borrowed number, you can pass yourself off as someone else,” he warns.

Leaks highlighting personal data

At the same time, the 2022 elections could be a prime target for cyberattackers seeking personal information on the candidates, as well as on their relatives, their political advisers, and even their party members.

“As we saw previously in the United States, and even with the MacronLeaks during the 2017 French presidential election, the real risks come from third-party interference, such as rival countries using their intelligence services to gather information on candidates. It is hard to imagine this kind of scenario not occurring, since that is the very raison d’être of intelligence services. But it is above all the political parties that could be targeted, as some are thought to lack structures robust enough to fend off attacks,” says Ivan Kwiatkowski, senior researcher at cybersecurity specialist Kaspersky.
