US Congress Authorises $5 Mln Prize Competition to Help Pentagon Tackle Deepfakes

The US Congress has approved an annual defence policy bill authorising up to $5 million in cash prizes for a competition to find a way to automatically detect deepfake technology, which allows fabricated images, sounds and videos to pass as real ones, the defence and intelligence news website C4ISRNET reports, as cited by Sputnik.

The bill, which has already been signed into law by President Trump, stipulates that the Intelligence Advanced Research Projects Activity (IARPA), an organisation within the Office of the US Director of National Intelligence, initiate the competition “as a way to stimulate the research, development, or commercialisation of technologies” helping to grapple with deepfakes, according to C4ISRNET.

Additionally, the document obliges the Director of National Intelligence to issue a report on possible national security repercussions of deepfakes, including those stemming from foreign governments’ capabilities in this area.

On top of that, the law urges IARPA’s director to notify Congress about any credible attempt by a foreign entity to deploy deepfakes in a bid to meddle in US elections.

The law’s signing comes after Pentagon Joint Artificial Intelligence Centre director Lt. Gen. Jack Shanahan pointed to the potential “national security risk” posed by deepfakes, calling for more resources to be given to the US military to tackle the problem.

“As a department, at least speaking for the Defence Department, we’re saying it’s a national security problem as well. We have to invest a lot in it. A lot of commercial companies are doing these every day. The level of sophistication seems to be exponential,” he told a conference dedicated to AI at the Johns Hopkins Applied Physics Laboratory in September.

Shanahan pointed to the Defence Advanced Research Projects Agency (DARPA)’s Media Forensics programme as one way the military is already tackling the issue.

“It’s coming up with ways to tag and call out disinformation,” he said. Once completed, the DARPA project is expected to allow the military to detect manipulation of images and video and even determine how they were created.

Deepfakes first came to prominence in 2017, raising grave concerns that the manipulations could be used to create fake news and fabricated videos of politicians or celebrities, as well as other malicious content.

In October, a study conducted by the cybersecurity company Deeptrace revealed that about 96 percent of deepfakes being circulated online are pornographic. Even so, many are concerned that this new technology may affect other areas, including the 2020 US presidential election.
