Deepfake AI Systems

Fake News

AI is enabling increasingly realistic photo, audio, and video forgeries, or “deep fakes,” that adversaries could deploy as part of their information operations. Indeed, deep fake technology could be used against the United States and U.S. allies to generate false news reports, influence public discourse, erode public trust, and attempt to blackmail diplomats. Although most previous deep fakes have been detectable by experts, the sophistication of the technology is progressing to the point that it may soon be capable of fooling forensic analysis tools. To combat deep fake technologies, DARPA has launched the Media Forensics (MediFor) project, which seeks to “automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media.” MediFor has developed some initial tools for identifying AI-produced forgeries, but as one analyst has noted, “a key problem … is that machine-learning systems can be trained to outmaneuver forensics tools.” For this reason, DARPA plans to host follow-on contests to ensure that forensic tools keep pace with deep fake technologies.
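The cat-and-mouse dynamic described above, in which a forgery system can be retrained to evade a fixed detector, forcing the detector to be retrained in turn, can be illustrated with a toy sketch. The scalar “artifact score,” the sample means, and the threshold detector below are all illustrative assumptions, not part of MediFor or any real forensic tool:

```python
import random

random.seed(0)

def make_samples(mean, n=200, spread=0.05):
    # Synthetic 1-D "artifact scores": assume real media score low, fakes high.
    return [random.gauss(mean, spread) for _ in range(n)]

def fit_threshold(real, fake):
    # Minimal "forensic detector": a threshold at the midpoint of class means.
    return (sum(real) / len(real) + sum(fake) / len(fake)) / 2

def detection_rate(fake, threshold):
    # Fraction of fakes the detector flags (score above threshold).
    return sum(s > threshold for s in fake) / len(fake)

real = make_samples(0.2)
fake = make_samples(0.8)

# Detector trained against the current generation of fakes: near-perfect.
t1 = fit_threshold(real, fake)
print(detection_rate(fake, t1))      # 1.0

# The forger adapts, optimizing its output to slip just under the
# now-fixed threshold -- the "trained to outmaneuver forensics" problem.
adapted = [min(s, t1 - 0.01) for s in fake]
print(detection_rate(adapted, t1))   # 0.0: detector fooled

# The follow-on response: retrain the detector on the adapted fakes.
t2 = fit_threshold(real, adapted)
print(detection_rate(adapted, t2))   # detection recovers
```

Real detectors and generators operate on high-dimensional media rather than one scalar, but the structure is the same: any static forensic model becomes a training target, which is why DARPA anticipates recurring contests rather than a one-time fix.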

Artificial Intelligence and National Security
