Facebook AI on detecting Deepfake images and videos

Facebook has developed a model that can tell when a video is a deepfake – and can even identify which algorithm was used to create it.

The term “deepfake” refers to a video in which artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they have not.

Notable examples of deepfakes include a manipulated video of Richard Nixon delivering a presidential address about Apollo 11 that he never gave, and another of Barack Obama appearing to insult Donald Trump – and although such videos are relatively benign for now, experts suggest that deepfakes could become the most dangerous crime of the future.

Detecting a deepfake comes down to telling whether an image is real or not, but the information researchers have to work with can be limited – existing methods may rely on known input-output pairs from the generating model, or on hardware information that might not be available in the real world.

Facebook’s new process relies on detecting the unique patterns left behind by the artificially intelligent model that generated a deepfake. The video or image is run through a network that detects the ‘fingerprints’ left on the image – imperfections introduced when the deepfake was made, such as noisy pixels or asymmetrical features – which can then be used to estimate the generating model’s ‘hyperparameters’.
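
To make the idea more concrete, here is a minimal sketch (in PyTorch, not Facebook's actual code or architecture) of the two-stage setup described above: one network estimates a residual 'fingerprint' from a suspect image, and a second network maps that fingerprint to a vector of estimated hyperparameters. The layer sizes, the 16-dimensional hyperparameter vector, and the class names are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class FingerprintEstimator(nn.Module):
    """Image-to-image network that estimates a residual 'fingerprint' (sketch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # same shape as the input image
        )

    def forward(self, image):
        # The fingerprint is modelled as a low-magnitude residual over the image.
        return self.net(image)


class HyperparameterParser(nn.Module):
    """Maps an estimated fingerprint to a vector of predicted hyperparameters (sketch)."""
    def __init__(self, num_hyperparams=16):  # 16 is an illustrative assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_hyperparams)

    def forward(self, fingerprint):
        x = self.features(fingerprint).flatten(1)
        return self.head(x)


if __name__ == "__main__":
    image = torch.randn(1, 3, 128, 128)          # stand-in for a suspect image
    fingerprint = FingerprintEstimator()(image)  # pattern left behind by the generator
    hyperparams = HyperparameterParser()(fingerprint)
    print(fingerprint.shape, hyperparams.shape)  # (1, 3, 128, 128) and (1, 16)
```

In this sketch the predicted hyperparameter vector is what would let similar-looking deepfakes be traced back to the same generative model, since images produced by the same model should map to similar vectors.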

Read more at Facebook AI Blog:
 