Facebook has developed a model that can tell when a video is a deepfake – and can even identify which algorithm was used to create it.
The term “deepfake” refers to a video in which artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they have not.
Notable examples of deepfakes include a manipulated video of Richard Nixon delivering a contingency address about Apollo 11 and one of Barack Obama insulting Donald Trump – and although such videos are relatively benign now, experts suggest that deepfakes could become the most dangerous form of AI-enabled crime in the future.
Detecting a deepfake comes down to telling whether an image is real or not, but the information available to researchers for doing so can be limited – existing approaches rely on known input-output pairs from the generating model, or on hardware information that might not be available in the real world.
Facebook’s new process relies on detecting the unique patterns left behind by the generative model that produced a deepfake. The video or image is run through a network to detect ‘fingerprints’ – imperfections introduced when the deepfake was made, such as noisy pixels or asymmetrical features – which can then be used to estimate the model’s ‘hyperparameters’, the settings that define its architecture and training.
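To make that two-stage idea concrete, here is a minimal PyTorch sketch – not Facebook’s actual code. The layer sizes, the FingerprintEstimator and HyperparameterParser names, and the number of predicted hyperparameters are all illustrative assumptions; the sketch only shows the general shape of the pipeline: one network isolates a fingerprint residual from the image, and a second maps that fingerprint to hyperparameter predictions.

```python
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Image-to-image network that isolates the subtle residual
    ('fingerprint') a generative model leaves on its output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, image):
        # The fingerprint is modelled as a low-magnitude residual image.
        return self.net(image)

class HyperparameterParser(nn.Module):
    """Maps an estimated fingerprint to predictions about the generator's
    'hyperparameters' (counts below are hypothetical placeholders)."""
    def __init__(self, num_arch_params=15, num_loss_types=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.arch_head = nn.Linear(64, num_arch_params)  # e.g. network depth, block types
        self.loss_head = nn.Linear(64, num_loss_types)   # e.g. which losses trained the model

    def forward(self, fingerprint):
        features = self.encoder(fingerprint)
        return self.arch_head(features), self.loss_head(features)

# Usage: run a suspect image through both stages.
image = torch.randn(1, 3, 128, 128)  # stand-in for a suspect image
fingerprint = FingerprintEstimator()(image)
arch_pred, loss_pred = HyperparameterParser()(fingerprint)
```

Because the second stage predicts properties of the generator rather than a simple real/fake label, two deepfakes that yield similar hyperparameter predictions can be flagged as likely coming from the same model.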
Read more at the Facebook AI Blog: “Reverse engineering generative models from a single deepfake image” (ai.facebook.com):
“Our AI researchers have partnered with @MichiganStateU to develop a method for reverse engineering deepfakes to detect what model they came from and whether multiple deepfakes are potentially coming from the same model.”