Deepfakes can make epic memes or put Nicolas Cage in every movie, but they can also undermine elections. As threats of election interference mount, two teams of AI researchers have recently introduced novel approaches to identifying deepfakes by watching for evidence of heartbeats.
Existing deepfake detection models rely on traditional media forensics methods, like tracking unnatural eyelid movements or distortions at the edge of the face; the first study on detecting unique GAN fingerprints appeared in 2018. Photoplethysmography (PPG) takes a different route: it translates subtle visual cues, such as the slight changes in skin color caused by blood flow, into a human heartbeat signal. Remote PPG applications are being explored in areas like health care, but PPG is also being used to identify deepfakes because generative models are not currently known to be able to mimic these blood-flow patterns.
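The core idea behind remote PPG can be sketched in a few lines: average a color channel over a face region frame by frame, then find the dominant frequency in the plausible heart-rate band. The function below is a minimal illustration under those assumptions (the name `estimate_heart_rate` and the frequency band are ours, not from either paper), demonstrated on a synthetic pulse trace rather than real video:

```python
import numpy as np

def estimate_heart_rate(green_means, fps, lo=0.7, hi=4.0):
    """Estimate pulse rate (BPM) from per-frame mean green-channel values.

    A minimal remote-PPG sketch: remove the DC component, take the FFT,
    and pick the dominant frequency within the plausible heart-rate band
    (lo..hi Hz, roughly 42-240 BPM). Real pipelines add face tracking,
    illumination normalization, and bandpass filtering.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()            # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)       # plausible pulse frequencies
    peak_freq = freqs[band][np.argmax(power[band])]
    return peak_freq * 60.0                    # Hz -> beats per minute

# Synthetic demo: a 1.2 Hz pulse (72 BPM) buried in noise, sampled at 30 fps.
np.random.seed(0)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(round(estimate_heart_rate(trace, fps)))  # ≈ 72
```

In a real detector the input trace would come from skin pixels in tracked face crops; the point here is only that blood flow leaves a periodic signal strong enough to recover with basic spectral analysis.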
In work released last week, Binghamton University and Intel researchers introduced AI that goes beyond deepfake detection to recognize which deepfake model made a doctored video. The researchers found that videos produced by deepfake models leave behind unique biological and generative noise signals — what they call "deepfake heartbeats." Their detection approach looks for residual biological signals from 32 different spots in a person's face, which the researchers call PPG cells.
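The PPG-cell idea — splitting a face into many small regions and extracting a signal from each — can be sketched as follows. This is an illustrative approximation, not the researchers' exact recipe: the function name, the 4×8 grid, and the use of the green channel are all assumptions; the paper assembles richer per-cell representations for a classifier.

```python
import numpy as np

def ppg_cells(face_frames, grid=(4, 8)):
    """Build a rough "PPG cell" matrix from aligned face crops.

    Split each face crop into grid regions (4 x 8 = 32, matching the
    32 face spots described above), average the green channel per
    region per frame, and stack the traces into a (regions x frames)
    matrix that a downstream classifier could consume.
    """
    rows, cols = grid
    n_frames, h, w, _ = face_frames.shape
    cells = np.zeros((rows * cols, n_frames))
    for t, frame in enumerate(face_frames):
        for r in range(rows):
            for c in range(cols):
                # Green-channel patch for region (r, c) of this frame.
                patch = frame[r*h//rows:(r+1)*h//rows,
                              c*w//cols:(c+1)*w//cols, 1]
                cells[r * cols + c, t] = patch.mean()
    return cells

# Demo on random frames: 64 frames of 64x64 RGB -> a 32 x 64 cell matrix.
frames = np.random.rand(64, 64, 64, 3)
print(ppg_cells(frames).shape)  # (32, 64)
```

Per-cell signals matter because different generators distort the spatial pattern of these residual signals differently, which is what lets the system attribute a video to a specific deepfake model rather than just flag it as fake.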