Before there were deepfakes, there were cheapfakes; now there are both. Last week, social media was suddenly awash in videos, pushed by unscrupulous Republican accounts, that were edited to play up stereotypes about President Joe Biden’s age. Media outlets promoted the clips too: the New York Post claimed to have footage showing Biden wandering off in a daze during the G7 summit in Italy. In reality, Biden was congratulating a skydiver who had just landed but was not visible in the frame. A week later, the Post published another deceptively edited video claiming to show Biden frozen onstage at a fundraiser. The full video shows that this did not happen.
All forms of fakes are being used to influence elections both here in the U.S. and abroad. But while some rules and regulations are being developed to fight deepfakes, we may be less prepared to mitigate the risks of their easier-to-create and harder-to-detect cousins.
Deepfakes encompass images and videos that are created or modified almost entirely by AI-powered technologies. Cheapfakes are real images or videos that are simply misattributed or deceptively edited. Their power is in their simplicity.
As rumors swirled that former President Donald Trump was falling asleep at his criminal trial in New York City, photos circulated purportedly showing him asleep in the courtroom. Many of those images appeared to have been repurposed from a different setting. Although secondhand reports suggest that Trump did look sleepy at his trial, the photos were still deceptive, which highlights why cheapfakes can pose a larger challenge than deepfakes.
I have spent the past 25 years as an academic researcher developing techniques to detect all forms of deceptive content, from Photoshop manipulation to AI generation. I subjected one of the Trump courtroom photos to several forensic techniques to determine whether it was authentic. Each technique confidently classified the image as real. I realized it was likely a cheapfake only after an observant member of my team noticed that Trump’s chair didn’t match the shape or color of the chairs in contemporaneous photos we knew to have come from the courtroom.
The first challenge of cheapfakes is that they are more difficult to detect as obviously deceptive, since there are no misshapen hands or gravity-defying background objects, the telltale signs often found in deepfakes. The second challenge is that cheapfakes can be easier to create: in the case of the G7 footage of Biden, a simple crop can make it appear as if he is staring off into a void. And the third challenge is that social media platforms have what can, at best, be described as incoherent policies when it comes to these types of deceptive posts.
After a deceptive cheapfake video circulated on Facebook claiming to show Biden inappropriately touching his adult granddaughter, Facebook refused to take the video down, claiming it didn’t violate the company’s “manipulated media” policy, which placed limits on deepfakes but not cheapfakes.
In February of this year, Meta’s Oversight Board stated that this media policy “is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent,” and it suggested that Meta update its policies to apply consistently regardless of how deceptive content was created. Meta said that “it will respond publicly to their recommendations within 60 days in accordance with the bylaws,” but Mark Zuckerberg’s company is under no obligation to follow the board’s recommendations.