Let’s get one thing straight: if you’re looking for a simple checklist to spot AI-generated videos, you’re about five minutes too late. The era of AI Will Smith grotesquely eating spaghetti is over. We’ve rocketed past the uncanny valley and landed in a strange new territory where AI can spin up photorealistic videos that are convincing enough to dupe politicians, go viral on TikTok, and even threaten to drain your bank account. The game has changed, and our best defense isn’t a magic detection app—it’s a healthy dose of digital literacy and good old-fashioned critical thinking.
Key Takeaways
- Detection is now about literacy, not just artifacts. The obvious “tells” of AI generation, like mangled hands or weird blinking, are rapidly being fixed. The most reliable way to spot a fake is to develop “AI literacy”—a critical mindset that questions the context, source, and plausibility of a video.
- The threat is real and has high-stakes consequences. AI-generated fakes have moved beyond online pranks to become tools for political disinformation, as seen in a recent deepfake of Senator Amy Klobuchar, and are fueling a sharp rise in sophisticated financial fraud.
- Platforms and regulators are playing catch-up. While tech companies are developing invisible watermarks like Google’s SynthID and provenance standards like C2PA, their enforcement of anti-fake policies is inconsistent. That gap has lawmakers pushing for stricter regulations, like the TAKE IT DOWN Act. (A minimal sketch of checking a file for C2PA credentials follows this list.)
- Generative AI is a new weapon in global information warfare. Foreign adversaries are leveraging AI to create influence campaigns at an unprecedented scale, speed, and level of sophistication, making it exponentially harder for the average person to separate fact from fiction, as reported by Axios.
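For the technically curious, provenance checks are already something you can script. Here is a minimal Python sketch that shells out to c2patool, the open-source Content Credentials CLI from the Content Authenticity Initiative. It assumes the tool is installed, on your PATH, and prints the manifest store as JSON; treat it as an illustration of the workflow, not a verdict machine.

```python
# Sketch: look for C2PA Content Credentials in a downloaded file by
# shelling out to the open-source `c2patool` CLI (an assumption: that it
# is installed, on PATH, and prints the manifest store as JSON).
import json
import subprocess

def read_content_credentials(path):
    """Return the C2PA manifest as a dict, or None if none was found."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        # Typically means no manifest was found or the file is unsupported.
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None  # output format differs from what this sketch assumes

manifest = read_content_credentials("suspect_clip.mp4")
print("Content Credentials found" if manifest else "No provenance data")
```

Note the asymmetry: a valid manifest is meaningful positive evidence of origin, but a missing one tells you almost nothing, since most authentic footage carries no manifest and metadata is trivially stripped on re-upload.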
The Uncanny Valley Is Disappearing
Not long ago, you could spot an AI video from a mile away. There was a certain liquid, morphing quality to faces and objects that screamed “fake.” But as AI experts will tell you, those days are fading fast. “There’s no fundamental obstacle to getting higher quality data,” Siwei Lyu, a computer science professor at the University at Buffalo, SUNY, told Mashable.
Models like Google’s Veo 3 can now generate video and sound from a simple text prompt, creating convincing clips of everything from emotional support kangaroos to street interviews. The key to navigating this new reality, according to Northwestern University AI researcher Negar Kamali, is training your brain to sense when something feels off. “Even if I don’t find the artifact, I cannot say for sure that it’s real, and that’s what we want,” she says.
Your New Toolkit: Spotting a Fake in the Wild
So, if we can’t rely on glitchy artifacts anymore, what can we do? It helps to break down AI videos into two main categories: imposter videos that mimic real people and text-to-video creations that invent scenes from scratch.
Imposter Videos: The Deepfake Dilemma
This is the stuff that keeps politicians and celebrities up at night. Imposter videos use face-swapping or lip-syncing technology to make it look like someone said or did something they never did. The tech is getting alarmingly accessible, but there are still a few things to watch for.
UC Berkeley forensics expert Hany Farid suggests paying attention to the edges. “You typically see artifacts when the head moves obliquely to camera,” he explained to Mashable. Look for glitches or blurring around the face, especially when another object (like a hand) moves in front of it. Also, watch the body. If the person is unnaturally still, with their arms locked to their sides, it might be a fake. With lip-syncs, Lyu advises focusing on the mouth, where you might see “irregularly shaped teeth” or a “wobbling of the lower half” of the face, giving it a rubbery look.
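If you want to play with Farid’s edge-artifact idea yourself, here is a rough Python sketch that compares sharpness inside a detected face box against the band just around it, since a face-swap compositing seam can show up as a blur mismatch at that boundary. OpenCV is my choice of tool, not something from the article, and this is a toy heuristic for a single frame, not a deepfake detector.

```python
# Toy heuristic for the edge-artifact tell: compare sharpness inside a
# detected face box with the band just around it. A face-swap
# compositing seam can show up as a blur mismatch at that boundary.
import cv2

def sharpness(gray_region):
    # Variance of the Laplacian: a standard, crude focus/blur measure.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def face_edge_blur_ratios(frame_bgr):
    """Return inner/outer sharpness ratios for each detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    ratios = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        inner = gray[y:y + h, x:x + w]
        # Expand the box ~25% per side to capture the boundary band.
        y0, y1 = max(0, y - h // 4), min(gray.shape[0], y + h + h // 4)
        x0, x1 = max(0, x - w // 4), min(gray.shape[1], x + w + w // 4)
        outer = gray[y0:y1, x0:x1]
        ratios.append(sharpness(inner) / max(sharpness(outer), 1e-6))
    return ratios  # values far from 1.0 across many frames merit a closer look
```

Real forensic detectors are far more sophisticated than this, but the principle is the same: quantify a mismatch a human eye might only register as “something feels off.”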
Text-to-Video: The Surreal Stuff
While imposter videos are getting regulators’ attention, text-to-video generators like OpenAI’s Sora and Luma are exploding in popularity, fueling a wave of content that ranges from the absurd—like a cat acing an Olympic dive—to the dangerously misleading, such as fake videos of hurricane damage.
Here, the clues are less about technical flaws and more about common sense. Look for what experts call “sociocultural implausibilities”—think of the viral image of the Pope in a Balenciaga puffer jacket. It wasn’t technically flawed, but it was contextually bizarre. Also, watch the background for “temporal inconsistencies,” as Farid calls them. “The building added a story, or the car changed colors, things that are physically not possible,” he says. These errors often happen away from the main focus of the video.
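Farid’s “temporal inconsistency” tell can be roughed out in code, too. The sketch below (again OpenCV, my choice, not the experts’) samples frames a couple of seconds apart and scores how much the border of the frame changes, using the border as a crude stand-in for “the background, away from the main focus.” Camera motion and passersby score high as well, so spikes only tell you where to scrub frame by frame; they prove nothing on their own.

```python
# Crude probe for temporal inconsistencies: sample frames a couple of
# seconds apart and score how much the border of the frame (a rough
# stand-in for the background, away from the main subject) changes.
import cv2
import numpy as np

def background_change_scores(path, step_seconds=2.0, border_frac=0.2):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # guard against an FPS of 0
    step = max(1, int(fps * step_seconds))
    scores, prev, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diff = cv2.absdiff(gray, prev)
                h, w = diff.shape
                mask = np.ones((h, w), dtype=bool)
                # Mask out the centre; keep only the border band.
                mask[int(h * border_frac):h - int(h * border_frac),
                     int(w * border_frac):w - int(w * border_frac)] = False
                scores.append(float(diff[mask].mean()))
            prev = gray
        index += 1
    cap.release()
    return scores  # spikes mark sampled moments when the frame edges changed a lot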
When Fakes Get Real: Politics, Power, and Fraud
This isn’t just an academic exercise. In August 2025, Senator Amy Klobuchar found herself the subject of a vulgar deepfake video that spread on social media. In a New York Times op-ed, she described her frustration when X (formerly Twitter) refused to take the video down, suggesting instead that she try to get a “community note” added. By contrast, she noted, TikTok removed the video and Meta labeled it as AI-generated.
Klobuchar’s experience highlights the core of the problem: a regulatory gray zone where platforms inconsistently enforce their own rules. “Why should tech companies’ profits rule over our rights to our own images and voices?” she wrote. It’s this frustration that’s fueling a bipartisan push for laws like the TAKE IT DOWN Act, which aims to force companies to create processes for removing nonconsensual fakes.
The threat extends to your wallet, too. In the financial world, identity verification company Incode recently acquired rival AuthenticID to bolster its defenses against a 300% year-over-year increase in account-opening fraud, which it attributes directly to AI-generated deepfakes. It’s a clear signal that the private sector is scrambling to keep pace in a technical arms race against AI-powered criminals.
Why It Matters
The rise of convincing AI video is more than just a weird internet trend; it’s a fundamental shift in our information ecosystem with serious consequences.
The problem is now operating on a global scale. Foreign adversaries are using generative AI to supercharge their disinformation campaigns. According to a report from Vanderbilt University, a China-based tech company named GoLaxy is allegedly using AI to create highly adaptive synthetic personas to influence public opinion in Taiwan and Hong Kong. Gen. Paul Nakasone, former head of the NSA, warned that AI allows these operations to be delivered at a “speed and a scale that we’ve never seen before.”
This tidal wave of synthetic content is crashing into our institutions, from government to education. As generative AI tools become embedded in everything we do, organizations are being forced to develop responsible AI frameworks to manage the risk. According to education software executive Stephan Geering, the goal for institutions is to “distinguish between practical, impactful applications and those driven by hype,” focusing on low-risk uses while building trust and AI literacy among users.
Conclusion
Relying on a specific set of tells to spot AI videos is a losing battle. The artifacts we look for today will be patched by tomorrow’s models. The real, durable skill is building a habit of verification. As Professor Lyu says, “A deepfake only looks real from one angle.” When you see a video that raises an eyebrow, check other sources. See if reputable news outlets are reporting it. Do a reverse image search.
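That last step, the reverse image search, is easy to semi-automate. This small Python sketch pulls a still frame out of a suspect clip every few seconds so you can drop the images into Google Images, TinEye, or a similar service by hand; the filenames and sampling interval are arbitrary choices for illustration.

```python
# Pull a still frame every few seconds so the images can be run through
# a reverse image search by hand. Only the extraction is automated here.
import cv2

def extract_keyframes(path, every_seconds=3.0, out_prefix="frame"):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"{out_prefix}_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

print(extract_keyframes("suspect_clip.mp4"))
```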
Ultimately, our best defense against a flood of AI-generated content is a three-pronged approach: our own critical judgment, better technological guardrails from the industry, and sensible regulation that holds platforms accountable. The internet is getting weirder and more confusing, and trusting your gut—backed by a healthy dose of media literacy—is more crucial than ever.
Sources
- Mashable: How to identify AI-generated videos online
- USA Today: Sen. Amy Klobuchar said she was surprised when she heard her voice in a clip on X criticizing Sydney Sweeney’s American Eagle ad campaign…
- GovTech: Opinion: Cutting Through the Hype for GenAI in Higher Education
- The New York Times: What I Didn’t Say About Sydney Sweeney
- Axios: The new world of AI disinformation
- Fintech Futures: Incode bolsters AI fraud defence with AuthenticID acquisition