How to Spot AI-Generated Videos: The Surprising Number One Warning Sign, According to Experts

Ethan Collins


Your social media feed is drowning in low-quality, AI-generated videos these days. But what if there were a simple clue to help you spot them? Here’s what you need to know—before those rabbit-trampoline videos fool you again.

The Grainy Trap: When Bad Video Is a Red Flag

It’s happened. You’ve probably already fallen for it at least once (no shame, we’ve all been there). Over the past six months, AI video generators have become so advanced that our sense of what’s real, at least on camera, is on shaky ground. In fact, here’s the best-case scenario: you get duped again and again, until you’re so exasperated that you start questioning absolutely everything you see. Welcome to the future!

But for now, there are still a few warning signs you can watch out for. One in particular stands out: if you see a video with grainy, blurry, low-quality images, be careful—you might just be looking at artificial intelligence in action.

“That’s one of the first things we look for,” says Hany Farid, a computer science professor at the University of California, Berkeley, a pioneer in digital forensics, and founder of the deepfake detection company GetReal Security.

Of course, it’s only a matter of time before AI video tools improve and this handy tip goes the way of the VCR. It could be months; it could be years. Who knows! Sorry. But take a minute to explore the nuances of this idea, and it might just save you (at least for now) from the clutches of less sophisticated AIs, even as your very perception of truth is being remade.

Blurry Proof? Not Exactly

Let’s be clear: this is not definitive evidence. AI-generated videos aren’t automatically more likely to be low-quality. The best AI tools can create gorgeous, well-produced clips. Likewise, grainy, compressed clips aren’t always the handiwork of AI.

“If you see something that’s really poor quality, that doesn’t mean it’s fake. That doesn’t mean there’s anything malicious going on,” explains Matthew Stamm, a professor and director of the Multimedia and Information Security Lab at Drexel University.

Still, right now, it’s those blurry, pixelated videos created by AI that are most likely to fool you. It’s a sign you should look a little more closely.

“The leading text-to-video generators, like Google’s Veo or OpenAI’s Sora, still show slight inconsistencies,” says Farid. “But we’re not talking about six fingers or unreadable text. It’s more subtle than that.”

Even today’s most advanced AI models regularly struggle with issues like eerily smooth skin textures, odd or changing patterns in hair or clothing, or background objects that drift around in ways that make physics teachers weep. These flaws can be easy to miss, but the sharper the image, the easier it is to spot them.

And that’s what makes low-quality video so sneaky. When you ask an AI to generate a video that looks like it was filmed on an old phone or a cheap security camera, the blur can help mask the tell-tale glitches that would otherwise raise eyebrows.

The Viral Hall of Fame: AI Fakes That Fooled Millions

In recent months, several widely viewed AI videos have taken the internet by storm, fooling countless viewers. Funnily enough, they all shared something in common. There’s the fake (but adorable) video of wild rabbits bouncing on a trampoline that raked in over 240 million views on TikTok. Millions more ‘liked’ a romantically staged video of two people falling in love on the New York City subway, only to be let down when it turned out to be a hoax. And yes, I’ll admit it: I was completely taken in by a viral video supposedly showing an American preacher in a conservative church delivering a surprisingly left-wing sermon.

“Billionaires are the only minority we should be afraid of,” he declares with a Southern accent. “They have the power to destroy this country!”

I was stunned. Had our political boundaries really gotten that blurry? Nope. Just another day at the office for AI.

Each of these videos looked like it was shot on an ancient cell phone. The AI bunnies? Presented as low-light security camera footage. The subway couple? Pixelated as heck. The imaginary preacher? The shot looked like someone had zoomed in one too many times. And, as it turns out, these clips also contained other subtle AI tells hiding in plain sight.

The Three Essential Clues: Resolution, Quality, and Duration

“The three key criteria are resolution, quality, and duration,” explains Farid.

Duration is the easiest to check.

“Most AI-generated videos are very short—even shorter than typical TikTok or Instagram videos, which tend to run 30 to 60 seconds. Almost all the videos I’m asked to inspect run six, eight, or ten seconds.”

Why? Generating video with AI is still expensive (in computing power and sometimes actual cash), so most tools are limited to bite-sized clips. The longer the video, the higher the chance AI will slip up.

“You can stitch together several AI-generated videos, but there’s usually a cut every eight seconds or so.”

Resolution and quality are related but not identical. Resolution refers to the number of pixels in an image. Quality, meanwhile, largely comes down to compression, a process that reduces file size by discarding detail and often leaves a picture with those tell-tale pixel blocks and blurry outlines.

In practice, Farid says, low quality makes fakes so much more convincing that bad actors degrade their videos on purpose.

“If I want to fool someone, here’s my recipe: I generate my fake video, then lower the resolution so it’s just clear enough to make out, but too fuzzy for people to spot all the little details. Next, I add another layer of compression to hide any artifacts that might give me away,” Farid explains. “It’s a common technique.”
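
For the technically curious, here is roughly what that recipe looks like when applied to a single frame. This is a minimal sketch using Python’s Pillow library; the file names and the exact downscale factor are illustrative assumptions, not details from Farid (a real pipeline would process every frame, or use a video tool such as ffmpeg).

```python
from PIL import Image

# A sketch of the degradation recipe: downscale, then recompress.
frame = Image.open("generated_frame.png")  # hypothetical frame exported from an AI video tool

# Step 1: drop the resolution so fine details (skin texture, hair, small text) blur away,
# then scale back up so the clip still fills the screen like old phone footage.
w, h = frame.size
degraded = frame.resize((w // 4, h // 4), Image.LANCZOS).resize((w, h), Image.LANCZOS)

# Step 2: add a heavy layer of lossy compression, burying any remaining AI artifacts
# under ordinary-looking JPEG block noise.
degraded.save("degraded_frame.jpg", format="JPEG", quality=15)
```

The point isn’t the specific numbers; it’s that two everyday operations, downscaling and recompression, are enough to wipe out exactly the subtle flaws a sharp-eyed viewer would otherwise catch.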

But here’s the catch: as you read this, tech giants are pouring billions of dollars into making AI-generated videos even more realistic.

“I have bad news. If these visual clues exist now, they’re not going to be there for much longer,” says Stamm. “I think these signs will disappear from video within two years—at least the obvious ones—because they’ve almost entirely vanished from AI-generated images already. You can’t trust your eyes anymore.”

Can the Truth Keep Up With AI?

This doesn’t mean all hope is lost. When researchers like Farid and Stamm check suspicious content, they use advanced techniques to spot the difference.

“Generating or editing a video leaves imperceptible statistical traces, like fingerprints at a crime scene,” says Stamm. “We’re seeing new techniques evolve that can detect and reveal those traces.”

For example, the arrangement of pixels in a fake video sometimes differs from that in an authentic one, but these clues on their own aren’t definitive proof.

Tech companies are also working on new digital authentication standards. In theory, cameras could add hidden data at the time of recording to certify an image’s authenticity. Likewise, AI tools could embed similar info in their videos and images to flag them as synthetic. According to Stamm and other experts, such initiatives could help.

Yet, the real answer may lie elsewhere. Mike Caulfield, a digital literacy expert, puts it like this: hunting for AI clues in videos isn’t a winning strategy, since the clues keep shifting. Instead, he says, we need to let go of the idea that videos or images mean anything on their own, without context.

“In my view, video will, over time, become more and more like text, where provenance [the origin of the video], not surface features, is crucial, and we’d better get prepared for that,” Caulfield says.

We never just read text and assume it’s true because, well, it’s written. If in doubt, we always cross-check the source. Videos and images used to be different—harder to fake, harder to manipulate. Not anymore. Now, what matters is where a piece of content came from, who made it, the context, and reliable verification. The real question is when (or if) we’ll all learn to think that way.

“If I may stress the point,” Stamm adds, “I believe this is the greatest information security challenge of the 21st century. But it’s a recent problem. The number of people working to solve it is still small, but growing fast. We’re going to need a mix of solutions, with training, smart policy, and technology all working together. I’m not ready to give up hope.”

Thomas Germain is a technology reporter for the BBC. He has covered artificial intelligence, privacy, and the internet’s weirder corners for almost a decade. You can find him on X and TikTok: @thomasgermain.
