When AIs Start Quoting Each Other: What’s Really Happening Behind the Screen

Ethan Collins


If you thought artificial intelligence was content with trawling through the endless archives of human knowledge, think again. The latest version of ChatGPT, OpenAI's headline-making AI, has been spotted citing information not from Wikipedia, but from its high-profile competitor: Elon Musk's encyclopedia, Grokipedia. The catch? Grokipedia isn't written—or even edited—by humans, but is instead an encyclopedia built entirely by another AI. Welcome to the slightly dizzying future, where AIs reference the output of their algorithmic cousins. The circle of (artificial) life, if you will.

The Anatomy of a Modern Information Chain

According to reporting by The Guardian, ChatGPT version 5.2 cited Grokipedia in response to more than a dozen international geopolitics questions; in nine of those interactions, Grokipedia was the source. For anyone with a passing interest in truth, accuracy, or simply not tripping over wildly unreliable information, this trend sets off some seriously loud alarm bells. The reason is simple enough: OpenAI's chatbot is drawing not from collective human curation, but from a contested AI-generated encyclopedia whose reliability is already under question.

  • Wikipedia, founded in 2001, is a volunteer-driven, collaborative effort, open for editing and improvement by anyone online. Its strengths and flaws stem from its vast, crowd-sourced roots.
  • Grokipedia, on the other hand, is the brainchild of Elon Musk—a project where only AI gets to write or update pages. No human tampering allowed. If you want to edit Grokipedia, you’ll have to convince an algorithm, not a moderator.

Should we worry? Some would say so. For years, Elon Musk has publicly attacked Wikipedia, accusing it of being “controlled by far-left activists” and urging people to stop supporting it financially. He’s also made similar claims about OpenAI. Meanwhile, Wikipedia’s own founder, Jimmy Wales, called these allegations “factually incorrect,” while conceding that even Wikipedia can always be improved.

Disputed Sources, Disputed Truths

The deeper you look into Grokipedia, the murkier it gets. The encyclopedia is crammed with thousands of what researchers have labeled “questionable” and “problematic” sources. U.S. experts, especially Harold Triedman and Alexios Mantzarlis of Cornell Tech, have raised red flags over Grokipedia’s inability to safeguard against unreliable information on crucial topics like Iranian politics or Holocaust denial. “It’s clear that source safeguards have been widely bypassed on Grokipedia,” the Cornell Tech team concluded, expressing their concern about the predominance of such problematic material.

Interestingly, ChatGPT doesn't always default to Grokipedia. According to The Guardian, it references Grokipedia only on select topics, with no transparency about how sources are chosen. For instance, in conversations about possible links between telecom giant MTN and the Iranian government, Grokipedia pops up. But when ChatGPT is prompted to repeat blatantly false claims—about events like various insurrections, media bias toward Donald Trump, or the history of the HIV/AIDS epidemic, areas where Grokipedia had previously been flagged for misinformation—it pointedly does not cite Grokipedia as a source.

What’s at Stake When AIs Become Each Other’s Librarians?

The growing use of AI-generated sources by other AIs stirs up a host of practical and philosophical worries. How do we ensure factual accuracy when algorithms start echoing each other? Without traditional forms of human oversight—collaborative editing, debate, correction—are we constructing echo chambers made of code?

Even more, these shifting practices are already sparking changes in how major players operate. OpenAI and emerging startups are developing niche services like ChatGPT Health, but these are arriving against the chorus of warnings from health professionals about unreliable, potentially dangerous replies.

  • Questions about the psychological impact of AI interactions are also surfacing, as OpenAI faces scrutiny over tragic incidents involving young users.
  • Meanwhile, the arms race continues, with Google, Anthropic, xAI, Meta, and leading Chinese firms all chasing OpenAI’s innovations.

It’s not all doom and gloom, of course. Some thinkers urge us to step back and consider that AI isn’t just a mindless parrot repeating data. As Aymeric Roucher and Thibaut Giraud argue, understanding AI’s strengths and limitations calls for critical, even creative, engagement. The real challenge? Ensuring the machines don’t just cite each other until all nuance vanishes into the algorithmic ether.

The takeaway: As the lines blur between man-made and machine-spun knowledge, it’s more crucial than ever to keep asking the tough questions—about our sources, our safeguards, and about who (or what) is really holding the keys to the world’s information.

