AI Search Engines Are Failing News Citation—Here’s Why That Matters



Redacción HC
04/05/2025

As AI-powered search engines continue to transform how people access information, a growing number of users—nearly 1 in 4 Americans—are turning to these tools as primary sources of knowledge. Yet beneath their impressive capabilities lies a troubling flaw: they’re routinely failing to cite the original news articles they summarize or reference.

A recent benchmark study by Klaudia Jaźwińska and Aisvarya Chandrasekar of the Tow Center for Digital Journalism, published in the Columbia Journalism Review, reveals the extent of the problem. After evaluating eight of the most popular AI-integrated search engines, including ChatGPT Search, Perplexity, Grok, Copilot, and Gemini, the authors concluded that none of them reliably cites original news sources. In fact, accurate citation was the exception, not the rule.

This failure not only undermines the verifiability of information but also threatens the economic sustainability of journalism itself.

Generative AI and the News: A Dangerous Disconnect

The Rise of Generative Search Tools

AI search tools differ significantly from traditional engines like Google. Rather than listing sources, they often generate direct answers, integrating information into seamless narratives without clearly indicating where it came from.

“The issue isn’t just about accuracy,” the authors note, “it’s about traceability and transparency—fundamentals of credible journalism.”

When users are shown answers without original links, news publishers lose traffic, and readers lose the ability to verify or dig deeper.

A Closer Look at the Study

Methodology and Scope

The researchers evaluated eight AI search engines, testing them with multiple news-related prompts. The goal was to assess whether the tools could:

  • Identify when an article exists
  • Correctly cite the headline, publication, date, and URL
  • Avoid fabricated or misleading links

They also tracked whether tools acknowledged uncertainty or simply “hallucinated” responses when the correct information was unavailable.
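The criteria above can be sketched as a simple field-by-field citation scorer. This is only an illustrative reconstruction under stated assumptions, not the study's actual code: the `Citation` structure, the `score_citation` function, and the rule for flagging homepage-only links are hypothetical choices made for this example.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Citation:
    """Hypothetical record of what an AI tool returned for an article."""
    headline: str
    publication: str
    url: str

def score_citation(expected: Citation, returned: Citation) -> dict:
    """Compare a tool's citation against ground truth, one field at a time."""
    exp_url = urlparse(expected.url)
    ret_url = urlparse(returned.url)
    return {
        # Headline and publication match, ignoring case and stray whitespace.
        "headline_ok": returned.headline.strip().lower()
                       == expected.headline.strip().lower(),
        "publication_ok": returned.publication.strip().lower()
                          == expected.publication.strip().lower(),
        # A correct link must point at the original article's exact path;
        # linking elsewhere on the same site counts as a failure.
        "url_ok": ret_url.netloc == exp_url.netloc
                  and ret_url.path == exp_url.path,
        # Flag the homepage-link failure mode the study describes.
        "homepage_only": ret_url.netloc == exp_url.netloc
                         and ret_url.path in ("", "/"),
    }
```

For example, a response that reproduces the headline correctly but links to the publisher's homepage would score `headline_ok` as true while `url_ok` is false and `homepage_only` is true, matching the failure pattern the researchers tracked.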

What They Found: Citation Chaos

High Error Rates Across the Board

The results are alarming:

  • Over 60% of AI-generated search results failed to cite properly.
  • Many tools linked to homepage URLs or syndicated versions, rather than original articles.
  • Some engines, notably Grok, had error rates as high as 94%.
  • Even Perplexity, the top performer, had a 37% citation failure rate.

“AI search engines are not just unreliable—they’re confidently wrong,” the authors write.

This means readers could be presented with information that sounds credible but lacks any accessible proof.

Real-World Consequences for Journalism

A Threat to the Business Model

News organizations rely heavily on traffic-driven advertising revenue. When AI tools summarize their content without redirecting users to the original source, it siphons away visits, ad impressions, and subscriptions.

This could be especially damaging for regional and independent outlets, whose survival depends on consistent readership.

A Blow to Media Literacy

For users, the lack of traceable sources leads to a bigger issue: the erosion of verification. If readers can’t see or access original articles, how can they evaluate the credibility of what they’re being told?

“It’s a perfect storm for misinformation,” say the authors. “Generative AI answers, delivered with confidence, but detached from origin.”

What's Being Done—and What Needs to Change

Recommendations from the Authors

The report offers several key suggestions to tackle this problem:

  1. Build in source-awareness: AI systems should prioritize accurate citation and acknowledge when they lack access to the correct source.
  2. Revenue-sharing models: Tech companies should partner with news publishers to share traffic or revenue when AI outputs summarize their content.
  3. Transparent benchmarking: Tools should be regularly audited for citation accuracy by third-party researchers.
  4. Policy frameworks: Regulators may need to treat citation transparency as a form of digital accountability.

“If AI can’t cite its sources,” the report concludes, “it shouldn’t summarize news.”

Conclusion: Citation Is Not Optional

This study highlights a fundamental issue at the intersection of journalism, technology, and public trust. In a world where information is increasingly synthesized by machines, citation is more than a technical detail—it’s a moral and civic responsibility.

As readers, we must demand tools that respect journalistic labor and promote verifiable knowledge. And as developers and regulators, it’s time to build systems that don’t just inform—but inform responsibly.



Reference: Jaźwińska K, Chandrasekar A. AI Search Has A Citation Problem: We Compared Eight AI Search Engines. They’re All Bad at Citing News. Columbia Journalism Review [Internet]. 2025 Mar 6 [cited 2025 Jun 26]. Available from: https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php

License

Creative Commons license 4.0.
