HC Editorial Staff
04/05/2025
As AI-powered search engines continue to transform how people access information, a growing number of users—nearly 1 in 4 Americans—are turning to these tools as primary sources of knowledge. Yet beneath their impressive capabilities lies a troubling flaw: they’re routinely failing to cite the original news articles they summarize or reference.
A recent benchmark study by Klaudia Jaźwińska and Aisvarya Chandrasekar of the Tow Center for Digital Journalism, published in the Columbia Journalism Review, reveals the extent of this problem. After evaluating eight of the most popular AI-integrated search engines, including ChatGPT Search, Perplexity, Grok, Copilot, and Gemini, the authors concluded that none of them reliably cite original news sources. In fact, accurate citation was the exception, not the rule.
This failure not only undermines the verifiability of information but also threatens the economic sustainability of journalism itself.
AI search tools differ significantly from traditional engines like Google. Rather than listing sources, they often generate direct answers, integrating information into seamless narratives without clearly indicating where it came from.
“The issue isn’t just about accuracy,” the authors note, “it’s about traceability and transparency—fundamentals of credible journalism.”
When users are shown answers without original links, news publishers lose traffic, and readers lose the ability to verify or dig deeper.
The researchers evaluated the eight AI search engines by feeding each one direct excerpts from published news articles. The goal was to assess whether the tools could identify the corresponding article's headline, its original publisher, its publication date, and its URL.
They also tracked whether tools acknowledged uncertainty or simply “hallucinated” responses when the correct information was unavailable.
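For readers who want a concrete picture of how such a benchmark works, below is a minimal sketch of the evaluation loop in Python. It is an illustration under stated assumptions, not the Tow Center's actual harness: query_chatbot is a hypothetical stand-in for each vendor's API, and the three grading labels simplify the study's finer-grained scale.

```python
from dataclasses import dataclass

# Ground truth for one excerpt: the article it was taken from.
@dataclass
class Attribution:
    headline: str
    publisher: str
    date: str
    url: str

def query_chatbot(excerpt: str) -> Attribution | None:
    """Hypothetical stand-in for calling one AI search tool.

    Returns the tool's claimed attribution, or None if the tool
    declines to answer. A real harness would call a vendor API here.
    """
    raise NotImplementedError

def grade(claim: Attribution | None, truth: Attribution) -> str:
    """Label one response, mirroring the study's distinction between
    wrong answers and honest refusals (labels simplified here)."""
    if claim is None:
        return "declined"  # the tool acknowledged uncertainty
    fields_right = sum([
        claim.headline == truth.headline,
        claim.publisher == truth.publisher,
        claim.url == truth.url,
    ])
    if fields_right == 3:
        return "correct"
    if fields_right > 0:
        return "partially correct"
    return "incorrect"  # confidently wrong / hallucinated

def run_benchmark(samples: list[tuple[str, Attribution]]) -> dict[str, int]:
    """Tally grades across all excerpt/ground-truth pairs."""
    tally: dict[str, int] = {}
    for excerpt, truth in samples:
        label = grade(query_chatbot(excerpt), truth)
        tally[label] = tally.get(label, 0) + 1
    return tally
```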
The results are alarming: collectively, the chatbots answered more than 60 percent of the queries incorrectly, and most delivered plausible-sounding wrong answers rather than declining to respond. Premium models fared no better, tending to give more confidently worded, and still incorrect, responses than their free counterparts.
“AI search engines are not just unreliable—they’re confidently wrong,” the authors write.
This means readers could be presented with information that sounds credible but lacks any accessible proof.
News organizations rely heavily on traffic-driven advertising revenue. When AI tools summarize their content without redirecting users to the original source, it siphons away visits, ad impressions, and subscriptions.
This could be especially damaging for regional and independent outlets, whose survival depends on consistent readership.
For users, the lack of traceable sources leads to a bigger issue: the erosion of verification. If readers can't see or access original articles, how can they evaluate the credibility of what they’re being told?
“It’s a perfect storm for misinformation,” say the authors. “Generative AI answers, delivered with confidence, but detached from origin.”
The report's prescription for tackling this problem is blunt:
“If AI can’t cite its sources,” the report concludes, “it shouldn’t summarize news.”
This study highlights a fundamental issue at the intersection of journalism, technology, and public trust. In a world where information is increasingly synthesized by machines, citation is more than a technical detail—it’s a moral and civic responsibility.
As readers, we must demand tools that respect journalistic labor and promote verifiable knowledge. And developers and regulators must build systems that don't just inform, but inform responsibly.
Reference: Jaźwińska K, Chandrasekar A. AI Search Has A Citation Problem: We Compared Eight AI Search Engines. They're All Bad at Citing News. Columbia Journalism Review [Internet]. 2025 Mar 6 [cited 2025 Jun 26]. Available from: https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php