“AI Is Fundamentally Lazy”. But That Will Change.

Research presented at a recent industry event found that a third of AI-generated citations point to the same outdated sources, exposing artificial intelligence’s complacency and its reliance on familiar, ageing material. But as LLMs improve, this will not stay the same.

Artificial intelligence is transforming the way people find and evaluate information. The growing use of AI-generated answers, which deliver information directly without the need to consult multiple sources, is driving a “zero-click” experience. As a result, large language models such as ChatGPT, Claude, and Google AI are playing an increasingly prominent role in how the public accesses and interprets knowledge.

But at the heart of this shift lies a critical weakness: AI is fundamentally lazy.

Today’s AI comes with several limitations and a mainstream bias

Current large language models (LLMs) don’t meaningfully interrogate the world; instead, they aggregate and recycle whatever information they perceive to be at hand. Rather than tapping into the full diversity of global knowledge, AI systems return repeatedly to a narrow pool of “safe” sources.

At a recent GEO/SEO event, experts highlighted that earned media plays a vital role in shaping what AI learns. Yet the research revealed that AI is fundamentally lazy, drawing repeatedly from the same familiar sources: in an analysis of over one million citations gathered in just three months, a third of all references came from the same outdated materials.

This laziness manifests in several ways. For instance, AI places more weight on legacy media and on community-edited platforms such as Reddit and Wikipedia, where, as users will know, information is highly editable.

Legacy media outlets are not preferred for their originality or accuracy; they are valued for their longstanding credibility and perceived SEO advantage. The major downside is that when legacy outlets misframe or misreport, AI amplifies those errors at scale. Worryingly, even the editorial corrections routinely made on these sites are not always taken into account. AI has a one-track mind when it comes to information: the updated correction does not always overwrite the original outdated, or fundamentally erroneous, version.

It is generally understood that LLMs’ factual accuracy, even at its best, plateaus at around 85-90%, with ChatGPT, one of the most popular platforms, more likely than most to cite low-quality domains that appear to be favoured for their longevity rather than their accuracy.

Diversity of thought is not favoured by AI, and that’s a problem

AI-driven visibility is therefore tilted disproportionately towards established, mainstream players in the global knowledge economy. While authoritative platforms are seen to carry better credentials, the same dynamic builds in a bias against diversity of thought.

Independent, regional, and specialist voices risk invisibility, or worse, being phased out of public access altogether, unless instructions that counter this bias are built into LLM systems.

Why AI keeps defaulting to legacy media

AI’s return to legacy sources isn’t accidental. It reflects structural choices by platforms to err on the side of caution.

Choosing legacy media outlets is the safety-first option for answering the things people search for. Citing The Guardian or the BBC is less risky for AI providers than pulling from an unverified blog, even though this practice discounts local journalism titles, which are often the original sources of information later distributed widely (and not always accurately or in context) by larger outlets. This promotes lazy journalism, otherwise known as “churnalism”.

Moreover, established publishers have long mastered structured data, making their content easier to ingest and rank and lending it a so-called “aura of trust” in the eyes of unsuspecting LLMs. Together, these choices produce what we might call a legacy loop: a cycle in which authority is repeatedly reinforced rather than critically examined.
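
To make that concrete, here is a minimal sketch (the outlet name, headline, and dates are invented placeholders, not drawn from any real publisher) of the schema.org JSON-LD markup that established publishers routinely embed, and that makes their pages so easy for crawlers and LLM ingestion pipelines to parse:

```python
import json

# Illustrative schema.org "NewsArticle" markup of the kind legacy
# publishers embed in every page. All values are placeholders.
article_markup = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2025-01-10T09:00:00Z",
    "dateModified": "2025-01-11T14:30:00Z",  # editorial corrections land here
    "author": {"@type": "Person", "name": "A. Reporter"},
    "publisher": {"@type": "Organization", "name": "Example Times"},
}

# Rendered into the page's <head>, this hands crawlers and LLM
# ingestion pipelines clean, unambiguous metadata to rank and cite.
print('<script type="application/ld+json">')
print(json.dumps(article_markup, indent=2))
print("</script>")
```

Note the dateModified field: a pipeline that ingests a page once and never revisits it is precisely how an editorial correction fails to overwrite the original error.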

Watch out, though, for the next phase, as AI trades its languor for a more active, critical approach

The next generation of AI will be more active, more analytical, and more proprietary. We are already seeing signs that newer models will analyse video, audio, and imagery with the same ease as written content. AI will not just cite a YouTube speech; it will measure tone, surface inconsistencies, and compare the speech against past records.

Improved AI will also tend to draw on proprietary datasets. Licensed archives, first-party content, and subscription-based sources will supplement the open web, reducing reliance on legacy media. Instead of simply aggregating data, AI will test claims against records, history, and geospatial analysis.

By diversifying inputs, including regional and non-English-language outlets, next-generation models will shift the operative question from “who published this?” to “is this provable and contextualised?” That shift may finally break the legacy loop described earlier.

For newer media, the implications offer hope

As AI evolves, the rules of influence will change: depth and originality will matter more than legacy. Trust will be tested, not assumed.

Legacy outlets will no longer enjoy automatic preference; credibility will need to be demonstrated in real time. A podcast, webinar clip, or interactive dataset could carry as much weight as a newspaper column. Multimedia presence will become non-negotiable. AI is mining video and audio content for insight, not just text.

For instance, Microsoft’s Azure AI Video Indexer analyses media to extract insights such as spoken words, emotions, and visual elements, enabling users to index and search content efficiently. Similarly, Google Cloud’s Video Intelligence API can recognise objects, places, and actions in videos, generating rich metadata for content discovery and management. Both demonstrate how AI is moving beyond text to unlock meaning from multimedia.
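
As a rough sketch of what this looks like in practice, here is the kind of call a developer might make with Google’s official Python client (google-cloud-videointelligence); the bucket URI is a placeholder, and the feature selection is illustrative rather than exhaustive:

```python
# pip install google-cloud-videointelligence
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Request visual labels and a speech transcript in a single pass.
# "gs://example-bucket/interview.mp4" is a placeholder URI.
operation = client.annotate_video(
    request={
        "input_uri": "gs://example-bucket/interview.mp4",
        "features": [
            videointelligence.Feature.LABEL_DETECTION,
            videointelligence.Feature.SPEECH_TRANSCRIPTION,
        ],
        "video_context": videointelligence.VideoContext(
            speech_transcription_config=videointelligence.SpeechTranscriptionConfig(
                language_code="en-GB"
            )
        ),
    }
)
result = operation.result(timeout=600)  # long-running operation
annotations = result.annotation_results[0]

# Objects, places, and actions detected in the footage.
for label in annotations.segment_label_annotations:
    print("Label:", label.entity.description)

# The transcript a model could mine alongside the visuals.
for transcription in annotations.speech_transcriptions:
    for alternative in transcription.alternatives:
        print("Transcript:", alternative.transcript)
```

The vendor matters less than the principle: once objects, places, and transcripts become first-class metadata, a model can interrogate a clip rather than merely cite it.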

For communicators, this is both a risk and an opportunity. The risk lies in continuing to optimise for a lazy AI ecosystem that is already narrowing the flow of information. The opportunity lies in preparing now for the active AI era – investing in rich, multi-format, evidence-backed content that future models will actively seek out.

The AI revolution is not just about technology – it is about who gets heard, trusted, and amplified. Today’s AI may be lazy, but it won’t stay that way. Those who adapt early, creating layered, credible, multimedia-rich narratives, will no doubt define tomorrow’s information landscape.