
Traditional fake news detection fails against AI-generated content


As generative AI produces increasingly convincing text, Dutch researchers are exploring how linguistic cues, model bias, and transparency tools can help detect fake news.
Large language models (LLMs) are capable of generating text that is grammatically flawless, stylistically convincing and semantically rich. While this technological leap has brought efficiency gains to journalism, education and business communication, it has also complicated the detection of misinformation. How do you identify fake news when even experts struggle to distinguish artificial intelligence (AI)-generated content from human-authored text?
This question was central to a recent symposium in Amsterdam on disinformation and LLMs, hosted by CWI, the research institute for mathematics and computer science in the Netherlands, and co-organised with Utrecht University and the University of Groningen. International researchers gathered to explore how misinformation is evolving and what new tools and approaches are needed to counter it.
Among the organisers was CWI researcher Davide Ceolin, whose work focuses on information quality, bias in AI models and the explainability of automated assessments. The warning signs that once helped identify misinformation – grammatical errors, awkward phrasing and linguistic inconsistencies – are rapidly becoming obsolete as AI-generated content becomes indistinguishable from human writing.
This evolution represents more than just a technical challenge. The World Economic Forum has identified misinformation as the most significant short-term risk globally for the second consecutive year, with the Netherlands ranking it among its top five concerns through 2027. The sophistication of AI-generated content is a key factor driving this heightened concern, presenting a fundamental challenge for organisations and individuals alike.
For years, Ceolin’s team developed tools and methods to identify fake news through linguistic and reputation patterns, detecting the telltale signs that characterised much of the early misinformation.
Their methods draw on natural language processing (NLP), with colleagues from the Vrije Universiteit Amsterdam; logical reasoning, with colleagues from the University of Milan; and human computation (crowdsourcing), with colleagues from the University of Udine, the University of Queensland and the Royal Melbourne Institute of Technology. Together, these techniques help identify suspicious pieces of text and check their veracity.
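The article does not publish the team’s tooling, but the linguistic-pattern approach it describes can be illustrated with a minimal sketch: a classifier trained on surface cues such as clickbait phrasing, all-caps words and runs of punctuation. This sketch assumes scikit-learn is available; the corpus, labels and feature choices are illustrative placeholders, not the CWI team’s actual data or method.

```python
# Toy linguistic-cue classifier: character n-gram TF-IDF features feed a
# logistic regression that scores how "suspicious" a text reads.
# The training snippets and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: 1 = misinformation-style text, 0 = reliable-style text
texts = [
    "SHOCKING!!! Doctors HATE this one weird trick, share before it's deleted",
    "You won't believe what they are hiding from you, wake up people!!!",
    "The central bank raised interest rates by 0.25 percentage points on Thursday.",
    "Researchers published the peer-reviewed study in a scientific journal.",
]
labels = [1, 1, 0, 0]

# Character n-grams capture the stylistic tells (exclamation runs,
# all-caps words, clickbait phrasing) that early detectors relied on.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Score an unseen snippet: closer to 1.0 means "reads like misinformation".
print(model.predict_proba(["BREAKING!!! They don't want you to know this"])[0][1])
```

Precisely because classifiers like this lean on surface style, grammatically flawless LLM output sidesteps them, which is the shift the symposium set out to address.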
Game changer

The game has fundamentally changed. “LLMs are starting to write more linguistically correct texts,” said Ceolin. “The credibility and factuality are not necessarily aligned – that’s the issue.”
