Spain’s regional elections are still nearly four months away, but Irene Larraz and her team at Newtral are already braced for impact. Each morning, half of Larraz’s team at the Madrid-based media company sets a schedule of political speeches and debates, preparing to fact-check politicians’ statements. The other half, which debunks disinformation, scans the web for viral falsehoods and works to infiltrate groups spreading lies. Once the May elections are out of the way, a national election has to be called before the end of the year, which will likely prompt a rush of online falsehoods. “It’s going to be quite hard,” Larraz says. “We are already getting prepared.”
The proliferation of online misinformation and propaganda has meant an uphill battle for fact-checkers worldwide, who have to sift through and verify vast quantities of information during complex or fast-moving situations, such as the Russian invasion of Ukraine, the Covid-19 pandemic, or election campaigns. That task has become even harder with the advent of chatbots using large language models, such as OpenAI’s ChatGPT, which can produce natural-sounding text at the click of a button, essentially automating the production of misinformation.
Faced with this asymmetry, fact-checking organizations are having to build their own AI-driven tools to help automate and accelerate their work. It’s far from a complete solution, but fact-checkers hope these new tools will at least keep the gap between them and their adversaries from widening too fast, at a moment when social media companies are scaling back their own moderation operations.
“The race between fact-checkers and those they are checking on is an unequal one,” says Tim Gordon, cofounder of Best Practice AI, an artificial intelligence strategy and governance advisory firm, and a trustee of a UK fact-checking charity.
“Fact-checkers are often tiny organizations compared to those producing disinformation,” Gordon says. “And the scale of what generative AI can produce, and the pace at which it can do so, means that this race is only going to get harder.”
Newtral began developing its multilingual AI language model, ClaimHunter, in 2020, funded by the profits from its TV wing, which produces a show fact-checking politicians, and documentaries for HBO and Netflix.
Building on Google’s BERT language model, ClaimHunter’s developers trained the system on 10,000 statements to recognize sentences that appear to include declarations of fact, such as data, numbers, or comparisons. “We were teaching the machine to play the role of a fact-checker,” says Newtral’s chief technology officer, Rubén Míguez.
Simply identifying claims made by political figures and social media accounts that need to be checked is an arduous task. ClaimHunter automatically detects political claims made on Twitter, while another application transcribes video and audio coverage of politicians into text. Both identify and highlight statements that contain a claim relevant to public life that can be proved or disproved—as in, statements that aren’t ambiguous, questions, or opinions—and flag them to Newtral’s fact-checkers for review.
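Newtral’s actual system is a trained language model, but the underlying task it performs — scoring sentences as “checkable” when they assert something measurable, and skipping questions and opinions — can be illustrated with a deliberately simplified, rule-based sketch. Every heuristic and name below is hypothetical and stands in for the learned classifier, not for ClaimHunter’s real logic:

```python
import re

# Hypothetical heuristics standing in for a learned claim classifier:
# a sentence looks "checkable" if it asserts something measurable
# (a number or a comparison) and is not a question or hedged opinion.
NUMBER = re.compile(r"\b\d[\d.,%]*\b")
COMPARISON = re.compile(r"\b(more|less|fewer|higher|lower|doubled|halved)\b", re.I)
OPINION = re.compile(r"\b(i think|i believe|in my opinion|probably)\b", re.I)

def looks_checkable(sentence: str) -> bool:
    """Flag sentences containing data, numbers, or comparisons;
    skip questions and sentences with opinion markers."""
    if sentence.strip().endswith("?"):
        return False
    if OPINION.search(sentence):
        return False
    return bool(NUMBER.search(sentence) or COMPARISON.search(sentence))

statements = [
    "Unemployment fell by 3.2% last year.",               # data -> flag
    "I think the government is doing a great job.",       # opinion -> skip
    "Will taxes rise next year?",                         # question -> skip
    "Our region attracts more tourists than any other.",  # comparison -> flag
]

# Only the first and last statements would be queued for human review.
flagged = [s for s in statements if looks_checkable(s)]
```

A production system replaces these regexes with a fine-tuned transformer classifier, but the interface is the same: sentences go in, and only the plausibly verifiable ones are surfaced to human fact-checkers.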
The system isn’t perfect, and occasionally flags opinions as facts, but its mistakes help users continually retrain the algorithm. It has cut the time it takes to identify statements worth checking by 70 to 80 percent, Míguez says.