As journalists and professional fact-checkers struggle to cope with the deluge of misinformation online, fact-checking sites that, like Wikipedia, rely on loosely coordinated contributions from volunteers can help fill the gaps, according to Cornell research.
In a new study, Andy Zhao, an information science doctoral student based at Cornell Tech, compared professional fact-checking articles with posts on Cofacts, a community fact-checking platform in Taiwan. He found that the crowdsourced site often responded to queries faster than the professional sites and covered a different range of issues.
“Fact-checking is a critical part of being able to use our information ecosystem in a way that supports reliable information,” said Mor Naaman, professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science. “Places of knowledge production, such as Wikipedia and Cofacts, have so far proven to be the most resistant to disinformation campaigns.”
The study, “Insights from a Comparative Study on the Variety, Velocity, Veracity, and Viability of Crowdsourced and Professional Fact-Checking Services,” was published September 21 in the Journal of Online Trust and Safety.
The researchers focused on Cofacts because it is a crowdsourced fact-checking model that had not been well studied. The Taiwanese government, civil society organizations, and the tech community created Cofacts in 2017 to address the challenges of both malicious and innocently spread misinformation, partly in response to the Chinese government’s efforts to use disinformation to push public opinion in Taiwan in a more pro-China direction. As on Wikipedia, anyone on Cofacts can act as an editor, posting responses, submitting questions, and voting answers up or down. Cofacts also has a bot that checks claims in a popular messaging app.
Starting with more than 60,000 crowdsourced fact-checks and 2,641 professional fact-checks, Zhao used natural language processing to match responses posted on Cofacts with articles addressing the same questions on two professional fact-checking sites. He looked at how quickly the sites published responses to queries, the accuracy and persuasiveness of the responses, and the range of topics covered.
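The article does not describe the matching step in detail. As a minimal sketch, one common way to pair texts across sources is to embed them with a multilingual sentence-embedding model and keep the most similar pairs; the model name, sample texts, and similarity threshold below are illustrative assumptions, not the study’s actual pipeline.

```python
# Hypothetical sketch: pairing crowdsourced replies with professional
# fact-check articles by sentence-embedding similarity. The model name,
# sample texts, and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

cofacts_replies = [
    "This message about a tax rebate is a known scam and has been debunked.",
    "The claim that the vaccine alters human DNA is false.",
]
professional_articles = [
    "Fact check: viral tax rebate message is a scam.",
    "No, COVID-19 vaccines do not change your DNA.",
]

# Encode both collections into dense vectors.
reply_vecs = model.encode(cofacts_replies, convert_to_tensor=True)
article_vecs = model.encode(professional_articles, convert_to_tensor=True)

# Cosine similarity between every reply and every article.
scores = util.cos_sim(reply_vecs, article_vecs)

# Keep the best-scoring article for each reply if it clears a threshold.
THRESHOLD = 0.6  # assumed cutoff; would need tuning in practice
for i, reply in enumerate(cofacts_replies):
    j = int(scores[i].argmax())
    if float(scores[i][j]) >= THRESHOLD:
        print(f"Reply {i} matched to article {j} (score={float(scores[i][j]):.2f})")
```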
He found that Cofacts users often responded more quickly than journalists, but mainly because they could “stand on the shoulders of giants” and reuse existing articles written by professionals. Cofacts thus acts as an information distributor. “They pass these stories across language, across the country or across time, to that specific moment to answer people’s questions,” Zhao said.
Importantly, Zhao found that Cofacts posts were just as accurate as those from professional sources. And according to seven graduate students of Taiwanese descent who served as evaluators, the journalists’ articles were more persuasive, but the Cofacts responses were often clearer.
Further analysis showed that the crowdsourced site covered a somewhat different range of topics than the professionals did. Cofacts posts were more likely to address recent, local issues, such as regional politics and small scams, while journalists were more likely to take on topics requiring expertise, including health claims and international affairs.
“We can harness the power of crowds to counter misinformation,” Zhao concluded. “Misinformation comes from everywhere, and we need this battle to be fought everywhere.”
The need for fact-checking will likely continue to grow. Although it is not yet clear how generative artificial intelligence (AI) models, such as ChatGPT or Midjourney, will affect the information landscape, Naaman and Zhao said AI programs that generate text and fake images could make it even easier to create and spread misinformation online.
However, despite Cofacts’ success in Taiwan, Zhao and Naaman caution that the same approach may not transfer directly to other countries. “Cofacts was built on Taiwan’s user habits, cultures, context, and political and social structures, and that’s how they succeeded,” Zhao said.
But understanding Cofacts’ success can help in designing other fact-checking systems, especially in non-English-speaking regions that have few or no fact-checking resources.
“Understanding how well this type of model works in different contexts could hopefully provide inspiration and guidelines for people wanting to run similar efforts in other places,” Naaman said.
The study received partial support from the National Science Foundation.
Patricia Waldron is an editor at the Cornell Ann S. Bowers College of Computing and Information Science.