Srijan Kumar and Munmun De Choudhury

Professors Seek to Provide Fact-checkers with New Tool to Identify Misinformation

Professional fact checkers will soon be armed with a new tool to help them combat the false information that spreads across social media platforms.

Course Correct is a tool that uses a computational approach to help journalists and fact checkers identify misinformation online, removing much of the time-consuming manual labor associated with fact checking.

College of Computing faculty members Srijan Kumar and Munmun De Choudhury are working with researchers from the University of Wisconsin-Madison’s School of Journalism and Mass Communication to develop the new tool.

With the sheer volume of misinformation that exists online, there aren’t enough fact checkers to go around, and the process of finding where the misinformation originated and strategically countering it with correct information is lengthy.

“We want to create a system that can identify misinformation that is already trending, falsehoods that will trend in the future, and also what is emerging because you want to nip them in the bud,” said Kumar, an assistant professor with the School of Computational Science and Engineering.
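The article does not describe how Course Correct actually distinguishes trending, soon-to-trend, and emerging falsehoods. Purely as an illustrative sketch, and assuming hypothetical per-claim mention counts that the article never specifies, one simple heuristic would compare a claim's current volume against its historical baseline:

```python
# Illustrative sketch only: Course Correct's real algorithms are not described
# in the article. This toy example flags claims as "trending" or "emerging"
# from hourly mention counts. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class ClaimStats:
    claim: str
    mentions_last_hour: int
    avg_mentions_per_hour: float  # historical baseline for this claim

def classify_claim(stats: ClaimStats,
                   trend_ratio: float = 3.0,
                   emerging_floor: int = 20) -> str:
    """Label a claim by how its current volume compares to its baseline."""
    if stats.avg_mentions_per_hour == 0:
        # No history at all: anything with real volume is worth a look.
        return "emerging" if stats.mentions_last_hour >= emerging_floor else "quiet"
    growth = stats.mentions_last_hour / stats.avg_mentions_per_hour
    if growth >= trend_ratio and stats.mentions_last_hour >= emerging_floor:
        return "trending"
    if growth >= 1.5:
        return "rising"  # may trend soon; worth early attention
    return "quiet"

if __name__ == "__main__":
    sample = ClaimStats("example claim", mentions_last_hour=240, avg_mentions_per_hour=30.0)
    print(classify_claim(sample))  # -> "trending"
```

A production system would draw on far richer signals, but the comparison against a baseline captures the idea of catching falsehoods before they peak.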

After receiving a $750,000 phase one award from the National Science Foundation’s Convergence Accelerator program, the team of researchers got the green light in September for phase two and an additional $5 million from the NSF.

Kumar and De Choudhury received $200,000 of the phase one funding to examine how misinformation is fact checked and to study the obstacles that prevent accurate and efficient correction while earning public trust. They are now receiving $1.35 million of the $5 million in phase two funding to build Course Correct and test it in the real world.

De Choudhury said algorithms that quickly identify trending misinformation will let fact checkers do their jobs with confidence, which is essential given how much impact misinformation about issues like elections and the Covid-19 pandemic can have on society.

“One of the takeaways from phase one of this award is that we need to empower the fact checkers better, and that empowerment can come through technology and computational approaches,” said De Choudhury, an associate professor in the School of Interactive Computing. “If we can somehow automate the process of detecting what could potentially be misinformation, that information could be passed on to the fact checker to use their expert judgment for the next step.”
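De Choudhury is describing a human-in-the-loop workflow rather than full automation. A minimal sketch of that idea, using a stand-in scoring function rather than anything Course Correct actually implements, might queue the most suspicious posts for a fact checker's expert judgment:

```python
# Hedged sketch of a human-in-the-loop triage workflow: an automated scorer
# flags posts that *could* be misinformation, and the highest-scoring ones are
# queued for a professional fact checker. The scorer below is a placeholder.

from typing import Callable, List, Tuple

def triage(posts: List[str],
           score_fn: Callable[[str], float],
           review_budget: int = 10,
           threshold: float = 0.5) -> List[Tuple[str, float]]:
    """Return the top posts (by suspicion score) for human review."""
    scored = [(post, score_fn(post)) for post in posts]
    flagged = [item for item in scored if item[1] >= threshold]
    flagged.sort(key=lambda item: item[1], reverse=True)
    return flagged[:review_budget]  # humans make the final call on these

def toy_score(post: str) -> float:
    """Placeholder scorer: a real system would use a trained classifier."""
    sensational = ("shocking", "they don't want you to know", "cover-up")
    return min(1.0, sum(word in post.lower() for word in sensational) * 0.6)

if __name__ == "__main__":
    queue = triage(["Shocking cover-up revealed!", "City council meets Tuesday."], toy_score)
    for post, score in queue:
        print(f"{score:.2f}  {post}")
```

The point of the design is the hand-off: the algorithm narrows the stream, and the fact checker applies expert judgment to what surfaces.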

Kumar said the team interviewed a pool of professional fact checkers about the biggest hurdles they face.

“The goal of this project was to understand, ‘What are the pain points of these professional fact checkers who essentially form the first line of defense against misinformation?’” he said. “We found that some of the biggest pain points for them are how to identify misinformation in real time and how to effectively deliver corrections.”

Identifying misinformation is just the first step. Time is of the essence, and fact checkers need a tool that will help them slow the spread of misinformation and minimize the damage.

“Speed is also an important one here, which is that it’s not just a lot of information, but it spreads fast,” De Choudhury said. “Oftentimes, misinformation is written in a way to overly sensationalize things and to create feelings of anger, hatred, or trauma. Those things spread even more quickly than credible information online.”

One way to keep pace is to have a system that identifies the social media accounts that frequently fuel and spread misinformation, as well as the innumerable accounts that consume and regurgitate it.

“Which actors are spreading this misinformation? Who are the ones consuming it? We want to create a taxonomy of the different types of consumers of misinformation and the spreaders of the misinformation and so on, so that fact checkers can take appropriate measures,” Kumar said.
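The taxonomy Kumar mentions is not detailed in the article. As an assumption-laden illustration, accounts could be sorted into rough roles from two hypothetical counts, how often an account authors flagged posts versus how often it merely reshares them:

```python
# Illustrative only: the team wants a taxonomy of misinformation spreaders and
# consumers, but the article does not specify one. This toy sketch assigns
# rough roles from hypothetical per-account counts.

from dataclasses import dataclass

@dataclass
class AccountActivity:
    handle: str
    flagged_posts_authored: int   # flagged posts the account wrote itself
    flagged_posts_reshared: int   # flagged posts it passed along

def assign_role(acct: AccountActivity) -> str:
    """Very rough role labels for triage; a real taxonomy would be richer."""
    if acct.flagged_posts_authored >= 5:
        return "frequent originator"
    if acct.flagged_posts_reshared >= 20:
        return "amplifier"
    if acct.flagged_posts_reshared > 0 or acct.flagged_posts_authored > 0:
        return "occasional consumer/sharer"
    return "uninvolved"

if __name__ == "__main__":
    print(assign_role(AccountActivity("@example", 0, 35)))  # -> "amplifier"
```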

But Kumar added that it can often be challenging to identify frequent perpetrators because many of them are aware of existing artificial intelligence tools and work to evade them.

Kumar and De Choudhury said they envision a tool that works universally across social media platforms, but the first version of Course Correct will focus specifically on Twitter. De Choudhury said they will have to be adaptable with the first version since Twitter recently underwent a change in ownership. Within a week of Elon Musk assuming ownership of the platform, a conspiracy theory regarding Paul Pelosi, the husband of Speaker of the House Nancy Pelosi, spread unchecked through conservative circles after he was attacked in his home.

“After the acquisition of Twitter, Elon Musk is already making sweeping changes to the platform, ranging from letting top executives go to proposing the formation of a content moderation council,” De Choudhury said. “Course Correct will likely have to navigate around these complexities, including which posts it gets to evaluate for false information as well as how this could be communicated to journalists to intervene.”

The targeted completion date for the first version is September 2024, just before the next presidential election. De Choudhury said the team is looking forward to real-world testing between now and then, which will inform how the algorithms within Course Correct can be improved and identify the respective strengths of the algorithms and of the humans performing the fact checks.

“I think just for reflection, it might be useful for us to see what are the types of misinformation that these algorithms and computational techniques are better at detecting versus others, and how does that complement what fact checkers can do so that we can think about future collaborative systems where the AI and the human are working together and they’re each drawing from their own expertise,” she said.