
Researchers Create Winning Strategy to Combat Vaccine Misinformation on X

A new in-depth analysis shows that users who reply to Covid-19 vaccine misinformation on X, formerly known as Twitter, with a positive tone, politeness, and strong evidence are more likely to persuade others to disbelieve the false information.

Researchers from three Georgia Tech schools found the most effective way to confront vaccine misinformation on the X platform. 

They also created a predictive tool that shows users whether their reply is likely to change minds or to backfire and reinforce the misinformation. It can also pinpoint well-meaning replies that are intended to contradict misinformation but instead interfere with social correction.
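The paper itself does not publish code, but a tool of this kind can be pictured as a text classifier that labels a candidate reply as likely corrective or likely to backfire. A minimal sketch, assuming a generic scikit-learn pipeline and hypothetical toy training data (the variable names and examples below are illustrative, not the authors' model):

```python
# Minimal sketch of a reply-outcome classifier; not the authors' implementation.
# reply_texts and outcomes are hypothetical stand-ins for labeled training data
# (1 = reply helped correct the misinformation, 0 = reply backfired).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reply_texts = [
    "Thanks for sharing, but CDC data actually shows the vaccine is safe: <link>",
    "You are an idiot, stop spreading lies.",
]
outcomes = [1, 0]  # toy labels for the two example replies

# Bag-of-words features plus a linear classifier stand in for whatever
# predictive model the researchers actually trained.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reply_texts, outcomes)

new_reply = "I understand the worry, but this peer-reviewed study says otherwise: <link>"
# Estimated probability that the reply is corrective rather than backfiring.
print(model.predict_proba([new_reply])[0][1])
```

In practice a model like this would be trained on many thousands of labeled reply threads rather than two toy examples, but the input-output shape is the same: reply text in, predicted outcome out.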

A research paper with the full findings will be presented this week at the ACM Web Science Conference in Stuttgart, Germany.

Like white blood cells attacking a virus, social media users have been known to band together and debunk misinformation being spread online, a phenomenon researchers call social correction.

The success rate of social correction on most social media sites has not been determined. However, researchers now have a clearer picture of how effective user replies can be on X.

Their method combines artificial intelligence with a dataset of 1.5 million tweets containing misinformation about the Covid-19 vaccine. The researchers then studied user replies to that misinformation, as well as the consequences of those replies.
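At a high level, building such a dataset means pairing each misinformation tweet with the replies it received so that the outcome of each exchange can be studied. A hypothetical sketch of that pairing step, with made-up field names rather than the authors' schema or the X API's actual response format:

```python
# Hypothetical sketch of grouping replies under the misinformation tweet they
# respond to. Field names (tweet_id, in_reply_to_id, text) are illustrative.
from collections import defaultdict

tweets = [
    {"tweet_id": "1", "in_reply_to_id": None, "text": "The vaccine alters your DNA."},
    {"tweet_id": "2", "in_reply_to_id": "1", "text": "That's not how mRNA works; see this explainer: <link>"},
    {"tweet_id": "3", "in_reply_to_id": "1", "text": "Wake up, sheeple!"},
]

replies_by_parent = defaultdict(list)
for t in tweets:
    if t["in_reply_to_id"] is not None:
        replies_by_parent[t["in_reply_to_id"]].append(t)

# Each misinformation tweet paired with the replies that engaged with it is
# the unit of analysis for studying social correction and its consequences.
for t in tweets:
    if t["in_reply_to_id"] is None:
        print(t["text"], "->", [r["text"] for r in replies_by_parent[t["tweet_id"]]])
```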

In the paper, the researchers write that their dataset predates the rollout of X's community notes feature, which allows users to submit corrections to posts on the platform. They point out that this system limits user responses to fact-checking text and labels and does not reflect the larger flow of information on the site.

The researchers hope their work, one of the first taxonomies of user social correction on the X platform, will aid future fact-checking efforts. While the paper focuses only on English-language text posts, the framework can be expanded to address the growing threat of misinformation online.

Corrective or Backfire: Characterizing and Predicting User Response to Social Correction was co-authored by Ph.D. students Bing He and Yingchen (Eric) Ma and their advisors: Regents' Entrepreneur Mustaque Ahamad, a professor with joint appointments in the School of Cybersecurity and Privacy and the School of Computer Science, and Srijan Kumar, an assistant professor in the School of Computational Science and Engineering.