Research Can Help to Tackle AI-generated Disinformation
In an article published today in Nature Human Behaviour, Srijan Kumar and his colleagues describe why new behavioral science interventions are needed to tackle AI-generated disinformation.
Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and that may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.
In March 2023, images of former president Donald Trump ostensibly getting arrested circulated on social media. Former president Trump, however, did not get arrested in March. The images were fabricated using generative AI technology. Although the phenomenon of fabricated or altered content is not new, recent advances in generative AI technology have made it easy to produce fabricated content that is increasingly realistic, which makes it harder for people to distinguish what is real.
Generative AI tools can be used to create original content, such as text, images, audio and video. Although most applications of these tools are benign, there is substantial concern about the potential for increased proliferation of disinformation (which we refer to broadly as content spread with the intent to deceive, including propaganda and fake news). Because the content generated appears highly realistic, some of the strategies presently used for detecting manipulative accounts and content are rendered ineffective by AI-generated disinformation.
How AI disinformation differs
What makes AI-generated disinformation different from traditional, human-generated disinformation? Here, we highlight four potentially differentiating factors: scale, speed, ease of use and personalization. First, generative AI tools make it possible to mass-produce content for disinformation campaigns.
One example of the scale of AI-generated disinformation is the use of generative AI tools to produce dozens of different fake images showing Pope Francis in haute couture, across different postures and backgrounds. In particular, AI tools can be used to create multiple variations of the same false stories, translate them into different languages, mimic conversational dialogues and more.
Second, compared to the manual generation of content, AI technology allows disinformation to be produced very rapidly. For example, fake images can be created with tools such as Midjourney in seconds, whereas without generative AI the creation of similar images would take hours or days. These first two factors — scale and speed — are challenges for fact-checkers, who will be flooded with disinformation but still need substantial amounts of time for debunking.