Improving Safety and Well-being on the Web and in Society
Most of our social interactions, information needs, and daily tasks and decisions now happen online.
With high-stakes decisions being made via the web, the ways in which malicious users engage with us online can have a profoundly negative impact on our lives and on society as a whole.
For Srijan Kumar, a new assistant professor in Georgia Tech’s School of Computational Science and Engineering, this is a concept that transcends social media, encompassing most, if not all, of the web and society. His research group, CLAWS (the Computational Lab for the Web and Society), was established with the goal of improving the safety and well-being of people worldwide. This is achieved by ridding the user experience of digital abuse and disinformation pitfalls, and by using online social signals to forecast harmful real-world events, such as mass shootings.
“Broadly, my group’s research is in data science and applied machine learning and we create the next generation of algorithms to understand and improve how users behave online and how it impacts the society,” said Kumar.
Understanding and Impacting Online Behavior
These next-generation algorithms that Kumar references are used to understand and forecast deceptive behavior that attempts to manipulate and misinform users. Instances in which these behaviors occur are vast, and can, according to Kumar, be categorized based on the three areas of use that they impact.
“There are three major things people do online: interact with one another, consume information, and act on the recommendations they are shown. A way to unify and transform the user experience is to develop the user models, which are deep-learning and network-based models,” he said.
Of course, this is easier said than done. As strides are made to improve user interactions, bad actors continuously attempt to manipulate user sessions in all three categories: trolls harass others, disinformation misleads and radicalizes people, and recommender systems are manipulated for financial, political, and ideological gain. A key challenge at CLAWS is creating algorithms that can forecast how malicious agents will behave and that remain robust to those agents’ creative attacks.
According to the Pew Research Center, 41 percent of U.S. adults report having experienced online harassment, making it easily the most recognizable form of online abuse that Kumar’s research addresses.
More than Harassment
However, the applications of Kumar’s work stretch far beyond harassment: his anti-abuse algorithms have been used by the likes of Flipkart, India’s largest e-commerce platform, and Wikipedia.
According to Pageviews Analysis, Wikipedia has aggregated over 420 billion views since July 2015 and deletes approximately 1,000 pages each day. These staggering numbers show the magnitude of the online encyclopedia giant and the reach of its platform despite the wavering credibility of some pages. Compounded with the fact that younger audiences largely get their information from the web rather than traditional news outlets, this platform’s content and reach arguably impact society.
“[Digital abuse] is a huge issue because everyone uses web platforms, such as Wikipedia and YouTube, even my nephew, who is six years old. And there are malicious actors on these platforms that are trying to manipulate the information,” Kumar said.
In an effort to find these malicious users and prevent misinformation, Wikipedia recruited the help of Kumar to detect fabricated articles using a machine learning model that could help identify the hoaxes.
“The surprising part about the study was that when respondents were asked to identify which were fake and which were real, people only had 66 percent accuracy – and that was after we told them that one was fake. So, the numbers for recognizing the fake without the context would likely be much different. Whereas, the machine learning models that we built had 86 percent accuracy of identifying the fake articles,” he said.
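The article does not describe the model’s internals, but the core idea of scoring an article as real or fabricated from its text can be illustrated with a toy sketch. The following is a minimal bag-of-words Naive Bayes classifier written from scratch; it is a hypothetical illustration of text classification in general, not Kumar’s actual hoax-detection model, which used richer signals.

```python
# Hypothetical sketch: classifying article text as "real" vs. "hoax"
# with a tiny Naive Bayes model. Illustrative only -- NOT the actual
# Wikipedia hoax-detection system described in the article.
from collections import Counter
import math


def train(docs):
    """docs: list of (text, label) pairs; returns word counts and label counts."""
    counts = {"real": Counter(), "hoax": Counter()}
    labels = Counter()
    for text, label in docs:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels


def classify(text, counts, labels):
    """Return the label with the highest log-probability under Naive Bayes
    with Laplace (add-one) smoothing."""
    total_docs = sum(labels.values())
    vocab = set(counts["real"]) | set(counts["hoax"])
    best_label, best_logprob = None, float("-inf")
    for label in labels:
        logprob = math.log(labels[label] / total_docs)  # class prior
        n_words = sum(counts[label].values())
        for word in text.lower().split():
            # Smoothed per-word likelihood
            logprob += math.log(
                (counts[label][word] + 1) / (n_words + len(vocab))
            )
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label
```

A usage example on invented toy data: training on a few labeled snippets, then classifying an unseen one.

```python
docs = [
    ("well cited article with references", "real"),
    ("verified sources and citations", "real"),
    ("invented claim with no sources cited", "hoax"),
    ("fabricated unsourced claim", "hoax"),
]
counts, labels = train(docs)
classify("fabricated claim", counts, labels)  # -> "hoax"
```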
A Digital Native’s Inspiration
For Kumar, who grew up in the age of technology and social media, his passion for this field began with a frustration many of us have encountered: buying an item online only to find it was nothing like what was promised.
“I had a first-hand experience of being misinformed and this made me become interested in pursuing it as a researcher because I realized that it affects millions of people.”
Now, after joining Georgia Tech in January 2020, Kumar is establishing the new CLAWS lab at the institute in an effort to continue growing this field and prevent more instances of online abuse from occurring in the future. Some applications of their work include:
- Health, such as detecting and countering health misinformation,
- Security, such as predicting mass shootings,
- Finance, such as predicting fraud and money laundering, and
- Social media, such as preventing disinformation and hate.
Kumar said, “We need new methods and new techniques to improve the interactions between users online. Right now, we are at the perfect scientific time to create these new models. And the reason is because earlier we were looking at the basics of how and what people were doing. But today, with deep learning and with new models available, we are able to create and transform these user experiences and fuel real-time and personalized systems.”
Prior to Georgia Tech, Kumar was a visiting research scientist at Google AI and a postdoctoral researcher at Stanford University. He was runner-up for the 2018 ACM SIGKDD Doctoral Dissertation Award and the WWW 2017 Best Paper Award, and received the 2017 Larry S. Davis Doctoral Dissertation Award and the Dr. B.C. Roy Gold Medal.