New Professor is Taking the Measure of Cybersecurity’s Toughest Problems
Cybercrime is one of the most pressing security threats today, but it’s notoriously difficult to detect. New Assistant Professor Paul Pearce thinks measuring attacks is one of the first steps toward better understanding them and shutting down future threats.
“If we want to figure out how to effectively mitigate these threats, really understanding how they work is a core facet of that,” Pearce said. “That’s where my work is: designing methods and systems, and conducting studies to really understand these threats.”
Pearce’s research in this area earned a Special Interest Group on Security, Audit and Control Doctoral Dissertation Award Runner-Up honor at the Association for Computing Machinery Conference on Computer and Communications Security in London last month.
Practical computer science
Pearce has always been interested in computers. Although he never had a chance to take computing classes in high school, he won several hacking competitions at the community colleges he attended before transferring to the University of California, Berkeley to study electrical engineering and computer science.
During his studies, he preferred research with practical applications, eventually discovering an interest in cybersecurity during his Ph.D. at Berkeley. Under his advisor, Professor Vern Paxson, Pearce joined the Center for Evidence-based Security Research, a research center that focuses on the economic and social motivations behind cybercrime.
“I gravitate toward stuff that has direct measurable impact in problems that are still important,” he said.
During his Ph.D., Pearce tackled some of the largest and most nebulous cybersecurity problems: cybercrime and censorship.
Cybercrime covers everything from denial-of-service attacks to malware. Despite its prevalence, cybercrime is difficult to recognize because attackers aim to make money as quickly as possible rather than using advanced tactics, which are easier to trace.
Pearce’s research has focused on advertising abuse, such as bots that click on ads to generate fraudulent revenue. Applying new hybrid tools to real-world situations, Pearce identified weak links in underground advertising abuse operations. Working with law enforcement and Microsoft, Pearce mitigated fraud in the network and helped take down one of the most prominent botnets.
Censorship is an entirely different problem, but one equally as complex as cybercrime.
“In the cybercrime case, you know where to start,” Pearce said. “With censorship, though, how do you even know what to measure and where do you measure it from?”
Compounding these issues, censorship is even more challenging to measure remotely, and even if there is one consistent source, that data may not be accurate.
To measure censorship, Pearce developed methods at multiple layers of the network stack to remotely infer where censorship is happening. By using common cybersecurity concepts such as side channels, and by checking for manipulation at the Domain Name System (DNS) layer, Pearce was able to obtain these measurements.
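The core intuition behind detecting DNS manipulation can be sketched in a few lines: compare the answers a resolver in the measured region returns for a domain against answers from a trusted control resolver, and flag divergence. This is only an illustrative sketch, not Pearce’s actual system; the function name and logic here are hypothetical, and real measurement tools must also handle CDNs, which legitimately return region-specific addresses.

```python
def flag_possible_manipulation(control_ips, remote_ips):
    """Hypothetical heuristic: flag a domain when a remote resolver's
    answers share no IP addresses with a trusted control resolver's.

    control_ips: addresses returned by a resolver known to be unfiltered
    remote_ips:  addresses returned by the resolver being measured
    """
    remote = set(remote_ips)
    # An empty or blocked response is itself a censorship signal.
    if not remote:
        return True
    # Disjoint answer sets suggest the remote response was tampered with.
    return set(control_ips).isdisjoint(remote)


# Matching answers: no evidence of manipulation.
print(flag_possible_manipulation(["93.184.216.34"], ["93.184.216.34"]))  # False

# Completely different answer: possible DNS tampering.
print(flag_possible_manipulation(["93.184.216.34"], ["10.10.34.36"]))    # True
```

In practice, systems that do this at scale issue millions of queries, filter out CDN-induced differences, and cross-check results across many vantage points before concluding that censorship is occurring.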
In the future, Pearce plans to continue this work at Georgia Tech. With its strong cybersecurity department, the school is the ideal place to pursue this research for Pearce, who joined the faculty in fall 2019.