College of Computing News

Georgia Tech and Intel Selected for Multimillion-Dollar DARPA Award

Researchers from Georgia Tech and Intel are working together to strengthen cybersecurity defenses for machine learning (ML) models designed for vision systems.

Bolstered by a new four-year, multimillion-dollar Defense Advanced Research Projects Agency (DARPA) grant, the team will create deception-resistant ML technologies with an emphasis on object detectors for the Guaranteeing AI Robustness against Deception (GARD) program.

Object detectors are systems that identify and locate objects within an image or video, marking each detection with a label and a bounding box. While no known real-world attacks have been made on these systems, a team of researchers first identified security vulnerabilities in object detectors in 2018 with a project known as ShapeShifter.
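
To picture what that output looks like, the following minimal Python sketch shows the kind of labeled bounding boxes a detector produces for a single frame. The Detection class and the values in it are illustrative only, not taken from ShapeShifter or GARD.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """One detector prediction: a class label, a confidence score, and a
        bounding box given as (x_min, y_min, x_max, y_max) pixel coordinates."""
        label: str
        score: float
        box: tuple

    # Hypothetical output for a single traffic-scene frame.
    frame_detections = [
        Detection(label="stop sign", score=0.97, box=(412, 118, 466, 172)),
        Detection(label="person", score=0.88, box=(120, 200, 180, 390)),
    ]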

Led by School of Computational Science and Engineering (CSE) Associate Professor Polo Chau at Georgia Tech’s Intel Science and Technology Center for Adversary-Resilient Security Analytics (ISTC-ARSA), the ShapeShifter project exposed adversarial machine learning techniques that were able to mislead object detectors and even erase stop signs from autonomous vehicle detection.

[Video: ShapeShifter's adversarial perturbation creates a fictional stop sign that the object detector classifies as a person.]

“As ML technologies have developed, researchers used to think that attacking object detectors would be difficult. ShapeShifter showed us that was not true: they can be affected, and we can attack them in a way that makes objects disappear completely or be labeled as anything we want,” said Chau, who serves as the lead investigator from Georgia Tech on the GARD program.

“The reason we study vulnerabilities in ML systems is to get into the mindset of the bad guy in order to develop the best defenses. The GARD program provides us with an excellent opportunity for this,” he said.

GARD is a DARPA-funded program that aims to establish theoretical ML foundations to identify system vulnerabilities in real-world applications. Intel and Georgia Tech are jointly leading a program team under GARD, with Intel serving as the prime awardee and Georgia Tech’s portion of the funding totaling $1.3 million.

The four-year program is divided into three phases, with the first phase focused on enhancing object detection technologies through spatial, temporal, and semantic coherence for both still images and videos. These three forms of coherence look for contextual clues to determine whether an anomaly or attack may be occurring.

“Our research develops novel coherence-based techniques to protect AI from attacks. We want to inject common sense into the AI that humans take for granted when they look at something. Even the most sophisticated AI today doesn’t ask, ‘Does it make sense that there are all these people floating in the air and overlapping in odd ways?’ whereas we would think it’s unnatural,” said Chau. “That is what spatial coherence attempts to address: do the objects make sense in their relative positions?”
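
One way to picture such a check is a rule that flags frames in which detections overlap in ways natural scenes rarely produce. The sketch below is an illustration of the idea, not the team's actual defense; the iou and spatially_incoherent helpers are hypothetical and reuse the illustrative Detection objects from the earlier example.

    def iou(a, b):
        """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def spatially_incoherent(detections, overlap_threshold=0.8):
        """Flag a frame if two detections of the same class sit almost entirely
        on top of each other, a layout that real scenes rarely produce."""
        for i, d1 in enumerate(detections):
            for d2 in detections[i + 1:]:
                if d1.label == d2.label and iou(d1.box, d2.box) > overlap_threshold:
                    return True
        return False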

This idea of applying common sense to AI object recognition extends to the other coherence-based techniques. Temporal coherence checks for objects that suspiciously disappear or reappear over time. The team’s UnMask semantic coherence technique, which reasons about meaning, identifies the parts of an object rather than just the whole and verifies that those parts indeed make sense.
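
Continuing the illustrative sketch, a temporal check of this kind can be written as a rule over a tracked object's presence across frames. The temporally_incoherent function below is a hypothetical stand-in for the idea, not the team's implementation.

    def temporally_incoherent(track_presence, max_gap=2):
        """Flag a tracked object that vanishes for a few frames and then
        reappears; real objects in steady footage rarely blink in and out.
        track_presence: one boolean per video frame, True when the object
        was detected in that frame."""
        seen_before = False
        gap = 0
        for present in track_presence:
            if present:
                if seen_before and 0 < gap <= max_gap:
                    return True  # brief disappearance followed by a return
                seen_before = True
                gap = 0
            elif seen_before:
                gap += 1
        return False

    # A stop sign that drops out for one frame and then comes back is suspicious:
    # temporally_incoherent([True, True, False, True]) -> True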

In terms of defenses, the goal of all three coherence-based techniques is to force attackers to satisfy every coherence constraint built into the AI at once. This multi-perspective approach thwarts adversarial ML attempts that fail to meet the combined rules, causing any attempted breach to be flagged.
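
Assuming the hypothetical checks sketched above, that logic amounts to flagging a frame the moment any single coherence test fails:

    def frame_is_suspicious(detections, track_presence):
        """Require every coherence check to pass at once; an attack that
        violates any single one of them gets the frame flagged."""
        checks = [
            spatially_incoherent(detections),
            temporally_incoherent(track_presence),
            # A semantic check in the spirit of UnMask would slot in here,
            # verifying that a detected object's parts match its class.
        ]
        return any(checks)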

As AI models with image recognition capabilities are increasingly deployed in everyday applications, the need to understand and thwart attacks on such programs is critical across fields. The GARD program aims to develop effective defenses against a broad range of attacks, with Georgia Tech and Intel helping lead the way.

[RELATED CONTENT: Intel Joins Georgia Tech in DARPA Program to Mitigate Machine Learning Deception Attacks]