
CoRL 2023: Friendly Hacking Helps Robots Boost Defense Strategies

Multi-agent robotic communication systems used by first responders, traffic and air-traffic control, and search and rescue teams are getting a security boost thanks to a Georgia Tech research team.

Matthew Gombolay, an assistant professor in the School of Interactive Computing, said all those systems are vulnerable to hacking. And the best way to demonstrate it is by hacking them himself.

Friendly hacking allows researchers like Gombolay to develop new training models with stronger defenses against attacks. He believes the best way to build more secure multi-agent communication systems is to anticipate how an adversary might hack them.

Gombolay and his team of researchers simulate attacks on systems that rely on multi-agent reinforcement learning (MARL). These simulated attacks expose vulnerabilities so researchers can build in safeguards against real-world malicious actors.

“I don’t think anyone has looked at an outside attack in quite the same way we have,” Gombolay said. “If we allow robots access to previous experiences with attackers, they could learn a response strategy. They can adjust their communication scheme or alter their behavior.”

Gombolay said his research focuses on how robot teams might be attacked after they have been deployed rather than during development. He and his team operate on the premise that an attacker cannot sabotage the learning model used to train the robots.

“We can’t give them bad data, and we don’t have access to the copy of the neural networks controlling these robots,” he said. “Instead, we must figure out how to interfere with their ability to accomplish their mission or be able to control them to work against their mission and do so from the outside in.”

To gain control, Gombolay’s attacking agent first observes the targeted team from a distance to learn patterns and essential communication functions.

Using computer vision and machine learning to analyze the video gathered by the attacking agent, Gombolay builds an algorithm that predicts the future behavior of the targeted team. The attacker then injects counterfeit messages into the radio frequency the team uses.
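A minimal sketch of what that prediction step might look like, assuming the attacker has extracted 2-D position tracks for each agent from its video; the network shape, variable names, and synthetic constant-velocity data below are illustrative assumptions, not the team's actual implementation:

```python
import torch
import torch.nn as nn

N_AGENTS, WINDOW = 4, 8  # assumed team size and observation window

class TeamBehaviorPredictor(nn.Module):
    """Maps a window of (x, y) tracks for every agent to each agent's next (x, y)."""

    def __init__(self, n_agents: int, window: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * 2 * window, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_agents * 2),
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, window, n_agents, 2), flattened into one feature vector
        return self.net(history.flatten(start_dim=1))

model = TeamBehaviorPredictor(N_AGENTS, WINDOW)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic constant-velocity tracks stand in for positions recovered from video.
start = torch.randn(32, 1, N_AGENTS, 2)
velocity = 0.1 * torch.randn(32, 1, N_AGENTS, 2)
steps = torch.arange(WINDOW + 1).view(1, -1, 1, 1)
tracks = start + velocity * steps                  # (32, WINDOW + 1, N_AGENTS, 2)
history = tracks[:, :WINDOW]                       # what the attacker has observed
target = tracks[:, WINDOW].flatten(start_dim=1)    # what it wants to predict

for _ in range(300):  # the model learns to extrapolate the team's motion
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(history), target)
    loss.backward()
    optimizer.step()
```

Once the attacker can forecast where the team is headed, it can time and target its counterfeit broadcasts to steer agents away from their mission.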

“We came up with communication messages that we could counterfeit and broadcast out in the direction of certain agents of the team to tell them to do actions that are counterproductive,” Gombolay said. “Those agents would need some kind of robust mechanism to identify those messages as counterfeit.”
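The article does not detail a specific countermeasure, and the paper's learned defense is not reproduced here, but one conventional, lightweight mechanism for rejecting forged traffic is a keyed message-authentication code (MAC), which is far cheaper than full encryption. The sketch below uses Python's standard hmac library; the shared key and message format are hypothetical:

```python
import hmac
import hashlib

# Illustrative pre-shared key, distributed to the team before deployment.
SHARED_KEY = b"team-secret-distributed-before-deployment"

def sign(message: bytes) -> bytes:
    """Append a 32-byte SHA-256 HMAC tag that only key holders can produce."""
    return message + hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(packet: bytes) -> bytes | None:
    """Return the message if its tag checks out, else None (counterfeit)."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(tag, expected) else None

genuine = sign(b"waypoint: 41.2,-73.9")
forged = b"waypoint: 0.0,0.0" + b"\x00" * 32  # attacker lacks the key
assert verify(genuine) is not None
assert verify(forged) is None
```

An eavesdropper can observe every packet but cannot produce a valid tag without the pre-shared key, so receivers simply drop the forged broadcasts.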

If robots can be hacked through these communication systems, then so can larger networks. Gombolay said society has reached an alarming level of vulnerability in current and future critical infrastructure, and the number of multi-agent systems communicating over open, hackable channels is staggering.

“Encryption is expensive, and a lot of walkie-talkies, radio systems, and drones are susceptible to being taken over,” he said. “If we’re going to develop these systems, as world leaders, we need to be aware of the weaknesses. Limitations need to be pointed out so that we can fix it now before we’re fixing it post-production.”

For future applications, Gombolay points to self-driving vehicles, where a network of autonomous cars would require communication among millions of agents.

“For a network of autonomous vehicles that communicate with each other to drive close at high speeds, you could listen to the communication protocol amongst the cars and figure out where they’re trying to go,” he said.

Gombolay said it’s important to start thinking about these scenarios now and that defense training is an essential part of the design process.

With the support of the Naval Research Laboratory, Gombolay wrote the paper “Hijacking Robot Teams Through Adversarial Communications” alongside graduate research assistants Zixuan Wu, Sean Ye, and Byeolyi Han.

Gombolay will make an oral presentation next week at the 2023 Conference on Robot Learning (CoRL). Georgia Tech is hosting the conference at the Starling Hotel in Midtown Atlanta.
