Researchers Help New Intel Cloud Computing Technology to Stay One Step Ahead
Georgia Tech researchers played a key role in developing a new technology designed to enhance security for cloud-based software services operating on unsafe computers and keep them from mistakenly sharing sensitive data.
A new feature of Intel’s Software Guard Extensions (Intel SGX) processor chip will increase security for software operating on untrustworthy computer systems. Georgia Tech and Intel researchers collaborated on the feature that prevents malicious operating systems from spying on sensitive information.
The work done by computing Ph.D. student Xiang Cheng and the research team resulted in a central processing unit (CPU) feature they named AEX-Notify. The technology supports Intel SGX, which fences off regions of memory into protected bubbles known as enclaves, helping programs inside them remain secure even in unsafe cloud computing environments.
Cheng assisted in tackling this security challenge during his time as an Intel intern in 2022. With the guidance of his Georgia Tech faculty advisor, Professor Taesoo Kim, he collaborated closely with Intel engineers and researchers from Katholieke Universiteit (KU) Leuven in Belgium and the Israel Institute of Technology (Technion). The result was AEX-Notify.
The journey toward the new feature began in 2017 when researchers Kim, Ming-Wei Shih, and Sangho Lee from the Systems Software and Security Lab at Georgia Tech proposed an initial solution called T-SGX. This early concept used special event notifications to manage interrupts within a protected area, shielding enclaves from prying eyes.
Fast forward to 2023, and Cheng, a fourth-year Ph.D. student in the School of Cybersecurity and Privacy, teamed up with Intel and international researchers to transform T-SGX into the concrete CPU feature now known as AEX-Notify.
“Courses like CS6265 Information Security Lab and the talks from SCP weekly lecture series provided me with background knowledge like symbolic execution and side-channels,” Cheng said. “With this foundation, I designed and evaluated experiments and proofs, so our mitigation feature doesn't introduce new vulnerabilities.”
Intel designed its standard SGX chip to shield these enclaves from malicious software, including viruses, operating systems, and low-level system components. The goal is to allow programs to run securely, even in these potentially untrustworthy environments, without relying on an operating system’s safety level.
However, recent research revealed a chink in the processor chip's armor: controlled-channel attacks. These sophisticated attacks turn the operating system's control over events such as interrupts and page faults against the enclave, the very layer Intel SGX is designed to distrust. A malicious operating system can intentionally trigger these events and observe an enclave program's reaction, potentially compromising sensitive information.
"AEX-Notify is a powerful new defense mechanism that adds an extra layer of security to the Intel SGX enclaves,” said Cheng. “It employs a special event notification system to handle interrupts within a protected region, effectively preventing a malicious operating system from spying on sensitive information.”
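The idea can be illustrated with a toy simulation. This is not real SGX code; it is a hypothetical model in which an "enclave" makes a secret-dependent memory access, a "malicious OS" single-steps it and observes which page each step touches, and an AEX-Notify-style handler pre-touches both pages on every resume so per-step observations no longer reveal the secret. All names here are invented for illustration.

```python
# Toy model (not real SGX): a secret-dependent access pattern leaks
# one bit per single-stepped instruction unless an interrupt-aware
# handler masks it by touching both pages before resuming.

SECRET = [1, 0, 1, 1, 0]  # hypothetical secret bits inside the enclave

def enclave_step(bit, observe):
    # The leaky pattern: page A is accessed for bit 0, page B for bit 1.
    observe("B" if bit else "A")

def single_step_attack(mitigated):
    recovered = []
    for bit in SECRET:
        touched = []
        if mitigated:
            # AEX-Notify-style handler: on resume after each interrupt,
            # pre-touch both pages before the real access executes.
            touched.extend(["A", "B"])
        enclave_step(bit, touched.append)
        # The attacker observes the set of pages touched in this step.
        pages = set(touched)
        recovered.append(1 if pages == {"B"} else
                         0 if pages == {"A"} else None)
    return recovered

print(single_step_attack(mitigated=False))  # [1, 0, 1, 1, 0]: secret leaks
print(single_step_attack(mitigated=True))   # all None: steps are ambiguous
```

Without the handler, each single step exposes exactly one secret bit; with it, every step looks identical to the observer, which is the intuition behind making enclaves interrupt-aware.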
The team presented the new feature in August at the 32nd USENIX Security Symposium in the paper "AEX-Notify: Thwarting Precise Single-Stepping Attacks through Interrupt Awareness for Intel SGX Enclaves," with both Cheng and Kim credited for their contributions to the project. Scott Constable, Yuan Xiao, Cedric Xing, Ilya Alexandrovich, and Mona Vij of the Intel Corporation; Jo Van Bulck and Frank Piessens of KU Leuven; and Mark Silberstein of Technion are co-authors of the paper.