Upcoming Events

GVU Center Brown Bag: Foley Scholar Award Winners

Abstracts:

Nivedita Arora - Designing for Sustainability in Computing: Self-Powered Computational Material
In this era of burgeoning IoT devices, we measure computing progress by improvements in speed, power, and reliability, but we rarely consider its environmental impact. A new, sustainable way of thinking about computing across its full lifecycle -- including manufacturing, operation, and disposal -- is necessary to meet present needs without compromising the well-being of future generations. Inspired by this, during my Ph.D. I have built ‘Self-powered Computational Material’ that enables sustainable operation without toxic batteries. I will showcase this with the example of an easy-to-retrofit sticky note that can sense human interactions like speech, movement, and touch and provide feedback, all by harvesting power from its surroundings. Finally, I will chart how designing for sustainability requires a highly interdisciplinary mindset and a rethinking of the entire computing stack, starting at the material level.

Upol Ehsan - Human-Centered Explainable AI: Thinking Outside the Black Box of AI
As AI systems power critical decisions in our lives, they need to be held accountable to mitigate an unjust AI-powered future. One way to hold AI systems accountable is to make them explainable: to understand the “why?” behind their decisions. Implicit in Explainable AI (XAI) is the question: explainable to whom? The “who” governs the most effective way of describing the “why” behind the decisions. Critical insights into how best to explain AI’s black box lie outside it, because that’s where the humans are.

In this talk about AI, humans will take center stage. I will discuss three aspects of the journey towards Human-Centered Explainable AI (HCXAI), a departure from the algorithm-centered roots of XAI. First, I will share how people’s perceptions of AI agents explaining their actions in plain English shaped the foundations of how we think about who the humans are in HCXAI. Second, I will chart the visions of HCXAI by bridging insights from Critical Theory and Human-Computer Interaction (HCI) to question the status quo of XAI design and expose intellectual blind spots. Third, I will apply the HCXAI lens to highlight one such blind spot in the algorithm-centered narrative of XAI and share how we addressed it by introducing the concept of Social Transparency in AI, a sociotechnically situated concept that expands the boundaries of XAI by incorporating socio-organizational contexts into AI systems. I will close with key lessons from this journey towards HCXAI, including missed turns and design implications for improving explainability, calibrating trust, and fostering decision-making.

Qiaosi Wang - Mutual Theory of Mind for Human-AI Communication
From navigation systems to smart assistants, we communicate with various types of AI on a daily basis. At the core of such human-AI communication, we convey our understanding of the AI system’s capability through utterances of varying complexity, and the AI conveys its understanding of our needs and goals through system outputs. However, this communication process is prone to failure for two reasons: the AI system might have a wrong understanding of the user, and the user might have a wrong understanding of the AI. In my work, I posit the Mutual Theory of Mind framework, inspired by the basic human capability of “Theory of Mind,” to enhance mutual understanding in human-AI communication. My work takes place in the context of online education, where AI agents have been widely deployed to offer informational and social support to online students. In this talk, I will discuss the three components of Mutual Theory of Mind in human-AI communication: the construction, recognition, and explanation of the AI’s Theory of Mind. I will then describe in detail one of my studies, which leveraged linguistic cues in human-AI dialogues to construct a community’s understanding of an AI agent.

Speaker Bios:

Nivedita Arora is a computer science Ph.D. candidate in the School of Interactive Computing at the Georgia Institute of Technology, advised by Prof. Gregory Abowd and Prof. Thad Starner. Her research focuses on re-imagining the future of mobile and ubiquitous computing by embracing an alternative view of computing in which physical surfaces are covered with self-powered computational material. Her research has won an ACM IMWUT Distinguished Paper Award, two best poster awards (UIST, MobiSys), research highlights (SIGMOBILE GetMobile magazine, Communications of the ACM), and honoree recognition in the Fast Company Design Innovation Competition. In recognition of her work on sustainable computational materials, she was named the winner of the ACM Gaetano Borriello Outstanding UbiComp Student Award and Georgia Tech’s GVU Foley Award for 2021. She was also recently part of the 2021 cohort of Rising Stars in EECS at MIT.

Upol Ehsan cares about people first, technology second. He is a doctoral candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining his expertise in AI with his background in philosophy, his work in Explainable AI (XAI) aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His research has a personal origin story: he was wrongfully detained at an airport due to an automated system’s error that no one could explain and for which no one could be held accountable. Focusing on how our values shape the use and abuse of technology, his work coined the term Human-centered Explainable AI (a sub-field of XAI) and charted its visions. His work, published in top peer-reviewed venues such as CHI, has received multiple awards and been covered in major media outlets (e.g., MIT Technology Review, Vice, VentureBeat). Bridging industry and academia, he serves on multiple program committees at HCI and AI conferences (e.g., DIS, IUI, NeurIPS) and actively connects these communities (e.g., the widely attended HCXAI workshop at CHI). By promoting equity and ethics in AI, he wants to ensure that stakeholders who aren’t at the table do not end up on the menu. He graduated summa cum laude and Phi Beta Kappa from Washington & Lee University with dual degrees in Philosophy (B.A.) and Engineering (B.S.), followed by an M.S. in Computer Science from Georgia Tech. Outside research, he is an advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

Qiaosi (Chelsea) Wang is a 2021 Foley Scholar and a Ph.D. student in Human-Centered Computing in the School of Interactive Computing at the Georgia Institute of Technology. Her research lies at the intersection of Human-AI Interaction, Computer-Supported Cooperative Work (CSCW), and Cognitive Science. Qiaosi’s dissertation posits Mutual Theory of Mind as a framework to enhance mutual understanding in human-AI communication, in the context of AI-facilitated remote social interaction. Her work has been published and received awards at prestigious venues such as ACM CHI, CSCW, DIS, and Learning@Scale. Qiaosi holds Bachelor of Science degrees in Informatics and Psychology from the University of Washington, Seattle.

Watch via BlueJeans Event: https://primetime.bluejeans.com/a2m/live-event/rrfehtbq
