Major Grant Funds New AI Ethics Network That Will Emphasize Atlanta Voices
Atlanta communities most vulnerable to bias and inequity in artificial intelligence (AI) are the focus of a new Atlanta-based ethics initiative being funded by a $1.3 million Mellon Foundation grant.
The Atlanta Interdisciplinary Artificial Intelligence (AIAI) Network, which is set to formally kick off during an event at Science Gallery Atlanta from 4 to 7 p.m. Oct. 4, brings together computing, humanities, and social justice researchers from Georgia Tech, Clark Atlanta University, Emory University, and community partner DataedX.
Carl DiSalvo, a professor in Georgia Tech's School of Interactive Computing, is an AIAI co-principal investigator (co-PI). Andre Brock, an associate professor in the School of Literature, Media, and Communication, serves on the network's steering committee.
DiSalvo said the idea for the AIAI Network had been in the works for years. However, the researchers now have the needed funding thanks to the Mellon Foundation. The grant allows the network to hire its first graduate students for the 2023-2024 academic year.
“The Mellon grant provides resources that we didn’t have before,” DiSalvo said. “There are students doing work on topics related to AI, computing, humanities, and social justice. They were difficult to fund, but now there’s funding. This has a material impact on supporting graduate students and their research, and that impact is immediate.”
The Mellon award also provides seed money for the network to distribute grants to researchers in the Atlanta community. Brandeis Marshall, CEO of DataedX Group and co-PI, said the network wants to put Atlanta voices at the forefront of conversations about AI bias that aren’t limited to the academic community.
“We want people within Atlanta to connect with it, understand it, and be a part of it,” Marshall said. “We want small businesses and nonprofits to feel like they have a place within these conversations about tech. It’s for the everyday person, not just the academics.”
Lauren Klein, PI and Emory associate professor of English, said the AIAI Network offers a humanistic lens on controversial AI issues. She said it was important that each PI or steering committee member be open to research contributions from the humanities.
“The proposed technical solutions are not coming from people who have expertise in these issues of systemic racism, sexism, and structural oppression,” Klein said. “The people who have expertise with these issues and how they surface in AI are humanities scholars. We want to bring humanities researchers to the table with technical researchers as equal partners.”
Klein said the Mellon Foundation recognized the AIAI Network’s goals aligned with its recent commitment to funding projects centered on social justice.
“It aligns with the work that the Mellon Foundation is trying to do,” Klein said. “They’ve made social justice the top-level concern of all projects they fund. They don’t fund a lot of initiatives, so choosing to invest in us is meaningful.”
One of the network's biggest challenges will be steering the conversation away from the prominent AI "doomer" narrative and toward existing AI bias. While the former remains speculative, the latter already harms marginalized and minority communities.
“It’s not just what we are going to do today, but also how what we’re doing today will impact what we are trying to influence for tomorrow,” Marshall said. “How can we dampen the AI hype and AI doom narratives and promote AI reality?”
The AIAI Network will take a multifaceted approach to promoting a more realistic understanding of AI. Some of the tactics the group will use include:
- Humanities-focused research projects
- Public design and education workshops
- Guest speaker series
- Courses taught by principal investigators and steering committee members
- Seed grants for Atlanta researchers doing like-minded work
“We’re going to educate ourselves, those in the community, and those aspiring to be in this field on how we can be more solution-based and solution-oriented,” Marshall said. “There are courses and research projects built on the framework of talking about AI bias in a productive way and not one focused on extreme stances around AI hype or AI doom.”
Contrary to reports, @OpenAI probably isn’t building humanity-threatening #AI. @GeorgiaTech professor @mark_riedl gives a good overview of the problem and expert context. https://t.co/GnM3VvsiBe — Georgia Tech Computing (@gtcomputing) November 29, 2023
A wrongful arrest. A “racist robot.” A call for new laws. A @GeorgiaTech experiment trained a robot to seemingly act out racist behavior, to prove bias can exist in #AI. @MatthewGombolay opens up his lab to show where research can help address tough social issues. https://t.co/21F7IV0vbH — Georgia Tech Computing (@gtcomputing) November 10, 2023