Fairness in Machine Learning Conference Comes to Atlanta
Fairness in machine learning (ML) is becoming one of society's most pressing issues. This week, more than 500 people are in Atlanta for the Fairness, Accountability, and Transparency (FAT) conference, Jan. 29 through 31, to discuss ethics in ML.
As more products and services come to rely on artificial intelligence and ML, ethical issues continue to arise. According to School of Computer Science Assistant Professor Jamie Morgenstern, one of the conference's program chairs, this is because much of the data used to train these systems is historical and often reflects the societal biases of its time.
The FAT conference was established to help mitigate these issues by raising awareness of this inherent bias. Morgenstern defines each term as follows:
- Fairness: This can be called predictive equity. Systems should do a similarly good job of improving services for all groups.
- Accountability: Researchers should be able to explain why computational systems behave the way they do.
- Transparency: A system should be understandable to the population it will serve.
Because these issues impact more than just computer science, and ML now touches everything from policy to business, conference attendees include lawyers, policymakers, and a variety of industry representatives.
“If we’re just having this conversation ourselves as computer scientists, we will invariably get it wrong,” Morgenstern said. “We want to promote a broad, diverse population to come together, network, and be externally visible in this field.”
Now in its second year, the FAT conference is affiliated with ACM for the first time. The program chairs are Morgenstern and danah boyd, founder of Data & Society and a principal researcher at Microsoft Research. Local chairs Deven Desai, a professor at Georgia Tech's Scheller College of Business, and Brandeis Marshall of Spelman College have also been critical to the conference's mission.
Georgia Tech also has a paper at the conference: “A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media,” by School of Interactive Computing (IC) Ph.D. student Stevie Chancellor, Dr. Michael Birnbaum, University of Rochester Professor Eric Caine, Associate Professor Vincent Silenzio, and IC Assistant Professor Munmun De Choudhury.