Automating Injustice
Register
In this virtual presentation, Dr. Abeba Birhane will address the ways that individuals and groups at the margins of society pay the highest price when AI systems fail, while the most privileged and powerful corporations benefit.
Complex adaptive systems (e.g., human behavior and social systems) are inherently dynamic, messy, ambiguous, incompressible, non-determinable, and non-predictable. Due to this incompressibility, neither datasets nor models can capture complex systems in their entirety. Instead, large-scale datasets and predictive models pick up societal and historical stereotypes and injustices and are marked by various failures. Yet discussions of AI ethics tend to be abstract, far-fetched, sci-fi-based, and divorced from current concrete realities.
In this talk, Dr. Birhane will: 1) emphasize the challenges of modeling complex behavior, 2) argue that building equitable algorithmic systems requires looking beyond technical solutions toward broader structural rethinking, and 3) highlight that visions of alternative realities need to be informed by and grounded in current realities.
Dr. Abeba Birhane is a cognitive scientist researching human behavior, social systems, and responsible and ethical artificial intelligence. Her interdisciplinary research sits at the intersections of embodied cognitive science, complexity science, critical data and algorithm studies, and afro-feminist theories. Her work includes audits of computational models and large-scale datasets. Dr. Birhane is a Senior Advisor for AI Accountability at Mozilla Foundation, as well as Adjunct Assistant Professor at the School of Computer Science and Statistics at Trinity College Dublin, Ireland.
Date and Time:
Monday, September 11, 2023
3 p.m. EDT
Location:
Online
This event is open to all, though registration is required. To sign up, please visit the Eventbrite page for the talk.
Be among the first to know when future workshops and talks are announced by signing up for the DHLab’s newsletter.