Algorithmic Bias, Fairness, and Justice: Developing a Working Vocabulary


With Peter Gao (University of Washington)


Abstract

Today's students have plenty of firsthand experience with the potential dangers of algorithms used by technology and social media companies. How can we leverage students' situated knowledge about algorithms when developing lessons to motivate them to study statistics and data science? The word "algorithm" is overloaded; we may want to distinguish between complex recommendation systems used by services like TikTok and formal algorithms like Newton's method. Similarly, while statistical bias is defined in the context of a clearly specified estimation problem, algorithmic bias can also refer to less formal notions of inequality. In this session, we will work together to develop a working vocabulary for talking about algorithms, bias, and fairness. In breakout rooms, participants will brainstorm ways to connect the introductory statistics curriculum to the increasingly salient issues of algorithmic abuses and bias. We will conclude by discussing potential activities and examples for instructors to use in their classrooms.