
2020 SPSP New Orleans Convention


Presidential Plenary

Bias in the Age of AI and Big Data

This symposium brings together two leading scholars in the science of bias in conversation with two groundbreaking scholars in Data Science to consider bias in the age of Big Data. How do algorithms and AI come to mirror people's biases? How might such reflected biases affect individuals and communities? What role can we, as psychologists, play in this Data Revolution? As society grapples with how to collect, analyze, and synthesize data of unprecedented proportions, social and personality psychologists can play a unique role in this new era, given our expertise not only with data but with the humans that such data often represents.

Speakers

Jennifer A. Richeson

Yale University

"The Mythology of Racial Progress" - Our perceptions of, beliefs about, and solutions for, racial inequality in the United States are shaped, at least in part, by a mythology of racial progress. Central to this mythology is the dominant narrative that American society has achieved substantial gains toward racial equality, and is automatically, perhaps naturally, continuing to make steady, linear progress toward the same. In this talk, I argue that our fidelity to this narrative elicits a persistent pattern of willful ignorance regarding some present-day racial disparities, including the wealth gap between Black and White Americans. I will illuminate some of the psychology that sustains the narrative, consequences of efforts to disrupt it, and then connect its operation to our (blind) faith in technology as a way to disrupt racial and other of societal biases. Implications of the mythology of racial progress for efforts to engender actual racial equity in contemporary society will be discussed.
Sendhil Mullainathan

University of Chicago Booth School of Business

"Algorithms and Bias" - It is growing clear to many that algorithms can be biased. To the psychology community, though it has been clearer much longer that humans are biased. To fully understand the impact algorithms will have on social inequities, we need to simultaneously consider both human and algorithmic biases. I will discuss three empirical projects that attempt to combine these. They illustrate how a naively implemented algorithm can serve to magnify human bias—in one case producing biased health care decisions for tens of millions. They also illustrate how properly implemented algorithms can dramatically reduce human biases—in one case providing a way to drastically reduce incarceration rates of blacks without affecting crime rates. These projects also illustrate the rich potential in a field—even beyond the specific question of bias—that combines computational and psychological tools.
Moritz Hardt

University of California, Berkeley

"Fairness and Machine Learning: Limitations and Opportunities" - This talk will give a bird's eye view of "Fairness and Machine Learning", an emerging interdisciplinary field grappling with some old and some new challenges around decision making in sociotechnical systems. We will begin to systematize attempts at formalizing different fairness criteria, their limitations, and their promises. In doing so, we will see how recent developments in machine learning relate to work in statistics, legal theory, economics, causality, as well as measurement and testing.
Mahzarin Banaji (with R. Bhaskar)

Harvard University

"The Fourfold Path to Permitting AI: Fairness, Ethics, Accountability, and Transparency (FEAT)" - Much of natural human behavior erodes the values of fairness, ethics, accountability, and transparency (FEAT), values enshrined in the statutes of the civilized world. Beginning with the ideas of Herbert A. Simon, psychology has discovered and established the many parameters of irrational human behavior that, even in enlightened humans, can automatically and implicitly subvert FEAT. Current computationally available corpora and algorithms magnify the same subversions of FEAT, often without procedural or consequentialist oversight. We argue that the standards for permissible AI should not only seek to avoid a slavish mimicry of human behavior but also to transcend the behavior of enlightened humans. A computational FEAT needs invention.


2020 Annual Convention
February 27-29, 2020
New Orleans, LA USA