The Meaning and Measurement of Bias
Lessons from Natural Language Processing


Translation Tutorial 

ACM Conference on Fairness, Accountability, and Transparency (FAccT)
January 2020

Relevant material:

Slides from ACM FAccT Tutorial available here  

Video of ACM FAccT Tutorial here

[new version!]
Jacobs & Wallach. “Measurement and Fairness.” ACM Conference on Fairness, Accountability, and Transparency (FAccT). 2021.

Blodgett, Barocas, Daumé, Wallach. “Language (Technology) is Power: A Critical Survey of ‘Bias’ in NLP.” Proc. ACL 2020.

Jacobs, Blodgett, Barocas, Daumé, Wallach. “Translation Tutorial: The Meaning and Measurement of Bias: Lessons from Natural Language Processing.” ACM Conference on Fairness, Accountability, and Transparency (FAccT). 2020.
[ACM link]  



About

The recent interest in identifying and mitigating bias in computational systems has introduced a wide range of different, and occasionally incomparable, proposals for what constitutes bias in such systems.
This tutorial aims to introduce the language of measurement modeling from the quantitative social sciences as a framework for understanding fairness in computational systems by examining how social, organizational, and political values enter these systems. We show that this framework helps to clarify the way unobservable theoretical constructs, such as "creditworthiness," "risk to society," or "tweet toxicity," are implicitly operationalized by measurement models in computational systems.

We also show how systematically assessing the construct validity and reliability of these measurements can be used to detect and characterize fairness-related harms, which often arise from mismatches between constructs and their operationalizations. Through a series of case studies of previous approaches to examining "bias" in NLP models, ranging from work on embedding spaces to machine translation and hate speech detection, we demonstrate how to apply this framework to identify these approaches' implicit constructs and to critique the measurement models operationalizing them. This process illustrates the limits of current so-called "debiasing" techniques, which have obscured the specific harms whose measurements they implicitly aim to reduce. By introducing the language of measurement modeling, we provide the FAccT community with a process for making explicit and testing assumptions about unobservable theoretical constructs, thereby making it easier to identify, characterize, and even mitigate fairness-related harms.


What to expect

This tutorial offers measurement modeling as a unifying framework for practitioners and a toolbox for thinking carefully about a variety of harms.

In our 90-minute tutorial, we will introduce the framework of measurement modeling for fairness in computational systems, followed by an in-depth study of how this framework reveals paths forward in the context of "debiasing" in NLP.

We will cover:

  • Introduction to measurement modeling and how measurement models are evaluated
  • Applying measurement modeling to fairness: recidivism risk as an unobservable theoretical construct
  • Applying measurement modeling to fairness: fairness as an essentially contested construct
  • Taxonomy of harms
  • Three case studies of “bias” in NLP: measurement and representational harms in examples from embedding spaces, machine translation, and hate speech detection
  • Discussion of bias mitigation, questions from attendees, closing

By introducing the language of measurement modeling to the FAccT community, we aim to bring in perspectives from the quantitative social sciences, which have hitherto been underrepresented in the study of fairness in computational systems. Our primary intended audience is the computer science community; however, we anticipate that the language of measurement modeling will also be useful for researchers and practitioners in other disciplines. Although this language will already be familiar to attendees from the quantitative social sciences, we hope to show how that familiarity can support and contribute to discussions about fairness.

The process of examining the construct validity and reliability of a measurement allows us to fully explore the theoretical and practical assumptions (i.e., the framing, values, and potential consequences) built into computational systems. The first part of the tutorial shows how the framework lends clarity to a variety of allocative harms: harms that occur when decision-making procedures allocate opportunities or resources. We then turn to representational harms, harms that occur when systems that "represent society but don't allocate resources...reinforce the subordination of some groups along the lines of identity" (Crawford, 2017), and we introduce a taxonomy of such harms. Through a series of case studies of recent NLP papers, each purporting to analyze "bias" in an NLP model, we identify the constructs these papers implicitly measure; each of these constructs corresponds to a representational harm in our taxonomy. The framework then shows how the construct validity and reliability of the proposed measurements of each harm can be assessed.
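
As a concrete illustration of the kind of implicit measurement model the embedding-space case study examines, the sketch below computes a WEAT-style cosine-association score between two target word sets and two attribute word sets. This is not material from the tutorial: the word lists, the toy random vectors, and the helper names (cosine, association, weat_score) are all illustrative assumptions. The point is that such a score operationalizes an unobservable construct ("gender bias in the embedding space"), and it is that operationalization whose construct validity and reliability the measurement modeling framework asks us to assess.

# Illustrative sketch (not from the tutorial): a WEAT-style association score
# as an example of an implicit measurement model. The word lists and toy
# random vectors below are placeholders; in practice the vectors would come
# from a trained embedding model.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical embedding lookup: word -> vector.
vocab = ["career", "office", "family", "home", "he", "him", "she", "her"]
embeddings = {w: rng.normal(size=dim) for w in vocab}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b):
    # Mean similarity to attribute set A minus mean similarity to attribute set B.
    vec = embeddings[word]
    return (np.mean([cosine(vec, embeddings[a]) for a in attr_a])
            - np.mean([cosine(vec, embeddings[b]) for b in attr_b]))

def weat_score(targets_x, targets_y, attr_a, attr_b):
    # Difference in mean association between the two target sets. This single
    # number is the measurement; whether it captures the construct it is taken
    # to measure is a question of construct validity, and whether it is stable
    # across seeds and corpora is a question of reliability.
    return (np.mean([association(x, attr_a, attr_b) for x in targets_x])
            - np.mean([association(y, attr_a, attr_b) for y in targets_y]))

score = weat_score(["career", "office"], ["family", "home"],
                   ["he", "him"], ["she", "her"])
print(f"WEAT-style association score: {score:.3f}")

Assessing this measurement would then involve asking, for example, whether the chosen word lists actually reflect the construct of interest (content validity) and whether the resulting score is stable across embedding corpora and random seeds (reliability).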


About the organizers

Abigail Jacobs is an Assistant Professor at the University of Michigan in the School of Information and in Complex Systems. She uses computational social science and measurement modeling to understand social networks, governance, accountability, and fairness in computational systems and organizations.

Su Lin Blodgett is a Ph.D. candidate at the University of Massachusetts Amherst. She has worked at Microsoft Research in New York City on using methods from quantitative social science to better characterize bias in NLP. More broadly, her research explores connections between the development of equitable NLP systems and computational sociolinguistic methods for social media text.

Solon Barocas is a Principal Researcher in the New York City lab of Microsoft Research and an Assistant Professor in the Department of Information Science at Cornell University. He is also a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. His current research explores ethical and policy issues in artificial intelligence, particularly fairness in machine learning, methods for bringing accountability to automated decision-making, and the privacy implications of inference.

Hal Daumé III is a professor in Computer Science and Language Science at the University of Maryland, and a principal researcher in the machine learning group and fairness group at Microsoft Research in New York City. He and his wonderful advisees study how to get machines to become more adept at human language, through interactive learning and with sensitivity toward potential harms. He was program co-chair for NAACL 2013 and ICML 2020; was chair of the NAACL executive board; and was inaugural diversity & inclusion co-chair at NeurIPS 2018.

Hanna Wallach is a senior principal researcher at Microsoft Research, NYC, where she is a member of MSR’s FATE group. Her research spans machine learning, computational social science, and issues of fairness, accountability, transparency, and ethics as they relate to machine learning. She is a member of the IMLS Board and the WiML Senior Advisory Board. She is currently the general chair for NeurIPS 2019. She received her Ph.D. from the University of Cambridge in 2008.






Abigail Z. Jacobs she/her/hers
azjacobs + umich + edu