Algorithmic Equity Toolkit

The Team

Project Lead: Mike Katell, University of Washington Information School

Community Engagement Lead: Meg Young, University of Washington Information School

Faculty Advisor: Peaks Krafft, Oxford Internet Institute

Data Science Lead: Bernease Herman

DSSG Fellows: Corinne Bintz, Aaron Tam, Vivian Guetler, Daniella Raz

Executive Summary

Community organizations across the U.S. are concerned that surveillance technologies deployed in their communities are profiling and targeting minorities and disadvantaged groups. These organizations are calling for algorithmic equity (accountability, transparency, and fairness) through legislation such as city surveillance ordinances that govern the acquisition and use of surveillance technology. Seattle, Berkeley, Nashville, Cambridge, and other cities have passed ordinances that differ in their scope, process, and power to regulate government technologies. However, most technology policy legislation in the U.S. fails to address the growing use of Automated Decision Systems such as facial recognition and predictive policing algorithms.

The Algorithmic Equity Toolkit gives community civil rights advocates a practical way to voice their concerns about these technologies while acquisition decisions are being made.

This toolkit aims to help community civil rights activists, grassroots organizers, and grasstops leaders identify government surveillance and Automated Decision System (ADS) tools, understand how they work, and recognize the potential harms of using them. An Automated Decision System is a computerized implementation of algorithms that assists in decision-making. ADSs are increasingly used in our society to analyze data and make decisions more quickly and efficiently; however, their complexity, and the lack of public awareness about how they work, erode transparency and accountability. We hope that with this toolkit civil rights activists can distinguish surveillance tools from ADS tools and be empowered to challenge the implementation and expansion of both by asking the right questions.
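To make the idea of an ADS concrete, here is a minimal, purely hypothetical sketch: a few lines of code that turn input data into a decision with no human in the loop. Every field name, weight, and cutoff below is invented for illustration; real systems are far more complex and usually opaque to the people they affect.

```python
# Hypothetical illustration of an Automated Decision System (ADS):
# an algorithm that converts input data directly into a decision.
# All fields, weights, and cutoffs are invented for this example.

def screen_applicant(record: dict) -> str:
    """Toy risk score: a weighted sum of inputs compared to a cutoff."""
    score = (
        0.5 * record["years_at_address"]
        + 0.3 * record["credit_score"] / 100
        - 0.8 * record["prior_flags"]
    )
    # The decision rule is fixed in code; the affected person never
    # sees the weights or the threshold that produced the outcome.
    return "approve" if score >= 4.0 else "deny"

print(screen_applicant(
    {"years_at_address": 2, "credit_score": 680, "prior_flags": 1}
))  # -> "deny"
```

Even this toy example shows why ADSs resist scrutiny: the outcome looks objective, but the choice of inputs, weights, and threshold is a human policy decision hidden inside the code.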

When do I use this toolkit?

The toolkit is designed for community members, civil rights organizations, and anyone interested in advancing algorithmic equity by voicing concerns about the use of harmful ADS technologies in their community.

You can use this toolkit when engaging with policymakers or other government representatives, or when you want to learn about ADS technologies and their potential harms.

What is included in this toolkit?

The toolkit comprises three components:

  1. A Surveillance and Automated Decision System Identification Guide to help you determine whether a government technology is a surveillance tool, an ADS, or both, and to understand the different functions these tools and systems perform.
  2. A questionnaire with sample questions that you can use to inquire about the potential harms of surveillance or ADS technologies when engaging with policymakers and other public officials.
  3. An interactive web demo that explains how facial recognition technology matches faces to identities by setting a minimum similarity score, which we refer to as the threshold. The demo also helps users understand false positives, illustrates bias against people of color (especially women of color), and draws attention to the philosophical problems with employing facial recognition technology regardless of its accuracy; a minimal sketch of the threshold mechanic appears after this list.
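The sketch below shows the core mechanic the demo illustrates: a face match is a similarity score compared against a threshold, and lowering the threshold trades missed matches for false positives. The embeddings and threshold values here are fabricated for illustration and are not taken from any real face recognition system.

```python
import numpy as np

# Hypothetical face "embeddings": real systems map each face image to
# a high-dimensional vector; we fabricate tiny vectors for illustration.
probe = np.array([0.9, 0.1, 0.4])              # face captured by a camera
gallery = {
    "person_A": np.array([0.88, 0.12, 0.41]),  # the true match
    "person_B": np.array([0.80, 0.20, 0.50]),  # a different person
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(threshold: float) -> list[str]:
    """Report every gallery identity whose similarity clears the threshold."""
    return [name for name, vec in gallery.items()
            if cosine_similarity(probe, vec) >= threshold]

# A strict threshold returns only the true match; a lax threshold also
# returns person_B -- a false positive.
print(match(threshold=0.999))  # ['person_A']
print(match(threshold=0.95))   # ['person_A', 'person_B']
```

Because the threshold is an adjustable operator setting, not a property of the technology, the same system can be tuned to produce many false positives, and accuracy claims quoted at one threshold say little about how the system behaves at another.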