Motivation

Questions

The project seeks to answer questions about the ethical issues and concerns raised by algorithmic systems by designing an Algorithmic Equity Toolkit. The toolkit aims to help civil rights advocates identify and audit the algorithmic processes underlying surveillance tools and automated decision systems in the public sector.

The questions we ask are: What ethical issues should civil rights advocates be concerned with regarding surveillance and automated decision systems? How are algorithmic systems reinforcing bias and discrimination? What do community members and non-experts understand about algorithmic tools and their impacts? What should they know about surveillance and automated decision systems in order to identify them and understand how they work?

The toolkit includes three components: (1) an identification guide that helps civil rights advocates recognize surveillance and automated decision systems and understand how they work; (2) a questionnaire of red flags for surfacing the social context of a given system, its technical failure modes (i.e., its potential for not working correctly, such as producing false positives), and its social failure modes (i.e., its potential for discrimination even when working correctly); and (3) an interactive web demo that illustrates the underlying mechanics, inaccuracies, and potential harms of facial recognition technology.
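To make the distinction concrete, the brief Python sketch below shows one way a technical failure mode can be surfaced: computing false positive rates separately by demographic group for a hypothetical facial recognition system, where an acceptable-looking overall error rate can conceal a much higher error rate for one group. The group labels and match results are invented for illustration only; this sketch is not part of the toolkit itself.

    # Illustrative sketch only (not part of the toolkit): hypothetical match results
    # from a facial recognition system, used to show how an aggregate accuracy figure
    # can hide unequal false positive rates across demographic groups.

    from collections import defaultdict

    # Each record: (demographic_group, system_said_match, is_actually_same_person)
    # All values below are made up for illustration.
    results = [
        ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
        ("group_a", False, False), ("group_b", True, False), ("group_b", True, False),
        ("group_b", True, True), ("group_b", False, False),
    ]

    false_positives = defaultdict(int)  # flagged as a match, but not the same person
    negatives = defaultdict(int)        # all cases that were not the same person

    for group, predicted_match, actually_same in results:
        if not actually_same:
            negatives[group] += 1
            if predicted_match:
                false_positives[group] += 1

    for group in sorted(negatives):
        rate = false_positives[group] / negatives[group]
        print(f"{group}: false positive rate = {rate:.0%}")

Run as written, the sketch reports a higher false positive rate for group_b than for group_a, which is the kind of disparity the red-flag questionnaire prompts advocates to ask about.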

The project fills a critical gap in non-expert understanding of complex and proprietary algorithmic systems and seeks to help community members ask pointed questions about algorithmic technologies, both for their own understanding and of their elected officials.

Background

In an era of rapid technological advancement, our society has more access than ever to information and data analysis tools, and the sphere of privacy has shrunk. Surveillance tools are increasingly used by governments to track and monitor people, what they do, and what they say. Automated decision system (ADS) technologies have been introduced to help process, analyze, and interpret the plethora of data captured through surveillance tools. Ultimately, these algorithmic tools help people make decisions; however, their increasing use may replicate, exacerbate, mask, or transfer inequities in our current system. Extensive evidence demonstrates that the harms of surveillance and ADS technologies are significant. Surveillance systems like transit microphones, deep packet sniffing, and stingrays may collect unsolicited conversations and private information (“American Civil Liberties Union”). ADS technologies such as automated bail decisions, facial recognition, and algorithmically supported hiring decisions have been found to exhibit racial and gender bias (Buolamwini & Gebru, 2018). The increased use of ADS technologies has also compromised public oversight, because the public is less able to comprehend these technologies (Friedman, 1996, p. 342). Nonetheless, governments have rapidly expanded their use of surveillance and ADS technologies without considering the harms and biases they may carry.

Community organizations across the U.S. are concerned that surveillance and ADS technologies implemented in their communities are profiling and targeting minorities and disadvantaged groups. These organizations have pushed for algorithmic equity (accountability, transparency, and fairness) through legislation such as city surveillance ordinances that govern the acquisition and use of surveillance technology. Seattle, Berkeley, Nashville, Cambridge, and other cities have implemented ordinances that differ in their scope, process, and power to regulate government technologies. The City of Seattle passed a surveillance ordinance in 2013 that requires a master list of government surveillance technologies to be publicly available; in 2017, the ordinance was expanded to include software and to require impact reports written by a diverse stakeholder task force. This year, the ACLU drafted bills to regulate automated decision systems and facial recognition software (“Washington State Legislature”). These bills did not pass, despite the absence of any state law currently regulating the use of these technologies.

Recent ethnographic research with the City of Seattle affirms that even expert technologists within the city did not consider Automated License Plate Readers, or any of the devices on the City’s “Master List” of surveillance technologies, to be algorithmic systems, focusing instead on their data collection functions (Young, Katell, & Krafft, 2019). Policymakers and on-the-ground stakeholders alike find algorithmic systems illegible. We identified a need for civil rights advocates to hold public officials accountable for their use of surveillance and ADS technologies, so we decided to develop a toolkit toward this goal. Toolkits produced by AI Now, AI BlindSpot, and the Center for Government Excellence go into great detail about how ADS technologies work and where they may cause harm. However, these toolkits are designed for academic, government, or technical audiences and focus primarily on ADS technologies. Our toolkit is distinct in that it helps community non-experts identify surveillance and ADS technologies and better equips them to engage with public officials in their regulation.

Stakeholders

Stakeholder engagement was a principal component of the development of our toolkit. The American Civil Liberties Union and the members of the Community Centered Tech Coalition, specifically Densho and the Council on American-Islamic Relations (CAIR), are the stakeholders central to our project. Densho works to preserve and share the history of the WWII incarceration of Japanese Americans, and CAIR’s mission is to enhance understanding of Islam, protect civil rights, promote justice, and empower American Muslims. We engaged with these organizations throughout the course of the project, starting with face-to-face meetings to gain insight into their missions and their work on tech fairness policy. These preliminary meetings gave us a sense of how they do their work, which guided the design of our prototype toolkit elements. We then engaged them in an iterative evaluation process throughout the project to receive feedback on the Algorithmic Equity Toolkit. We took into consideration the concerns they expressed about the impact of algorithmic bias on their communities of interest and strove to strike a balance between addressing their specific needs and keeping the toolkit applicable to the broader community.

We envision our stakeholders using the toolkit to better inform their activism around tech fairness policy. We foresee community members and organization leaders using it to understand the different functions of government technologies and the potential biases in the use of algorithmic systems, both in the City of Seattle and in society at large.

Ethics

Messaging

Algorithmic harm is an issue reflective of systemic and structural inequality. We do not want the messaging of our tool to focus solely on the problem of technical failure modes in automated decision systems. Our concern is broader: is it ethical to increase accuracy and diversify datasets, and thus make facial recognition software even better at identifying marginalized faces? Does facial recognition software have a place in our society at all, whether in the criminal justice space or elsewhere?

Stakeholders

Our aim is to incorporate as much input as possible from our stakeholders without customizing the tool too heavily to one specific group. One of our criticisms of existing tools is that they have not gone far enough in engaging stakeholder perspectives beyond academics and industry representatives. In response, we have chosen to focus heavily on the needs of underrepresented populations. However, given both the number and diversity of community stakeholders, creating a tool that serves such a breadth of users is a challenge. One ethical consideration is therefore whether to design this tool with all, several, or only one stakeholder in mind.

How we address ethical concerns

We connect with diverse stakeholders and a coalition of community groups, and we co-design with the ACLU of Washington and other stakeholders to ensure that our prototype and final product address stakeholders’ concerns. We are clear not only about the potential reach and benefits of the demo, but also about its limitations. Through Diverse Voices training, we elicit detailed feedback about our toolkit from underrepresented populations.