Attorneys at the HLS Cyberlaw Clinic joined representatives from the ACLU of Massachusetts, MIT researchers, and a number of technology experts and policy advocates in calling for the creation of a Massachusetts state commission to study the use of algorithms, AI, and machine learning in government decision-making. On October 1, 2019, Cyberlaw Clinic Managing Director Christopher Bavitz was among those who testified before the Joint Committee on State Administration and Regulatory Oversight in support of proposed bills H.2701 and S.1876, which would create a commission designed to survey the use of algorithmic and machine learning tools in government decision-making. According to the House version of the bill, presented by Representative Sean Garballey of Arlington, the proposed commission would be tasked with studying and making recommendations related to Massachusetts’ use of “automated decision systems that may affect human welfare, including but not limited to the legal rights and privileges of individuals.” Bavitz’s written submission was joined by a number of researchers and others from the Berkman Klein Center community (all of whom signed on in their individual capacities).
The ACLU of Massachusetts testified that many public entities in the Commonwealth already use so-called “risk assessment instruments” in key government work, including the juvenile probation system and the Disabled Persons Protection Commission. These instruments are currently unregulated — meaning that “people who are impacted by them may be unaware of their existence, and therefore unable to raise questions about their use.”
Despite popular assumptions about the infallibility of technology, supporters of the bills warned, algorithms are as prone to error as the people who design them. “If biased algorithms power the tools, the results will be biased,” noted Suffolk Law professor Gabriel Teninbaum. The Electronic Privacy Information Center, a DC-based think tank, observed in its testimony that many criminal justice algorithms use personal characteristics like age and sex in such a way that “two people accused of the same crime may receive sharply different bail or sentencing outcomes based on inputs beyond their control.”

At a time when “more and more technologies that incorporate artificial intelligence, algorithms, and machine learning come to inform or serve as the basis for government decisions in Massachusetts,” Bavitz told lawmakers, the proposals made by H.2701 and S.1876 are “an important — indeed, necessary — step toward ensuring the due process and related rights of citizens of the Commonwealth.”
Many of the bills’ supporters also cautioned lawmakers about a potential pitfall of Commission investigations: algorithms used by the government are often developed by private-sector entities, some of which may be unwilling to share key information about how those algorithms work. The Massachusetts Law Reform Institute pointed to one such case, in which Eric Loomis was sentenced to six years in prison in a decision informed by a proprietary risk assessment algorithm. The private company that developed the algorithm refused to provide information about how it was designed; as a result, Loomis argued, his due process rights were violated. Investigative journalists at ProPublica later determined that the algorithm was “no more accurate than a coin flip,” but Loomis’s appeals were denied. H.2701 and S.1876 empower the Commission to examine intellectual property and trade secrets claims, but, as Bavitz noted, “the Commission will need to recognize that these kinds of considerations are not merely subjects of the Commission’s work but may impact (or impede) the Commission in carrying out its mandate in the first place.”
Supporters suggested that the Commission incorporate public engagement, especially from those who stand to be directly affected by the use of decision-making algorithms, and that experts from a variety of universities and policy groups be involved in the Commission’s research and development. Those testifying in favor of the bills also noted that a commission would allow for future research and study, which is essential in an area where the relevant technology is still evolving rapidly. “The ‘look before you leap’ approach is a good one,” said Mark Lemley, Director of the Program in Law, Science, and Technology at Stanford Law School. “It is likely to encourage innovation and experimentation in a field that is, after all, in its infancy.”
Written submissions included testimony from:
- ACLU of Massachusetts;
- AI Now Institute; Data for Black Lives; Center on Race, Inequality, and the Law; and the Surveillance Technology Oversight Project;
- Kendra Albert, Amar Ashar, Christopher T. Bavitz, Ryan Budish, Jessica Fjeld, Urs Gasser, Adam Holland, Mason Kortz, Adam Nagy, Sarah Newman, David O’Brien, Hilary Ross, Carolyn Schmitt, and Bruce Schneier;
- John Basl (Northeastern University);
- Dan Calacci (MIT Media Lab);
- Elisa Celis (Yale University);
- Karthik Dinakar (MIT Media Lab);
- the Electronic Privacy Information Center;
- Mark A. Lemley (Stanford Law School);
- Massachusetts Law Reform Institute;
- Gabriel H. Teninbaum (Suffolk University Law School); and
- Nisheeth K. Vishnoi (Yale University).
Elizabeth Strassner is a 2L at Harvard Law School and a student in the Cyberlaw Clinic during the fall 2019 semester.
Image: Massachusetts Statehouse.jpg by Hsin Ju HSU, used courtesy of Wikimedia Commons, CC BY-SA 3.0.