New Federal Privacy Bill Would Require Audits of Algorithmic Decision-Making
Senator Maria Cantwell (D-WA) and Democratic colleagues have proposed a sweeping data privacy bill that would require covered entities to audit certain "algorithmic decision-making" systems that use machine learning (ML) and other forms of artificial intelligence (AI) to facilitate important decisions about consumers, such as credit or employment decisions. Unveiled in November, the Consumer Online Privacy Rights Act (COPRA) would force companies to conduct annual impact assessments of any covered AI/ML systems in an effort to mitigate bias and other potentially negative consequences of automated decision-making.
The bill is one of several privacy proposals circulating in Washington, D.C. that extend their reach to AI/ML systems. It is unlikely to pass in the near term, however; the day after its introduction, Republicans introduced a competing draft bill that does not contain an algorithmic auditing provision (although the current debate over potential regulation of AI/ML systems has not followed partisan divides up to this point).
Nonetheless, COPRA’s inclusion of provisions intended to regulate the use of AI/ML systems illustrates how lawmakers are looking more closely at automated, algorithm-driven decision-making and its potential effect on individuals.
COPRA’s Proposed Limitations on the Use of Algorithmic Decision-Making Systems
The bill would broadly prohibit "covered entities" from using "covered data" to engage in discriminatory practices regarding eligibility for housing, education, employment, or credit; to advertise or market for such purposes; or to otherwise impose restrictions on public accommodations.
Specifically, "covered entities" would be prohibited from processing or transferring "covered data" on the basis of an individual’s "actual or perceived" race, color, ethnicity, religion, sex, disability, gender or gender-related information, biometric information, or other protected characteristics in order to advertise, market, sell, or engage in other commercial activities related to housing, employment, credit, or education.
While such "processing" of data can occur in many circumstances, it appears the purpose of this prohibition is to establish a national non-discrimination standard that would apply to entities using algorithmic decision-making systems (many of which are enabled by AI/ML technology) in housing, employment, credit or education.
COPRA also would require covered entities using such systems to undertake “annual impact assessments” when they engage in, or assist others in, algorithmic decision-making for:
- Making or facilitating advertising for housing, education, employment or credit opportunities;
- Making an eligibility determination for housing, education, employment or credit opportunities; or
- Determining access to, or restrictions on the use of, any place of public accommodation.
COPRA Definitions
The bill defines "algorithmic decision-making" as a "computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques that a covered entity uses to make a decision or facilitate human decision-making with respect to covered data."
- A "covered entity" is any entity subject to the Federal Trade Commission Act that transfers or processes covered data;
- "Covered data" is any information that identifies, or is linked or reasonably linkable to, an individual or a consumer device, including derived data.
- Covered data does not include: (1) de-identified data; (2) employee data; or (3) public records.
Impact Assessment Requirements
The annual impact assessments under COPRA would have to:
- Describe and evaluate the development of the covered entity’s algorithmic decision-making processes, including their design and training data, and any testing for accuracy, fairness, and bias/discrimination; and
- Assess whether the algorithmic decision-making system produced discriminatory results on the basis of an individual’s (or class of individuals’) actual or perceived "race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability."
The bill would permit covered entities to use independent, external examiners or auditors for this process. There would also be a duty to disclose algorithmic decision-making impact assessments to the Federal Trade Commission (FTC) upon request, although covered entities could redact trade secrets to avoid their public disclosure.
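COPRA does not prescribe a methodology for the required assessment of "discriminatory results." Purely as an illustration of what such a check might involve in practice, the sketch below (in Python, using hypothetical decision and group data) compares favorable-outcome rates across protected groups and computes a disparate-impact ratio along the lines of the informal "four-fifths" rule; the bill itself does not mandate this or any other specific metric.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Compute the favorable-outcome rate for each protected group.

    decisions: iterable of 0/1 outcomes (1 = favorable, e.g., credit approved)
    groups:    iterable of group labels aligned with `decisions`
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates, reference_group):
    """Ratio of each group's selection rate to a reference group's rate.

    Ratios well below 1.0 (commonly below 0.8 under the informal
    "four-fifths" rule) signal that outcomes may warrant closer review.
    """
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical example: approvals from an automated credit-eligibility model.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                                # {'A': 0.8, 'B': 0.4}
print(disparate_impact_ratios(rates, "A"))  # {'A': 1.0, 'B': 0.5}
```

In this toy example, group B's approval rate is half of group A's, a disparity that an entity would presumably need to investigate and document in its annual impact assessment.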
COPRA also would require the FTC to issue a report within three years regarding the use of algorithmic decision-making to facilitate decisions in finance, housing, education, and employment. It also would mandate the creation of a new FTC bureau dedicated to enforcing federal privacy and data security laws; that new bureau likely would have some authority to enforce the bill’s algorithmic decision-making provisions.
Finally, aggrieved parties (states and individuals) would have a private right of action, including recovery of penalties up to $1,000 and attorneys’ fees.
COPRA leaves many important questions unanswered. For example, in what detail must an entity "describe" its algorithm development or algorithmic decision-making process? What would be the threshold for determining whether there are "discriminatory results"? How would entities share responsibility when multiple parties contribute to a single decision, such as when a company uses a vendor’s software to make that decision?
COPRA Follows a Prior Congressional Proposal to Impose Audit and Impact-Assessment Requirements on Algorithmic Decision-Making Systems
COPRA follows the same regulatory framework as the Algorithmic Accountability Act (Accountability Act), a federal bill proposed on April 10, 2019, by Senator Cory Booker (D-NJ) and Democratic colleagues. The Accountability Act would authorize the FTC to create regulations requiring covered entities that use, store, or share personal information to conduct impact assessments (i.e., audits) of new and existing AI/ML "high-risk" automated decision systems (“ADS”) and information systems.
The Accountability Act would apply to companies that:
- Have more than $50 million in gross receipts;
- Possess or control personal information of at least one million people or devices; or
- Are data brokers.
The bill defines "high-risk" systems to include, in part, those that pose a significant risk to the privacy or security of consumers’ personal information or that involve personal information such as race, color, national origin, political opinions, religion, trade union membership, gender, gender identity, sexuality, and sexual orientation. Like COPRA, the Accountability Act would require companies to review their use of ADS for "impacts on accuracy, fairness, bias, discrimination, privacy, and security."
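The coverage thresholds listed above are mechanical enough to express as a simple screening check. The following is a rough, purely illustrative sketch; the function name and parameters are hypothetical, and the bill’s actual coverage tests are more detailed.

```python
def accountability_act_applies(gross_receipts_usd: float,
                               records_held: int,
                               is_data_broker: bool) -> bool:
    """Rough sketch of the Accountability Act's coverage thresholds:
    more than $50M in gross receipts, personal information on at least
    one million people or devices, or status as a data broker."""
    return (
        gross_receipts_usd > 50_000_000
        or records_held >= 1_000_000
        or is_data_broker
    )

# A company with $10M in receipts but 2M consumer records would be covered:
print(accountability_act_applies(10_000_000, 2_000_000, False))  # True
```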
The required AI assessments would include, among other things:
- A detailed description of the ADS, its design, training data, and its purpose;
- An assessment of “the relative benefits and costs” of the ADS, taking into account relevant factors like data minimization practices, the duration for which personal information is stored, consumer access to the results, and the recipients of the results of the ADS;
- An assessment of the risks posed by the ADS to the privacy or security of consumers’ personal information and the risks that the ADS may result in or contribute to inaccurate, unfair, or biased/discriminatory decisions impacting consumers; and
- The risk-minimizing measures the covered entity will employ.
The Accountability Act would require covered entities to conduct impact assessments in consultation with external auditors, if possible, and to address the results of the impact assessments in a timely manner. Covered entities could, at their discretion, make the AI assessment public.
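The Accountability Act prescribes no particular format for documenting an assessment. As a purely illustrative sketch, assuming hypothetical field names and values, the required elements could be captured in a structured record along these lines:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ADSImpactAssessment:
    """Hypothetical record of one impact assessment for an automated
    decision system (ADS), mirroring the elements the Accountability Act
    would require; the bill itself prescribes no particular format."""
    system_name: str
    purpose: str                  # what decision the ADS makes or facilitates
    design_summary: str           # model type, features, decision logic
    training_data_summary: str    # sources, collection dates, known gaps
    benefits_and_costs: str       # relative benefits/costs, data minimization,
                                  # retention period, consumer access, recipients
    privacy_security_risks: List[str] = field(default_factory=list)
    bias_discrimination_risks: List[str] = field(default_factory=list)
    mitigation_measures: List[str] = field(default_factory=list)
    external_auditor: str = ""    # consulted "if possible" under the bill
    made_public: bool = False     # publication is at the entity's discretion

# Usage sketch with hypothetical values:
assessment = ADSImpactAssessment(
    system_name="resume-screening-model-v2",
    purpose="Rank job applicants for recruiter review",
    design_summary="Gradient-boosted classifier over structured resume features",
    training_data_summary="Three years of historical hiring outcomes",
    benefits_and_costs="Faster screening; risk of replicating past hiring patterns",
    privacy_security_risks=["Applicant data retained beyond the hiring decision"],
    bias_discrimination_risks=["Lower selection rates observed for one age cohort"],
    mitigation_measures=["Re-weight training data", "Quarterly fairness review"],
    external_auditor="Independent third-party auditor (engaged where possible)",
)
```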
Failure to comply with the FTC’s regulations would be treated as an "unfair or deceptive act or practice" under the Federal Trade Commission Act. State attorneys general, and other authorized state officers, would be empowered to bring a civil action on behalf of citizens in their state if they have a "reasonable belief" that such individuals are being "threatened" or adversely affected. Finally, the Accountability Act specifically provides that it would not preempt state laws.
A Sign of Things to Come?
While Congress is unlikely to pass the Accountability Act and COPRA in their current forms, the introduction of these proposals may signal increasing Congressional interest in incorporating AI-related regulations in proposals to adopt national data privacy frameworks. In the coming year, we expect to see even more proposals by regulators, lawmakers, consumers, and employees to test and audit AI/ML-driven decision-making systems.