New York City's Groundbreaking New Law Will Require Audits of AI and Algorithmic Systems That Drive Employment Decisions
New York City has passed the first law in the United States that will require employers to conduct audits of automated decision-making tools used to evaluate job candidates or employees. The law may have huge implications for employers and companies that develop such tools.
Starting on January 1, 2023 (one year from now), the law will require employers to obtain bias audits of software-driven tools that substantially assist in decisions to hire or promote. The audits must be performed by an independent auditor.
The new law also will require that a covered employer:
- Maintain a summary of bias audit findings on its website, and keep "information about the type of data collected for the automated employment decision tool, the source of such data and the employer or employment agency's data retention policy," which the employer would have to provide to the employee or candidate upon written request;
- Notify any employee or candidate who resides in New York City in advance about the use of such a tool, including "the job qualifications and characteristics that such automated employment decision tool will use in the assessment;" and
- Allow candidates/employees to request an accommodation rather than be evaluated by the tool.
The City's Department of Consumer Affairs will have enforcement authority. Employers that are out of compliance on or after January 1, 2023 will face fines of $500 to $1,500 per violation per day. There is no private right of action in the law itself, although the law makes clear it is not the exclusive remedy for applicants or employees.
What Employers Are Covered?
The law applies to tools used to "screen candidates for employment or employees for promotion within the city." It is unclear if this means decisions by employers physically located in New York City, decisions related to jobs located within the city, or something else.
What "Tools" Are Covered?
The law defines an "automated employment decision tool" as a "computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons." Audits are required when the tool is used on "candidates for employment or employees."
This broad definition could capture innumerable technologies used by many employers, including software that sources candidates, performs initial resume reviews, helps rank applicants, or tracks employee performance.
For now, there are no additional regulations or guidance for employers on exactly which tools or processes fall under the law's mandate.
What Sort of Audit Is Required?
This question is not clearly answered by the new measure. The law states that the required audit must be "an impartial evaluation by an independent auditor" that assesses the tool's "disparate impact" on people of a particular gender or race/ethnicity. That leaves many unanswered questions, such as what exactly an auditor should analyze and what it should look for.
The likely answers to these questions lie in the term "disparate impact," which describes a theory of discrimination developed in federal and state courts over the last 50 years. Disparate impact exists when a facially neutral practice produces discriminatory results. For example, a requirement that all employees be over six feet tall would disqualify far more women than men. If statistical proof showed that women were in fact selected at a significantly lower rate than men, this could constitute disparate impact discrimination unless the height requirement was justified by business necessity. A successful claim would not require proof of any intent by the employer to discriminate.
Requiring the audits to analyze data for "disparate impact" suggests that the focus will be on the results the tool generates, rather than on other aspects of the tool (for example, the data used to train it, or how it weighs various factors when making a decision). There is a robust body of law that informs how statistical analyses can show or disprove the existence of a disparate impact.
One guidepost developed in the federal courts is the "four-fifths rule," which holds that a selection rate for any protected group that is less than four-fifths (80 percent) of the selection rate for the group with the highest rate may be evidence of disparate impact. This quick-and-dirty test can be refined by more sophisticated analyses, including regression analyses that can identify reasons for the disparity other than a protected characteristic.
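To make the four-fifths comparison concrete, the following is a minimal sketch in Python of the calculation described above. The group labels and counts are hypothetical, and an actual bias audit would involve far more than this (statistical significance testing, intersectional categories, and controls for legitimate, non-discriminatory factors).

```python
# Illustrative sketch of the "four-fifths rule" comparison.
# Group labels and counts are hypothetical, for demonstration only.

outcomes = {
    "group_a": {"applicants": 200, "selected": 60},  # 30% selection rate
    "group_b": {"applicants": 150, "selected": 30},  # 20% selection rate
}

# Selection rate for each group.
rates = {group: v["selected"] / v["applicants"] for group, v in outcomes.items()}

# Compare each group's rate to the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    status = "below four-fifths threshold" if ratio < 0.8 else "at or above threshold"
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} ({status})")
```

In this hypothetical, group_b's selection rate is two-thirds of group_a's, below the 80 percent benchmark, which under the four-fifths rule may be evidence of disparate impact warranting closer analysis.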
Even assuming employers can rely on existing federal law, questions remain. For example, if people make a hiring or promotion decision based in part on information from a covered tool, at what point is the tool "substantially assisting" in the decision? How should an employer account for decisions made partly by a tool and partly by people?
What Must Employers Disclose?
The law includes three disclosure obligations:
1. A posting on the employer's website, before the tool is used, of "a summary of the results of the most recent bias audit of such tool as well as the distribution date of the tool;"
2. Notice to candidates or employees, at least 10 days before the tool is used on them, that the tool will be used (with a chance to request an accommodation) and of the "characteristics" the tool will analyze in making its assessment; and
3. Disclosure, upon written request by a candidate/employee, of "information about the type of data collected for the automated employment decision tool, the source of such data and the employer or employment agency's data retention policy."
It is unclear how much detail must be included in the audit summary, in the explanation of the "characteristics" the tool will assess, or in the information on the "type" and "source" of the data analyzed.
The requirement to explain how the tool analyzes different characteristics is likely to raise particular challenges for employers that use vendor-made software, as these vendors often protect how their tools work as trade secrets or under confidentiality agreements.
What Should Employers Do?
Employers who hire in New York City should:
- Begin analyzing all the tools used to assist with hiring and performance management to determine whether the law applies;
- Determine how they will track and maintain the data required for the audits, including gender and race of candidates/employees;
- Evaluate how they will post audit summaries and provide information upon request, including how they will explain the "characteristics" that the tool used in making its assessment;
- Review contracts with vendors that provide automated decision-making tools, including provisions involving confidentiality and indemnity; and
- Consider performing a preemptive internal audit under the protection of attorney-client privilege to evaluate potential risks.
DWT's Employment Services group and AI team have been tracking the developing law in this area, and we are working with clients and vendors on these issues.