Federal Regulators' Artificial Intelligence Initiative Is a Promising Development for Financial Industry
Update: On May 17, 2021, the agencies extended the deadline for comments from June 1, 2021, to July 1, 2021. The text of this post has been updated accordingly.
The federal financial institutions regulatory agencies collectively issued a Request for Information and Comment (RFI) on March 31, 2021, to better understand how artificial intelligence (AI) and machine learning (ML) are used in the financial services industry and identify areas where additional "clarification" (possibly in the form of informal guidance or more formal regulation) could be useful.
Issued collectively by the Board of Governors of the Federal Reserve System (the Fed), the Consumer Financial Protection Bureau, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency, the RFI seeks commentary on 17 questions centering on how financial institutions ensure quality of AI/ML outputs and manage model risk.
Benefits and Risks of AI/ML in the Financial Industry
As highlighted in the RFI, AI/ML developments present significant opportunities to improve bank operations and the delivery of financial services. Currently, financial institutions are using and exploring cost-effective uses of AI/ML related to, among other things, fraud management, Bank Secrecy Act compliance, improved credit decisions and underwriting, and more efficient customer service experiences.
This includes smaller and midsized community-based institutions, which are actively using and exploring AI/ML but have historically been challenged in keeping up with technology/FinTech developments. Thus, an important aspect of any AI/ML initiative will be making sure that such institutions are not competitively disadvantaged by industry AI/ML advancements that they are not able to afford or implement due to resource constraints.
The RFI also notes that AI/ML offers significant new benefits and creates opportunities to expand access to the underserved and "unbanked." But—as the RFI explains—AI/ML also presents a number of potential risks, including unlawful discrimination where models produce biased outcomes, operational vulnerabilities where processes become reliant on technology, and risk management concerns about the soundness of models.
Regulatory Interest in AI/ML
The issues raised in the RFI were foreshadowed in Fed Governor Lael Brainard's recent speech "Supporting Responsible Use of AI and Equitable Outcomes in Financial Services," at the AI Academic Symposium the Fed hosted earlier this year. In that speech, Governor Brainard flagged the regulators' collaboration to explore AI issues, and noted that this RFI was in the works. She also explained that "[r]egulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve."
As the RFI notes, there are already a significant number of laws, regulations, guidance documents, and other publications from various agencies concerning the use of AI/ML. The new RFI is by no means the Agencies' first inquiry or investigation into AI/ML. For example, the CFPB issued an RFI in 2017 regarding the use of alternative data and modeling techniques in the credit process, in response to its 2015 Data Point study on "credit invisibles," which found that credit invisibility and the problems that accompany it disproportionately affect Black and Latino consumers.
Later in 2017, the CFPB issued a No Action Letter (its first) under the Equal Credit Opportunity Act (ECOA) and Regulation B to a provider of direct-to-consumer personal loans that bases its underwriting and pricing decisions on AI/ML and alternative data. In 2019, the CFPB shared highlights from its analyses of the lending platform's use of AI/ML, which found that the tested model in fact increased access to credit as compared with traditional models and did not implicate fair lending issues. (The lending platform received an updated No Action Letter in November 2020 under both ECOA and the CFPB's unfair, deceptive, or abusive acts and practices authority.)
The CFPB again highlighted its commitment to encouraging the use of AI/ML to expand access to credit this past summer, publishing an "Innovation spotlight" on its blog focused on explainability issues (more below) when using AI/ML-based underwriting and pricing models. The blog post concluded, presaging the 2021 RFI, that "stakeholder engagement with the Bureau[] may ultimately be used to help support an amendment to a regulation or its Official Interpretation."
Regulators outside the financial space have also shown increased interest in promoting transparency and explainability where AI/ML is used for decision-making and protecting against potentially discriminatory AI/ML outputs. The recent California Privacy Rights Act calls on the newly formed state privacy regulatory agency to issue rulemaking on "automated decision-making," including "requiring businesses' response to access requests to include meaningful information about the logic involved in such decision-making processes."1
Legislation on the issue of automated decision systems has been proposed (but not passed) in several states, including California, Washington, Maryland, and Vermont—though these bills mainly focus on government agency purchase and use of AI models. The Federal Trade Commission has also been active in issuing guidance on the use of AI tools, including most recently an April 2020 blog post highlighting the importance of consumer transparency and sound decision-making, and NIST is leading efforts to develop principles around explainable AI.
The RFI
The RFI poses 17 specific questions seeking information, grouped around a few key subject areas, including:
- Explainability: Generally understood as the process by which the basis for certain AI/ML system outputs (decisions, recommendations, actions) is described or disclosed. The Agencies are particularly interested in learning how often financial institutions use post-hoc methods to evaluate conceptual soundness and the related limitations to those methods. Lack of explainability can inhibit a financial institution from understanding the AI/ML's conceptual soundness, which—according to the RFI—may result in less reliability when the AI/ML is used in new contexts.
Consistent with the questions the CFPB posed in its July 2020 blog post, a lack of explainability could also inhibit a company from being able to demonstrate compliance with legal and regulatory obligations, such as the anti-discrimination and consumer-protection requirements arising under ECOA and the Fair Credit Reporting Act (FCRA).
- Fair lending: The RFI requests feedback about:
- (1) The techniques financial institutions use to ensure fair lending compliance when AI/ML is involved in credit decisions, even for less transparent AI/ML programs;
- (2) How AI/ML can perpetuate existing biases that may lead to disparate treatment and discrimination—and how to reduce such risks; and
- (3) What approaches are taken to identify reasons for a credit decision when AI/ML is being used (which relates to the explainability issue above, as well as to ECOA's "adverse action" requirement that creditors disclose the specific reason behind an adverse action against an applicant).
- Data quality, processing, and usage: The RFI asks how financial institutions handle risks and potential shortcomings related to the quality and usage of data that trains and ultimately helps design the AI/ML's predictions or categorizations. The Agencies specifically request information about risk management issues related to the use of alternative data versus traditional data in this context and whether/how alternative data can be more effective for specific uses.
- Dynamic updating: When AI/ML continues to learn, independently, in response to new data. In some cases, dynamic updating can cause the AI/ML to evolve in unintended and potentially harmful ways. The RFI requests information on how financial institutions address issues that arise with dynamic updating.
- Overfitting: The Agencies seek information about how financial institutions manage risks related to overfitting—instances when an algorithm "learns" from patterns in the training data that are idiosyncratic and not representative of the population as a whole. The RFI asks if there are certain barriers or challenges posed by overfitting that impact the use, development, or management of AI/ML programs.
- Cybersecurity risks: Because AI/ML is a data-intensive technology, it could be vulnerable to cyber threats. Accordingly, the RFI seeks information about how financial institutions manage these issues and the extent to which they have experienced any cybersecurity issues specifically with respect to AI/ML.
- Community institutions: The RFI asks whether community institutions face particular challenges in developing, adopting, and using AI.
The RFI also solicits input on management of third-party AI/ML and includes a general call for any other relevant information. While the RFI notes that the use of AI can heighten privacy concerns, it does not actually ask any questions seeking input on issues of consumer transparency and choice related to a financial institution's use of AI.
Initial Thoughts
Comments—either addressing the specific questions posed in the RFI or more broadly responding to it—must be submitted by July 1, 2021.
Interest by these agencies in issuing guidance on AI/ML is a positive development. For the industry to be able to move forward with implementing new initiatives backed by AI/ML solutions, it is imperative that the regulatory and compliance apparatus of the Agencies develops the resources, understanding, and expertise to supervise, regulate, and oversee the rollout and implementation of emerging AI/ML programs.
This initiative does not necessarily presage new rules in this area, at least in the short term. In its 2020 blog post, the CFPB specifically noted that new regulations might eventually be forthcoming. But for now, the RFI is framed to help inform the agencies in what will almost certainly be a lengthy process before any actual regulatory changes are made.
The RFI is nonetheless a golden opportunity to help the Agencies think through these complex issues and, in so doing, guide the development of AI/ML regulation in the coming decade. Given the complex nature of the information requested in the RFI, stakeholders interested in commenting should consider gathering relevant information soon to allow sufficient time to identify issues and priorities for comment by the July 1, 2021, deadline.
Please contact any of the attorneys identified herein if you have questions or would like to comment on the RFI.
FOOTNOTE
1 Cal. Civ. Code § 1798.185(a)(17).