FTC Hearings Exploring Algorithms, Artificial Intelligence, and Predictive Analytics Focus on Notions of Fairness, Transparency and Ethical Uses
The FTC continued its series of public hearings on Competition and Consumer Protection in the 21st Century with two days of hearings on November 13-14 focused on “Algorithms, Artificial Intelligence, and Predictive Analytics.” Over the two days of discussion and testimony, panelists generally agreed that new regulations in this area would be premature and that finding the appropriate framework for transparency, fairness, and ethics may require society to consider tradeoffs between competing value sets.
The hearings, at Howard University School of Law in Washington, DC, brought together FTC staff, industry, academia, and consumer interest organizations to discuss key issues arising from the increasing adoption of algorithms, artificial intelligence, and predictive analytics in society, including:
- Current and potential uses;
- Ethics and consumer protection; and
- Policy, innovation, and associated market considerations.
Panelists discussed the fundamental aspects of algorithms, artificial intelligence, and predictive analytics (hereafter “AI”), how these technologies could impact and influence consumer protection, and emerging regulatory and legal issues associated with the use of these technologies in real-world applications.
Bias and Algorithmic Fairness Concerns Require Tradeoffs
During a panel on ethics, participants from industry and academia attempted to tackle some of the ethical considerations in the use of AI. Participants discussed the many ways in which AI contributes to improving society, including by detecting diabetic retinopathy in adults with diabetes; assisting lenders in extending credit to individuals who have not previously had access to credit; and uncovering financial transactions that may be fraudulent.
At the same time, panelists acknowledged that the use of AI is not without risk. For example, panelists agreed that the use of AI may lead to bias in decision-making, including through the use of biased data sets (whether used to initially train AI tools or supplied as ongoing data feeds). In addition, bias in AI may also stem from other imperfect sources, such as:
- Data encoding social prejudices from social media and other inputs;
- Less input data for minorities and other historically disadvantaged segments of society;
- Intentional prejudice (known as “data masking”), such as screening out pregnant candidates on the assumption that they may later leave the position; and
- Proxy variables, such as zip codes correlated with race or income levels (illustrated in the sketch below).
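To make the proxy-variable concern concrete, the short sketch below (using entirely hypothetical applicant data) shows how a decision rule that never sees a protected attribute can still produce disparate outcomes when it relies on a correlated proxy such as zip code.

```python
# Minimal sketch (hypothetical data) of how a proxy variable can reintroduce bias.
# The decision rule drops the protected attribute but keys off zip code; because
# zip code is correlated with group membership, approval rates still diverge.

applicants = [
    # (protected_group, zip_code, approved_by_zip_rule)
    ("A", "20001", True), ("A", "20001", True), ("A", "20001", True),
    ("A", "20019", False),
    ("B", "20019", False), ("B", "20019", False), ("B", "20019", False),
    ("B", "20001", True),
]

def approval_rate(group):
    # The rule itself never looks at the group; we only use it here to audit outcomes.
    rows = [a for a in applicants if a[0] == group]
    return sum(1 for a in rows if a[2]) / len(rows)

print(f"Group A approval rate: {approval_rate('A'):.0%}")  # 75%
print(f"Group B approval rate: {approval_rate('B'):.0%}")  # 25%
```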
Although panelists agreed that bias in AI exists, several argued that whether the use of AI leads to “biased” results turns on how we, as a society, define fairness. Microsoft’s Jennifer Wortman Vaughan argued that because bias may arise from incomplete or inaccurate data sets, policymakers must carefully choose metrics (which may mean giving preferential treatment to some individuals), recognizing that tradeoffs between fairness among affected parties and the accuracy of the algorithmic output may be necessary.
Similarly, Professor Michael Kearns of the University of Pennsylvania argued that algorithmic fairness requires tradeoffs between fairness and accuracy (and possibly also among competing notions of fairness). Regardless, panelists agreed that because humans are inherently error-prone, algorithms developed by humans to perform tasks that previously required human intelligence (including decision-making and the recognition of audio and video) may be subject to the same errors and biases that humans make.
The panelists acknowledged that once fairness is defined, applying that definition in a machine learning environment may come with tradeoffs. Optimizing for fairness, for example, could come at the cost of accuracy, and vice versa. Panelists therefore agreed that the reduction of bias and the optimization of metrics must be carefully weighed at every stage of the machine learning process – from the initial input of training data (larger, more diverse data sets will help reduce bias) to the models to which the data is applied and the outputs they produce.
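As an illustration of the tradeoff panelists described, the sketch below (with hypothetical model scores and labels) computes overall accuracy and a simple demographic-parity gap at several decision thresholds; the threshold that maximizes accuracy is not the one that minimizes the gap between the two groups.

```python
# Minimal sketch (hypothetical scores and labels) of the fairness/accuracy tradeoff:
# the most accurate threshold is not the one with the smallest approval-rate gap.

data = [
    # (group, model_score, actually_creditworthy)
    ("A", 0.9, 1), ("A", 0.8, 1), ("A", 0.7, 0), ("A", 0.6, 1), ("A", 0.3, 0),
    ("B", 0.8, 1), ("B", 0.5, 1), ("B", 0.4, 0), ("B", 0.35, 1), ("B", 0.2, 0),
]

def evaluate(threshold):
    correct = sum((score >= threshold) == bool(label) for _, score, label in data)
    accuracy = correct / len(data)
    rate = lambda g: sum(score >= threshold for grp, score, _ in data if grp == g) / 5
    parity_gap = abs(rate("A") - rate("B"))  # demographic-parity difference
    return accuracy, parity_gap

for t in (0.3, 0.45, 0.55, 0.65):
    acc, gap = evaluate(t)
    print(f"threshold={t:.2f}  accuracy={acc:.0%}  approval-rate gap={gap:.0%}")
```

On this toy data, a threshold of 0.45 maximizes accuracy (80%) but produces a 40% gap in approval rates, while a threshold of 0.30 narrows the gap to 20% at the cost of accuracy (70%).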
Existing Consumer Protection Authority over New Technology Negates the Need for New Regulations
Panelists acknowledged that the use of AI may lead to new consumer protection concerns. However, in light of existing consumer protection laws, panelists generally agreed that new regulation targeted at AI and machine learning is unnecessary at this time. In fact, the majority of panelists agreed that the FTC’s current authority under Section 5 of the FTC Act is broad enough to address consumer protection matters over which the FTC has jurisdiction. Furthermore, sector-specific laws, like the Fair Credit Reporting Act, provide an additional layer of consumer protection.
Panelists explained that over-reaching regulation could have a detrimental impact on the development and proliferation of AI. For example, University of Washington Professor Ryan Calo argued that a recently enacted California law, which requires bots to disclose that they are bots when communicating with humans, may be premature. Other panelists characterized the law as unnecessary and harmful to the development of AI. Panelists also argued that defining consumer protection in the context of AI is difficult when human behaviors are often contradictory; individuals may treat similar situations differently and hold value sets that differ from those of society at large.
This presents the question of whether AI developers should calibrate outcomes based upon the expectations and norms of a specific individual, a subset of individuals, or society at large. Given these inherent contradictions, the panel agreed that it is not appropriate to adopt AI-specific laws and regulations at this time. Instead, panelists suggested that the most useful step the FTC can take with regard to AI is to issue best practices or guidelines that companies can apply to AI, which will influence how AI conforms to existing societal standards and rules.
Transparency in Algorithms, Artificial Intelligence, and Predictive Analytics Presents Difficult Questions
During questions regarding the efficacy of the EU’s approach to investing in AI and advancing consumer protection under the GDPR, Justin Brookman of Consumers Union argued that the EU has stated a desire to advance AI but that “they have shot themselves in the foot” in two ways: (1) the right to an explanation of significant decisions and (2) the right to erasure. “If an algorithm is used to make a significant decision about a person, you have the right to an explanation [under the GDPR].”
However, Brookman noted that companies may ultimately use humans to make those decisions, “which doesn’t end up protecting consumers anymore,” given that an accounting of the decision would not be required in that context. He further argued that “the law should say ‘an explanation should be required regardless if a human makes it.’” Under the GDPR, individuals have the right not to be subject to a decision “based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” without any human intervention. But according to Brookman, “the GDPR will struggle with determining where consumers [or companies] can jump in to correct the effects.”
Some panelists argued that transparency obligations should apply to AI. The concept of transparency, in this context, includes “explainability,” where companies should be able to explain “what they are doing and for what purpose,” according to Brookman. “The FTC could be doing more to . . . say you have to have some basis for making these very precise claims, other than ‘I don’t know, the machine said it,’” added Brookman. Panelists also called for regular audits and for feedback mechanisms to ensure that algorithms continue to learn and train in the manner programmed.
For example, one panelist proposed two questions that regulators should ask when evaluating an algorithm’s harm to consumers: (1) whether an AI system had mechanisms in place – either technological or procedural – to verify the system was acting as designed, and (2) whether the system had mechanisms in place so that the operator could identify and prevent harmful outcomes. Mechanisms could include [ethical] impact assessments and error analysis. If a company can answer “yes” to both questions, it should be considered to be acting in good faith. If a company answers “no” to one of the two questions, it should be sanctioned moderately; if it answers “no” to both, it should be sanctioned heavily.
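The panelist did not prescribe a particular technique, but one hedged illustration of such a mechanism is the simple error analysis below (hypothetical data and a hypothetical alert threshold), which compares error rates across groups and flags a gap that exceeds a chosen tolerance for human review.

```python
# Minimal sketch (hypothetical data) of an internal audit mechanism: compare error
# rates across groups to check the system behaves as designed and escalate
# potentially harmful disparities for human review.

from collections import defaultdict

decisions = [
    # (group, predicted_outcome, actual_outcome)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]

errors = defaultdict(lambda: [0, 0])  # group -> [error count, total count]
for group, predicted, actual in decisions:
    errors[group][0] += int(predicted != actual)
    errors[group][1] += 1

ALERT_GAP = 0.20  # tolerance for the error-rate gap (an assumed policy choice)
rates = {g: e / n for g, (e, n) in errors.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)  # e.g. {'A': 0.25, 'B': 0.5}
if gap > ALERT_GAP:
    print(f"Error-rate gap of {gap:.0%} exceeds {ALERT_GAP:.0%}; escalate for review.")
```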
Panelists’ responses varied when asked what the FTC should be focusing on. One panelist advocated for the FTC to focus on the impact of AI on historically disadvantaged populations. Joshua New of the Center for Data Innovation argued that policymakers need to identify areas where market forces do not exist, particularly in the public sector (e.g., criminal justice systems where courts discriminate in sentencing), where there is little competition to encourage good output. Another panelist voiced concern over bad actors weaponizing algorithms. Lastly, Nicole Turner-Lee of the Brookings Institution’s Center for Technology Innovation voiced concern over, and urged the FTC to focus on, creating rules for the de-identification of data.
It is clear from this discussion that the FTC and industry are still wrapping their minds around these issues and working to develop appropriate legal frameworks. We are closely following these issues and working with clients on the day-to-day application of AI-based technologies. We will continue to monitor both regulatory and self-regulatory efforts in this space and provide updates and thought leadership for our clients.