House Committee Hearing on AI Highlights the Need for Industry Self-Governance to Mitigate Potential Prescriptive Regulations
Following the creation of an AI Caucus and hearings in the Senate last year, it’s clear that members of Congress are keeping a close eye on emerging artificial intelligence technology. Most recently, the House Oversight Committee convened a series of hearings on AI that highlighted the potential for the development of standards and norms to govern AI technology in lieu of prescriptive regulations.
In its third hearing on AI, the House Oversight Committee’s subcommittee examined AI’s risks and rewards, and the appropriate role, if any, for the government in promoting, developing, and (potentially) regulating the machine learning systems that power AI. Committee members’ comments during the hearing highlighted key concerns, including the potential for bias, ethical issues, privacy and cybersecurity risks, and the potential for “weaponizing” certain AI. However, testimony from witnesses representing industry, academia, and non-profit organizations established that continuing ongoing work to develop industry standards and norms should be the first step in addressing those concerns. The Subcommittee is expected to expand on these issues, and outline preliminary policy perspectives, in a follow-up report in the near future.
Ethical Norms, Not Export Controls, Should Be Considered for AI
Senior member Rep. Darrell Issa (R-CA) asked whether the potential for harm from AI systems that could be weaponized requires explicit regulation or export controls on AI algorithms and technology sold to foreign actors. Citing limits placed on military systems (satellites, missiles, and other weapons systems) as potential precedent, Rep. Issa asked whether the same limits should be considered for AI technology that could be weaponized or used to harm others.
Jack Clark, Director of OpenAI, explained that export controls would be difficult, if not impossible, to apply given that AI algorithms are simply software, not physical assets like traditional weapons systems that can be more easily monitored and secured. Clark urged the committee to consider a different solution: the continued development of commonly shared norms and standards governing the ethical use and safety of the “dual use” AI systems being deployed today (i.e., the same technology that can be used to diagnose tumors from x-rays can also be used to train systems to surveil or target humans). Such norms would more directly, and effectively, limit the potential for AI systems to be used for harm rather than good, as Clark argues in this OpenAI paper analyzing potential malicious uses of AI.
Clark’s prepared testimony argued that the government should play a lead role in the development of such ethical norms. Citing OpenAI’s recent report on malicious uses of AI, Clark argued that the potential harm from AI with “dual use” capabilities can be mitigated through ethical norms governing use of the technology.
Existing Security Tools May Not Mitigate All Potential Privacy and Data Security Concerns
Rep. Robin Kelly, Ranking Member of the Subcommittee, asked the witnesses to outline the potential risks of personal data being exposed in certain AI systems. In response, Harvard’s Ben Buchanan, the author of Machine Learning for Policymakers, argued that the use of innovative data protection strategies, such as differential privacy and on-device processing, could mitigate some of these concerns.
Buchanan explained that differential privacy, the practice of injecting “statistical noise” into a data set so that an individual’s identity is masked, shows “enormous” promise and should be further utilized in the data sets powering AI systems. Buchanan also suggested that other privacy risks can be mitigated through on-device processing, though he acknowledged that the utility of that strategy may be limited because many developers’ business models rely upon access to aggregate data (rather than individualized data).
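For readers curious about the mechanics, the sketch below shows the textbook way such statistical noise is typically added: the Laplace mechanism applied to a counting query. It is an illustrative example only (the epsilon value, function name, and toy data set are assumptions for demonstration), not anything presented at the hearing.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5, rng=None):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for the released count.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical use: how many people in the data set are over 65?
people = [{"age": 70}, {"age": 34}, {"age": 68}, {"age": 45}]
print(private_count(people, lambda p: p["age"] > 65, epsilon=0.5))
```

The smaller the epsilon, the more noise is injected, so practitioners tune it to trade statistical accuracy against the strength of the privacy guarantee.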
Any Regulations Must Be “Flexible and Adaptive” to Ensure Continued Innovation
Gary Shapiro, President of CTA (the Consumer Technology Association), argued that there is no need for a high-level, top-down AI policy from the federal government to ensure the technology’s continued growth and development. Instead, the same formula that has made the U.S. a world leader in technology innovation (light-touch regulation or self-governing industry standards, private-sector leadership, and government investment) can, and should, be applied to AI. Urging the subcommittee to think strategically, Shapiro argued that any new regulations surrounding AI should be “flexible and adaptive.”
The takeaway from these hearings is clear: now is the time for the industry to develop its own self-governance norms and ethical standards to avoid heavy-handed government rules in the future. Industry leaders should be engaging with organizations like the Partnership on AI, which is organizing collaborative teams to help government and society develop and share best practices and other tools to further inform policymakers in this space.