Bots and Bureaucracy: Risks in Government Use of AI?
New technology has made it far easier for the public to participate in the regulatory proceedings of administrative agencies. The advent of electronic filing of comments on proposed rules, for example, has opened the administrative process to a much broader range of stakeholders than in the past. But these innovations have also raised questions that are just starting to percolate. Recent headlines have focused on “fake” comments submitted on the Labor Department’s “fiduciary rule,” on the FCC’s Net Neutrality rulemaking, and in proceedings before the Consumer Financial Protection Bureau, the Federal Energy Regulatory Commission, and the Securities and Exchange Commission. Government agencies, companies, and their counsel will increasingly have to confront how decades-old processes and procedures apply in the Digital Age. This post explores special vulnerabilities of governments as they move from reviewing paper filings (once the arcane domain of a few “inside the Beltway” lobbyists and industry experts) to harnessing the power of Artificial Intelligence to crunch huge volumes of data in the process of adopting, revising, or repealing rules and regulations.
The introduction of AI technology into the regulatory process is part of a significant and growing trend. New York City, for example, has established a Mayor's Office of Data Analytics, which, among other things, is working with the city's fire department to use machine learning to decide where to send building inspectors. In addition, the Internal Revenue Service has launched an Information Reporting and Document Matching program, which applies algorithms to credit card and other third-party data to predict tax underreporting and non-filing by businesses. These are just a few examples of governments adopting artificial intelligence to supplement traditional government activities.
AI promises huge benefits to government and private industry alike. Already, technology companies are pointing to the possibility that natural language processing and cognitive computing can make the public comment process light years faster and more transparent. But as governments adopt these technologies, several risks to the regulatory process emerge, and private parties will need to consider them when facing potentially unlawful or unconstitutional administrative actions.
Administrative Procedure Act
As noted above, recent proceedings regarding important industry regulations have attracted extraordinary levels of input from diverse stakeholders—in the case of the FCC’s recent Net Neutrality proceeding, an unprecedented 22 million comments were received. The federal Administrative Procedure Act sets basic standards that administrative agencies must meet when they decide whether to adopt a new rule, amend an existing rule, or repeal a rule (for instance, if the rule is obsolete and no longer needed). As applied by the courts, this statute requires agencies to take into account meaningful comments and respond adequately to substantive arguments or evidence, including when the arguments or evidence cuts against the position the agency is adopting. For instance, suppose an agency wishes to adopt a new safety standard for autonomous vehicles: at a minimum, the agency must explain why the standard can be expected to improve safety; if evidence in the record showed that the rule would make no such improvement and would instead only increase costs, the agency could not just ignore that evidence. It would have to address it and articulate a reasonable explanation for why it is still adopting the new rule.
Failure to consider comments has become a factor in litigation, with courts requiring agencies to go back and address comments previously ignored. Undoubtedly, AI promises huge efficiencies if it allows agencies to sort through and analyze comments responsibly (e.g., in proceedings in which millions of comments are filed) in a faster and less labor-intensive way. But at the same time, there are risks and new questions that courts may soon need to confront: What standard should agency algorithms be held to in undertaking this sort of review without direct human intervention? If an algorithm fails to consider one line of argument buried in the huge volume of public and industry comments, will that be fatal to the agency's ultimate decision? What arguments can industry muster in challenging a rule when its position is not squarely addressed in the agency's final decision? In some cases, an agency's failure to exercise sufficient oversight could conceivably lead to a finding of “arbitrary and capricious” administrative action that, at a minimum, would require the agency to review the record again and come to a new decision.
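To make the risk concrete, consider a minimal sketch of automated comment triage: collapsing near-duplicate form letters so that human reviewers see each distinct position once. This is a hypothetical illustration, not any agency's actual method; the sample comments, the similarity threshold, and the grouping logic are all assumptions, built here on scikit-learn's TF-IDF vectorizer and cosine similarity.

```python
# A minimal, hypothetical sketch of automated comment triage: group
# near-duplicate comments so each distinct position is reviewed once.
# Sample comments and the 0.8 threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "The proposed rule will improve safety for autonomous vehicles.",
    "The proposed rule will greatly improve safety for autonomous vehicles.",
    "This standard only increases compliance costs without improving safety.",
    "Please extend the comment period to allow further economic analysis.",
]

# Represent each comment as a TF-IDF vector and compare every pair.
similarity = cosine_similarity(
    TfidfVectorizer(stop_words="english").fit_transform(comments)
)

# Greedily group comments whose similarity exceeds the threshold; each
# group is then treated as a single "position" for substantive review.
THRESHOLD = 0.8
groups, assigned = [], set()
for i in range(len(comments)):
    if i in assigned:
        continue
    group = [i] + [
        j for j in range(i + 1, len(comments))
        if j not in assigned and similarity[i, j] >= THRESHOLD
    ]
    assigned.update(group)
    groups.append(group)

for group in groups:
    print(f"{len(group)} comment(s) grouped:", comments[group[0]])
```

The legal risk lives in that threshold: tune it too aggressively and a comment raising a genuinely distinct argument is absorbed into a form-letter pile, never surfaces for human review, and never receives the reasoned response the Administrative Procedure Act demands.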
Disclosure Violations
Additionally, principles of transparency are sprinkled throughout state and federal laws, which require agencies to disclose the sources they relied on in reaching a particular decision. One general principle under the Administrative Procedure Act, for example, requires an agency to disclose for public comment and input any internal analysis or methodology it plans to use to support its decision; it cannot simply rely on a secret internal analysis that no stakeholder had an opportunity to comment on or dispute.
Machine learning, however, often does not allow the user of an algorithm to discern which particular relationships between variables factor into the algorithm's classification. Given this “black box” nature of artificial intelligence, could a government agency using such an algorithm offer sufficient disclosure of its analytical process if it were to use AI to review and analyze public comments? Although courts may defer to agencies using complex scientific analysis, to the extent that AI is used for decision-making purposes rather than pure calculations, government agencies may be susceptible to reversal on appeal for transparency violations.
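A rough illustration of the problem (again, a hypothetical sketch under stated assumptions, not any agency's actual tooling): a gradient-boosted classifier of the kind that might flag comments or filings for action. The synthetic data and the scikit-learn model below are stand-ins for real agency inputs.

```python
# Hypothetical illustration of the "black box" problem: the model
# predicts, but its fitted state offers nothing resembling a disclosable
# methodology. Synthetic data stands in for real agency inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

print(model.predict(X[:1]))        # a bare yes/no, with no stated rationale
print(model.feature_importances_)  # only coarse, global weightings

# The fitted model is an ensemble of regression trees; explaining any one
# prediction means tracing the input through every tree, which is not the
# kind of analysis stakeholders can meaningfully inspect and contest.
print(len(model.estimators_), "trees underlie each prediction")
```

Post-hoc explanation tools exist, but whether their approximations amount to the kind of disclosable methodology the Administrative Procedure Act contemplates is precisely the open question.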
Improper Delegation of Authority
Finally, this use of AI raises significant concerns about government delegation of discretionary functions in violation of principles of non-delegation. In essence, these principles mean that when Congress (or, in the case of state agencies, a state statute) delegates a task to an agency, it expects the agency to bring its expertise to bear on the matter, not farm out the project to some other entity or person. Would an agency's extensive use of AI for fundamental discretionary tasks arguably violate these non-delegation principles? Again, this is a question the courts may soon have to grapple with.
As these questions arise, companies and other parties to administrative proceedings must be vigilant to ensure that an agency's use of artificial intelligence technologies does not obscure their positions, and they must be prepared to muster all available legal arguments (including assertions of violations of the Administrative Procedure Act) when they have good cause for concern that their position has not been adequately considered or addressed by the agency.