FTC Sets Its Sights on Algorithms, Automated Tech, and AI-Enabled Applications
Two recent Federal Trade Commission (FTC) enforcement actions reflect increased scrutiny of companies using algorithms, automated processes, and AI-enabled applications. In January 2021, the agency announced that it had settled with a company whose facial recognition technology allowed consumers to organize online photo albums, and that it had filed complaints against ticket brokers that used automated processes (bots) to game the online concert ticket system, in alleged violation of the BOTS Act.
These enforcement actions follow the agency's recent informal guidance outlining principles and best practices surrounding transparency, explainability, bias, and robust data models. The ramifications of these enforcement actions are significant for two reasons:
- First, the emerging market for technology and applications enabled by artificial intelligence (AI) and machine learning (ML) systems is now clearly on the FTC's radar.
- Second, the FTC is wielding a powerful enforcement tool not seen in prior FTC actions: forcing the deletion of data, models, and algorithms developed using data collected without express consent.
Given these implications, developers of AI/ML systems should carefully consider the provenance of their training data; systems developed using so-called "ill-gotten data" may be at risk.
Developer of Facial Recognition Technology Must Delete Certain Machine Learning Models
On January 11, 2021, the FTC's Bureau of Consumer Protection (BCP) announced that Everalbum, Inc., developer of the now-defunct photo storage app Ever, agreed to settle allegations that it deceptively used facial recognition technology and retained photos and videos of users who deactivated their accounts.
According to BCP's draft complaint, Everalbum allowed users to upload photos and videos from their mobile devices, computers, or social media accounts to the company's cloud-based storage service. There, using a facial recognition feature called "Friends" that Everalbum introduced in 2017, users could organize and sort photos by the faces they contained and "tag" pictured individuals by name.
At the feature's launch, facial recognition was automatically enabled for all users. Beginning in July 2018, Everalbum informed users that it would apply facial recognition technology to their photos only if they affirmatively opted in to Ever's use of the technology. However, the FTC alleged that facial recognition remained enabled by default for nearly all user accounts until April 2019 and that only a subset of users were actually able to deactivate it.
The draft complaint further alleges that Everalbum combined millions of facial images extracted from users' photos with publicly available datasets to create four proprietary datasets that it used to develop its facial recognition technology. Everalbum allegedly used this technology not only to provide the Ever app's Friends feature but also to develop Paravision, its facial recognition service for enterprise customers.
Although not mentioned in the FTC's complaint, Paravision customers reportedly included military and law enforcement agencies. While the use of users' photos for this purpose was apparently not disclosed to consumers, Everalbum did not share its users' photos, videos, or personal information with its enterprise customers, and the BCP did not charge Everalbum with any wrongdoing related to Paravision.
Finally, the BCP claims that Everalbum misled users into believing that the company would delete the photos and videos of Ever users who deactivated their accounts when, in actuality, Everalbum failed to delete or destroy those photos and videos until at least October 2019.
Everalbum Settlement Could Create "Duty to Delete"
The settlement requires Everalbum to obtain express consent before applying facial recognition technology to customers' content and to delete the data, models, and algorithms it developed using photos and videos uploaded by users without their express consent. This remedial obligation could have broader implications for developers of AI that train machine learning models on potentially tainted datasets.
The requirement that Everalbum delete or destroy algorithms developed with "tainted" data is what makes the settlement notable; if that standard is applied more broadly, it could significantly affect other companies developing AI and ML systems.
Specifically, the settlement requires Everalbum to delete:
- (1) The photos and videos of Ever app users who deactivated their accounts;
- (2) All face embeddings (data reflecting facial features that can be used for facial recognition purposes) that the company derived from the photos of Ever users who did not give their express consent for this use; and
- (3) Any facial recognition models or algorithms developed with Ever users' photos or videos.
This third element carries the greatest potential ramifications for developers of AI/ML systems. The duty to delete algorithms and models is a remedy akin to the "fruit of the poisonous tree" concept: any algorithm or ML system developed with data for which there was no express authorization must be deleted or destroyed.
Everalbum must also refrain from misrepresenting how it collects, uses, discloses, maintains, or deletes personal information, including face embeddings created through facial recognition technology, as well as the extent to which it protects the privacy and security of the personal information it collects. The settlement underscores for businesses the importance of accurate and complete representations about their data-related practices, particularly when using facial recognition technology or biometric data. Furthermore, jurisdictions in the United States are increasingly regulating facial recognition technology and biometric data, in some instances requiring that users consent to collection and use.
In a 5-0 vote, the FTC commissioners agreed to issue the proposed administrative complaint and to accept the Everalbum settlement. In a separate statement, Commissioner Rohit Chopra lauded the Commission's action as an "important course correction," noting that Commissioners "have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data."
Commissioner Chopra also stated that he supports efforts to enact a moratorium on facial recognition technology because, in his view, the technology is "fundamentally flawed and reinforces harmful biases."
The FTC published a description of the consent agreement package in the Federal Register. The agreement will be subject to public comment until February 24, 2021, after which the Commission will decide whether to make the proposed consent order final.
Brokers Using Automated Software to Purchase Online Tickets Face Potential $31 Million in Civil Penalties
In another first-of-its-kind case, the FTC took action against three ticket brokers who allegedly used automated software to purchase tens of thousands of tickets to popular concerts and sporting events for resale at higher prices. Collectively, the FTC assessed more than $31 million in civil penalties for violations of the Better Online Tickets Sales (BOTS) Act. Due to the brokers' inability to pay, the FTC partially suspended the judgments, leaving a combined $3.7 million to be paid in amounts based on each entity's financial capability.
Enacted in 2016, the BOTS Act gives the FTC authority to take law enforcement action against individuals and companies that use bots or other means to circumvent limits on online ticket purchases. The FTC's complaint alleges that defendants purchased more than 150,000 tickets for popular events, profiting more than $8.6 million by reselling on secondary markets the tickets they unlawfully obtained from Ticketmaster.
In doing so, the FTC alleges that the companies violated the BOTS Act in a number of ways, including using:
- (1) Automated ticket-buying software to search for and reserve tickets automatically;
- (2) Software to conceal their IP addresses; and
- (3) Hundreds of fictitious Ticketmaster accounts and credit cards to circumvent event ticket limits.
Under the proposed settlements, the defendants are prohibited from further violations of the BOTS Act, including using methods to evade ticket limits, using false identities to purchase tickets, or using any bots to facilitate ticket purchases. The Commission unanimously voted to authorize the staff to refer the complaints to the Department of Justice and to approve the proposed consent decrees. The DOJ filed the complaints and proposed consent decrees on behalf of the Commission in the U.S. District Court for the Eastern District of New York.
As the adoption and use of automated agents (bots) continues to grow, companies deploying such technology should beware of the heightened scrutiny certain applications are facing. The FTC's enforcement action follows enactment of a California statute that makes it unlawful for any person to use a "bot" to communicate with a person in California online with the intent to mislead that person about the bot's "artificial identity" in order to incentivize the purchase or sale of goods or services, or to influence a vote in an election.
These and other automated tools may face additional limitations if regulators come to believe that their potential harms outweigh their benefits.
This article was originally published as a communications advisory on DWT.com on March 3, 2021. Our editors have chosen to feature it here for its coinciding subject matter.