Generative AI Is Here – Are Your Workplace Policies Ready?
As generative artificial intelligence (GAI) technology, like ChatGPT, finds new and greater uses in the workplace, employers must consider the myriad legal and other issues that come with it. For good reason, employers are increasingly implementing policies to mitigate potential risks and ensure safe and permissible uses of GAI by employees. In this post, we highlight key risks and outline strategies for developing policies to mitigate them.
What Is GAI and How Is It Being Used Today?
GAI generally refers to machine learning models, including the large language models behind tools like ChatGPT, that can be used to create new content, including audio, code, images, text, simulations, and videos, based on a user's prompt. GAI models ingest massive data sets of text, images, and other information from the internet and other sources, which are used to train those models to gradually "learn" and "understand" the relationships between words or data. When a user inputs a prompt, a GAI model generates new text, images, or data based on the data set on which it was trained. Some popular GAI tools include ChatGPT and Bard (for text generation), DALL-E, Stable Diffusion, and Midjourney (for image generation), and Runway and Synthesia (for video generation).
The potential applications and use cases for this powerful technology are numerous, and employees are using these tools to generate software code, draft communications (emails, memos and correspondence), generate ideas and content, outline and summarize lengthy or complex documents, and fact-check existing content. This AI-generated content is then being used in a variety of business operations including marketing, sales, customer support and back-office functions.
What Are the Risks of Employees Using GAI at Work?
While GAI tools might offer efficiency and shortcuts in generating content, they carry a number of risks that employers need to consider. For example:
Inaccuracy/Bias: Text-generating GAI tools produce outputs by predicting the most likely next set of words based on the corpus of data used to train the model. While these tools often provide clear, coherent outputs that may be very reliable, there is always a risk that the outputs are inaccurate or misleading. Indeed, developers of these models have acknowledged that these systems sometimes produce "hallucinations" – inaccurate text that is wholly fabricated. Further, these systems are limited by the data used to train them, which itself may be inaccurate, biased, or simply limited in scope. ChatGPT, for example, discloses that its "knowledge" is limited to information available before its 2021 training cutoff, so it may be unaware of more recent facts and events.
Ethical/Moral Hazard: GAI systems are relatively untested, and users may not know what ethical constraints, if any, are placed on the GAI, including whether its outputs may reinforce or perpetuate biases, stereotypes, and prejudices, or ignore social or moral conventions entirely.
Privacy: Information included in prompts to GAI systems may be used by the developer of the model to further train, refine, or improve the model. Indeed, ChatGPT's terms explicitly state that its developer may use prompts for that very purpose. As a result, including any personal information in prompts may violate privacy laws and the policies applicable to the organization.
Trade Secret Security/Protection of Confidential Business Information: Similarly, if any confidential or proprietary business information is entered into prompts, that data may be shared with the model developer and may lose all security and confidentiality protections.
Copyright/Contract Claims: Commercial uses of a third-party GAI tool may subject users to copyright infringement claims, breach of contract claims, or other claims arising out of violations of the developer's terms of use, or from the duplication of intellectual property that was used in training data.
Copyright Enforcement/IP Ownership: There is a risk that content created by GAI cannot be copyrighted unless it reflects significant human authorship.
Consumer Protection and Regulatory Compliance: The FTC and other federal agencies have asserted that failing to tell consumers they are interacting with an automated process (e.g., a bot), rather than a human, may constitute an unfair or deceptive practice. Further, federal, state, and local legal and regulatory frameworks concerning the use of this technology continue to evolve, and some of these laws impose duties to preserve data, audit for potential bias, and provide transparency or explainability.
Defamation: Content created with a GAI tool may be offensive, defamatory, or otherwise violate workplace policies.
Specialized Duties: Organizations operating in highly regulated industries may face additional compliance risks arising from industry-specific regulations. For example, attorneys considering the use of these tools should consult the applicable rules of professional responsibility and avoid over-reliance on GAI, a problem recently highlighted in a well-reported case.
Takeaways for Employers: Adopt Policies That Leverage Human Oversight, Training, and Monitoring of GAI in the Workplace
Instituting new workplace policies to keep up with technology is nothing new – policies on personal electronic devices and social media use are now almost universal. As with those technologies, employers first need to decide their approach to GAI: whether, and to what extent, they will allow employees to use such tools for work, and, if so, what parameters will apply to internal or external tools. The answer will depend heavily on the organization's mission, business, and workforce, as different industries carry different risks, as do the different potential uses for GAI (in sales, marketing, human resources, etc.).
After determining how the organization may leverage GAI, employers should make their guidelines clear to employees in a written policy. As with existing technology usage policies, a GAI policy should define GAI, explain its risks, and set out clear guardrails on permissible or prohibited use. These policies should include terms to ensure the organization takes the following steps:
- Identify and inventory all current and potential uses of GAI tools in the organization. Refresh this inventory periodically – for example, quarterly or semi-annually.
- Assess the risks of the current and planned uses of GAI tools. Some applications may present little risk and thus require little oversight, while other applications (including some of those listed above) may need to be closely monitored or even prohibited. For example, it is certainly advisable to prohibit employees from publishing material generated by GAI without any sort of human review, or from inputting confidential data and trade secrets into a GAI tool that will send that data outside the organization. Maintain a record of the current uses, especially those deemed to be high-risk.
- Clearly identify permissible and impermissible applications and use cases. Employers should require employees to confirm with management whether they may use particular tools, and should consider maintaining a list of permitted or prohibited tools and uses.
- Adopt transparency protocols to ensure that employees and external recipients of GAI outputs understand what content was created with GAI tools. Consider whether additional protocols or tags should be used for internal purposes to clearly designate high-, medium-, and low-risk outputs.
- Train managers and employees on the risks of GAI tools and on the organization's policy parameters governing their use.
- Continually monitor emerging applications/use cases and compliance with the policy. Doing so is critical because this technology is rapidly evolving and being deployed in a number of novel ways.
- Continually assess (and reassess) what laws or regulations might apply to employees' use of GAI tools and how the policy can support compliance. New legal and regulatory frameworks are emerging across numerous jurisdictions, making this an area that merits special attention.
Please reach out to the human lawyers in DWT's employment group and AI team if you have any questions or need assistance with these analyses and policy drafting.