
Generative AI offers unprecedented opportunities for companies at all stages of growth. It can provide time and cost efficiencies, allowing companies to focus resources on other business priorities, and it enables new content creation capabilities. But Generative AI also presents new legal and business risks.

This article provides a cheat sheet on the top ten considerations for companies using Generative AI for internal business functions, with a focus on protecting a company’s business interests and intellectual property under U.S. law and ensuring better positioning for future financings, acquisitions and other exits. It does not attempt to flag all risks of Generative AI or all the steps businesses can take to mitigate them, but is intended as an initial primer on some of the major issues.

When we refer to “Generative AI,” we mean artificial intelligence technology by which software is trained on data (often called “input”) to identify relationships and patterns in that data and generate new content (such as text, audio, data, images, video, and software code, generally called “output”) based on that training in response to prompts submitted by a user.

This article focuses on internal use of Generative AI and does not cover using Generative AI within products offered to your customers, which carries additional risks and considerations.

First, let’s discuss the Top 5 risks when using Generative AI for internal purposes:

  1. Confidentiality and Privacy—Your Prompts and Output Are Not Confidential. When you submit a prompt on a Generative AI platform, the platform may retain rights to re-use that information or to publish the output. Although each platform has different terms and conditions, Generative AI platforms typically do not commit to maintain the confidentiality or security of prompts or outputs. Architectural and business limitations may also mean that platforms are unable to ensure complete confidentiality or security for your prompts or outputs, even if they have the best intentions to do so. You may not have the ability to prevent platforms from disclosing to others or re-creating the output, or from using the prompts you provide as training data for future outputs. You may also be left with no recourse (legal or otherwise) for any of these occurrences. When providing information for prompts, you should assume that all information you provide will be public—think of these as disclosures to a third party. It is especially important to seek legal review before submitting your trade secret information (you risk losing trade secret protection for your business’s secret sauce), information regarding inventions that you intend to patent in the future (you risk putting such material in the public domain), your customers’ confidential information (you may be breaching contractual obligations), or personal information (you could be violating privacy or publicity laws).
  2. Output Protectability—It May Not Be Yours (Alone). You may not actually own the output generated from the prompts that you feed to AI platforms. As of June 2023, the U.S. Copyright Office and the U.S. Patent and Trademark Office have declined to extend copyright or patent protection to outputs created by artificial intelligence tools, on the basis that only works created by human authors or inventors are eligible for protection. Moreover, any protection for the human-generated portions of a combined work requires specific identification of its AI-generated components. This approach has broadly (though not uniformly) been adopted in many other jurisdictions. There are certain circumstances in which it may be possible to use an AI platform to iterate upon copyrighted content a user owns and generate protectible derivative works, and the Copyright Office is currently considering such a claim. Generally speaking, however, unless the law changes, you may not be able to prevent others from copying or reusing output resulting from your input, or to stop the Generative AI platform from using or disclosing identical outputs. Even if you would otherwise own the output under law, the Generative AI platform may state in its terms of service that the platform owns the output, may retain a broad ability to re-use the output, or may fail to effectively assign its rights to you. Further, you may not be able to protect your proprietary materials that are combined with AI-generated materials if you do not keep clean records of which is which.
  3. Coding Risks—Strings Attached. If you use a Generative AI platform to develop software code, you should understand that these platforms are typically trained on publicly available source code, the vast majority of which is subject to open source licenses. Some of these licenses are “copyleft” and, if such code is incorporated into your software, may subject you to requirements that you make your proprietary code available for free pursuant to the same copyleft license. Even “permissive” licenses commonly impose attribution or other requirements on distribution. For those familiar with open source use and risks, Generative AI for code development presents those same risks, but without identifying the applicable licenses. To the extent the open source used for the input is identifiable in the output, these open source infection and attribution issues may be uncovered by owners of the underlying source code, or by an investor or acquiror performing a code scan during diligence. Coding outputs may also contain bugs, code vulnerabilities or security risks. Generative AI platforms typically disclaim any responsibility for output, which means these platforms may provide source code output that is not reliable and may subject your business to legal and IP risk.
  4. Output Deficiencies—Misinformation and Bias. Generative AI is a developing technology and far from perfect. The output is often accurate and suitable for the use case. Other times, however, it may contain errors, be misleading or deceptive, or be trained on data that is inaccurate. Generative AI might also “hallucinate” (make things up but present them as facts), or generate output that is discriminatory, unethical, offensive to local custom or community norms, or biased (as the output may be influenced by underlying training inputs with these traits). When using someone else’s model, you likely will have no insight into the data the platform used for training, making it difficult to adequately assess these risks. The risk is heightened where output is used in circumstances in which accuracy or fairness is essential, such as human resource decisions (hiring decisions and performance management) or decisions whether to provide services or products to customers (such as access to credit or insurance, and the provision of healthcare).
  5. Infringement Risks—An Area of Active Litigation. Output may infringe third-party intellectual property rights, both due to the nature of the inputs and the nature of your prompts. Several lawsuits have already been filed against Generative AI platforms, including some of the largest and most popular platforms, alleging that using inputs owned by third parties to train models and generate outputs without permission infringes their intellectual property rights and violates other rights and laws. Further, if you use Generative AI to generate output that refers to, or is inspired by, identifiable third-party materials (e.g., requesting an output displaying a character designed by a third party, or that mimics another person), the output may infringe that third party’s intellectual property rights or rights of publicity or privacy absent an applicable fair use defense. Compounding these risks, Generative AI platform terms typically provide no meaningful protection against lawsuits based on output and, in fact, often place liability entirely on the user. Therefore, you may face liability if you generate and use problematic outputs, with no right of indemnification or other recourse to avoid that liability.

Now, let’s discuss the Top 5 steps you should be taking to mitigate these risks:

  1. Clear Policies on the Use of Generative AI. If your company is using Generative AI internally, then your company should have an AI policy that takes into account the risks of using Generative AI, potential use cases for Generative AI, and your company’s risk tolerance given the potential benefits to its business. You should then devise clear guidelines for your employees and contractors that outline which use cases are permitted (or not), what company information is permitted to be used (or not), which platforms are permitted and prohibited, and what steps need to be taken when using Generative AI to mitigate risk. These policies are often best developed by a cross-functional team, including legal, compliance, management, IT, engineering, and other internal stakeholders. The policy should be distributed to all personnel (including outsourced consultants) and supported by internal checks and balances to enforce it. You should also undertake regular policy reviews and updates given the fast-changing legal environment pertaining to Generative AI.
  2. Quality Control. Given the output risks identified above, it is important to apply human oversight and careful review to any outputs before they are used. Output should be subject to the standard quality controls of the business, including accuracy, robustness and consistency with your company’s brand and other policies. This is particularly important for software outputs due to the risks of open source infection or non-compliance and of security vulnerabilities.
  3. Do Your Homework on Generative AI Platforms. Before deciding to use a Generative AI platform, diligence the platform, its terms, and its public-facing statements regarding data, security and associated practices to ensure it meets your standards and expectations. Investigate and consider whether the platform has adequate security measures in place to protect your prompts and outputs, whether the platform reserves rights to use or re-use your output, whether the platform offers an enterprise agreement with better contractual terms, and what data sets the platform has used to train its underlying models (to the extent it discloses this information). Consider whether the platform offers private instances of its models that allow you to use your own training data in a private environment. Negative information (the absence of statements or policies on given topics by the platform’s owner) may also inform your decision about whether to use a platform for your business.
  4. Prep for Diligence. Your company should record how it is using Generative AI, including what outputs are created and where they are used in your business. In any financing or acquisition scenario, you may be asked to identify how your company uses Generative AI and to describe your AI policies and procedures. Further, you may be required to identify any AI-generated materials when applying for copyright or patent protection. In addition, you may be asked to identify the datasets that you use (and your authorization to use them) to train Generative AI platforms. If you have clear guidelines regarding the use of Generative AI and maintain records of Generative AI use, you can enter a diligence process in a stronger position by demonstrating that you are aware of the risks and are taking active steps to mitigate them, while still taking advantage of the technological opportunities provided by Generative AI.
  5. Transparency, Transparency, Transparency. The Federal Trade Commission routinely releases guidance warning businesses to be clear and accurate in claims regarding artificial intelligence, including the capability of AI-powered products and the extent of use of AI. You should make accurate statements, both publicly and privately, regarding the use of any Generative AI in your business and in the development of output. This also means not overselling the benefits of your use of Generative AI to your users.

Many thanks to Tracy Rubin, TJ Graham, Chris Chynoweth and Kristin Leavy for their contributions to this article.


Last reviewed: June 5, 2023