Oct 9, 2024

Consider this scenario: Your technology company is set to launch a new, customizable code generator for your business clients that uses generative AI to complete and suggest code. The tool detects errors across a range of programming languages and helps developers write code faster and more accurately. Clients will pay your company a fee and sign a service agreement to license the tool.

For years, your company has used a client services agreement for developing or licensing software based on legal advice from a time before AI became a driving force in the tech landscape. This agreement might have the traditional software licensing terms, complete with intellectual property clauses that were once considered standard. What could possibly go wrong?

A lot, actually. In today’s rapidly evolving AI-driven world, relying on outdated client agreements can lead to unexpected challenges. No matter what industry you are in, using a one-agreement-fits-all approach in the age of AI could lead to a host of legal problems. Intellectual property issues, in particular, raise an array of unique questions in an AI-related client service agreement. For instance, who owns an AI tool’s input and output—the service provider or the client? What about third-party IP rights, or indemnification and limitations of liability? And since the law around AI and IP is still developing, what does that mean for an agreement?

In this post, we’ll consider the IP-related elements of client service agreements for AI products and offer useful insights into the types of AI provisions that belong in technology services agreements.

Data Ownership in AI Contracts: Who Owns Inputs and Outputs?

Who owns the data used to train AI models, as well as the output generated by AI systems? The client service agreement should clearly identify the provider as the owner of an AI tool’s underlying IP. As in many service agreements, a brief provision may set out that the provider owns all rights, title, and interest to its service.

But generative AI-related agreements should provide more depth than traditional service contracts around other ownership issues. The law around AI-generated IP ownership is still developing, and clearly spelling out ownership can avoid problems down the road.

For example, the AI service agreement should clarify whether data entered into the tool—the inputs—are owned by the provider or clients. Conversely, the contract should also identify who owns any material generated by AI—the outputs. This includes content that may be copyrighted, like images, text, or, in the case of our hypothetical generator, software code.

How ChatGPT Handles Training Data

Consider the ownership provision in the terms of use of ChatGPT maker OpenAI. It cedes ownership rights over inputs and outputs to the client: “As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in input and (b) own the output. We hereby assign to you all our right, title, and interest, if any, in and to output.”

While they may assign ownership rights, providers may need material from the client to help train an AI model. OpenAI’s terms, for example, state that the company “may use content to provide, maintain, develop, and improve our services, comply with applicable law, enforce our terms and policies, and keep our services safe.”

Your specific business arrangements with a client may require that you add other potential ownership provisions. If your company customizes an AI tool for a client, you may want to clearly define who owns these specialized features.

Mastering Third-Party Training Data and IP Rights in Client Service Agreements

AI systems are trained with large amounts of data, often downloaded from third-party sources on the internet. The Congressional Research Service noted in a 2023 report that AI programs “might infringe copyright by generating outputs that resemble existing works.” Under U.S. case law, the report said, owners may be able to show that such outputs infringe their copyrights “if the AI program had access to their works and created ‘substantially similar’ outputs.”

Given the potential for infringement, a client service agreement should speak directly to third-party IP. It should incorporate any licensing terms, usage rights, indemnities or warranties. As we will discuss later in this post, “fair use” laws remain uncertain where AI-generated content is concerned, which heightens a provider’s legal risk.

Clients’ use of third-party material should be addressed, as well. As an illustration, GitHub, which offers a well-known AI code generator, advises users of their responsibility for such content in its terms. “If you’re posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any content you post; that you will only submit content that you have the right to post; and that you will fully comply with any third party licenses relating to content you post,” the terms state.

How Service Providers Can Leverage Indemnification to Safeguard Against GenAI IP Infringement

An indemnity clause that addresses IP challenges may serve as a bulwark for a provider against infringement and other claims. Service providers often require clients to indemnify them against claims and shoulder the costs of defending a lawsuit or paying damages. Indemnification clauses often cast a wide net, discussing legal claims in general. But some AI-centric companies have included specific language in their contracts around infringement on third-party IP.

The AI company Tabnine, for instance, asks clients to indemnify and hold the company harmless “from and against any loss, liability, claim, demand, damages, costs and expenses” that occurs from a “violation of any third-party right, including without limitation any copyright, property, or privacy right.”

Using the Limitation on Liability Clause as a Shield in AI-Related Services Agreements

In a similar vein, a limitation of liability provision can reduce a provider’s financial exposure. A liability clause may be crafted to specify a service provider’s or client’s responsibilities around intellectual property, including copyright infringement. And it can set out responsibilities for the use of an AI tool and the material it generates. Depending on a provider’s needs, the clause can be tailored to include, among other things, types of damages and liability caps, and apply to all or parts of the service agreement.

A case in point is OpenAI’s limitation of liability clause. It includes specific information about damages, stating the company, its affiliates, and licensors cannot be held liable for “indirect, incidental, special, consequential or exemplary damages, including damages for loss of profits, goodwill, use, or data or other losses.” The clause also caps liability: “Our aggregate liability under these terms shall not exceed the greater of the amount you paid for the service that gave rise to the claim during the 12 months before the liability arose or one hundred dollars ($100).”

How Do We Grapple with Legal Uncertainty When Licensing AI Tools?

As is often the case with new and transformative technology, the law has not quite caught up to reality. Businesses are rapidly developing AI tools even as the legal landscape around intellectual property and artificial intelligence continues to shift. The uncertainty makes it even more critical for companies to have a firm grasp of the IP-related terms in their agreements.

Last year, an ominously headlined Harvard Business Review article, “Generative AI Has an Intellectual Property Problem,” summed it up nicely. Using data created by third parties, some of it copyrighted, to train generative AI platforms creates copyright infringement risks for businesses. Thus, “before businesses can embrace the benefits of generative AI, they need to understand the risks and how to protect themselves,” the article said. The authors added that if a business is aware of infringement, it could face hefty financial penalties of up to $150,000 per instance.

Those penalties apply only if the copyrighted works aren’t covered by “fair use.” But the law around fair use and AI is a work in progress. Content creators like The New York Times have hit major AI service providers like OpenAI, Microsoft, and Meta with a series of infringement suits over their use of copyrighted material to train the large language models that power most AI offerings.

To date, the U.S. Supreme Court has issued two opinions that are giving developers hope that fair use will shield them and their AI training models from infringement judgments. In 2021, the justices sided with Google in a long-running fight with Oracle over its use of copied lines of code in software for its Android phones. “Google’s copying…was a fair use of that material,” the court said. And last year, the court said a copyrighted work may be subject to fair use if that use has a “purpose and character” substantially different from the original.

Whether work generated by AI tools can be copyrighted or patented creates another potential legal snag for businesses. Under federal law, a work must be “the product of human authorship” to earn copyright or patent protection. Last year, the U.S. Copyright Office rejected copyright registration for a comic book whose visual elements were all created by AI, and in 2022, the U.S. Court of Appeals for the Federal Circuit denied a patent to an invention created entirely by an AI system.

For providers operating internationally, AI ownership laws shift from country to country. Where necessary, service agreements should reflect these differences. In the U.K., for instance, computer-generated works are granted copyright protection if a human author’s skill and creativity played a part in the process. China’s copyright law is somewhat broader. It offers protection for AI-generated content under certain conditions, particularly when human involvement can be demonstrated. In a key recent case, the Beijing Internet Court held that images generated with Stability AI’s “Stable Diffusion” tool could count as original works because the author had made intellectual inputs throughout the AI process.

What does all of this uncertainty mean for businesses creating agreements for their clients? Contract provisions will need to take into account current risks around issues like fair use and AI copyrights. And agreements should be drafted with enough flexibility to change terms if the legal ball bounces in another direction.

Proactive Steps to Minimize Risk in Your Services Agreements

Service providers can use their client agreements to clearly define how their products will be used, shift and minimize risk, and avoid long and costly IP infringement suits. Here is a brief summary of issues providers should consider when drafting or updating their AI service agreements with clients:

1. Clearly Define Who Owns Inputs and Outputs. Does the client own data, or does the provider? An AI Software-as-a-Service (SaaS) Agreement should make it clear. Doing so can help prevent litigation and protect valuable or proprietary information.

2. Customized Tools Require Customized Agreements. If an AI tool is tailored to the needs of a specific client, the service agreement should clear up any potential ambiguity over who owns the customization.

3. Address Third-Party IP Issues. Artificial intelligence is powered by models that use publicly available information, including copyrighted material. The agreement should address potential infringement claims and outline the responsibilities of the client and provider when using third-party data.

4. Indemnify and Limit Liability. A strong indemnification clause can shield a provider from IP infringement claims, as well as disputes over outputs or client misuse of the AI tool or platform. Similarly, an agreement should include a robust limitation of liability provision that reduces a company’s potential financial exposure and restricts the types of damages it may face.

5. Bake In Flexibility. The law around artificial intelligence and IP is shifting both in the United States and abroad. Agreements should include flexibility to allow providers to adapt to changes and manage risk no matter the client’s location.

Strengthening client service agreements to account for the distinct issues surrounding artificial intelligence and intellectual property can help providers reduce their legal risk, foster trust with their clients, and provide predictability at a moment when the law around AI is unsettled.

Gouchev Law has deep experience offering legal guidance to service providers drafting, revising, and negotiating service agreements. Visit our Corporate Law & Commercial Contracts practice page, or contact us to learn more about how we can help.

About the Author

Jana Gouchev

Jana Gouchev is the Managing Partner of Gouchev Law and a prominent corporate lawyer on the leading edge of technology law and complex commercial transactions. She delivers legal and commercial insight that propels companies forward. Jana's practice is focused on Corporate Law, Data Privacy and Information Security, Tech Law (consulting, SaaS, and AI), Complex Commercial Contracts, Intellectual Property, and M&A.

Jana is passionate about working with change-makers. Hailing from an AmLaw 50 firm, Jana is the right-hand counsel to executives of the world’s most innovative brands. Her client roster includes Estee Lauder, Hearst, Nissan, Squarespace, tech consulting firms, and SaaS companies. Jana is featured in Forbes, Bloomberg, The New York Law Journal, Law360, Modern Counsel, Inc., and Business Insider for her legal insights on topics including Tech Law, IP, and Mergers and Acquisitions.
