How do I write a GenAI legal policy?
Written by Tim Bowers

From a risk perspective, an AI policy is not optional, and it must evolve continuously with the rapidly changing landscape.

1. Start with a clear understanding of the relevant terms.

  • Generative AI: Algorithms (such as ChatGPT) that can be used to create content. Unlike traditional AI systems, which are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, code, simulations and more.

  • AI: The use of computer systems to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

  • AI Tools: Technology platforms built on artificial intelligence.

  • Input: Images or text entered by a user as a prompt into the platform.

  • Output: Images or text produced by the platform in response to a prompt.

2. Understand the risks that come with using Gen AI.

While Gen AI technology offers incredible benefits, there are also risks to consider. Like other forms of AI, generative AI raises ethical issues around data privacy, security, policies and workforces. It can also create new business risks, or lower the barriers to those risks appearing in your business, such as misinformation, plagiarism, copyright infringement and harmful content.


3. Consider using a Greenlisting approach.

It’s important to encourage experimentation with AI, or brands risk falling behind. At the same time, we recommend clearly defined ethical and legal parameters. For instance, you can perform initial due diligence on AI tools and then add each one to a green, amber or red list:

Green = approved for client-facing, externally publishable work, within defined parameters

Amber = approved for internal use such as mood boarding and ideation; outputs can still be shared with clients, but only as part of an internal process

Red = not safe for use

You can review each tool against categories that are relevant to your brand, for instance:

  • Data protection

  • Security

  • Input and output ownership restrictions

  • Terms of use

  • License of the input/output

  • Reputability of the company: where does this platform come from?

  • LLM in use (diversity of the training data and possible biases that engenders)

  • Indemnifications
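
The review process above can be sketched as a simple due-diligence record. This is a minimal illustration, not a prescribed implementation: the category names, ratings ("pass", "internal-only", "fail") and classification rules are hypothetical, and your own criteria should reflect your brand's risk appetite.

```python
def classify(review: dict) -> str:
    """Classify a tool from its due-diligence review:
    red if any category fails outright, amber if any category
    restricts use to internal processes, otherwise green."""
    if any(v == "fail" for v in review.values()):
        return "red"
    if any(v == "internal-only" for v in review.values()):
        return "amber"
    return "green"

# Example review of a hypothetical tool (illustrative values only)
review = {
    "data_protection": "pass",
    "security": "pass",
    "output_ownership": "internal-only",  # outputs may not be publishable
    "terms_of_use": "pass",
    "indemnification": "pass",
}

print(classify(review))  # "amber": one category restricts external use
```

The key design choice is that a single failing category is enough to redlist a tool: due diligence is a gate, not an average.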

4. Ensure your policy is consistent across your brand and agency partners.

Clearly communicate why AI is important to your business, outlining both opportunities and potential flaws. Discuss ethical considerations, bias, fairness, and inaccuracy with your partners.

Match Greenlists: verify that your brand’s “greenlist” of approved tools aligns with your agency partners’ approved tools.
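
Matching greenlists amounts to comparing two sets of approved tools. A minimal sketch, using hypothetical tool names: the intersection is safe for joint work, and anything approved by only one side should be flagged for review by the other.

```python
# Hypothetical approved-tool greenlists for a brand and an agency partner.
brand_greenlist = {"ToolA", "ToolB", "ToolC"}
agency_greenlist = {"ToolB", "ToolC", "ToolD"}

aligned = brand_greenlist & agency_greenlist      # safe for joint work
only_brand = brand_greenlist - agency_greenlist   # flag for partner review
only_agency = agency_greenlist - brand_greenlist  # flag for brand review

print(sorted(aligned))
print(sorted(only_brand))
print(sorted(only_agency))
```

Any non-empty "only" set is the agenda for the deep-dive discussion below: either the other party reviews and approves the tool, or it stays off the joint list.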

Conduct Deep Dive Discussions: engage in thorough discussions about greenlist categories, internal processes, and risk appetite to ensure mutual understanding and agreement.

Assess Partner AI Policies: ensure that your partners’ AI policies are as rigorous as your own. If this step is skipped, your risk assessment could be invalidated.

Consider Legal Requirements: Determine if client/partner addendums are necessary. Using Gen AI tools can be legally similar to third-party subcontracting, which might be prohibited or require prior consent in many client-agency partnerships and existing contracts.

5. Understand best practice when it comes to defining your AI policy.

Do Not Use Unmodified Output: Avoid using unmodified AI-generated content for anything a client wouldn't want copied by others.

Do Not Input Confidential Information: Refrain from entering confidential, sensitive personal, or health information into prompts. Without assurances that the data won't be used by others, there's a risk it could resurface elsewhere.

Do Not Use Names or Third-Party IP: Avoid inputting names, images of known individuals, or third-party intellectual property (e.g., trademarks, copyrighted characters), as these can elevate infringement risks.

Carefully Review Outputs: Always check AI-generated outputs for accuracy, legal compliance, bias, and potential reputational harm.

Revise Outputs: Enhance AI-generated content with additional details or design elements to improve protectability. Significant revisions increase the likelihood of creating protectable work.

Keep Records: Maintain records of your process for creating AI-generated content whenever possible.
