Key risks to consider when using AI Assistant

Using AI Assistant to draft or review contracts? Get up to speed with the risks so you can use it safely.

Written by Jimmy Mooring

What is generative AI?

Generative AI is a type of artificial intelligence that can produce many kinds of content, from audio to text to images.

Generative AI first appeared in the form of chatbots in the 1960s, but it has advanced rapidly, most recently with the large language models (LLMs) that we’re hearing about so often.

These models rely on effective prompting from the user to generate useful results, such as photorealistic images or well-written copy.

How can you use AI Assistant to improve the contract process?

AI Assistant can be incorporated in your contract process to:

  • Help draft contracts, provided there are defined guardrails in place

  • Summarize legal text

  • Improve drafting, by making language less confrontational, removing legal jargon, or simplifying information

  • Identify common legal terms or concepts, helping users spot trends

  • Manage risks by highlighting onerous or unusual clauses

With generative AI’s surge in popularity, it pays to know both its capabilities and its limitations, and exactly how it can help you streamline your processes in a corporate setting.


What are the risks associated with generative AI?

Below we’ve outlined a few of the risks associated with generative AI, as well as ways to mitigate them.

1. Privacy

Businesses need to understand how AI-enabled platforms will use the personal information entered into the system. Where does that information go, what is it used for, and who makes those decisions?

Contracts contain some of the most sensitive personal data in your business, so it’s critical to answer these questions before you start putting that information into an AI tool.

If a vendor isn’t clear about personal data flows and uses, then it’s going to be impossible for your business to comply with data protection requirements.

How can you be transparent with data subjects if you don’t know what their personal data is being used for? How can you accurately describe where their personal data might be transferred if your vendor can’t give a clear answer? How can you be sure that the personal data you put into the tool won’t be repurposed and fed into a large training data set?

Head to our data protection guide for more info.

2. Accuracy

Accuracy is all-important when drafting and reviewing contracts - so make sure you’re not relying solely on generative AI to give you the results you want. Remember that generative AI is probabilistic - in other words, it gives you the statistically most likely output based on its training data and the model being used.

Because of this probabilistic nature, AI-enabled tools can make mistakes or produce unreliable information, especially where the information you need sits beyond the scope of their training data.

For example, a generative AI platform may not have the latest information about the regulatory landscape in 2023, as most platforms have a knowledge cutoff: a point in time beyond which they have no awareness of events. Generative AI might also ‘hallucinate’ by producing information that - although statistically likely - is untrue.
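
To make the ‘statistically likely’ point concrete, here’s a minimal sketch in Python - a toy illustration with invented probabilities, not how any real model works under the hood - of how a model samples its next word from a probability distribution rather than checking a source of truth:

```python
import random

# Toy distribution over possible next words for a prompt like
# "The regulation came into force in ...". The probabilities are
# invented for illustration; a real LLM scores tens of thousands
# of candidate tokens at every step.
next_word_probs = {
    "2021": 0.55,  # most likely according to (possibly stale) training data
    "2023": 0.30,  # the correct answer, but less likely to be chosen
    "1998": 0.15,  # unlikely, yet still possible
}

# The model samples from the distribution; it never consults a source
# of truth, so a fluent-sounding but wrong answer can still come out.
words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights, k=1)[0])
```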

To draft airtight documents you can trust, make sure you double-check the output, correct any inaccuracies, and run it by a legal professional.

3. Confidentiality

AI needs to be fed data in order to generate new content. To avoid confidentiality breaches, users should understand the types of data they should and shouldn’t input into an AI-enabled tool.

Business-sensitive data should not be used in ‘open’ tools that aggregate input and output data from their users to feed large, public models.

Feeding sensitive data into an open model could have huge repercussions for the wider business: you might be disclosing trade secrets or putting other confidential information into the public domain.

If your use case involves sensitive business information like contracts, choose a vendor you can trust, and make sure your data isn’t being fed into aggregated, open models.

4. Intellectual property (IP)

Generative AI poses plenty of questions around intellectual property, which users should be aware of when using LLMs in their day-to-day work.

When you’re using AI to create content, who owns the end result? Is it the person inputting the prompts, the company they work for, or the AI platform? Can IP actually exist in the output at all?

In these situations, it’s important to set expectations before ownership becomes an issue: understand the limitations of IP protection when it comes to AI-generated content, read the fine print of the terms for the product you’re using, and make sure the business understands the risks.


Generative AI is a game-changer, but it can also pose a range of problems and threats if it isn’t used properly.

Keeping up to date with changing regulations and proactively mitigating these risks will help you and the business you enable improve operational efficiency with AI.

If you have any questions about your use of AI within Juro, head to the AI collection in our help center, or ask your customer success manager.
