Risks, limitations and data protection considerations when using Juro's AI

A quick reference guide to using AI Assistant safely

Written by Michael Haynes
Updated yesterday


Introduction 👋


AI Assistant is a powerful tool within Juro's contract collaboration platform. It helps you to draft, review and summarize contracts and clauses using generative AI.

This guide outlines the risks, limitations and data protection considerations that you should keep in mind when using the AI Assistant.


What are the risks associated with using the AI Assistant? 🚉


With generative AI’s surge in popularity, it pays to understand both its capabilities and its limitations, and how exactly it can help you streamline your processes in a corporate setting. Below, we’ve outlined a few of the risks associated with generative AI, as well as ways to mitigate them.


Privacy 🔒


Businesses need to understand how AI-enabled platforms will use the personal information entered into the system. Where does that information go, what is it used for, and who makes those decisions?

Contracts contain some of the most sensitive personal data in your business, so it’s critical to answer these questions before you start putting that information into an AI tool.

If a vendor isn’t clear about personal data flows and uses, then it’s going to be impossible for your business to comply with data protection requirements.

How can you be transparent with data subjects if you don’t know what their personal data is being used for? How can you accurately describe where their personal data might be transferred if your vendor can’t give a clear answer? How can you be sure that the personal data you put into the tool won’t be repurposed and fed into a large training data set?

Head to our data protection guide for more info.


Accuracy 🎯


Accuracy is all-important when drafting and reviewing contracts, so make sure you’re not solely relying on generative AI to give you the results you want. You have to remember that generative AI is probabilistic - in other words, it’ll give you the statistically most likely output based on its training data and the model it’s using.

Because of this probabilistic nature, AI-enabled tools can make mistakes or provide unreliable data, especially in cases where information sits beyond their scope of training.

For example, a generative AI platform may not have the latest information about the regulatory landscape in 2023, as most models have a training cutoff date that limits the knowledge and context they possess. Generative AI can also ‘hallucinate’, producing information that, although statistically likely, is untrue.

To draft airtight documents you can trust, make sure you double-check the output, correct any inaccuracies, and run it by a legal professional.


Confidentiality 🤫


AI needs to be fed data to generate new content. To avoid confidentiality breaches, users should understand the types of data they should and shouldn’t input into an AI-enabled tool.

Business-sensitive data should not be used in ‘open’ tools that aggregate input and output data from their users to feed large, public models.

Feeding an open model could cause huge repercussions for the wider business. You might be disclosing trade secrets or putting other confidential information into the public domain.

If your use case involves sensitive business information like contracts, make sure you choose a vendor you can trust and ensure that your data isn’t being fed into aggregated, open models.


Intellectual property (IP) 🧠


Generative AI poses plenty of questions around intellectual property, which users should be aware of when using LLMs in their day-to-day work.

When you’re using AI to create content, who owns the end result? Is it the person inputting the prompts, the company they work for, or the AI platform? Can IP exist in the output at all?

In these situations, it’s important to set expectations before this becomes an issue by understanding the limitations of IP protection when it comes to AI-generated content, reading the fine print of the terms for the product you’re using, and making sure the business understands the risks.


What are the data protection considerations associated with using the AI Assistant? 🔏



Accountability 👩🏽‍💼


As a controller, you need to demonstrate your compliance with data protection laws.

Juro’s AI Assistant was built with EU and UK privacy laws in mind and operates on servers located in the EEA and managed by Microsoft.

We’re also providing the information in this guide to help you understand how to deploy our AI Assistant responsibly in your organisation.

You might choose to conduct a data protection impact assessment before using any AI tool. We can provide a template for you to use, pre-filled with information about Juro’s AI Assistant.


Transparency 📖


It’s important that you make clear to individuals how their personal data is processed.

When you start using Juro’s AI Assistant, make sure you update your fair processing notices to tell people that you use Juro to process their personal data, and that it includes AI capabilities.

You can find out more information about exactly how and where Juro processes personal data in our award-winning privacy policy. This also includes information about all of the sub-processors we use to provide and maintain our platform, including our AI Assistant.


Lawfulness ⚖️


You must have a lawful basis to process personal data.

This basis shouldn’t change by using Juro’s AI Assistant - you’re using personal data for the same reason, enhanced by a powerful, AI-enabled contract collaboration tool to make your processes more efficient.

Unlike some other AI-enabled contract platforms, we don’t use your data to train models for anyone else. So you don’t have to worry about Juro or its technology partners repurposing your personal data for anything else.


Accuracy 🎯


You have an obligation to keep the personal data you process accurate and up to date.

Generative AI is probabilistic. This means that its output is based on a statistical model, which predicts the most likely output based on your inputs and the model that powers it. AI can ‘hallucinate’ and produce inaccurate outputs. As a responsible controller, you must review the output of our AI Assistant before using it and correct any inaccuracies you identify.

Juro’s AI Assistant is powered by GPT, one of the world’s most sophisticated large language models, significantly enhancing the quality of output from our AI Assistant.


Fairness 🟰


You must process individuals’ personal data fairly.

That means protecting data subjects against unfair outcomes that might result from your processing, and making sure you’re using personal data in ways people expect.

Unlike some other AI-enabled contract platforms, we don’t use your data to train models for anyone else. So you can reassure people that their personal data isn’t being used by Juro or its technology providers to build or train open models.

Remember that our AI Assistant isn’t designed to help you make significant decisions about people or to process personal data about children - those things aren’t permitted under our terms.


Security 👮‍♀️


You’ll want to know that Juro is taking appropriate steps to keep your data secure.

Contracts govern your most valuable relationships, and Juro has processed more than 1 million contracts in its browser-native platform. With our new AI Assistant, we maintain that reputation for uncompromising security.

Juro is SOC 2 Type 2 and Cyber Essentials certified. We use powerful encryption for data at rest (256-bit Advanced Encryption Standard) and in transit (TLS) to help keep your contracts safe.

Our AI Assistant operates on highly secure Azure servers located in the EEA and managed by Microsoft, with data sent via our zero retention API.


Individual rights 👨🏻‍⚖️


As a controller, you need to trust that you can respond quickly and effectively to requests from individuals to exercise their data rights.

Our AI Assistant enhances your ability to find personal data in your contracts quickly and efficiently using natural language search. You can download or delete documents in Juro at the touch of a button.

Juro and its technology providers operate short retention periods for data running through our AI Assistant. Unless a policy violation is identified, we’ll delete your prompts and outputs from our systems within 30 days.


International transfers 🌏


You need to be sure that any international transfers of personal data are compliant.

Your contracts and our AI Assistant are EEA-native, operating on highly secure cloud servers operated by AWS and Microsoft.

Where our features require international transfers, we document this in our privacy policy. We don’t rely on short-lived safe harbour schemes - instead, we have in place the latest EU Standard Contractual Clauses supplemented by the UK addendum, and supported by rigorous international transfer risk assessments. This ensures strict compliance with UK and EU rules on international transfers and the requirements laid down by the Court of Justice of the European Union in the Schrems II judgment.



What are the limitations associated with using the AI Assistant? ✋


As with any AI-enabled tool, there are some limitations to bear in mind:

❌ AI may produce inaccurate information:

Beyond Juro, AI platforms can ‘hallucinate’ and fabricate information. Users should thoroughly double-check any outputs from the AI Assistant before using them and, where still uncertain, run these past a trained legal professional.

📖 The Assistant has limited context to work with:

The AI Assistant cannot precisely predict your exact situation: adding detailed information to your playbooks and writing clear prompts helps with this, but does not give the Assistant unlimited knowledge.

🗑️ The Assistant doesn't keep a catalogue of your prompts:

Remember to note down the requests and prompts that work for you. To protect your privacy, Juro's AI Assistant does not keep a record or history of your requests and the results generated from them.

🎚️ The AI Assistant has a word limit:

The Assistant has a word limit of roughly 7,500 words, which includes the prompt, context and output.
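As a rough pre-flight check, you could estimate whether your prompt and context leave room for an output within that budget. This is an illustrative sketch only: the 7,500-word figure is approximate, the Assistant's own counting may differ from a simple whitespace split, and the function name here is hypothetical, not part of Juro's product.

```python
# Rough helper to estimate whether prompt + context fit within the
# ~7,500-word budget mentioned above. The Assistant's own counting may
# differ; treat this as a sanity check, not an exact rule.

WORD_LIMIT = 7500  # approximate total across prompt, context and output

def words_remaining(prompt: str, context: str, limit: int = WORD_LIMIT) -> int:
    """Return how many words are left for the output (negative = over budget)."""
    used = len(prompt.split()) + len(context.split())
    return limit - used

# Example: a short prompt plus a pasted clause
prompt = "Summarise the termination clause below in plain English."
context = "Either party may terminate this Agreement on 30 days' written notice."
print(words_remaining(prompt, context))
```

If the result is small or negative, trim the pasted context (for example, send one clause at a time) rather than the prompt, since the remaining budget is shared with the Assistant's output.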

💁‍♀️ As always, our Support Team is happy to help you with anything further if needed. Start a chat with us right here by clicking the Intercom button in the bottom-right-hand corner of this page.

Alternatively, you can email your query to support@juro.com 🚀
