Working with AI in Juro

Learn more about working with Juro's AI, and the information it generates, safely.

Written by Mo Doucoure
Updated yesterday

You may have noticed that AI-powered features in Juro carry a disclaimer:

AI may produce inaccurate information.

This article explains what that means for how you should work with AI-generated output in Juro.

How Juro's AI features work 🚀


If you're using AI to work with contracts, it's critical to understand why AI can sometimes be wrong.

Juro's AI is built on large language models (LLMs): neural networks trained on vast amounts of text data to predict and generate language. In Juro, they weigh up the relevance of different inputs (your instructions, Juro’s background engineering, and the vast datasets it was trained on) to predict what a good response looks like.

Modern LLMs combine this predictive capability with the ability to search and retrieve trusted information to produce more accurate results. This is called retrieval augmented generation (or RAG). The output the LLM produces is its best judgment of the most likely output required, given the situation and instruction. This is called a “probabilistic” output.
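To make the shape of a RAG pipeline concrete, here is a toy sketch in Python. It is purely illustrative and not how Juro is actually built: the keyword-overlap scoring stands in for the semantic search a real system would use, and the final step simply concatenates the retrieved text where a real system would pass it to an LLM.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Score each document by how many words it shares with the query -
    # a crude stand-in for real semantic retrieval.
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def answer_with_context(query: str, documents: list[str]) -> str:
    # A real RAG system would hand the retrieved passages to an LLM;
    # here we just attach them to show the two-step structure.
    context = retrieve(query, documents)
    return f"Answer to '{query}', grounded in: {context}"
```

The key point is the two-step structure: retrieve trusted material first, then generate an answer grounded in it, rather than relying on the model's training data alone.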

This differs from traditional software, which produces “deterministic” outputs (for example, if A, then B). Traditional, deterministic software will always return the same answer from the same instruction. In contrast, AI will reconsider the most likely probabilistic answer for each instruction, so the answer can vary. Both may be read as equally authoritative. The difference is that only one of them definitely is.
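The contrast can be shown with a toy sketch in Python, using made-up clause names for illustration (this is not Juro's code). The deterministic function always returns the same answer for the same input; the probabilistic one samples from a weighted set of candidate answers, the way an LLM samples likely tokens, so repeated runs can differ.

```python
import random

# Deterministic: the same input always produces the same output ("if A, then B").
def deterministic_lookup(clause_type: str) -> str:
    rules = {"liability": "Cap at 12 months' fees", "term": "12 months, auto-renewing"}
    return rules.get(clause_type, "No rule defined")


# Probabilistic (toy model): the answer is sampled from a weighted distribution,
# so the same instruction can yield different outputs on different runs.
def probabilistic_suggestion(clause_type: str, seed=None) -> str:
    rng = random.Random(seed)
    candidates = {
        "liability": [
            ("Cap at 12 months' fees", 0.7),
            ("Cap at 6 months' fees", 0.2),
            ("Uncapped", 0.1),
        ],
    }
    options = candidates.get(clause_type, [("No suggestion", 1.0)])
    texts, weights = zip(*options)
    return rng.choices(texts, weights=weights, k=1)[0]
```

Notice that both functions return an answer in exactly the same form; nothing about the output itself tells you which one was guaranteed and which one was a weighted guess.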

AI has huge advantages. It's flexible, responds to natural language, and can perform complex tasks that traditional software can't. To use it safely, you must understand that it's probabilistic by nature and can make mistakes.

AI output can be wrong ❌


There are several reasons AI gets things wrong, even when it's working as intended.

AI's knowledge has a cutoff date 📅


Most AI models are trained on data that is only periodically updated. AI might not know about recent case law, regulatory changes, or shifts in market practice. Where currency matters, the output may be out of date, despite appearing confident.

AI only works with what you give it 🚰


If important information or context is missing from your prompt, playbook, or document, the model will not be able to identify the gap. Instead, it will produce a response based on what it knows, often plausibly, without flagging missing information or context.

AI can't tell you when it's uncertain 🤷‍♂️


Unlike a careful lawyer, AI doesn't hedge when it's on shaky ground. It can state something correct and something incorrect in the same tone and with the same authority.

The output looks the same either way.

AI handles unusual inputs less reliably 😵‍💫


LLMs perform more consistently on patterns that appear frequently in their training data. Standard clause types and familiar structures are handled better. Bespoke arrangements, unusual jurisdictions, and highly specific scenarios are more likely to produce output that reflects a common pattern rather than your actual situation.

What this all means in practice 📋


Think of AI output as a helpful colleague producing a first draft: it will get you most of the way there, faster than you could alone. But it must be checked before you rely on it.

This is not a caveat unique to Juro: all generative AI has a probabilistic nature. In practice, you should review output before accepting it as true, and treat AI-generated analysis as the starting point for your own judgement, rather than a substitute for it.

Clear, specific prompts and well-constructed playbooks reduce the likelihood of unhelpful or inaccurate responses. We suggest you experiment with your playbooks and instructions over time, iterating them to improve the quality and reliability of Juro's outputs for your work. But even this does not remove the need for review. That review is what makes the output reliable. Just as you would with a junior colleague, always sense-check and correct output as appropriate before giving it your seal of approval.

Further support 👋


If you have questions about any of the above, please contact your Customer Success Manager or reach out to the Juro Support Team.

💁‍♀️ As always, our Support Team is happy to help you with anything further if needed. Start a chat with us right here by clicking the Intercom button in the bottom-right-hand corner of this page.

Alternatively, you can email us at support@juro.com 🚀
