
Understanding Hallucination in AI and How to Mitigate Risks

Eve is a powerful legal AI tool, but it's crucial to understand and mitigate the risk of AI hallucination.

Updated over 6 months ago

We have optimized Eve's baseline answers to be as accurate as possible and added two features, explained below, that help mitigate hallucination.

Explanation of AI Hallucination

AI hallucination refers to the AI "making up" information. It's a risk across all AI platforms, including Eve, and can involve fabricated legal and non-legal facts. Hallucination occurs when the underlying Large Language Models (LLMs) rely on general information learned in the past, rather than the documents at hand, to inform current answers.

Eve's Built-in Safeguards

Eve includes two key features that mitigate the risk of hallucination:

1. Fact Search: This feature helps users perform an exhaustive search across large numbers of documents to identify all evidence related to a certain task or query. Eve reads through all documents page by page and generates a table of all key facts and quotes related to the query.

2. Quote Verification: For any answer that Eve provides a quotation for, Eve runs a series of validation checks to verify that the quote appears in one of the provided documents. This increases your ability to trust Eve's responses.

Best Practices for Identifying and Avoiding Hallucinated Information

1. Use the "Objective. Context. Request" format when asking questions to provide clear instructions to Eve.

2. Always verify Eve's responses using the Quote Verification feature.

3. Cross-reference Eve's outputs with source documents.

4. Apply your professional judgment and legal expertise when reviewing Eve's responses.

5. If you encounter any errors or unexpected results, report them immediately to the Eve support team.
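
As an illustration of the first practice, a question following the "Objective. Context. Request" format might look like this (the matter and document details below are hypothetical):

```
Objective: Identify all deposition testimony about the plaintiff's back injury.
Context: The uploaded documents are deposition transcripts from a personal injury matter.
Request: List each statement about the back injury, with a direct quote and a page reference for each.
```

Structuring questions this way gives Eve a clear goal, the relevant background, and a specific deliverable, which reduces the room for the model to fill gaps with fabricated details.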

Remember: as a responsible AI user, it is crucial to understand the risk of AI surfacing hallucinated information, to know how the AI works to prevent these occurrences, and to check AI-generated results carefully for accuracy.
