Summary:
Hallucination is a serious concern for any generative AI user, and it poses a particular risk to attorneys. Users of generative AI must verify every response they receive; we call this the “Trust but Verify” approach. To learn more about Eve’s approach to minimizing hallucination and increasing trust in results, read more here.
Eve’s “Quote Verification” feature audits every quote that Eve sources from case documents, making it easy for users to carry out the “verify” step of their work with the AI: it flags any detected hallucinations and lets users check the results against the key source documents.
How does Quote Verification work?
After Eve finishes generating an answer, it runs a series of validation checks, known as Rules, on the final answer.
The Quote Verification Rule detects any quotes (text in “” or ‘’) in the answer, then verifies that each quote appears in one of the provided documents. There are three potential outcomes for a quote:
(Green) Success: Eve was able to successfully identify the quote in the document. Users can click on the quote and then jump to that spot in the document.
(Yellow) Partial Match: Eve was able to identify a similar, but not exact, quote in the document. Users can click on the partial match and then jump to that spot in the document to verify.
(Red) Not Found: Eve was unable to find a match for this quote. Users should manually verify the quote against the document.
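Conceptually, the three outcomes resemble an exact-match check followed by a fuzzy-match fallback. The sketch below is purely illustrative, not Eve’s actual implementation; the function name, window-sliding approach, and 0.8 similarity threshold are all assumptions.

```python
import difflib

def classify_quote(quote: str, document: str, partial_threshold: float = 0.8) -> str:
    """Illustrative only: classify a quote as 'success', 'partial', or 'not_found'."""
    if quote in document:
        return "success"  # exact match: the (Green) case
    # Fuzzy fallback: slide a quote-sized window over the document and
    # keep the best similarity ratio (0.0 to 1.0).
    best = 0.0
    size = len(quote)
    for i in range(max(1, len(document) - size + 1)):
        window = document[i:i + size]
        best = max(best, difflib.SequenceMatcher(None, quote, window).ratio())
    return "partial" if best >= partial_threshold else "not_found"

doc = "The defendant was seen leaving the premises at noon."
print(classify_quote("leaving the premises at noon", doc))   # success
print(classify_quote("leaving the premises at n00n", doc))   # partial
print(classify_quote("entirely fabricated quotation", doc))  # not_found
```

A production system would use a more efficient matcher, but the three-way split above mirrors the Green / Yellow / Not Found outcomes described here.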
To leverage this feature, simply ask Eve to provide quotes as needed for the task at hand.
🗒️ Example Message: Please respond to this special interrogatory: <ROG 1>. Before providing your answer, please outline the key quotes from the documents that support your answer.
Notes and Limitations:
Note that Eve won’t run quote verification on quotes shorter than 20 characters, since a short quote might match multiple sources and locations (simply put, it isn’t distinct enough to cite).
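As a rough illustration of why that threshold exists, a minimal pre-filter might look like the sketch below; the function name is hypothetical, and only the 20-character cutoff comes from the note above.

```python
def should_verify(quote: str, min_length: int = 20) -> bool:
    # Skip short quotes: a phrase like "the court" can appear in many
    # places, so a match would not pin down a single source location.
    return len(quote.strip()) >= min_length

print(should_verify("the court"))                             # False
print(should_verify("the court granted the motion in part"))  # True
```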