DeepEval Configuration Guide

A step-by-step guide to help you configure DeepEval to generate and submit evaluation metrics

In this guide, we explain how to configure and use DeepEval to systematically evaluate language model outputs.
We define standardized test cases — including input, actual output, expected output, and retrieval context — and run a set of quality metrics such as answer relevancy, faithfulness, hallucination, and bias. These metrics are mapped to broader evaluation pillars like performance, fairness & bias, safety, and reliability, providing a structured way to quantify model quality.

After collecting these raw evaluation metrics, we submit them to the TRACE Metrics API.

TRACE processes these results to generate AI governance evidence, answering questions such as:

  • Does the AI system comply with NIST AI RMF, EU AI Act, or similar guidelines?

  • How safe, fair, and robust is the system in production?

  • Are there indicators of hallucination, bias, or inconsistent behavior?

This workflow supports teams and compliance stakeholders by:

  • Providing transparent, explainable evidence for responsible AI

  • Enabling dashboards and historical monitoring of AI performance and risk

  • Helping align AI systems with internal policies and external regulatory requirements

This combined approach ensures that evaluation is not just technical, but also supports governance, auditability, and long-term risk management.

Required Fields

| Field Name | Description |
| --- | --- |
| metric_key | Standardized metric name (e.g., AnswerRelevancyMetric) |
| value | Raw metric value from DeepEval (float) |
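
For example, each submitted metric boils down to one such pair: the DeepEval class name as metric_key and the raw score as value (a sketch with made-up values):

# One metric entry in canonical form (illustrative values only)
entry = {"metric_key": "AnswerRelevancyMetric", "value": 0.85}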

Sample DeepEval Code (Python)

from deepeval.metrics import (
    AnswerRelevancyMetric, HallucinationMetric, BiasMetric,
    RoleAdherenceMetric, ToolCorrectnessMetric
)
from deepeval.test_case import LLMTestCase

# Sample test case. HallucinationMetric evaluates against `context`,
# so it is provided alongside `retrieval_context`.
test_case = LLMTestCase(
    input="What is the capital of Germany?",
    actual_output="Berlin is the capital of Germany.",
    expected_output="Berlin",
    context=["Germany is a country in Europe. Berlin is its capital."],
    retrieval_context=["Germany is a country in Europe. Berlin is its capital."]
)

# Instantiate metrics (they must be constructed, not passed as classes).
# By default these LLM-judged metrics use an OpenAI model, so set OPENAI_API_KEY
# (or configure another evaluation model) before running.
metrics = [
    AnswerRelevancyMetric(),
    HallucinationMetric(),
    BiasMetric(),
    # RoleAdherenceMetric and ToolCorrectnessMetric need extra inputs
    # (a conversational test case with a chatbot role, and tool-call data),
    # so they are not run against this minimal single-turn test case.
]

# Run evaluations and collect raw scores keyed by metric class name
metric_results = {}
for m in metrics:
    m.measure(test_case)
    metric_results[m.__class__.__name__] = m.score
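
Each metric exposes its raw score as a float between 0 and 1 (most LLM-judged metrics also populate a reason string). A quick way to inspect what was collected, assuming the loop above has run:

# Print the collected raw scores (illustrative; actual values depend on your model and the judge)
for name, score in metric_results.items():
    print(f"{name}: {score:.2f}")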

Metric-to-Pillar Mapping

| Metric name | Canonical Space | Pillar | Higher is better |
| --- | --- | --- | --- |
| AnswerRelevancyMetric | relevance_and_accuracy | Performance | Yes |
| ContextualPrecisionMetric | relevance_and_accuracy | Performance | Yes |
| ContextualRecallMetric | relevance_and_accuracy | Performance | Yes |
| RAGASAnswerRelevancyMetric | relevance_and_accuracy | Performance | Yes |
| RAGASContextualPrecisionMetric | relevance_and_accuracy | Performance | Yes |
| RAGASContextualRecallMetric | relevance_and_accuracy | Performance | Yes |
| TaskCompletionMetric | task_success_utility | Performance | Yes |
| PromptAlignmentMetric | task_success_utility | Performance | Yes |
| RoleAdherenceMetric | task_success_utility | Performance | Yes |
| ConversationCompletenessMetric | task_success_utility | Performance | Yes |
| ConversationRelevancyMetric | conversational_quality | Performance | Yes |
| ConversationCompletenessMetric | conversational_quality | Performance | Yes |
| AnswerRelevancyMetric | relevance_and_accuracy | Fairness & Bias | No |
| FaithfulnessMetric | factuality_and_faithfulness | Fairness & Bias | No |
| HallucinationMetric | factuality_and_faithfulness | Fairness & Bias | No |
| ToolCorrectnessMetric | factuality_and_faithfulness | Fairness & Bias | No |
| JsonCorrectnessMetric | factuality_and_faithfulness | Fairness & Bias | No |
| RAGASFaithfulnessMetric | factuality_and_faithfulness | Fairness & Bias | No |
| ConversationRelevancyMetric | conversational_quality | Fairness & Bias | No |
| BiasMetric | safety | Fairness & Bias | Yes |
| ToxicityMetric | safety | Fairness & Bias | Yes |
| KnowledgeRetentionMetric | knowledge_retention | Fairness & Bias | No |
| HallucinationMetric | factuality_and_faithfulness | Safety & Truthfulness | Yes |
| ToxicityMetric | safety | Safety & Truthfulness | Yes |
| JsonCorrectnessMetric | structural_validity | Safety & Truthfulness | Yes |
| PromptAlignmentMetric | task_success_utility | Task Adherence | Yes |
| JsonCorrectnessMetric | structural_validity | Reliability | Yes |
| KnowledgeRetentionMetric | knowledge_retention | Reliability | Yes |
| BiasMetric | safety | Privacy | No |
| ToxicityMetric | safety | Privacy | No |
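
You only submit metric names and raw values; TRACE applies the pillar mapping above when it processes the submission. If you want a local copy of the mapping for reporting or dashboards, a few rows of the table can be mirrored as plain data (a sketch; this structure is not part of the API):

# (metric name, canonical space, pillar, higher-is-better) - mirrors rows of the table above
METRIC_PILLAR_MAP = [
    ("AnswerRelevancyMetric", "relevance_and_accuracy", "Performance", True),
    ("HallucinationMetric", "factuality_and_faithfulness", "Safety & Truthfulness", True),
    ("BiasMetric", "safety", "Fairness & Bias", True),
    ("JsonCorrectnessMetric", "structural_validity", "Reliability", True),
    ("KnowledgeRetentionMetric", "knowledge_retention", "Reliability", True),
]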

Submit Results via API

Prepare Canonical Payload

"metric_metadata": {
"application_name": "chat-application",
"version": "1.0.0",
"provider": "deepeval",
"use_case": "transportation"
},
"metric_data": {
"deepeval": {
"AnswerRelevancyMetric": 85,
"ContextualPrecisionMetric": 92,
"ContextualRecallMetric": 78,
"ContextualRelevancyMetric": 88,
"ConversationCompletenessMetric": 95,
"ConversationRelevancyMetric": 82
}
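
Note that the sample values above are on a 0-100 scale, while DeepEval's .score is a float between 0 and 1 (which also matches the Required Fields table). If your TRACE workspace expects percentages, an assumption you should confirm against your configuration, the conversion is a one-liner:

# Scale 0-1 DeepEval scores to 0-100 before building the payload (only if percentages are expected)
metric_data = {name: round(score * 100, 2) for name, score in metric_results.items()}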

Send via Trace Metrics API

BASE_URL = "https://api.cognitiveview.com"
url = f"{BASE_URL}/metrics"

headers = {
"Ocp-Apim-Subscription-Key": auth_token,
"Content-Type": "application/json",
}

payload = {
"metric_metadata": {
"application_name": "chat-application",
"version": "1.0.0",
"provider": "deepeval",
"use_case": "transportation"
},
"metric_data": {
"deepeval": metric_results
}
}

response = requests.post(url, headers=headers, json=payload)
print(f"Status Code: {response.status_code}")

How to get your TRACE Metrics API subscription key

To use the TRACE Metrics API, you must first obtain a subscription key (authorization key) from CognitiveView. Follow these steps:

  1. Log in to CognitiveView

  2. Go to System Settings

    • In the main menu, navigate to System Settings.

  3. Find or generate your subscription key

    • Look for the section labeled API Access or Authorization Key.

    • If a key already exists, copy it.

    • If not, click Generate Key to create a new one.

  4. Copy and store the key securely

    • You’ll need this key to authenticate API requests.

    • Keep it safe and do not share it publicly; one common option is to load it from an environment variable, as sketched after these steps.
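
A sketch of that pattern (the variable name CV_SUBSCRIPTION_KEY is just an example; use whatever your deployment conventions dictate):

import os

# Read the TRACE subscription key from the environment rather than hard-coding it
auth_token = os.environ["CV_SUBSCRIPTION_KEY"]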

Send via curl or any HTTP client

curl -X POST https://api.cognitiveview.com/metrics \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -H "Content-Type: application/json" \
  -d @eval_payload.json

Summary

| Step | Action |
| --- | --- |
| 1 | Choose DeepEval metrics relevant to your run_type |
| 2 | Run the metrics and collect the raw scores |
| 3 | Submit to the /metrics or mcp://... endpoint |

Additional resources

  • Explore example notebooks & sample code on our GitHub: see how to call the TRACE Metrics API step by step.

Questions? Reach out: support@cognitiveview.ai
