Trace Metric API Report

Detailed summary of what you receive after submitting your raw model metrics to the Trace Metric API.

This document explains your results, highlights key findings, and shows you how to access and use your report.

What You Get After Using the Trace Metric API

After submitting your raw metrics, the Trace Metric API processes your data and generates a comprehensive evaluation of your AI model. Here’s what your report includes:

  • Health Index & Star Rating:
    A quick snapshot of your model’s overall quality and risk level.

  • Pillar Scores:
    Assessment across critical dimensions such as performance, fairness, safety, task adherence, reliability, robustness, and privacy.

  • Metric Breakdown:
    Detailed analysis of key metrics like answer relevance, contextual precision and recall, faithfulness, bias, and hallucination.

  • Risks & Business Impacts:
    Identification of potential risks and real-world business consequences.

  • Recommended Actions:
    Tailored next steps for engineers, managers, and compliance teams.

  • Summary Table:
    At-a-glance view of your model’s strengths, weaknesses, and priorities.
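As a concrete illustration, a raw-metrics submission might look like the sketch below. The endpoint URL, field names, and authentication header are assumptions for illustration only, not the documented Trace Metric API schema; consult the API reference for the exact payload.

```python
# Minimal sketch of submitting raw model metrics as JSON over HTTPS.
# The URL, payload fields, and "report_id" response field are illustrative
# assumptions, not the documented Trace Metric API schema.
import requests

payload = {
    "model_name": "my-rag-assistant",   # hypothetical model identifier
    "metrics": {
        "answer_relevance": 0.88,
        "contextual_precision": 0.75,
        "contextual_recall": 0.74,
        "faithfulness": 1.0,
        "bias": 0.0,
        "hallucination": 1.0,
    },
}

response = requests.post(
    "https://api.example.com/trace-metric/reports",   # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    timeout=30,
)
response.raise_for_status()

# The API returns a unique Report ID; save it to retrieve the full report later.
report_id = response.json()["report_id"]
print("Report ID:", report_id)
```

The Report ID returned here is the same ID described under “How to Access Your Report” below.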

Your Model’s Results

Health Index & Star Rating

  • Health Index: 57 (out of 100)

  • Star Rating: ★★★☆☆ (3 out of 5)

  • Risk Colour: 🟠 (Moderate)

Pillar Scores

| Pillar | Score | Colour | Metrics Count | Meaning |
|---|---|---|---|---|
| Performance | 0.49 | 🔴 | 3 | Needs improvement in speed or relevance |
| Fairness & Bias | 0.55 | 🟠 | 5 | Some bias detected, moderate risk |
| Safety & Truthfulness | 0.67 | 🟠 | 3 | Acceptable, but could be more reliable |
| Task Adherence | 0.0 | 🔴 | 0 | Not meeting task requirements |
| Reliability | 0.0 | 🔴 | 0 | Unreliable performance |
| Robustness | 0.0 | 🔴 | 0 | Not robust to changes or errors |
| Privacy | 0.0 | 🔴 | 0 | Privacy risks present |
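If you consume the report programmatically, the sketch below shows one way to flag weak pillars. The JSON shape is an assumption; the values simply mirror the table above.

```python
# Hypothetical walk over the pillar scores of a retrieved report.
# The dictionary structure is an assumption; the numbers mirror the table above.
report = {
    "health_index": 57,
    "star_rating": 3,
    "pillars": {
        "performance":         {"score": 0.49, "metrics_count": 3},
        "fairness_bias":       {"score": 0.55, "metrics_count": 5},
        "safety_truthfulness": {"score": 0.67, "metrics_count": 3},
        "task_adherence":      {"score": 0.0,  "metrics_count": 0},
        "reliability":         {"score": 0.0,  "metrics_count": 0},
        "robustness":          {"score": 0.0,  "metrics_count": 0},
        "privacy":             {"score": 0.0,  "metrics_count": 0},
    },
}

# Flag pillars below an illustrative 0.6 threshold so the team knows where to focus.
for name, pillar in report["pillars"].items():
    if pillar["score"] < 0.6:
        print(f"{name}: {pillar['score']:.2f} "
              f"({pillar['metrics_count']} metrics) - needs attention")
```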

Metric Breakdown

| Metric | Score | Risk Level | What It Means / Action Needed |
|---|---|---|---|
| Answer Relevance | 0.88 | Medium | Generally relevant, but improve alignment |
| Precision | 0.75 | Medium | Some irrelevant context, optimize retrieval |
| Recall | 0.74 | Medium | Some context missing, ensure completeness |
| Faithfulness | 1.0 | Low | Fully factual, maintain this standard |
| Bias | 0.0 | Low | No harmful bias detected, keep monitoring |
| Hallucination | 1.0 | High | No hallucinated content, maintain checks |
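A quick triage over this breakdown might look like the sketch below: surface anything not at low risk so engineers can prioritize. The list structure is illustrative and simply mirrors the table above.

```python
# Illustrative triage of the metric breakdown; the list mirrors the table above
# and the structure is an assumption about the report payload.
metrics = [
    {"name": "Answer Relevance", "score": 0.88, "risk": "Medium"},
    {"name": "Precision",        "score": 0.75, "risk": "Medium"},
    {"name": "Recall",           "score": 0.74, "risk": "Medium"},
    {"name": "Faithfulness",     "score": 1.0,  "risk": "Low"},
    {"name": "Bias",             "score": 0.0,  "risk": "Low"},
    {"name": "Hallucination",    "score": 1.0,  "risk": "High"},
]

# Print every metric that still carries Medium or High risk.
for m in metrics:
    if m["risk"] != "Low":
        print(f"{m['name']} (score {m['score']}, {m['risk']} risk) - review needed")
```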

Risks & Business Impacts

Your report highlights several risks and their potential business impacts, such as:

  • User Churn: Poor or irrelevant answers can drive users away.

  • Control Gaps: Inadequate controls may lead to compliance failures.

  • Support Cost Spike: Inaccurate answers increase customer support needs.

  • Public Backlash: Harmful or offensive outputs can damage your reputation.

  • Regulatory Risks: Non-compliance with laws like the EU AI Act can result in legal consequences.

Recommended Actions

For ML Engineers:

  • Refine prompts and improve data quality.

  • Fine-tune models for your specific domain.

  • Optimize retrieval and ranking strategies.

  • Implement fact-checking and bias mitigation protocols.

For Business/Product Managers:

  • Align product requirements with user expectations.

  • Monitor relevance and satisfaction metrics.

  • Ensure retriever systems deliver complete and relevant responses.

For Compliance Managers:

  • Verify outputs meet content relevance and audit standards.

  • Maintain documentation for retrieval and ranking logic.

  • Track and mitigate bias for regulatory compliance.

How to Access Your Report

After you submit your raw metrics, you receive a unique Report ID (for example: gW3eQpV63sRTrQey9uJPNp). To access your detailed report:

  1. Save Your Report ID:
    This ID is your key to retrieve the full evaluation at any time.

  2. Access the Report:

    • Go to the Trace Metric API dashboard or portal.

    • Enter your Report ID in the “View Report” section.

    • Instantly view, download, or share your comprehensive evaluation.

  3. Share with Your Team:
    You can provide your Report ID to colleagues, auditors, or stakeholders for transparency and collaboration.
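If the Trace Metric API also exposes an HTTP endpoint for report retrieval, fetching your evaluation by Report ID might look like the sketch below. The URL and response fields are assumptions; the dashboard flow described above is the documented route.

```python
# Sketch of retrieving a report by its Report ID over HTTP.
# The endpoint URL, auth header, and response field names are assumptions.
import requests

REPORT_ID = "gW3eQpV63sRTrQey9uJPNp"   # example Report ID from this article

response = requests.get(
    f"https://api.example.com/trace-metric/reports/{REPORT_ID}",   # placeholder URL
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    timeout=30,
)
response.raise_for_status()
report = response.json()

print("Health Index:", report.get("health_index"))
print("Star Rating:", report.get("star_rating"))
```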

Why This Report Matters

  • Pinpoints Strengths & Weaknesses:
    Quickly see where your model excels and where it needs work.

  • Guides Compliance:
    Helps you meet standards like the EU AI Act and NIST AI RMF.

  • Drives Better User Experience:
    Focuses your improvements on what matters most to users.

  • Informs Business Decisions:
    Connects technical risks to real-world impacts, helping you prioritize effectively.

Summary Table

| Metric | Score | Risk Level | Action Needed |
|---|---|---|---|
| Answer Relevance | 0.88 | Medium | Improve alignment & accuracy |
| Precision | 0.75 | Medium | Optimize context inclusion |
| Recall | 0.74 | Medium | Ensure complete responses |
| Faithfulness | 1.0 | Low | Maintain factual grounding |
| Bias | 0.0 | Low | Continue monitoring |
| Hallucination | 1.0 | High | Maintain strict fact-checking |

Keep your Report ID safe and use it to revisit your evaluation any time. The Trace Metric API gives you the clarity and guidance you need to make your AI models safer, more reliable, and more aligned with your goals.
