
Validate & Troubleshoot Custom Workflow Scoring (Workflow Manager – Advanced)

Verify that scoring and reporting behave as expected and resolve common issues before they affect users or insights.

Written by Lauren Baird
Updated this week

Primary Role: Workflow Manager (Advanced – Editor Admin)
Secondary Role: Business Administrator
Learning Focus: Apply
Where: Workflow Editor & Reporting (Admin access required)


🧭 Before You Start

This article is for Workflow Managers with Editor Admin permission.
Always validate changes using test workflows or a limited rollout before broad use.


🎯 Why This Matters

Even small configuration issues can lead to missing data, misleading scores, or broken reports. Validation ensures your workflows produce reliable, trustworthy results.


🛠️ Step 1: Complete Test Workflows

→ Follow the steps here: How to Test a Workflow

Complete a test workflow for each test case.


Best practice:
Complete at least:

  • One low-score scenario

  • One expected “good” scenario

  • One high-score scenario

👉 Use the Workflow Scoring Test Template to plan scenarios.
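For example, a simple test plan might look like this (point values are illustrative; use your own workflow's scoring):

  • Low-score scenario: poor responses on most scored questions → expect roughly 5 of 20 points (25%)

  • Expected “good” scenario: typical responses → expect roughly 15 of 20 points (75%)

  • High-score scenario: best response on every scored question → expect 20 of 20 points (100%)

Writing down the expected score before you run each test makes mismatches obvious in Step 3.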


🛠️ Step 2: Open the Workflow Summary

  1. Open ANVL Insights

  2. Navigate to Live Feed

  3. Locate a recently completed test workflow

  4. Click the workflow to open the Workflow Summary


🛠️ Step 3: Review Scores

In the workflow summary, review:

  • Points earned vs. maximum possible points

  • Percentage score

  • Whether scored questions contributed as expected

Compare results to your expected outcomes.

Validate that the Workflow Score matches the expected outcome for the test case, then review individual question scores to verify each response was scored correctly.
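For example (illustrative numbers): if a workflow has four scored questions worth 5 points each, the maximum possible score is 20 points. A test completion that earns 15 points should show a percentage score of 15 ÷ 20 = 75%. If the summary shows a different total or percentage, one of the questions may not be contributing as expected.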


🛠️ Step 4: Adjust if Needed

If results don’t match expectations:

  • Review maxScore values on scored questions

  • Review response options and question design

  • Update configuration in Editor

  • Republish and re-test

Repeat until results align with intent.
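For example (illustrative): if a question’s maxScore is 10 but its highest-value response option only awards 5 points, even a perfect completion can never reach 100% on that question. Aligning the response option values with maxScore, or lowering maxScore, restores the expected range.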


Common Testing Issues

  • Scores not appearing → Question tags not configured for scoring

  • Unexpected scores → maxScore or response values misaligned

  • All scores too similar → Scale too narrow
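For example (illustrative): if every scored question is worth just 1 point, a 10-question workflow can only produce scores in 10% increments, and most completions will land within a point or two of each other. Weighting key questions more heavily, such as 0–5 points instead of 0–1, spreads results enough to tell scenarios apart.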


⚠️ Watch Out For…

  • Making multiple changes at once

  • Testing only one completion scenario

  • Validating in production without a test plan

  • Assuming reports will “self-correct”

Most issues are easier to fix before broad rollout.


🔑 Key Takeaways

  • Always validate scoring and reporting after changes

  • Most issues stem from inconsistency or missed testing

  • Early validation prevents downstream rework
