Answer
Strength Score parameters are the individual measures ANVL uses to evaluate how thoroughly a workflow was completed. Each parameter looks for a specific quality signal, such as whether the user added photos, entered meaningful text, answered enough questions, spent enough time, or raised issues when needed.
At a simple level, the parameters fall into a few categories:
Issue identification and escalation — did the user surface problems, raise flags, or trigger follow-up?
Evidence and observation — did the user provide visual proof, such as photos?
Time and engagement — did the user spend enough time and interact with enough of the workflow?
Text detail and quality — did the user provide meaningful written input?
Required responses — were critical required questions completed thoroughly?
Checklist discipline — did the user complete structured checklist questions?
Overall completion coverage — how much of the workflow was meaningfully completed?
Strength Score works by combining:
the selected parameters
the weight assigned to each parameter
any required threshold values
Each parameter checks whether a specific behavior happened and contributes to the final score based on its weight.
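As a sketch, the combination described above can be expressed as a weighted sum. The parameter names, scores, and weights below are invented for illustration, not ANVL's actual configuration schema:

```python
# Illustrative sketch of how a Strength Score combines parameter results
# with their weights. Names and values here are invented examples.

def strength_score(parameter_scores: dict, weights: dict) -> float:
    """Each parameter score is between 0 and 1; its weight scales how
    much it contributes to the total."""
    return sum(weights[name] * parameter_scores[name] for name in weights)

# Three hypothetical parameters with weights summing to 1.0:
scores = {"photos": 1.0, "duration": 0.0, "flags": 1.0}
weights = {"photos": 0.5, "duration": 0.3, "flags": 0.2}
print(round(strength_score(scores, weights), 2))  # 0.5 + 0.0 + 0.2 = 0.7
```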
Use the tables below to understand:
what each parameter measures
what behavior it encourages
how the system evaluates it
when that parameter should be used
This matters because a Strength Score is only useful when the selected parameters match the behavior you actually want to encourage. Poor parameter selection creates noise instead of insight.
Steps
Issue Identification & Escalation
Encourages users to surface problems and stop unsafe work
| Label | Data Name | What It Encourages | How It’s Evaluated |
| --- | --- | --- | --- |
| Flags | | Manual issue reporting | Score = 1 if the user raised at least one flag during the workflow; otherwise 0. |
| Interventions | | Triggering follow-up actions | Score = 1 if the workflow triggered any intervention (follow-up path); otherwise 0. |
| Stops | | Stop-work behavior | Score = 1 if the workflow included any stop-work event; otherwise 0. |
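A minimal sketch of how these three binary parameters might be evaluated, assuming a simple list of event labels (the real data model is not documented here):

```python
# Sketch of a binary event parameter (Flags, Interventions, Stops):
# score is 1 if at least one matching event occurred, otherwise 0.
# The event representation is an assumption for illustration.

def event_parameter(events: list, event_type: str) -> int:
    return 1 if any(e == event_type for e in events) else 0

events = ["flag", "flag", "stop"]
print(event_parameter(events, "flag"))          # 1
print(event_parameter(events, "intervention"))  # 0
```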
Evidence & Observation
Encourages users to observe conditions and provide proof
| Label | Data Name | What It Encourages | How It’s Evaluated |
| --- | --- | --- | --- |
| Photos | | Visual evidence | Score = 1 if the number of photos submitted divided by the number of photo questions is greater than or equal to the parameter value; otherwise 0. |
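The ratio check can be sketched as follows; the function name, arguments, and the zero-questions behavior are assumptions for illustration:

```python
# Sketch of the Photos parameter: the ratio of photos submitted to photo
# questions is compared against the configured parameter value.

def photos_parameter(photos_submitted: int, photo_questions: int,
                     threshold: float) -> int:
    if photo_questions == 0:
        return 0  # assumption: no photo questions means no credit
    ratio = photos_submitted / photo_questions
    return 1 if ratio >= threshold else 0

print(photos_parameter(3, 4, 0.75))  # 1 (3/4 = 0.75 meets the threshold)
print(photos_parameter(2, 4, 0.75))  # 0
```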
Time & Engagement
Discourages rushing and pencil-whipping
| Label | Data Name | What It Encourages | How It’s Evaluated |
| --- | --- | --- | --- |
| Duration (seconds) | | Time-on-task | Score = 1 if total completion time is greater than or equal to the parameter value; otherwise 0. |
| Questions Viewed | | Minimum engagement | Score = 1 if the number of questions viewed is greater than or equal to the parameter value; otherwise 0. |
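Both engagement parameters are simple pass/fail threshold checks; a sketch with illustrative values:

```python
# Sketch of the two engagement parameters: each is a pass/fail check
# against its configured parameter value. All values are illustrative.

def engagement_parameters(duration_s: int, questions_viewed: int,
                          min_seconds: int, min_viewed: int) -> dict:
    return {
        "duration": 1 if duration_s >= min_seconds else 0,
        "questions_viewed": 1 if questions_viewed >= min_viewed else 0,
    }

print(engagement_parameters(200, 18, min_seconds=120, min_viewed=15))
# {'duration': 1, 'questions_viewed': 1}
```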
Text Detail & Quality
Encourages thoughtful written responses
| Label | Data Name | What It Encourages | How It’s Evaluated |
| --- | --- | --- | --- |
| % Non-Blank Text | | Written participation | Score = the percentage of open-text questions with any text entered. |
| Longest Text Response | | At least one detailed response | Score = 1 if the longest text response meets the minimum character threshold; otherwise 0. |
| Average Text Length | | Consistent detail across responses | Score = 1 if the average character count across all open-text responses meets the threshold; otherwise 0. |
| # Text Responses Completed | | Minimum written input | Score = 1 if the number of open-text questions with text responses meets the threshold; otherwise 0. |
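Note that % Non-Blank Text yields a fractional score while the other three are pass/fail. A combined sketch, with illustrative thresholds and an assumed list-of-strings response format:

```python
# Sketch of the four text parameters. % Non-Blank Text is fractional;
# the other three are pass/fail. Response data and thresholds are
# invented for illustration.

def text_parameters(responses: list, min_longest: int,
                    min_average: int, min_count: int) -> dict:
    non_blank = [r for r in responses if r.strip()]
    lengths = [len(r) for r in non_blank]
    return {
        "pct_non_blank": len(non_blank) / len(responses) if responses else 0.0,
        "longest_ok": 1 if lengths and max(lengths) >= min_longest else 0,
        "average_ok": 1 if lengths and sum(lengths) / len(lengths) >= min_average else 0,
        "count_ok": 1 if len(non_blank) >= min_count else 0,
    }

result = text_parameters(["Valve leaking at flange", "", "ok"],
                         min_longest=20, min_average=10, min_count=2)
# 2 of 3 responses non-blank; longest is 23 chars; average is 12.5
```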
Required Responses
Ensures specific critical questions are answered thoroughly
| Label | Data Name | What It Encourages | How It’s Evaluated |
| --- | --- | --- | --- |
| Required Text – Minimum Characters | | Compliance on required text questions | Score = 1 only if every open-text question tagged as required has a response meeting the character threshold; otherwise 0. |
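Because every required question must pass, a single short response fails the whole parameter. A sketch with an assumed list of required responses:

```python
# Sketch of Required Text – Minimum Characters: score is 1 only if every
# required open-text response meets the character threshold. The data
# shape is an assumption for illustration.

def required_text_parameter(required_responses: list, min_chars: int) -> int:
    return 1 if all(len(r) >= min_chars for r in required_responses) else 0

print(required_text_parameter(["Checked all anchor points", "OK"], 10))  # 0
print(required_text_parameter(["Checked all anchor points"], 10))        # 1
```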
Checklist Discipline
Prevents skipping structured questions
| Label | Data Name | What It Encourages | How It’s Evaluated |
| --- | --- | --- | --- |
| % Checklist Answered | | Avoid skipping checklist questions | Score = the percentage of checklist questions where at least one option was selected (including “None” or “N/A”). |
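A sketch of the percentage computation; any selection counts, including "None" or "N/A". The per-question list-of-selections representation is assumed:

```python
# Sketch of % Checklist Answered: a question counts as answered if any
# option was selected, including "None" or "N/A". Data shape assumed.

def pct_checklist_answered(selections: list) -> float:
    """selections[i] holds the options picked for checklist question i."""
    if not selections:
        return 0.0
    answered = sum(1 for picked in selections if len(picked) > 0)
    return answered / len(selections)

print(round(pct_checklist_answered([["Pass"], ["N/A"], []]), 3))  # 0.667
```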
Overall Completion Coverage
Encourages broad engagement with the workflow
| Label | Data Name | What It Encourages | How It’s Evaluated |
| --- | --- | --- | --- |
| % Completed Questions | | Engagement with text/photo questions | Score = the percentage of open-text and photo questions that received either text or photo input. |
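A sketch of the coverage calculation, assuming each question is a dict with optional `text` and `photos` fields (an invented shape):

```python
# Sketch of % Completed Questions: the share of open-text and photo
# questions that received any text or photo input. Question shape is
# an assumption for illustration.

def pct_completed(questions: list) -> float:
    if not questions:
        return 0.0
    done = sum(1 for q in questions
               if q.get("text", "").strip() or q.get("photos", 0) > 0)
    return done / len(questions)

qs = [{"text": "Guard rail loose"}, {"photos": 2}, {"text": ""}, {}]
print(pct_completed(qs))  # 0.5
```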
Weights vs Parameters (Critical Distinction)
Parameter = minimum threshold to earn the point
Weight = how much that behavior contributes to the total score
A parameter answers “Did it happen?”
A weight answers “How much does it matter?”
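A toy example of the distinction, using a hypothetical duration parameter: the parameter value decides whether the point is earned at all, and the weight decides how much that point is worth.

```python
# Parameter = threshold ("did it happen?"); weight = contribution
# ("how much does it matter?"). Values are illustrative.

def duration_contribution(seconds: int, parameter: int, weight: float) -> float:
    earned = 1 if seconds >= parameter else 0   # parameter: pass/fail
    return earned * weight                      # weight: size of the reward

print(duration_contribution(200, parameter=120, weight=0.3))  # 0.3
print(duration_contribution(90,  parameter=120, weight=0.3))  # 0.0
```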
Configuration Rules (Do Not Skip)
Not all parameters should be used together
Weights should generally sum to 1.0
Binary behaviors (flags, stops) should rarely dominate the score
Strength Score should reinforce design, not compensate for bad questions
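The first three rules can be checked mechanically. A hypothetical validation sketch; the config shape, the parameter names treated as binary, and the 0.5 dominance cap are all assumptions, not documented ANVL behavior:

```python
# Hypothetical configuration check for the rules above: weights should
# sum to 1.0, and binary behaviors should not dominate the score.

BINARY_PARAMS = {"flags", "interventions", "stops"}  # assumed set

def validate_config(weights: dict) -> list:
    problems = []
    if abs(sum(weights.values()) - 1.0) > 1e-6:
        problems.append("weights do not sum to 1.0")
    binary_total = sum(w for p, w in weights.items() if p in BINARY_PARAMS)
    if binary_total > 0.5:  # assumed cap on binary dominance
        problems.append("binary behaviors dominate the score")
    return problems

print(validate_config({"flags": 0.6, "photos": 0.4}))
# ['binary behaviors dominate the score']
```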
