Primary Role: Workflow Manager (Advanced – Editor Admin)
Secondary Role: Business Administrator
Learning Focus: Apply
Where: Workflow Editor & Reporting (Admin access required)
🧭 Before You Start
This article explains how Strength Score works technically.
For recommended formulas by workflow type, see Recommended Strength Score Formulas by Program.
🎯 Why This Matters
Strength Score is only effective when its parameters align with the behaviors you want to encourage. Misaligned parameters create noise instead of insight.
📝 How Strength Score Works (At a System Level)
Strength Score calculates a score out of five (5.00) for each completed workflow based on:
A set of binary or ratio-based parameters
Weights assigned to each parameter
Optional threshold values (the "parameter value" each behavior is measured against)
Each parameter evaluates whether a specific quality behavior occurred.
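At a high level, the final score is a weighted combination of the individual parameter results, scaled to 5.00. The sketch below is a simplified illustration of that idea, not the platform's actual implementation; the parameter names, values, and weights are hypothetical.

```python
# Hypothetical sketch of a Strength Score calculation (illustrative only;
# not the platform's actual implementation). Each parameter contributes a
# value between 0 and 1: binary parameters return 0 or 1, ratio-based
# parameters return a fraction.

def strength_score(parameter_values: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Combine parameter results into a score out of 5.00."""
    weighted_sum = sum(
        weights[name] * value
        for name, value in parameter_values.items()
    )
    return round(5.0 * weighted_sum, 2)

# Example: three hypothetical parameters with weights summing to 1.0
parameter_values = {
    "flags": 1.0,               # binary: at least one flag was raised
    "photos": 0.0,              # binary: photo ratio fell below the threshold
    "pct_non_blank_text": 0.8,  # ratio: 80% of open-text questions answered
}
weights = {"flags": 0.4, "photos": 0.3, "pct_non_blank_text": 0.3}

print(strength_score(parameter_values, weights))  # 3.2 (out of 5.00)
```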
📋 Strength Score Parameters Overview
🔹 Issue Identification & Escalation
Encourages users to surface problems and stop unsafe work
| Label | Data Name | What It Encourages | How It's Evaluated |
| --- | --- | --- | --- |
| Flags | | Manual issue reporting | Score = 1 if the user raised at least one flag during the workflow; otherwise 0. |
| Interventions | | Triggering follow-up actions | Score = 1 if the workflow triggered any intervention (follow-up path); otherwise 0. |
| Stops | | Stop-work behavior | Score = 1 if the workflow included any stop-work event; otherwise 0. |
📸 Evidence & Observation
Encourages users to observe conditions and provide proof
| Label | Data Name | What It Encourages | How It's Evaluated |
| --- | --- | --- | --- |
| Photos | | Visual evidence | Score = 1 if the number of photos submitted divided by the number of photo questions is greater than or equal to the parameter value; otherwise 0. |
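As an illustration of how a ratio-based parameter like Photos could be evaluated against its parameter value (the function name and the handling of workflows with no photo questions are assumptions, not documented product behavior):

```python
# Hypothetical sketch of a ratio-vs-threshold parameter such as Photos
# (illustrative only; not the platform's actual implementation).

def photos_parameter(photos_submitted: int,
                     photo_questions: int,
                     threshold: float) -> int:
    """Return 1 if the photo ratio meets the parameter value, else 0."""
    if photo_questions == 0:
        return 0  # assumption: no photo questions means the point is not earned
    ratio = photos_submitted / photo_questions
    return 1 if ratio >= threshold else 0

print(photos_parameter(photos_submitted=2, photo_questions=4, threshold=0.5))  # 1
print(photos_parameter(photos_submitted=1, photo_questions=4, threshold=0.5))  # 0
```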
⏱️ Time & Engagement
Discourages rushing and pencil-whipping
| Label | Data Name | What It Encourages | How It's Evaluated |
| --- | --- | --- | --- |
| Duration (seconds) | | Time-on-task | Score = 1 if total completion time is greater than or equal to the parameter value; otherwise 0. |
| Questions Viewed | | Minimum engagement | Score = 1 if the number of questions viewed is greater than or equal to the parameter value; otherwise 0. |
📝 Text Detail & Quality
Encourages thoughtful written responses
| Label | Data Name | What It Encourages | How It's Evaluated |
| --- | --- | --- | --- |
| % Non-Blank Text | | Written participation | Score = the percentage of open-text questions with any text entered. |
| Longest Text Response | | At least one detailed response | Score = 1 if the longest text response meets the minimum character threshold; otherwise 0. |
| Average Text Length | | Consistent detail across responses | Score = 1 if the average character count across all open-text responses meets the threshold; otherwise 0. |
| # Text Responses Completed | | Minimum written input | Score = 1 if the number of open-text questions with text responses meets the threshold; otherwise 0. |
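Note the mix of scoring styles in this group: % Non-Blank Text contributes partial credit as a percentage, while the other three are pass/fail against a threshold. A minimal sketch of that difference, using made-up responses and a hypothetical 20-character threshold:

```python
# Hypothetical sketch contrasting a ratio-valued parameter (% Non-Blank Text)
# with a threshold-based one (Longest Text Response). Illustrative only.

responses = ["Guard rail loose near dock 3", "", "Checked and tagged out"]

# % Non-Blank Text: fraction of open-text questions with any text entered
pct_non_blank = sum(1 for r in responses if r.strip()) / len(responses)

# Longest Text Response: 1 if any single response meets the character threshold
min_characters = 20  # hypothetical parameter value
longest_ok = 1 if max(len(r) for r in responses) >= min_characters else 0

print(round(pct_non_blank, 2))  # 0.67 -- partial credit
print(longest_ok)               # 1    -- threshold met by the first response
```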
🧾 Required Responses
Ensures specific critical questions are answered thoroughly
| Label | Data Name | What It Encourages | How It's Evaluated |
| --- | --- | --- | --- |
| Required Text – Minimum Characters | | Compliance on required text questions | Score = 1 only if every open-text question tagged as required has a response meeting the character threshold; otherwise 0. |
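This check is all-or-nothing: a single required question below the character threshold forfeits the whole point. A small sketch under that assumption, with hypothetical responses and threshold:

```python
# Hypothetical sketch of the all-or-nothing required-text check
# (illustrative only; not the platform's actual implementation).

def required_text_parameter(required_responses: list[str],
                            min_characters: int) -> int:
    """Return 1 only if every required response meets the character threshold."""
    return 1 if all(len(r.strip()) >= min_characters for r in required_responses) else 0

print(required_text_parameter(["Valve locked out", "PPE verified on site"], 10))  # 1
print(required_text_parameter(["Valve locked out", "ok"], 10))                    # 0
```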
☑️ Checklist Discipline
Prevents skipping structured questions
| Label | Data Name | What It Encourages | How It's Evaluated |
| --- | --- | --- | --- |
| % Checklist Answered | | Avoid skipping checklist questions | Score = the percentage of checklist questions where at least one option was selected (including “None” or “N/A”). |
🧩 Overall Completion Coverage
Encourages broad engagement with the workflow
| Label | Data Name | What It Encourages | How It's Evaluated |
| --- | --- | --- | --- |
| % Completed Questions | | Engagement with text/photo questions | Score = the percentage of open-text and photo questions that received either text or photo input. |
🛠️ Weights vs Parameters (Critical Distinction)
Parameter = minimum threshold to earn the point
Weight = how much that behavior contributes to the total score
A parameter answers “Did it happen?”
A weight answers “How much does it matter?”
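A quick numeric illustration of the distinction, using hypothetical thresholds and weights: the parameter value decides whether the point is earned at all, and the weight decides how much an earned point moves the 5.00-point total.

```python
# Hypothetical illustration: the same behavior (a workflow that took 95 seconds)
# scored against a Duration parameter under different configurations.

duration_seconds = 95

# Parameter value (threshold): did the behavior happen?
earned = 1 if duration_seconds >= 60 else 0       # 60s threshold -> point earned
not_earned = 1 if duration_seconds >= 120 else 0  # 120s threshold -> point missed

# Weight: how much does the earned point matter to the 5.00-point total?
print(5.0 * 0.10 * earned)      # 0.5 -- low-weight duration parameter
print(5.0 * 0.30 * earned)      # 1.5 -- high-weight duration parameter
print(5.0 * 0.30 * not_earned)  # 0.0 -- weight is irrelevant if the threshold is missed
```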
⚠️ Configuration Rules (Do Not Skip)
Not all parameters should be used together
Weights should generally sum to 1.0 (see the sketch after this list)
Binary behaviors (flags, stops) should rarely dominate the score
Strength Score should reinforce design, not compensate for bad questions
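One way to sanity-check the weights rule before saving a configuration; the dictionary of weights below is a hypothetical stand-in for however your workflow's parameters are actually configured:

```python
# Hypothetical configuration check: warn if parameter weights drift from 1.0.
import math

weights = {
    "flags": 0.2,
    "duration_seconds": 0.2,
    "photos": 0.2,
    "pct_non_blank_text": 0.4,
}

total = sum(weights.values())
if not math.isclose(total, 1.0, abs_tol=1e-9):
    print(f"Warning: weights sum to {total:.2f}, not 1.0")
else:
    print("Weights sum to 1.0")
```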
🔑 Key Takeaways
Strength Score is behavior-based, not correctness-based
Each parameter measures a specific quality signal
Thoughtful selection matters more than quantity
