Scorebuddy Formulas Explained:
Scorebuddy works off two formulas: the Overall Percentage Score and the Actual Section Score.
It is important to note that Scorebuddy does not work off averages.
The Overall Percentage Score formula is shown below; as you can see, there is no mention of averages.
(Sum of Answers – Sum of Failed Sections) / (Maximum Score) x 100
Note: If there is a Fail All and the Fail All Zeros rule is active, the sum of the answers will be zero, giving an overall score of zero.
This formula is based on a hierarchy in which a Fail Section overrides an N/A, and a Fail All overrides everything.
The Actual Section Score formula is shown below; again, there is no mention of averages.
(Sum of Answers) / (Maximum Score, where the Maximum Score of any N/A answer = 0) x 100
Note: If there is a Fail All and the Fail All Zeros rule is active, the sum of the answers will be zero, giving an overall score of zero.
This formula shows the potential section score: it disregards Fail Section by not subtracting the Sum of Failed Sections.
In this formula, Fail Section does not override N/A; instead, the Maximum Score per answer is set to zero wherever an N/A is scored (see the sketch below).
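For illustration, both formulas can be computed from per-answer data as in the sketch below. This is a minimal sketch, not Scorebuddy's own implementation: the field names answer_value, max_score, fail_section and not_applicable, and the fail_all / fail_all_zeros flags, are assumptions based on the report formulas described further down.

def overall_percentage_score(answers, fail_all=False, fail_all_zeros=True):
    # (Sum of Answers - Sum of Failed Sections) / (Maximum Score) x 100
    if fail_all and fail_all_zeros:
        return 0.0  # Fail All overrides everything when the Fail All Zeros rule is active
    sum_answers = sum(a["answer_value"] for a in answers)
    sum_failed_sections = sum(a["answer_value"] for a in answers if a["fail_section"])
    # N/A answers are assumed to contribute 0 to the Maximum Score, as in the report formula later on
    maximum_score = sum(a["max_score"] for a in answers if not a["not_applicable"])
    return (sum_answers - sum_failed_sections) / maximum_score * 100 if maximum_score else 0.0

def actual_section_score(section_answers, fail_all=False, fail_all_zeros=True):
    # (Sum of Answers) / (Maximum Score, with Maximum Score = 0 where N/A) x 100; Fail Section is not subtracted
    if fail_all and fail_all_zeros:
        return 0.0
    sum_answers = sum(a["answer_value"] for a in section_answers)
    maximum_score = sum(0 if a["not_applicable"] else a["max_score"] for a in section_answers)
    return sum_answers / maximum_score * 100 if maximum_score else 0.0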
Summary of Endpoints needed:
/scores:
get:
summary: 'Request Scores'
description: "This call will request Scores from Scorebuddy. This would be the primary endpoint to use for extracting Score data from Scorebuddy into a data-mining system. Manipulation of the `from_last_edit_date` and `to_last_edit_date` filters to gather the latest data is recommended. See `/scores/{score_id}` for relevant data links."
'/scorecards/{scorecard_id}/versions/{version}/questions/{question_id}/answers/{answer_key}':
get:
summary: 'Request a specific Answer, for a specific Question, for a specific Version, of a specific Scorecard.'
description: 'This call will request a specific Answer, for a specific Question, for a specific Version, of a specific Scorecard from Scorebuddy.'
'/scorecards/{scorecard_id}/versions/{version}/questions/{question_id}':
get:
summary: 'Request a specific Question, for a specific Version, of a specific Scorecard, from Scorebuddy.'
description: 'This call will request a specific Question, of a specific Version, of a specific Scorecard, from Scorebuddy.'
'/scorecards/{scorecard_id}/versions/{version}/questions':
get:
summary: 'Request Questions, for a specific Version, of a specific Scorecard.'
description: 'This call will request the Questions, for a specific Version, of a specific Scorecard, from Scorebuddy.'
/staff/supervisors:
get:
summary: 'Request Supervisors. Identical to /staff but limited to Supervisors by default.'
description: 'This call will request Supervisors from Scorebuddy.'
/teams:
get:
summary: 'Request Teams'
description: 'This call will request Teams from Scorebuddy.'
'/teams/{team_id}':
get:
summary: 'Request a specific Team'
description: 'This call will request a specific Team from Scorebuddy.'
/groups:
get:
summary: 'Request Groups'
description: 'This call will request Groups from Scorebuddy.'
'/groups/{group_id}':
get:
summary: 'Request a specific Group.'
description: 'This call will request a specific Group from Scorebuddy.'
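To show how these endpoints fit together, the sketch below wraps the calls used by the reports that follow. It assumes the filters are passed as query-string parameters; the base URL, authentication header, and the example ids and dates are placeholders (assumptions, not part of the endpoint summary above). See /scorebuddy.yaml for the authoritative definitions.

import requests  # any HTTP client will do; requests is only an example

BASE_URL = "https://<your-scorebuddy-host>/api"      # placeholder host
HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder auth; check /scorebuddy.yaml

def get(path, **filters):
    # GET a Scorebuddy endpoint, passing any filters as query-string parameters
    response = requests.get(BASE_URL + path, headers=HEADERS, params=filters)
    response.raise_for_status()
    return response.json()

# Example calls against the endpoints summarised above (ids and dates are illustrative):
scores = get("/scores", from_score_date="2024-01-01", to_score_date="2024-01-31")
teams = get("/teams")
supervisors = get("/staff/supervisors")
questions = get("/scorecards/12/versions/3/questions")
answer = get("/scorecards/12/versions/3/questions/45/answers/2")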
Depending on which report you’d like to replicate, you’ll need to amend your filter parameters, e.g.:
For “1.1 Overall Staff Trend”
Endpoint needed: /scores
Filters needed: from_score_date, to_score_date, group_id, scorecard_id
Once the scores for the needed dates have been obtained, we need to get the answers from all the questions for those scores. We do this by using the values given to us in the “/scores” endpoint call for each question to make a call to /scorecards/{scorecard_id}/versions/{version}/questions/{question_id}/answers/{answer_key}.
Once we have all those, the values are used within the formula given:
((sum of all “answer_value”) – (sum of all “answer_value” where “fail_section” is true)) / (sum of all “max_score” where "answers->not_applicable" is false) * 100
“max_score” is obtained from each of the scores returned by the /scores endpoint used initially.
Once the data is gathered, it is broken down by staff member, along with the total distribution of the percentages scored (see the sketch below).
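A minimal sketch of the 1.1 aggregation, assuming each score returned by /scores carries a “staff_id” and that its per-question answer details (answer_value, max_score, fail_section, not_applicable) have already been collected via the Answers endpoint as described above:

from collections import defaultdict

def overall_percentage(answers):
    # ((sum of answer_value) - (sum of answer_value where fail_section)) / (sum of max_score where not N/A) * 100
    max_score = sum(a["max_score"] for a in answers if not a["not_applicable"])
    failed = sum(a["answer_value"] for a in answers if a["fail_section"])
    return (sum(a["answer_value"] for a in answers) - failed) / max_score * 100 if max_score else 0.0

def overall_staff_trend(scores):
    # scores: /scores results, each with 'staff_id' and its collected 'answers' (assumed shape)
    by_staff = defaultdict(list)
    for score in scores:
        by_staff[score["staff_id"]].append(overall_percentage(score["answers"]))
    return dict(by_staff)  # the report then shows the distribution of these percentages per staff member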
For “2.1 Supervisor by section”
Endpoint needed: /scores
Filters needed: from_score_date, to_score_date, group_id, scorecard_id
Once the scores for the needed dates have been obtained, we need to get the supervisors’ details from the /staff/supervisors endpoint.
From the scores endpoint we can use the scorecard_id and version number to get the different sections for each of the questions by using /scorecards/{scorecard_id}/versions/{version}.
Once we have all those, the values are used within the formula given:
((sum of all “answer_value”) – (sum of all “answer_value” where “fail_section” is true)) / (sum of all “max_score” where "answers->not_applicable" is false) * 100
A count of the number of results for each supervisor is also needed.
As per the image below, the scores need to be separated by section for each supervisor. Totals are also provided for each supervisor and for each section (see the sketch below).
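A sketch of the 2.1 breakdown. It assumes each answer row has been tagged with its score’s “supervisor_id”, its parent score id, and its question’s section name (taken from the scorecard version); all of these field names are assumptions:

from collections import defaultdict

def section_pct(rows):
    # Same report formula, applied to one group of answers
    max_score = sum(r["max_score"] for r in rows if not r["not_applicable"])
    failed = sum(r["answer_value"] for r in rows if r["fail_section"])
    return (sum(r["answer_value"] for r in rows) - failed) / max_score * 100 if max_score else 0.0

def supervisor_by_section(answer_rows):
    # answer_rows: answers tagged with 'supervisor_id', 'section' and parent 'score_id' (assumed shape)
    cells = defaultdict(list)
    by_supervisor = defaultdict(list)
    by_section = defaultdict(list)
    results_per_supervisor = defaultdict(set)
    for row in answer_rows:
        cells[(row["supervisor_id"], row["section"])].append(row)
        by_supervisor[row["supervisor_id"]].append(row)
        by_section[row["section"]].append(row)
        results_per_supervisor[row["supervisor_id"]].add(row["score_id"])
    return {
        "cells": {key: section_pct(rows) for key, rows in cells.items()},
        "supervisor_totals": {sup: section_pct(rows) for sup, rows in by_supervisor.items()},
        "section_totals": {sec: section_pct(rows) for sec, rows in by_section.items()},
        "result_counts": {sup: len(ids) for sup, ids in results_per_supervisor.items()},
    }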
For “2.2 Points of Failure by Supervisor”
Endpoint: /staff/supervisors to get the list of supervisors, which includes “supervisor_id”.
Then use endpoint: /scores with:
Filters needed: from_score_date, to_score_date, group_id, scorecard_id, supervisor_id
This report is broken down by supervisors.
Once the scores for the needed dates have been obtained:
Count the distinct “staff_id” values for each “supervisor_id” to get the “No. of Staff” value in the report.
Count the scores for each “supervisor_id” to get the “No. of Events” value in the report.
Sum up all “fail_all” values from the /scores endpoint to get the total for the “Fail All” section of the report.
For “Fail Section”:
We now need to determine which of the questions for those scores would trigger a “Fail Section”. We do this by using the values given to us in the “/scores” endpoint call to make a call to /scorecards/{scorecard_id}/versions/{version}/questions.
Get the “scorecard_id” and “version” from the /scores endpoint call.
This call tells us whether or not the question is a “fail_section” trigger, via the “fail_section” key-value pair.
Once the questions for that scorecard have been obtained, you can call the endpoint below to get the specific results for the questions that would trigger a “fail_section” if failed. You’ll need the “question_id” of the questions that can fail a section.
Then use endpoint: /scorecards/{scorecard_id}/versions/{version}/questions/{question_id}/answers/{answer_key}
Use the results from /scores for “scorecard_id”, “version” and “answer_key”, and the results from /scorecards/{scorecard_id}/versions/{version}/questions for “question_id”, in the endpoint call above.
You will need to sum up all the answers that have “fail_section” with a “true” value.
“Fail All %” is “Fail All” / “No. of Events” * 100, calculated for each supervisor (see the sketch below).
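A sketch of the 2.2 counts per supervisor. It assumes the /scores results have already been enriched with a fail_section_count per score (the number of its answers resolved as “fail_section”: true via the Answers endpoint, as described above); all field names are assumptions:

from collections import defaultdict

def points_of_failure_by_supervisor(scores):
    # scores: /scores results tagged with 'supervisor_id', 'staff_id', 'fail_all' and a
    # 'fail_section_count' (answers resolved as fail_section=true) -- assumed shape
    rows = defaultdict(lambda: {"staff": set(), "events": 0, "fail_all": 0, "fail_section": 0})
    for score in scores:
        row = rows[score["supervisor_id"]]
        row["staff"].add(score["staff_id"])            # feeds "No. of Staff"
        row["events"] += 1                             # feeds "No. of Events"
        row["fail_all"] += 1 if score["fail_all"] else 0
        row["fail_section"] += score["fail_section_count"]
    return {
        supervisor: {
            "no_of_staff": len(row["staff"]),
            "no_of_events": row["events"],
            "fail_all": row["fail_all"],
            "fail_section": row["fail_section"],
            "fail_all_pct": row["fail_all"] / row["events"] * 100 if row["events"] else 0.0,
        }
        for supervisor, row in rows.items()
    }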
For “4.1 Overall trend per scorecard”
Endpoint needed: /scores
Filters needed: from_score_date, to_score_date, group_id, scorecard_id
Once the scores for the needed dates have been obtained, we need to get the answers from all the questions for those scores. We do this by using the values given to us in the “/scores” endpoint call for each question to make a call to /scorecards/{scorecard_id}/versions/{version}/questions/{question_id}/answers/{answer_key}.
Once we have all those, the values are used within the formula given:
((sum of all “answer_value”) – (sum of all “answer_value” where “fail_section” is true)) / (sum of all “max_score” where "answers->not_applicable" is false) * 100
“max_score” is obtained from each of the scores returned by the /scores endpoint used initially.
Once the data is gathered, it is broken down by the time period required (week, month, or year), along with the total distribution of the percentages scored (see the sketch below).
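A sketch of the 4.1 time bucketing. It takes the overall percentages already computed per score (as in the 1.1 sketch) together with each score’s date, and assumes the date is ISO-formatted; the input shape is an assumption:

from collections import defaultdict
from datetime import datetime

def overall_trend(scored, period="month"):
    # scored: (score_date, overall_percentage) pairs, with the percentage computed as in the 1.1 sketch
    fmt = {"week": "%G-W%V", "month": "%Y-%m", "year": "%Y"}[period]
    buckets = defaultdict(list)
    for score_date, pct in scored:
        buckets[datetime.fromisoformat(score_date).strftime(fmt)].append(pct)
    return dict(buckets)  # the report then shows the distribution of percentages per period

# Example: overall_trend([("2024-01-15", 87.5), ("2024-02-02", 92.0)], period="month")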
For “4.7 Causes by scorecard”
Endpoint needed: /scores
Filters needed: from_score_date, to_score_date, group_id
From the scores endpoint we can use the scorecard_id and version number to get the different scorecard names.
From this scores endpoint you can use the cause_id within the questions array to get the cause names by using the causes endpoints.
Once we have all those, the values are used within the formula given:
((sum of all “answer_value”) – (sum of all “answer_value” where “fail_section” is true)) / (sum of all “max_score” where "answers->not_applicable" is false) * 100
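A sketch of the 4.7 grouping, focusing on joining cause ids to their names per scorecard; the percentages themselves are computed as in the earlier sketches. The scorecard and cause name lookups are assumed to have been built already (cause names via the causes endpoints mentioned above), and the input shape is an assumption:

from collections import defaultdict

def causes_by_scorecard(question_rows, scorecard_names, cause_names):
    # question_rows: per-question entries from /scores, each carrying 'scorecard_id' and 'cause_id' (assumed shape)
    # scorecard_names / cause_names: id -> name lookups built from the scorecard and causes endpoints
    counts = defaultdict(int)
    for row in question_rows:
        scorecard = scorecard_names.get(row["scorecard_id"], row["scorecard_id"])
        cause = cause_names.get(row["cause_id"], row["cause_id"])
        counts[(scorecard, cause)] += 1
    return dict(counts)  # how often each cause appears, per scorecard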
For “5.1 Overall Team Trend”
Endpoint: /teams to get the available teams, including “team_id”, to use in the next endpoint API call.
Endpoint needed: /scores
Filter needed: from_score_date, to_score_date, team_id
Once the scores for the needed dates and team have been obtained, we need to get the answers from all the questions for those scores. We do this by using the values given to us in the “/scores” endpoint call for each question to make a call to /scorecards/{scorecard_id}/versions/{version}/questions/{question_id}/answers/{answer_key}.
Once we have all those, the values are used within the formula given:
((sum of all “answer_value”) – (sum of all “answer_value” where “fail_section” is true)) / (sum of all “max_score” where "answers->not_applicable" is false) * 100
“max_score” is obtained from each of the scores returned by the /scores endpoint used initially.
This report is broken down by team as well as by date, so use the data obtained to separate the results by team as well (see the sketch below).
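A sketch of the 5.1 breakdown by team and date, taking percentages already computed per score (as in the 1.1 sketch); the input shape is an assumption:

from collections import defaultdict
from datetime import datetime

def overall_team_trend(scored, period="month"):
    # scored: (team_id, score_date, overall_percentage) tuples, percentage computed as in the 1.1 sketch
    fmt = {"week": "%G-W%V", "month": "%Y-%m", "year": "%Y"}[period]
    trend = defaultdict(list)
    for team_id, score_date, pct in scored:
        trend[(team_id, datetime.fromisoformat(score_date).strftime(fmt))].append(pct)
    return dict(trend)  # percentages grouped by team and by period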
For “6.1 Summary”
Endpoint: /groups to get the available groups, including “group_id”, to use in the next endpoint API call.
Endpoint needed: /scores
Filter needed: from_score_date, to_score_date, group_id
Once the scores for the needed dates and group have been obtained, we need to get the answers from all the questions for those scores. We do this by using the values given to us in the “/scores” endpoint call for each question to make a call to /scorecards/{scorecard_id}/versions/{version}/questions/{question_id}/answers/{answer_key}.
Once we have all those, the values are used within the formula given:
((sum of all “answer_value”) – (sum of all “answer_value” where “fail_section” is true)) / (sum of all “max_score” where "answers->not_applicable" is false) * 100
“max_score” is obtained from each of the scores returned by the /scores endpoint used initially.
This report is broken down by time (see the sketch below).
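A sketch of how the 6.1 data could be gathered group by group. The fetch_scores callable stands in for the filtered /scores call (for example the get() helper sketched after the endpoint summary); everything about the input shapes is an assumption:

from collections import defaultdict
from datetime import datetime

def summary_by_time(groups, fetch_scores, from_date, to_date, period="month"):
    # groups: /groups results with 'group_id'; fetch_scores(group_id, from_date, to_date) returns scores
    fmt = {"week": "%G-W%V", "month": "%Y-%m", "year": "%Y"}[period]
    summary = defaultdict(list)
    for group in groups:
        for score in fetch_scores(group["group_id"], from_date, to_date):
            # 'score_date' and 'overall_percentage' are assumed to have been attached as in the earlier sketches
            bucket = datetime.fromisoformat(score["score_date"]).strftime(fmt)
            summary[(group["group_id"], bucket)].append(score["overall_percentage"])
    return dict(summary)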
For Advanced
Overall Supervisor Trend:
Endpoint: /staff/supervisors to get list of supervisors including “supervisor_id”
Endpoint needed: /scores
Filters needed: from_score_date, to_score_date, supervisor_id
Once the scores for the needed dates for each staff member of each supervisor have been obtained, we need to get the answers from all the questions for those scores. We do this by using the values given to us in the “/scores” endpoint call for each question to make a call to /scorecards/{scorecard_id}/versions/{version}/questions/{question_id}/answers/{answer_key}.
Once we have all those, the values are used within the formula given:
((sum of all “answer_value”) – (sum of all “answer_value” where “fail_section” is true)) / (sum of all “max_score” where "answers->not_applicable" is false) * 100
“max_score” is obtained from each of the scores returned by the /scores endpoint used initially.
The results will need to be broken down by supervisor and by month.
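A sketch of the supervisor/month breakdown for this advanced report, again taking percentages already computed per score (as in the 1.1 sketch); the input shape is an assumption:

from collections import defaultdict
from datetime import datetime

def overall_supervisor_trend(scored):
    # scored: (supervisor_id, score_date, overall_percentage) tuples, percentage computed as in the 1.1 sketch
    trend = defaultdict(list)
    for supervisor_id, score_date, pct in scored:
        month = datetime.fromisoformat(score_date).strftime("%Y-%m")
        trend[(supervisor_id, month)].append(pct)
    return dict(trend)  # percentages broken down by supervisor and by month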
Notes
from_score_date & to_score_date can be changed to from_event_date & to_event_date, or to from_last_edit_date & to_last_edit_date, in all of the above.
Customization of reports can be achieved by mixing and matching the filter values and the types of filters provided by the open API.
For further information on the API's possibilities, please call the following endpoint: /scorebuddy.yaml.