ReviewIQ
Even the best reviewers score differently—and that variability can unintentionally skew results. Reviewr’s ReviewIQ™ brings data science to the judging process by analyzing reviewer behavior, identifying scoring tendencies, and automatically normalizing results for fairness and consistency. Instead of relying solely on raw averages, ReviewIQ™ ensures every applicant is evaluated relative to each reviewer’s unique scoring pattern—creating a level playing field where results reflect true merit, not reviewer bias.
The Challenge
When reviewing submitted entries, each evaluator brings their own unique scoring style. Some tend to be more generous, others more critical. When applicants are randomly distributed among reviewers—as they are in balanced review systems—these differences can lead to uneven results. A “tough” judge’s top score might still be lower than an “easy” judge’s average, meaning strong applicants could be unfairly penalized simply based on who reviewed them. Without a system to identify and adjust for these variations, even well-designed review processes can produce skewed outcomes and reduce confidence in final selections.
How Reviewr Solves It
ReviewIQ™ uncovers and corrects these inconsistencies through advanced scoring analytics. It begins by mapping each reviewer’s scoring behavior—displaying personal averages and question-by-question tendencies across the review team. This visibility helps program administrators identify where scoring alignment or training may be needed. Then, ReviewIQ™ goes a step further: it normalizes results by converting each raw score into a relative score compared to that judge’s average. For example, if a reviewer typically scores a 29 but gives an applicant a 33, ReviewIQ™ translates that into a 1.14—indicating the applicant performed 14% above that judge’s usual standard. These normalized scores are then averaged across all evaluators, producing a data-backed, bias-adjusted final score that accurately reflects each applicant’s standing.
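The normalization arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration of the ratio-and-average idea, not Reviewr's internal code; the function and variable names are hypothetical.

```python
def relative_score(raw_score: float, judge_average: float) -> float:
    """Convert a raw score into a score relative to the judge's average.

    1.0 means the applicant scored exactly at the judge's usual level;
    1.14 means 14% above it.
    """
    return raw_score / judge_average


def normalized_final_score(scores: dict[str, float],
                           judge_averages: dict[str, float]) -> float:
    """Average an applicant's relative scores across all judges who scored them."""
    relative = [relative_score(s, judge_averages[j]) for j, s in scores.items()]
    return sum(relative) / len(relative)


# The example from the text: a judge who typically scores a 29 gives a 33.
print(round(relative_score(33, 29), 2))  # → 1.14
```

Because each raw score is divided by its own judge's average before averaging, a "tough" judge's 33 and a "lenient" judge's 45 can both translate to the same relative standing.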
The Impact
With ReviewIQ™, Reviewr turns subjective evaluations into objective insights. Programs gain a transparent view of reviewer behavior, judges receive feedback that promotes alignment and fairness, and applicants are evaluated on merit—not chance. The result is a selection process that’s smarter, more consistent, and more defensible. By combining human judgment with analytical intelligence, ReviewIQ™ helps every organization achieve what matters most: trustworthy, equitable decisions that stand up to scrutiny.
How to read the ReviewIQ report
Each row in the report is a submission
The report is broken into blocks, one per scorecard question.
Within each block is a column for each judge, showing how they scored each applicant for that scorecard question.
At the end of these question blocks you will see the total score each judge gave a particular submission.
Below these scores you will see the average score per judge.
As you progress through the report you will see how each individual judge scored that submission compared to their own personal average. This is calculated by dividing the score the judge gave that entry by that judge's personal average.
The last two columns of the report give you a new average score per submission. This average is based on the per-judge averages, normalized for each judge's own scoring tendencies. It is followed by a new "rank".
How to Access the Report
From the Admin dashboard, click the Reports icon.
Navigate to the Reports page.
Scroll to the bottom of the page to find the Normalized Report.
Generating the Report
To generate your Normalized Report, you must select:
Evaluation Form – The specific evaluation form you want to report on.
Group – The reviewer group used to evaluate submissions.
Division – The division that contains the submissions.
⚠️ Important: These three selections—evaluation form, group, and division—must all correlate.
Ensure that:
The group you select was actually assigned to the evaluation form.
The group contains reviewers who submitted evaluations using that form.
The submissions evaluated by that group exist within the selected division.
If these do not align, the report may be blank.
Once all selections are made, click Email Results to Me. The report will be sent to your inbox. Large evaluation forms may take up to 30 minutes to process. Be sure to check your spam folder before reaching out to the Reviewr support team.
Understanding the Report Format
Once you receive and open the file, here’s how the data is structured:
1. Initial Columns
Group – Displays the name of the group you selected.
Submission – Lists the names of each submission within that group.
2. Evaluation Form Breakdown
For each question on the evaluation form:
The column header is the question name.
Below the question header, each column represents a reviewer.
The cells show the scores each reviewer gave per submission.
If a cell is empty, that reviewer did not score the submission.
Text/comment fields follow the same format, but contain written responses.
This pattern continues for all questions on the form.
3. Display Scores
After the form data, a column break separates the next section:
Display Scores – Shows each reviewer’s total score for every submission.
The bottom row contains the average score each reviewer gave across all submissions they scored.
4. Display Score (Centered)
This section adjusts for reviewer scoring bias:
A score of 1.0 represents the reviewer’s average score.
Scores above 1.0 (e.g., 1.25) are above the reviewer’s average.
Scores below 1.0 (e.g., 0.75) are below the reviewer’s average.
This helps normalize tough vs. lenient scorers.
5. Average Center Score
Displays the average centered score per submission.
Helps balance out unusually high or low individual scores.
6. Rank
Ranks submissions from highest to lowest average centered score.
Provides a normalized order of performance across all evaluated submissions.
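Sections 3 through 6 above can be reproduced end to end with a short sketch. The judge names and scores below are hypothetical data chosen for illustration; this is not Reviewr's implementation.

```python
# Section 3 input: each judge's total (display) score per submission.
raw = {
    "Judge A": {"Sub 1": 33, "Sub 2": 25},  # averages 29 — a tougher scorer
    "Judge B": {"Sub 1": 38, "Sub 2": 42},  # averages 40 — a more lenient scorer
}

# Section 3 (bottom row): each judge's average across the submissions they scored.
judge_avg = {j: sum(scores.values()) / len(scores) for j, scores in raw.items()}

# Section 4: centered scores — each raw score divided by that judge's average.
centered = {
    j: {sub: score / judge_avg[j] for sub, score in scores.items()}
    for j, scores in raw.items()
}

# Section 5: average centered score per submission (skipping judges
# who did not score a given submission).
submissions = {sub for scores in raw.values() for sub in scores}
avg_centered = {
    sub: sum(centered[j][sub] for j in raw if sub in centered[j])
         / sum(1 for j in raw if sub in centered[j])
    for sub in submissions
}

# Section 6: rank submissions from highest to lowest average centered score.
rank = sorted(avg_centered, key=avg_centered.get, reverse=True)
print(rank)  # → ['Sub 1', 'Sub 2']
```

Note that Sub 1 wins here even though its raw total (33 + 38 = 71) is lower than Sub 2's (25 + 42 = 67... per judge, Sub 2 actually totals 67): what matters is that Sub 1 scored well above the tough judge's average, which the centered scores reward.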
Summary
The ReviewIQ Normalized Report is a powerful tool for evaluating submissions with increased fairness and clarity. By adjusting scores relative to each reviewer's tendencies, it helps eliminate scoring bias and surfaces a clearer picture of submission performance.
If you’d like access to the Normalized Report for your event, reach out to your Reviewr account representative.