Admin - The Balanced Review Engine

Streamline high-volume judging with Reviewr’s Judging Queue, the Balanced Review Engine: set review limits, randomize submission order, and ensure fair, consistent evaluations.

Written by Halle McCaslin
Updated over a week ago

The Balanced Review Engine

Large applicant pools can make even the most dedicated judges hit a wall. Reviewr’s Balanced Review Engine intelligently manages evaluator workloads to ensure every submission receives fair, consistent, and high-quality scoring. Instead of manually assigning submissions to each judge, Reviewr automatically distributes evaluations evenly and dynamically removes submissions from the queue once they’ve reached the desired number of completed reviews.

Judges see a randomized list of applicants to prevent bias and burnout, while administrators gain peace of mind knowing every submission is reviewed the proper number of times—no scrambling, no reassignment headaches, and no judging fatigue.

The Challenge

Managing fair and efficient judging is one of the hardest parts of running scholarships, grants, and awards programs. Volunteer judges often face large applicant volumes that lead to burnout and inconsistent scoring. Research shows that after roughly 30–40 evaluations, review quality starts to drop—scores become less objective, and applications begin to “blend together.” Compounding this challenge, life happens: judges get busy or drop out, leaving administrators scrambling to reassign reviews at the last minute to ensure every applicant gets the fair number of evaluations they deserve.

How Reviewr Solves It

Reviewr’s Balanced Review Engine automates the applicant-to-evaluator process for fairness and efficiency. All judges are assigned to the full applicant pool, but each submission is only available until it receives a set number of completed evaluations—for example, five. Once that quota is met, it disappears from the judge queue, ensuring balanced workloads and full coverage. The system also randomizes the order of submissions each judge sees, preventing bias from everyone reviewing the same applicants first and introducing natural scoring variety. If a judge drops out, the system automatically redistributes their remaining load without manual reassignment—keeping the process seamless and equitable.

The Impact

The result is fairer evaluations, less burnout, and higher scoring integrity. Judges stay engaged and consistent, administrators save hours of manual assignment work, and applicants receive more balanced feedback and scoring outcomes. With Reviewr’s Balanced Review Engine, you can manage high-volume programs confidently—knowing every application gets the attention it deserves without overloading your reviewers.

Using the Balanced Review Engine

The Judging Queue is a powerful tool designed to streamline the evaluation process — especially for events with a high volume of submissions. It allows you to set limits on how many submissions each judge can evaluate, as well as how many times each submission should be reviewed. Paired with randomized judge search results, this tool helps ensure an even, efficient distribution of evaluations across your judging panel.

What It Does

  • Sets a maximum number of evaluations per submission.

  • Optionally randomizes the order in which submissions appear to judges each time they access the portal.

  • Works across multiple groups within a single event.

  • Helps maintain fairness and consistency in evaluation distribution.

How It Works

When enabled, the Judging Queue dynamically assigns submissions to judges based on the limits you define. For example, if a submission should be reviewed no more than three times, the system will stop offering it to judges once it’s hit that limit.
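
As an illustrative sketch only (not Reviewr’s actual implementation), the behavior described above amounts to filtering out submissions that have hit their evaluation cap and shuffling whatever remains on each portal visit. The function name and data shapes here are hypothetical:

```python
import random

def build_judge_queue(submissions, completed_counts, max_reviews, randomize=True):
    """Return the submissions still open for evaluation.

    submissions      -- list of submission IDs assigned to the judge
    completed_counts -- dict: submission ID -> completed evaluations so far
    max_reviews      -- evaluation cap per submission (e.g., 3)
    """
    # A submission disappears from the queue once it reaches the cap.
    queue = [s for s in submissions if completed_counts.get(s, 0) < max_reviews]
    if randomize:
        # Fresh shuffle on each portal visit, so judges see different orders.
        random.shuffle(queue)
    return queue
```

For example, with a cap of 3, a submission that already has 3 completed evaluations is excluded from every judge’s queue, while submissions with 0–2 evaluations remain available.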

To improve fairness and avoid bias toward submissions listed first, we recommend also enabling Randomized Judge Search Results — this ensures each judge sees submissions in a different order every time they access the judging portal.

Note: In rare cases where multiple judges open the same submission at the same time and begin reviewing it, the submission may end up with one additional evaluation above the set maximum. The system cannot override simultaneous activity, so it will allow both evaluations to be submitted. Admins can then decide how to treat the extra review.
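
The note above describes a classic check-then-act race: availability is checked when a judge opens a submission, not when the evaluation is submitted. A minimal sketch (hypothetical function names, not Reviewr’s code) of why two simultaneous openings can push a submission one past its cap:

```python
def open_submission(counts, sub_id, max_reviews):
    # Availability is checked at open time only.
    return counts.get(sub_id, 0) < max_reviews

def submit_evaluation(counts, sub_id):
    # Completed work is always accepted, even if the cap was
    # reached while the judge was reviewing.
    counts[sub_id] = counts.get(sub_id, 0) + 1
    return counts[sub_id]

counts = {"S1": 2}                            # cap is 3; one slot left
a_opened = open_submission(counts, "S1", 3)   # judge A: True
b_opened = open_submission(counts, "S1", 3)   # judge B: True, same instant
submit_evaluation(counts, "S1")               # A finishes -> 3
submit_evaluation(counts, "S1")               # B finishes -> 4, one over the cap
```

Both judges pass the availability check before either evaluation is recorded, so the submission ends at 4 of 3; the admin then decides how to treat the extra review.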


How to Set It Up

Follow these steps to enable the Judging Queue and its related settings:

1. Prepare Your Submissions and Groups

2. Enable the Judging Queue

  1. Go to the Configuration page in your admin dashboard.

  2. Navigate to the Judge Settings tab and click Edit.

  3. Scroll to find Enable Evaluation Limit.

  4. Enter the maximum number of times each submission can be reviewed (e.g., 3).

  5. (Optional but recommended) Enable Randomize Judge Search Results to shuffle the submission order for each judge.

  6. Click Save.

These settings apply across all groups and will override any evaluation settings previously configured at the group level.

3. Finalize the Assignments

  • Go to each Group.

  • Click Manage Assignments.

  • Select all Submissions.

  • Select Assign All to activate your Judging Queue settings.

4. (Optional) Adjust Group-Specific Settings

If you prefer to control the Maximum Evaluations Per Submission at the group level:

  1. Go to Groups and Divisions.

  2. Click Edit on the desired group.

  3. Set the Maximum Evaluations Per Submission value.

Note: If a different value is also set in the Configuration page, the Configuration value will take precedence.
