Most product teams prioritize by counting. The feature with the most requests wins. The bug mentioned by the most customers gets fixed first. It's simple, it feels fair, and it consistently produces the wrong answer.
This article explains why, and what better prioritization looks like.
The problem with counting
Volume-based prioritization has one thing going for it: it's easy to explain. The most-requested feature gets built. Everyone can see the logic.
But volume has no sense of proportion. It treats a request from a €50/month customer the same as a request from a €50,000/month customer. It treats a bug that's been in the backlog for two years the same as one reported yesterday by your largest account. It has no way of asking: does building this move our business forward?
The result is a backlog that reflects who complained loudest, not what your business actually needs.
Revenue weighting helps, but it's not the full picture
Weighting feedback by the revenue of the customers who raised it is a significant improvement over pure volume. It means a single enterprise customer flagging a blocker can outrank a hundred free-plan users reporting a cosmetic issue.
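To see why that matters, here's a minimal sketch with invented customers and numbers: pure counting ranks the cosmetic issue first, while revenue weighting ranks the blocker first.

```python
from dataclasses import dataclass

@dataclass
class Mention:
    customer: str
    mrr: float  # monthly recurring revenue, in euros

# Invented data: one enterprise blocker vs. a hundred free-plan cosmetic reports.
blocker = [Mention("enterprise-co", 50_000.0)]
cosmetic = [Mention(f"free-user-{i}", 50.0) for i in range(100)]

def volume_score(mentions: list[Mention]) -> int:
    return len(mentions)  # pure counting

def revenue_score(mentions: list[Mention]) -> float:
    return sum(m.mrr for m in mentions)  # each mention weighted by MRR

print(volume_score(blocker), volume_score(cosmetic))    # 1 vs 100: cosmetic wins
print(revenue_score(blocker), revenue_score(cosmetic))  # 50000.0 vs 5000.0: blocker wins
```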
But revenue alone is still backward-looking. It ranks by who is paying you now, not by what will drive growth, reduce churn, or protect the accounts you most need to keep.
A customer asking for a feature that cuts against your roadmap strategy might have high MRR, and the feature can still be the wrong thing to build. Revenue weighting gets you closer, but strategic judgment still has to be applied somewhere.
The signals that actually matter
Good prioritization combines several signals that volume-only approaches miss; a sketch that puts them together follows the list:
Revenue impact: Weight feedback by the MRR of existing customers and the deal size of prospects who raised it. A complaint from your highest-value accounts should carry more weight than the same complaint from low-value accounts.
Urgency: How recently was this raised? A backlog item getting new mentions this week is more urgent than one that peaked six months ago and has gone quiet. Recency decay (the idea that older signals matter less than fresh ones) prevents your backlog from being dominated by stale requests.
Breadth: How many distinct customers raised this? A complaint from one customer, however large, carries different risk than the same complaint from twenty. Breadth signals whether something is idiosyncratic or systemic.
Effort: What does this cost to fix? A high-priority item that takes two days of work should surface above a high-priority item that takes six months, especially when you're choosing what to do next.
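Here's the sketch promised above, in Python. The exponential half-life, the logarithmic breadth scaling, and the effort division are illustrative choices, not any product's actual formula:

```python
import math
from datetime import date

def recency_weight(raised_on: date, today: date, half_life_days: float = 90.0) -> float:
    # Exponential decay: a mention loses half its weight every half_life_days.
    age_days = (today - raised_on).days
    return 0.5 ** (age_days / half_life_days)

def priority_score(mentions: list[tuple[str, float, date]],
                   today: date,
                   effort_days: float | None = None) -> float:
    # mentions are (customer, mrr, raised_on) triples.
    revenue = sum(mrr * recency_weight(raised_on, today)
                  for _, mrr, raised_on in mentions)
    breadth = len({customer for customer, _, _ in mentions})
    score = revenue * math.log1p(breadth)  # diminishing returns on breadth
    if effort_days:
        score /= effort_days  # a two-day fix outranks a six-month project
    return score

# A fresh enterprise complaint beats a stale one of equal size.
today = date(2025, 6, 1)
fresh = [("acme", 50_000.0, date(2025, 5, 25))]
stale = [("globex", 50_000.0, date(2024, 6, 1))]
print(priority_score(fresh, today) > priority_score(stale, today))  # True
```

The exact decay rate and breadth curve are tuning decisions; what matters is that each signal enters the score explicitly instead of being flattened into a count.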
Where strategic judgment fits in
None of the signals above tells you whether building something is right for your business. A feature might score highly on revenue, urgency, and breadth, and still be out of scope for where you're taking the product.
This is where human judgment belongs: applied at the point of decision, informed by data, not replaced by it.
Pilea's priority score is designed to give you the best possible data-informed starting point. The manual override exists specifically for the cases where your strategic context changes the answer: when you know something the score doesn't. You apply the override, note your reasoning, and the team can see both the computed ranking and the decision you made on top of it.
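One way to model that, with invented names (an illustration, not Pilea's actual data model), is to keep the computed score and the override side by side so neither hides the other:

```python
from dataclasses import dataclass

@dataclass
class PrioritizedItem:
    title: str
    computed_score: float                # what the data says
    override_score: float | None = None  # set only when judgment disagrees
    override_reason: str | None = None   # recorded alongside every override

    @property
    def effective_score(self) -> float:
        # The override wins the ranking, but the computed score never disappears.
        return self.computed_score if self.override_score is None else self.override_score

item = PrioritizedItem("SAML SSO", computed_score=42.0)
item.override_score = 90.0
item.override_reason = "Blocks the enterprise segment we're targeting this quarter"
print(item.effective_score, "(computed:", item.computed_score, ")", item.override_reason)
```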
The goal isn't to automate prioritization. It's to make sure that when you make a judgment call, you're making it with the right information in front of you, not just a count of how many people complained.
How Pilea approaches this
Pilea's priority score weights each backlog item by:
Revenue from customers who mentioned it - MRR from existing accounts and deal size from prospects in the pipeline
How many distinct customers raised it - breadth, not just volume
How recently it was raised - with recency decay so fresh signals carry more weight
Sentiment - whether the feedback was frustrated or neutral; see the sketch after this list
Estimated effort - optionally dividing the score by the size of the item
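Here's the sketch referenced in the sentiment item above: one way sentiment could fold into the per-mention weighting from the earlier example. The multipliers are invented for illustration, not Pilea's actual weights:

```python
# Invented multipliers: frustrated feedback counts more than a neutral aside.
SENTIMENT_WEIGHT = {"frustrated": 1.5, "neutral": 1.0}

def weighted_mention(mrr: float, recency: float, sentiment: str) -> float:
    # Drop-in replacement for the revenue term in the earlier priority_score sketch.
    return mrr * recency * SENTIMENT_WEIGHT.get(sentiment, 1.0)

print(weighted_mention(1_000.0, 0.9, "frustrated"))  # 1350.0
print(weighted_mention(1_000.0, 0.9, "neutral"))     # 900.0
```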
Pilea's priority scores are data-informed starting points, not rules. When you have strategic context the score doesn't capture, you can apply a manual override. The computed score and your reasoning stay visible to the team, so decisions are transparent.
What this changes in practice
When prioritization accounts for business context (revenue, urgency, breadth, and effort), a few things shift:
The loudest voice stops winning automatically
High-value accounts get appropriate weight even when they're a minority
Stale backlog items stop crowding out urgent, recent feedback
Strategic overrides become visible and traceable, not silent gut calls
You're still making the judgment. The data just make you better at it.