User Needs Data and Methodology

How Gartner Product Decisions scores current, historical and forecasted user needs.

Written by Charlotte Gellene
Updated over 5 months ago

About User Needs

User Needs analyzes user sentiment at scale to identify user needs and expectations, with current, historical, and forecasted views of how those needs change over time. The analysis uses up-to-date data from Gartner Peer Insights and AI algorithms to score and prioritize product "capabilities": major product features that deliver a key value proposition to users. The capabilities you see in User Needs are established and defined in Gartner Critical Capabilities documents or as part of the Gartner Peer Insights program.

Please note:

  • Only data from within the past 12 months is used to calculate current User Needs, although some combinations may use data with a shorter time frame. Data from an extended period may have been used for training algorithms.

  • The insights are based only on the underlying data and do not imply a holistic representation of a market.

User Satisfaction and Importance Scores

The tool algorithmically scores capabilities to approximate their satisfaction and importance for the selected market and user segment. Scoring is determined by filters set in the tool.

  • User satisfaction is based on the average rating of the feature in Gartner Peer Insights for that market.

  • Importance is calculated by running a univariate regression between each individual product capability rating and the overall product rating in Gartner Peer Insights.

    • This method checks whether higher ratings for a particular capability tend to coincide with higher overall product ratings.

    • If a capability’s rating has a strong effect on the overall rating, that capability is considered more important to users.
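The two scores above can be sketched in a few lines of Python. This is a minimal illustration with made-up ratings, not Gartner's actual algorithm: satisfaction is a plain average, and importance is the slope of a univariate least-squares regression of the overall rating on the capability rating.

```python
from statistics import mean

# Hypothetical review data (invented for illustration): each pair holds a
# capability rating and that reviewer's overall product rating, both 1-5.
reviews = [(4, 4), (5, 5), (3, 3), (4, 5), (2, 3), (5, 4), (3, 4), (4, 4)]
capability = [c for c, _ in reviews]
overall = [o for _, o in reviews]

# Satisfaction: the average capability rating across reviews.
satisfaction = mean(capability)

# Importance: the slope of a univariate least-squares regression of the
# overall rating on the capability rating. A steeper slope means the
# capability's rating moves the overall rating more.
x_bar, y_bar = mean(capability), mean(overall)
importance = sum((x - x_bar) * (y - y_bar)
                 for x, y in zip(capability, overall)) \
    / sum((x - x_bar) ** 2 for x in capability)
```

With these sample ratings the capability averages 3.75, and each extra point on the capability rating is associated with roughly half a point on the overall rating.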

In addition to measuring satisfaction and importance, the tool offers additional analysis for each capability:

  • Top Rated Products presents the providers with the highest Gartner Peer Insights rating for the capability over the last 12 months. Ratings are taken on a scale of 1 to 5. Please note that the average ratings may span multiple versions of a product offering, including reviews of beta products.

  • User Segmentation presents which firmographics and demographics were most likely satisfied or dissatisfied with a capability based on the average rating in Gartner Peer Insights over the last 12 months. Ratings are taken on a scale of 1 to 5.
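The segmentation step amounts to a group-by average. A minimal sketch, with invented segments and ratings (the real tool groups by Gartner Peer Insights firmographic and demographic fields, which are not reproduced here):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical reviews tagged with a firmographic segment (company revenue
# band, invented for illustration) and a 1-5 capability rating.
reviews = [
    ("<50M USD", 4), ("<50M USD", 5), ("1B-10B USD", 3),
    ("1B-10B USD", 2), ("50M-1B USD", 4), ("50M-1B USD", 3),
]

# Collect ratings per segment, then average each group.
by_segment = defaultdict(list)
for segment, rating in reviews:
    by_segment[segment].append(rating)
averages = {seg: mean(ratings) for seg, ratings in by_segment.items()}

# The extremes flag the most satisfied and most dissatisfied segments.
most_satisfied = max(averages, key=averages.get)
most_dissatisfied = min(averages, key=averages.get)
```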

Historical Trends Methodology

The historical trend data in Gartner Product Decisions is calculated through a multi-level aggregation process that ensures a robust representation of product capabilities over time.

  • Initially, daily scores for each capability are recorded and averaged into weekly scores.

  • These weekly scores are then aggregated into monthly scores.

  • Finally, the monthly scores are averaged to produce quarterly values.

This process smooths out short-term fluctuations, yielding a more stable and reliable measure of product capabilities and user satisfaction over longer periods.

Forecasted Trends Methodology

The forecast methodology for User Needs in Gartner Product Decisions is based on long-term observed capability movement and employs a trend-based approach that aligns well with forward-looking strategic planning. This captures net movement of a capability over time, providing a clear view of the overall direction and magnitude of changes over an extended period. A capability requires a minimum of two historical quarters of data to generate a reliable forecast.

  • Confidence in these forecasts (available in the downloadable CSV file in User Needs) is derived from the volume of historical data points: more data points and higher data quality lead to higher confidence levels.

  • Variation (also included in the CSV download) measures the range of expected possible values for forecasted scores, since the presented scores are a fit of the raw scores:

    • Low Variation: Indicates closely clustered scores, suggesting consistent performance.

    • High Variation: Indicates significant fluctuations in scores, reflecting more volatile performance.

While confidence measures how well the historical data supports the forecast, variation provides insight into the inherent stability of the capability scores themselves.
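One simple way to realize a trend-based forecast with a variation measure is a straight-line fit extrapolated forward. This is an illustrative sketch under that assumption; Gartner's actual forecasting model is not published:

```python
from statistics import mean, pstdev

def forecast(quarterly_scores, quarters_ahead=1):
    """Fit a line to quarterly scores and extrapolate it forward.

    Illustrative trend-based sketch, not Gartner's actual model. Returns
    the forecasted score and the variation: the spread of the raw scores
    around the fitted trend line (low = stable, high = volatile).
    """
    n = len(quarterly_scores)
    if n < 2:  # mirrors the two-quarter minimum noted above
        raise ValueError("at least two historical quarters are required")
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(quarterly_scores)
    slope = sum((x - x_bar) * (y - y_bar)
                for x, y in zip(xs, quarterly_scores)) \
        / sum((x - x_bar) ** 2 for x in xs)
    intercept = y_bar - slope * x_bar
    fitted = [intercept + slope * x for x in xs]
    variation = pstdev([y - f for y, f in zip(quarterly_scores, fitted)])
    prediction = intercept + slope * (n - 1 + quarters_ahead)
    return prediction, variation
```

For a steadily rising capability, `forecast([3.0, 3.2, 3.4, 3.6])` extrapolates the 0.2-point-per-quarter trend to 3.8 and reports zero variation, because the raw scores sit exactly on the fitted line.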