
Generating concept reviews with AI

This article tells you more about generating concept reviews using AI.

Written by Paul Kuijf
Updated this week

Learned uses AI to prepare draft versions of reviews based on feedback collected from various sources within the platform, such as 1:1 notes, captured goals and observations submitted manually, via email or through other integrations. These automatic summaries help managers and employees draft reviews faster and with more focus, without losing quality or nuance.

How it works

The AI functionality within Learned generates a draft evaluation based on previously collected and captured data. It combines information from multiple sources to give the most complete and representative picture of the employee. The manager or employee always retains the option to modify this draft before it is finalised.

The AI uses input from the following sources:

  • Notes from the notebook

  • Relevant information from previous reviews and self-reflections

This data is analysed for recurring themes, strengths, areas of concern and development needs. Based on this, the AI provides text suggestions per competency, goal or evaluation item.

Privacy & data processing in generated reviews

When generating these draft evaluations, Learned strictly adheres to the applicable privacy legislation (AVG/GDPR). The processing is carefully designed with the following safeguards:

  • Only data within the Learned platform: Information explicitly entered or shared by users within the Learned platform is included in the AI analysis.

  • Manageable and transparent data processing: The AI generates only a draft proposal. Users always retain full control: they can approve, modify or delete feedback before anything is saved or shared as final.

  • Data minimisation: Only relevant parts of feedback (such as observations, trends or recurring themes) are used to structure the evaluation. Raw data is not copied over verbatim.

  • Security and storage: All data is stored and processed within Learned's secure infrastructure. Access is restricted to authorised users within the organisation.

  • Processor agreement and AVG/GDPR compliance: Processing is covered by the existing processor agreement with Learned. Organisations retain control over which functionalities are active and who has access to the generated content.

Bias, transparency & consent in AI applications

When using AI in staff reviews, care is essential. Learned is aware of the risks around bias, the black box effect and the need for active consent. Additional measures have therefore been taken to make the use of AI fair, transparent and voluntary.

Preventing bias and discrimination

AI models, just like people, can contain unintentional biases, for example based on language use, gender or cultural differences. To prevent this:

  • The AI is continuously trained on diverse and representative datasets.

  • Output is never applied automatically: users always assess and edit the generated text themselves.

  • No sensitive personal data, or signals that could lead to discrimination, is used in the analysis.
