
Reporting on Agentic Campaigns

Written by Charley Bader

The Agentic Report is your live view of how the campaign is performing, how the agent is distributing traffic, and where performance is being created across audiences and content.

Use it to answer three questions:

  • Is it working?

  • How reliable is the performance signal?

  • Where should traffic be focused next?

Agentic Campaigns also include our standard campaign reporting, available at a global level and in real time.


The Two Tabs

Overall Performance

Impact, trends, traffic allocations, and variant outcomes. This is your go-to view for understanding results.

Agent Activity

A log of every optimisation run, with transparent reasoning behind each decision the agent made.


Recommended Reading Order

  1. Start with the Agent Campaign Summary — phase, performance rating, strategy, and goal

  2. Check the Performance Snapshot for top-line impact versus control

  3. Use Trends to validate direction over time

  4. Use Allocations to see where traffic is shifting

  5. Check out Secondary Goal performance

  6. Use Experience Summary to understand content and segment combinations

  7. Look at Insights for more detail on how the variants are performing


Key Terms

Control: Baseline visitors who didn’t receive a personalised experience.

Experience: Visitors who were shown a personalised experience.

Probability to Beat Control: The likelihood that the observed uplift is real, not just noise.

Certainty: A measure of how confident the model is in its current results, based on the volume and consistency of data collected so far.

Credible interval: The range within which the true value of a metric most likely falls, based on current data. Early in a campaign the range will be wide, reflecting limited certainty. As more data is collected, the range narrows. The headline figure shown is always the midpoint of this range. A wide credible interval means the result should not yet be acted on.

Optimised performance: A recalculated view of campaign impact that applies the agent's final traffic allocations back to day one, removing the distorting effect of the exploration phase. This is the default view. See Optimised vs Observed below.
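To make Probability to Beat Control and the credible interval concrete, here is an illustrative Monte Carlo sketch using Beta posteriors over conversion rates. The figures and the Beta model are assumptions for illustration only; they are not the platform's actual statistical engine.

```python
import random

random.seed(42)

def beta_posterior_samples(conversions, users, n=20000):
    # Beta(1 + conversions, 1 + non-conversions): posterior for a conversion rate
    return [random.betavariate(1 + conversions, 1 + users - conversions)
            for _ in range(n)]

control = beta_posterior_samples(conversions=400, users=10000)      # hypothetical
experience = beta_posterior_samples(conversions=460, users=10000)   # hypothetical

# Probability to Beat Control: the share of posterior draws where the
# experience rate exceeds the control rate
p_beat = sum(e > c for e, c in zip(experience, control)) / len(control)

# 95% credible interval for the experience conversion rate
s = sorted(experience)
lo, hi = s[int(0.025 * len(s))], s[int(0.975 * len(s))]
midpoint = (lo + hi) / 2  # the headline figure is the midpoint of this range

print(f"Probability to beat control: {p_beat:.2f}")
print(f"95% credible interval: [{lo:.4f}, {hi:.4f}], midpoint {midpoint:.4f}")
```

Note how a wider interval (less data) would lower the probability to beat control even if the headline midpoint stayed the same — which is why the article says not to act on a wide interval.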


Detailed Reporting Pages & Features

Agent Campaign Summary

The summary at the top of the report gives you an at-a-glance picture of where the agent is and how it's performing. Think of it like a pizza tracker. It's designed to give you visibility and confidence that work is happening, without you needing to do anything.

It shows:

  • Optimisation Phase; where the agent currently sits in its learning journey. Phases run from Early Exploration through Late Exploration to Optimisation, reflecting how confident the model is and how focused its traffic allocation has become.

  • Narrowing In; a summary of how well the agent has identified optimal content, timing, and combined performance.

  • Performance Rating; an overall rating of how the campaign as a whole is performing: Uncertain, Promising, Good, Great, or Unlikely to Beat.

  • Agent Strategy and Goal; the strategy and goal the campaign was set up with.

  • Agent Summary; a plain-language narrative of what the agent is doing. This will become richer over time as the campaign matures.

If the campaign is rated Unlikely to Beat, consider adding a "Do Nothing" experience so the agent can route traffic there rather than continuing to serve something that isn't working.

Performance Snapshot

The Performance Snapshot is the headline readout of your campaign’s impact against control for the active reporting window. It combines outcome, confidence, and data maturity into one summary view.


Core signals

Goal metric vs control — Direct outcome comparison between variant and control visitors.

Probability to Beat Control — How likely it is that variant is truly outperforming, not just by chance.

Statistical reliability — Data maturity indicators like certainty and power, plus sample depth. These tell you how much to trust the numbers.

Uplift — Absolute and relative impact above the control.


Uplift and projections

  • Total uplift — The incremental value seen in the time the campaign has been live

  • Annual projection — A projection of the uplift you would see over a full year. If data has been collected across previous months, this is based on your historical traffic

  • Monthly projections — Projected uplift by month, with an optional cumulative view for a running total

Important: Strong-looking uplift with weak reliability should be treated as an early signal, not a final conclusion. Always check confidence before acting on headline numbers.
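The projection arithmetic can be sketched roughly as follows. All figures are hypothetical, and this is a plain run-rate extrapolation; the platform's own projections may additionally weight by your historical traffic patterns.

```python
days_live = 30
total_uplift = 12_500.0   # hypothetical incremental value while the campaign has been live

daily_uplift = total_uplift / days_live
annual_projection = daily_uplift * 365   # simple run-rate extrapolation to a year

# Monthly projections, with an optional cumulative view for a running total
monthly = [round(daily_uplift * 30, 2) for _ in range(12)]
cumulative = []
running = 0.0
for m in monthly:
    running += m
    cumulative.append(round(running, 2))

print(annual_projection, cumulative[-1])
```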


Revenue and Conversion Toggle

On conversion-rate goals, you can switch between a conversion view and a revenue view directly within the report. The revenue view uses the experience's average order value by default, or a custom figure you specify. This lets you express results in revenue terms without needing to change your goal.
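The arithmetic behind the revenue view is a straightforward multiplication. The figures below are hypothetical; by default the platform uses the experience's average order value, or a custom figure you provide.

```python
conversions_experience = 460   # hypothetical conversion counts
conversions_control = 400
avg_order_value = 52.40        # experience AOV by default, or your custom figure

revenue_experience = conversions_experience * avg_order_value
revenue_control = conversions_control * avg_order_value
revenue_uplift = revenue_experience - revenue_control

print(f"Revenue uplift: {revenue_uplift:.2f}")
```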


CSV Export

Campaign data can be exported to CSV from the report, the agentic reporting view, and the performance snapshot. Use this to share results outside the platform or carry out further analysis.

Optimised vs Observed Performance

Every campaign starts with an exploration phase where our agent is learning, meaning that traffic is spread broadly. This early data can make performance look weaker than it is (we call it "the cost of exploration"). The performance snapshot has two views: what actually happened (observed), and what performance looks like with the agent's best-known allocation applied (optimised).

You can toggle between optimised and observed at any point using the view selector in the report. If you are sharing results with stakeholders, the optimised view is the more representative figure.

Reports default to the optimised view of performance.
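The distinction can be sketched with a toy recalculation (made-up daily figures, not the platform's actual method). Observed performance weights each day by the traffic shares actually served; optimised performance re-weights every day by the agent's final allocations, as if they had applied from day one:

```python
# (users_that_day, {experience: (share_served, conversion_rate)}) — all hypothetical
daily = [
    (1000, {"A": (0.5, 0.040), "B": (0.5, 0.050)}),   # exploration: even split
    (1000, {"A": (0.2, 0.041), "B": (0.8, 0.052)}),   # agent shifting toward B
]
final_alloc = {"A": 0.1, "B": 0.9}  # allocation at the end of learning

def blended_rate(days, weight_for):
    conversions = users = 0.0
    for n, exps in days:
        for name, (share, rate) in exps.items():
            w = weight_for(name, share)
            conversions += n * w * rate
            users += n * w
    return conversions / users

observed = blended_rate(daily, lambda name, share: share)               # as served
optimised = blended_rate(daily, lambda name, share: final_alloc[name])  # final split, day one

print(f"observed {observed:.4f} vs optimised {optimised:.4f}")
```

Because exploration forces traffic onto weaker experiences early on, the optimised figure is typically higher than the observed one.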

Allocations

Allocations are the proportion of eligible treatment traffic directed to each experience. Unlike fixed split testing, these percentages adjust continuously as the agent gathers evidence.

Seeing allocations shift is a normal, healthy sign that the agent is learning.


Why allocation percentages change

  • The agent increases share to experiences showing stronger performance

  • It keeps some exploration active so alternatives can still be tested

  • Daily shifts are expected as new evidence is incorporated
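One common way agents of this kind balance "increase share to winners" with "keep some exploration active" is Thompson sampling. The sketch below is illustrative only — hypothetical experience names and counts, and not a description of the platform's actual algorithm:

```python
import random

random.seed(7)

# Observed (conversions, users) per experience so far — hypothetical figures
stats = {"Hero banner": (60, 1000), "Countdown": (75, 1000), "Do nothing": (50, 1000)}

def thompson_allocations(stats, draws=5000):
    # Each experience's traffic share = the fraction of posterior draws it wins.
    # Strong performers win often (larger share); weaker ones still win
    # occasionally, which keeps some exploration active.
    wins = {k: 0 for k in stats}
    for _ in range(draws):
        samples = {k: random.betavariate(1 + c, 1 + n - c)
                   for k, (c, n) in stats.items()}
        wins[max(samples, key=samples.get)] += 1
    return {k: w / draws for k, w in wins.items()}

alloc = thompson_allocations(stats)
print(alloc)
```

Re-running this as new evidence arrives shifts the shares — which is why day-to-day movement in the allocations view is expected rather than alarming.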


Viewing traffic allocation data

The allocations section shows how the agent is distributing traffic across your experiences over time. You can view this data as a table or graph using the selection above the allocations area. The graph gives you a visual picture of how shares have shifted day by day, while the table is useful when you want to compare exact percentages at a specific point in time.

Growing share for an experience usually indicates increasing confidence from the agent. Shrinking share indicates it is being deprioritised.

Two modes are available depending on your campaign type:

  • Content mode; Shows which content type is currently being favoured

  • Timing mode; Shows which trigger moments are currently being favoured. Only available when timing optimisation is enabled.


How to interpret shifts

We recommend you focus on trajectory, not isolated single-day movements. Validate major shifts against performance and reliability indicators. Expect more fluctuation during Exploration, and more stability as the campaign matures into Optimisation.

Secondary Goals

Every goal tracked in your campaign is visible as a secondary metric in reports, not just the one the agent is optimising for.

With secondary goals, you can:

  • Track transactional performance continuously alongside your primary goal, so you always have visibility of revenue and conversion even when optimising for something else

  • See performance data for every goal attached to your campaign, whether transactional, engagement-based, or otherwise

Guardrails

Revenue per User and Conversion Rate are always tracked as Guardrails for campaigns. This means that, although you may be optimising for a non-transactional goal, you can ensure these aren't negatively affected. A shield icon marks guardrail goals in all relevant tables and views.

The status bar at the top of the report shows whether monitored transactional metrics are within acceptable bounds. Clicking the status indicator takes you directly to the guardrails section, where you can review current performance.

Experience Summary

Experience Breakdown explains performance at audience-content level — so you can see which combinations are driving lift, and which aren’t. It’s where you go when you want to understand where results are coming from.

Results in Experience Breakdown are shown alongside statistical significance indicators for each variation, segment, and timing. You can view significance for each goal and metric independently using the dropdown selector, rather than only seeing the primary goal.


Experiences vs Variants

Experience — A piece of content applied to an audience context.

Variant — A specific audience × content combination that is receiving traffic and being evaluated against control.


Overview metrics

Experience Metrics:

  • Experience Users: total users currently receiving an experience, shown as a percentage of all users

  • Experience Goal (e.g. Conversions): total experience users who have achieved your selected goal

  • Experience Goal Rate (e.g. Conversion Rate): percentage of experience users who have achieved your selected goal

  • Control Users: total users in the control group, shown as a percentage of all users

  • Control Goal (e.g. Conversions): total control users who have achieved your selected goal

  • Control Goal Rate (e.g. Conversion Rate): percentage of control users who have achieved your selected goal

Experience Components:

  • Active Variants: combinations currently receiving traffic

  • Active Segments: how broadly personalisation is covering your audience

  • Active Content: number of distinct content types currently active

  • Optimised %: the proportion of traffic currently being optimised, reflected by the certainty dot


Detailed view

Drill into results at two levels:

  • Content level — Performance aggregated by experience

  • Segment level — Performance aggregated by visitor group


Variant-level metrics

Users — How many visitors were exposed to this combination, and the percentage of total visitors they represent

Goal Rate (e.g. Revenue Per User) — Outcome per exposed visitor

Relative Uplift — Percentage difference versus control

Probability to Beat Control — Confidence that the uplift is real

Certainty — A measure of how confident the model is in its current results, based on the volume and consistency of data collected so far
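The Relative Uplift figure is simple arithmetic on the two goal rates. The numbers here are hypothetical:

```python
experience_rate = 0.046   # goal rate among visitors exposed to this variant (hypothetical)
control_rate = 0.040      # goal rate among control visitors (hypothetical)

relative_uplift = (experience_rate - control_rate) / control_rate   # relative: +15%
absolute_uplift = experience_rate - control_rate                    # absolute: 0.6 points

print(f"relative {relative_uplift:.0%}, absolute {absolute_uplift:.3f}")
```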


How to use this section

Use Experience Breakdown to identify scalable winners, combinations that need more data, and persistent underperformers worth iterating on or removing.

Insights

Insights give you a deeper view of how the agent is making decisions, and where performance is being created across your audience.

Within the Insights tab you can see:

  • Performance by criteria; See which content the agent is favouring for different visitor groups, broken down by the criteria it has chosen for optimisation, such as product price or behaviour. This helps you understand not just what is winning, but who it is winning for

  • Matrix view; Switch to a matrix view for a side-by-side comparison of variants and allocations, showing where the agent's decisions have been concentrated

Insights → new campaigns. The real power of Insights is that they don't just explain performance, they point to your next opportunity. If the agent is consistently favouring one variant for a specific segment, that's a clear signal to spin up a new, more targeted campaign for that audience.
