Agentic Campaign FAQs & Troubleshooting

Written by Charley Bader

How it compares

How is this different from A/B testing?

A/B tests use fixed traffic splits over a defined period. The agent adjusts traffic in real time, personalises by visitor group, and runs continuously — so performance improves over time rather than simply producing a report at the end.
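
For intuition, here's a minimal Python sketch (not MWI's actual implementation) of the difference: a fixed split serves each experience at a constant rate for the whole test, while a bandit-style allocator such as Thompson sampling shifts traffic toward whatever is winning as evidence accumulates.

    # Illustrative only: a toy contrast between a fixed A/B split and
    # adaptive allocation via Thompson sampling. Not MWI's real agent.
    import random

    experiences = ["A", "B"]
    stats = {e: {"wins": 0, "losses": 0} for e in experiences}

    def fixed_split_choice():
        # Classic A/B test: the traffic share never changes mid-test.
        return random.choice(experiences)

    def adaptive_choice():
        # Draw a plausible conversion rate from each experience's Beta
        # posterior and serve the highest draw, so traffic drifts
        # toward winners as evidence accumulates.
        draws = {e: random.betavariate(s["wins"] + 1, s["losses"] + 1)
                 for e, s in stats.items()}
        return max(draws, key=draws.get)

    def record(experience, converted):
        stats[experience]["wins" if converted else "losses"] += 1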


Does this replace A/B testing entirely?

No. A/B testing is still the right tool when you want a one-off, unambiguous answer, e.g. "should the hero CTA say 'Shop now' or 'Buy now'?"

Agentic Campaigns are the right tool when the answer is different for different audiences and you want the site to keep responding correctly as those audiences evolve. Many MWI clients run both: A/B tests to validate specific design hypotheses, Agentic Campaigns to operate those decisions at scale across audiences.


How is this different from Custom Campaigns?

With Custom Campaigns, you define who to target, when to show the experience, and what the journey looks like. With Agentic Campaigns, you provide the experiences and the strategic intent — the agent finds the right visitors and the right moments automatically.

Control & Safety

Do I have control once it’s live?

Yes. You retain control of targeting rules and can pause or turn off the campaign at any time. The agent manages optimisation decisions within the boundaries you set.

You can also:

  • Include a control or baseline experience as a fallback and comparison point (this should be added in addition to the standard control)

  • Adjust the Control Share at any time

  • Guide segmentation by choosing the strategy that best fits your audience and goal


How does the agent play it safe?

You can add a control or baseline experience (in addition to the standard control). If the agent isn't confident in any of your variations, it falls back to that experience instead. The agent also avoids over-committing to any one experience while it's still learning, and you can turn the campaign off at any time.
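
As a rough mental model, that safety behaviour looks something like the sketch below. The threshold, cap, and share names are hypothetical, and the agent's actual decision logic is more sophisticated.

    # Hypothetical sketch of confidence-gated allocation with a control
    # floor. Parameter names (confidence_threshold, max_share,
    # control_share) are illustrative, not real product settings.
    def allocate(prob_beats_control, confidence_threshold=0.9,
                 max_share=0.6, control_share=0.1):
        """prob_beats_control maps experience name -> probability it
        outperforms the control; returns traffic shares."""
        confident = {e: p for e, p in prob_beats_control.items()
                     if p >= confidence_threshold}
        if not confident:
            # Not confident in any variation: fall back to the baseline.
            return {"control": 1.0}
        total = sum(confident.values())
        # Split non-control traffic by confidence, but cap any single
        # experience so the agent never over-commits while learning.
        shares = {e: min(max_share, (1 - control_share) * p / total)
                  for e, p in confident.items()}
        shares["control"] = 1.0 - sum(shares.values())
        return shares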

Measuring Performance & Transparency

How do I know it’s working?

Check for the green “Data Received” checkmark on the Campaigns page — then head to the Agentic Report for a full view of how it’s performing, where traffic is going, and why.


Is this a black box?

Not fully. The agent combines statistical methods, contextual modelling, and reasoning AI to make decisions — and provides clear reporting signals and summaries to explain what’s happening and why.

The Agent Logs in the Agentic Report give detailed insight into every decision made. Agent Summaries translate that into plain language for stakeholder-friendly communication.


How many segments does the agent actually create?

Up to 500 segment combinations per campaign (a cardinality cap we enforce so each segment retains enough sample depth to be statistically useful). On a typical retail site, that resolves to roughly 60–120 active segments during the Optimisation phase, spanning device × channel × intent × session-depth contexts.
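
To make the arithmetic concrete, here's a small Python sketch using hypothetical dimension values; the real dimensions and the sample-depth floor come from your site's traffic and our internal thresholds.

    # Illustrative cardinality check: the product of the context
    # dimensions must stay within the 500-combination cap, and thin
    # segments are pruned. MIN_VISITORS is a hypothetical floor.
    from itertools import product

    dimensions = {
        "device": ["mobile", "desktop", "tablet"],
        "channel": ["paid", "organic", "email", "direct", "referral"],
        "intent": ["browse", "compare", "buy"],
        "session_depth": ["first", "returning", "engaged"],
    }

    all_combos = list(product(*dimensions.values()))
    print(len(all_combos))  # 3 * 5 * 3 * 3 = 135, under the 500 cap

    MAX_SEGMENTS = 500
    MIN_VISITORS = 200  # hypothetical sample-depth floor per segment

    def usable_segments(visitor_counts):
        # Keep combinations with enough traffic to be statistically
        # useful, then enforce the cap on whatever remains.
        deep = [c for c in all_combos
                if visitor_counts.get(c, 0) >= MIN_VISITORS]
        return deep[:MAX_SEGMENTS]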

Seasonality & Peak Trading

How does it handle peak trading?

The default training window is a rolling 90 days, which helps the agent learn across a range of trading conditions. Higher traffic volumes during peak periods can actually help the agent adapt and learn faster. It continuously updates based on the most recent data.
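
Mechanically, a rolling window just means older events age out of the training set as new ones arrive. A minimal sketch, assuming events arrive as (timestamp, payload) pairs:

    # Minimal rolling-window filter; the agent's real pipeline is
    # internal, this only shows the 90-day windowing idea.
    from datetime import datetime, timedelta, timezone

    TRAINING_WINDOW = timedelta(days=90)

    def training_events(events, now=None):
        """Keep only events inside the rolling window, so the model
        always learns from the most recent ~90 days of trading."""
        now = now or datetime.now(timezone.utc)
        cutoff = now - TRAINING_WINDOW
        return [(ts, payload) for ts, payload in events if ts >= cutoff]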

Operational

How much bandwidth does this actually take?

  • Setup: 30–60 minutes per campaign (strategy choice, experience build, rules)

  • Ongoing: ~5 minutes a week to check the Performance Snapshot, Probability to Beat Control (sketched below), and Biggest Movers

  • Monthly: 15–30 minutes to review the AI Decision Log and consider whether to add/remove experiences

Most MWI clients run 3–8 Agentic Campaigns concurrently with the bandwidth of a single CRO or marketing operations person.
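
If you're wondering what the Probability to Beat Control number represents: one standard way to compute such a figure is Monte Carlo sampling from Beta posteriors over conversion rates, sketched below. This is a generic illustration, not necessarily MWI's exact method.

    # Generic "probability to beat control" estimate via Monte Carlo
    # over Beta posteriors; illustrative, not MWI's exact calculation.
    import random

    def prob_to_beat_control(var_conv, var_n, ctl_conv, ctl_n,
                             samples=100_000):
        wins = 0
        for _ in range(samples):
            v = random.betavariate(var_conv + 1, var_n - var_conv + 1)
            c = random.betavariate(ctl_conv + 1, ctl_n - ctl_conv + 1)
            wins += v > c
        return wins / samples

    # Example: variation converted 130/2000 visitors vs control 100/2000.
    print(round(prob_to_beat_control(130, 2000, 100, 2000), 3))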


What's the most effective way to run Agentic Campaigns?

For strategies that don't optimise timing, always include at least two experiences, as the agent needs them to compare results.

It also works best with structurally different experiences. For example, if you only test different placements of the same experience, there may not be enough contrast for the agent to find a winner.

Troubleshooting

What if it’s not working as expected?

First, check:

  • Campaign status is Live

  • Your experiences are live and correct on your site

  • Global rules are targeting the right audience

If something still looks off, check the Agent Logs in the Agentic Report for errors or unexpected behaviour. Contact support if you see repeated issues.


Allocations have shifted suddenly — is something wrong?

Usually no. Large day-on-day allocation shifts are expected during the Exploration phase and after any of these events:

  • An experience was added or edited

  • A major traffic-composition change (a paid campaign started, a referral source spiked)

  • A seasonal demand shift (e.g. gifting season kicked in)

Check the Agent Activity tab for that day's Decision Log entry — it'll name the factor the agent responded to. If the log points to a data-quality issue (e.g. "goal events delayed"), that's when to contact support.
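
If you want a quick sanity check of your own, comparing two days of allocation shares is straightforward. A sketch, with a hypothetical alert threshold:

    # Flag large day-on-day allocation moves; the 15-point threshold
    # is a hypothetical example, not a product setting.
    def biggest_move(yesterday, today, threshold=0.15):
        """Both args map experience name -> traffic share (0..1)."""
        moves = {e: today.get(e, 0.0) - yesterday.get(e, 0.0)
                 for e in set(yesterday) | set(today)}
        name = max(moves, key=lambda e: abs(moves[e]))
        return name, moves[name], abs(moves[name]) >= threshold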
