Wisepops A/B testing helps you optimize campaigns for higher click-through rates (CTRs) and gain deeper insights into visitor behavior.
Test popups, bars, or embeds to:
Boost Performance: Find the best designs, copy, or offers to improve clicks and conversions.
Measure Incremental Gains: Small changes, like a 5% lift in sign-ups, can lead to big results over time.
Test Easily: Use our WYSIWYG editor to tweak campaigns without coding.
Go Beyond Clicks: Measure the Full Impact of Your Campaigns (Wisepops Intelligence plan only)
For a deeper understanding of how your experiments influence visitor behavior, Wisepops Experiments provides advanced tools to:
Track the Full Customer Journey: See how changes impact engagement, retention, and conversions.
Quantify Incremental Impact: Measure how even minor tweaks influence behavior, like session duration, repeat visits, or Shopify average order value.
Leverage Control Groups: Compare variants against a baseline to isolate the true impact of your changes.
Access Detailed Analytics: Get daily updates with statistical significance, confidence intervals, and performance benchmarks.
These insights help you prioritize changes that deliver measurable value, ensuring your campaigns drive not just clicks but long-term results.
What You Can Test
A/B/n variants
Campaign Types: Popups, Bars, and Embeds.
Testable Elements: Design, copy, coupon amounts, targeting rules, images, or CTAs.
Coming in Q2 2025: Feed Campaign experiments.
💡 Note: Features like Control Groups and Detailed Results require a Wisepops Intelligence plan.
How to Create an A/B Test
Step 1: Define Variants & Traffic Allocation
Hover over your campaign in the dashboard and click the A/B Test icon.
Choose to either:
Duplicate the campaign (test a new version).
Select an existing campaign (test against a live variant).
Set Traffic Allocation: Assign the percentage of visitors who will see each variant (e.g., 50% Variant A, 50% Variant B).
Optional: Add a control group (Intelligence plan only). Exclude a share of visitors from seeing any variant to measure baseline performance.
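Wisepops handles the split for you; conceptually, traffic allocation works like a weighted random bucket per visitor, with the control group as one of the buckets. Here is a minimal Python sketch of that idea, purely for illustration: the bucket names and weights are made up, and this is not Wisepops' actual implementation.

```python
import hashlib

# Hypothetical illustration of weighted traffic allocation with a control group.
# Example weights: 45% Variant A, 45% Variant B, 10% control (sees no campaign).
ALLOCATION = [("variant_a", 45), ("variant_b", 45), ("control", 10)]

def assign_bucket(visitor_id: str) -> str:
    """Deterministically map a visitor to a bucket from a stable hash of their ID."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    point = int(digest, 16) % 100  # 0-99, always the same for a given visitor
    cumulative = 0
    for bucket, weight in ALLOCATION:
        cumulative += weight
        if point < cumulative:
            return bucket
    return ALLOCATION[-1][0]  # unreachable if weights sum to 100

print(assign_bucket("visitor-123"))  # the same visitor always lands in the same bucket
```

Hashing the visitor ID (rather than rolling a fresh random number on every page view) keeps each visitor in the same bucket across sessions, which is what makes the comparison between variants fair.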
Step 2: Assign Your Experiment Goal
Select a primary goal metric (e.g., clicks, conversions) to determine when your experiment concludes.
The test ends when one variant achieves statistical significance (95% confidence) over others or the control group.
Tip: If you're not sure which goal to choose, CTR is a good default.
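To make "95% confidence" concrete: with a CTR goal, the experiment is essentially asking whether the difference between two click-through rates is too large to be explained by chance. The sketch below uses a standard two-proportion z-test to illustrate that check; it is a simplified stand-in with invented numbers, not Wisepops' exact statistical method.

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Illustrative two-proportion z-test on CTRs; returns z and a two-sided p-value."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical data: Variant B's CTR (6.0%) vs. Variant A's (5.0%) after 10,000 views each.
z, p = two_proportion_z_test(500, 10_000, 600, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 roughly corresponds to 95% confidence
```

In this made-up example the 1-point CTR difference is significant because each variant has 10,000 views; with only a few hundred views per variant, the same difference would not be, which is why experiments need time to conclude.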
Step 3: Launch the Experiment
Before launching the experiment, customize your variants: Click on a variant to edit elements like design, images, coupons, or targeting rules.
Change the status from Draft to Published in your dashboard.
The experiment starts tracking once the first visitor interacts.
🔄 Pro Tip: Duplicate existing experiments to reuse setups across websites.
Monitor & Conclude Your Test
Experiment Dashboard
Accessible 24 hours after launch, this dashboard provides:
Real-time overviews of active and completed experiments.
Key insights: Identify winning variants and conclude tests at a glance.
Declare a Winner
Check the statistical significance of results in the dashboard.
You will also receive an automated email when an experiment has reached its conclusion.
Once a winner is confirmed:
Click “Duplicate as new campaign” on the winning variant.
Activate the new campaign to replace the test (it will track stats separately).
The original experiment will stop automatically.
Deep Dive into Performance
💡 Note: Detailed Results require a Wisepops Intelligence plan.
Unlock Detailed Results to analyze:
Campaign Impact: Click-through rates, conversion uplift, bounce rates.
Customer Journey Metrics:
Revenue per visitor, Conversion rate, Average order value (Shopify only).
Session duration, return visits, and more.
How It Works:
Metrics compare each variant to the baseline (control group or lowest-performing variant).
Bayesian statistics (confidence intervals, statistical significance) are available for each metric.
Data updates daily (vs. 15-minute updates in campaign dashboards).
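As an illustration of what a Bayesian comparison involves, the sketch below models a conversion-style metric with Beta posteriors and estimates the probability that a variant beats the baseline, along with a 95% credible interval for the uplift. This is a generic Beta-Binomial example with made-up numbers, not Wisepops' documented model.

```python
import random

def beats_baseline(conv_v, n_v, conv_b, n_b, samples=50_000, seed=42):
    """Illustrative Beta-Binomial comparison: P(variant > baseline) and a 95% uplift interval."""
    rng = random.Random(seed)
    wins, uplifts = 0, []
    for _ in range(samples):
        # Beta(1 + successes, 1 + failures) posterior for each conversion rate
        rate_v = rng.betavariate(1 + conv_v, 1 + n_v - conv_v)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_v > rate_b
        uplifts.append(rate_v - rate_b)
    uplifts.sort()
    lo, hi = uplifts[int(0.025 * samples)], uplifts[int(0.975 * samples)]
    return wins / samples, (lo, hi)

# Hypothetical data: variant with 230 conversions / 4,000 visitors vs. baseline with 190 / 4,000.
prob, (lo, hi) = beats_baseline(230, 4_000, 190, 4_000)
print(f"P(variant beats baseline) = {prob:.1%}, 95% credible uplift: {lo:.3%} to {hi:.3%}")
```

Running thousands of these posterior comparisons across every tracked metric is also why the Detailed Results page refreshes daily rather than in real time.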
Best Practices for Effective Testing
✅ Test One Element at a Time: Isolate changes (e.g., headline or image) to pinpoint what drives results.
✅ Limit Variants: Start with 2-3 variants for faster, clearer conclusions.
✅ Be Patient: Wait for statistical significance (the dashboard alerts you when it's reached).
✅ Traffic Allocation: Ensure even splits for reliable data (e.g., 50/50 for 2 variants).
FAQs
Can I A/B test campaigns with different triggering or targeting settings?
Yes, you can A/B test campaigns with different triggering or targeting settings. However, make sure you compare metrics that provide meaningful insights. For example, if you test a landing-page signup popup that appears immediately against the same popup triggered after an 8-second delay, the delayed version may naturally have a higher click-through rate (CTR) because visitors have had more time to engage with the page. Instead of focusing solely on CTR, check whether the delay also improves other key metrics, such as bounce rate or the total number of leads generated. This will tell you whether the change in timing genuinely improves user experience and overall campaign effectiveness.
Why daily updates for Detailed Results?
The Detailed Results page calculates thousands of Bayesian statistics and therefore requires more compute time; minor discrepancies with real-time dashboards are normal.
Can I edit variants after launching?
We recommend against editing variants once you have started collecting data. Publish changes only after concluding the test to avoid skewing your results.