Tools Required
This skill runs using CORE memory only. No integrations required. Trigger: Run on demand when the user wants to plan, design, or validate an A/B test or experiment.
Setup
Search memory for:
- “What tests has the user run before?”
- “What testing tools do they use (Optimizely, PostHog, LaunchDarkly, etc.)?”
- “What’s the user’s current traffic volume and baseline conversion rates?”
If these aren’t in memory, ask:
“To design a solid test, I need: (1) What’s your current conversion rate on this page/flow? (2) How much monthly traffic do you get? (3) What testing tools do you have access to? (4) Have you run tests before, and what did you learn?”
Store responses in memory. Do not ask again in future runs.
Step 1: Clarify Test Context
Understand what the user wants to test and why. Ask or search memory for:
- What are you trying to improve? (conversion, engagement, retention, revenue?)
- What specific change are you considering? (copy, design, flow, feature?)
- Why do you believe this will work? (data, feedback, hypothesis?)
- What’s the current baseline? (conversion rate, traffic, key metric)
“Are you testing a small tactical change (button copy, color) or a bigger strategic change (flow redesign, new feature)?”
Step 2: Build a Strong Hypothesis
Ensure the test has a clear, testable hypothesis using this framework:
Structure: “Because [evidence/insight], we believe [specific change] will [expected impact] by [amount] for [audience]. We’ll measure [primary metric].”
Step 3: Select Primary, Secondary, and Guardrail Metrics
Define what success looks like:
- Primary metric: Single metric tied directly to the hypothesis. What you’ll use to call the winner.
  - Example: CTA click-through rate
- Secondary metrics: Support interpretation of the primary metric.
  - Example: Time on page, scroll depth, form completion rate
- Guardrail metrics: Things that should NOT get worse.
  - Example: Bounce rate, support tickets, refund rate
Step 4: Calculate Sample Size
Determine how much traffic/time is needed for statistical validity. Ask the user:
- Baseline conversion rate: Current performance (e.g., 5%)
- Minimum detectable effect (MDE): Smallest improvement worth shipping (e.g., 15% relative lift = 5% → 5.75%)
Approximate sample size needed per variant:
| Baseline | 10% Lift | 20% Lift | 50% Lift |
|---|---|---|---|
| 1% | 150k/variant | 39k/variant | 6k/variant |
| 3% | 47k/variant | 12k/variant | 2k/variant |
| 5% | 27k/variant | 7k/variant | 1.2k/variant |
| 10% | 12k/variant | 3k/variant | 550/variant |
Days to run test = sample size per variant ÷ (daily traffic ÷ 2)
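The quick-reference numbers above are approximations; exact figures depend on the significance level and power you choose. As a sketch (assuming the conventional 95% confidence, 80% power, two-sided test), the standard two-proportion formula can be computed with Python's standard library:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-proportion test
    (two-sided, normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

def days_to_run(per_variant, daily_traffic, n_variants=2):
    """Days needed when traffic is split evenly across variants."""
    return ceil(per_variant / (daily_traffic / n_variants))

# 5% baseline, 20% relative lift (5% -> 6%), 2,000 visitors/day
n = sample_size_per_variant(0.05, 0.20)
print(n, days_to_run(n, daily_traffic=2000))
```

Values land slightly above the table's 7k/variant entry for this case, which is expected given different rounding and power assumptions across calculators.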
Step 5: Design Variants
Define what’s being tested. Follow these rules:
Test ONE variable only:
- Single meaningful change
- Bold enough to matter (not trivial tweaks)
- True to the hypothesis
| Category | Examples |
|---|---|
| Copy | Message angle, urgency tone, value proposition clarity |
| Visual Design | Layout, color, imagery, hierarchy, whitespace |
| CTA | Button copy, size, placement, contrast, number |
| Structure | Information order, number of steps, form fields |
For each test, document:
- Control (current version)
- Variant (proposed change)
- Why this variant specifically?
Step 6: Choose Traffic Allocation Strategy
Decide how to split traffic between control and variant(s):
| Approach | Split | When to Use |
|---|---|---|
| Standard | 50/50 | Default for A/B tests |
| Conservative | 80/20 or 90/10 | Limit exposure to risky variant |
| Ramping | Start 90/10, increase to 50/50 | Mitigate technical risk |
Step 7: Plan Implementation & QA
Define how the test will be built and validated.
Client-side: JavaScript changes the page after load
- Quick to implement, may cause flicker
- Tools: PostHog, Optimizely, VWO, Kameleoon
Server-side: Variant determined before render
- No flicker, requires engineering
- Tools: LaunchDarkly, Split, PostHog
Pre-launch checklist:
- Hypothesis documented
- Metrics clearly defined
- Sample size calculated
- Variants implemented correctly
- Tracking events firing correctly
- QA completed on all variants in all browsers
- Test admin has clear instructions
Step 8: Create Test Launch & Monitoring Plan
Document the operational side:
Launch details:
- Go-live date and time
- Owner (who monitors)
- Duration (from calculated sample size)
- Traffic split confirmation
During the test:
- Monitor for technical issues daily
- Document external factors (marketing campaigns, outages, holidays)
- Resist the urge to peek at results
- Do NOT make changes to variants mid-test
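The "resist the urge to peek" rule above can be demonstrated with a quick simulation. This is an illustrative sketch, not part of any testing tool: it runs A/A tests (both arms share the same true rate, so any "significant" result is a false positive) and counts how often repeated interim checks trigger a spurious stop:

```python
import random

def false_positive_rate(n_per_arm, looks, trials=500, p=0.10, seed=42):
    """A/A simulation: stop the first time |z| exceeds 1.96 (nominal 5%
    level) at any of `looks` evenly spaced interim checkpoints."""
    rng = random.Random(seed)
    checkpoints = {n_per_arm * (i + 1) // looks for i in range(looks)}
    hits = 0
    for _ in range(trials):
        a = b = 0
        for n in range(1, n_per_arm + 1):
            a += rng.random() < p  # conversions in arm A
            b += rng.random() < p  # conversions in arm B
            if n in checkpoints:
                pooled = (a + b) / (2 * n)
                if 0 < pooled < 1:
                    se = (2 * pooled * (1 - pooled) / n) ** 0.5
                    if abs(a - b) / n / se > 1.96:
                        hits += 1  # false positive: no true difference exists
                        break
    return hits / trials

print(false_positive_rate(800, looks=1))   # close to the nominal 0.05
print(false_positive_rate(800, looks=10))  # noticeably inflated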
Step 9: Plan Result Analysis
Set expectations before results arrive:
Post-test checklist:
- Did you reach target sample size?
- Are results statistically significant (p < 0.05)?
- Is the effect size meaningful compared to MDE?
- Do secondary metrics support the primary?
- Did guardrail metrics stay healthy?
- Are results consistent across segments (mobile/desktop, new/returning)?
Decision rules:
- Clear winner (significant p-value, effect > MDE) → Implement variant
- Clear loser (significant p-value, effect < control) → Keep control, document learnings
- No significant difference → Need more traffic or bolder variant
- Mixed signals → Segment deeper, run follow-up test
Output Format
Edge Cases
- Low traffic: If daily traffic < 1,000, the test will take weeks. Consider a larger MDE (a bolder change is easier to detect) or running multiple tests in parallel on different pages to reduce time-to-insight.
- Seasonal business: If running through a holiday or seasonal shift, document external factors carefully—results may not be representative.
- Multiple changes per variant: If user wants to test multiple things, split into separate tests. Mixed changes = confounded results.
- Early peeking pressure: If stakeholders want to call the test early, explain why pre-committed sample size is non-negotiable (peeking inflates false positive rate).
- No baseline data: If user has no current conversion rate, help them establish baseline before running the test.
- Segment interactions: Results may differ by device, geography, user type—always check. Run follow-ups by segment if needed.
A/B Test Setup
You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.
Initial Assessment
Check for product marketing context first: If .agents/product-marketing-context.md exists (or .claude/product-marketing-context.md in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
Before designing a test, understand:
- Test Context - What are you trying to improve? What change are you considering?
- Current State - Baseline conversion rate? Current traffic volume?
- Constraints - Technical complexity? Timeline? Tools available?
Core Principles
1. Start with a Hypothesis
- Not just “let’s see what happens”
- Specific prediction of outcome
- Based on reasoning or data
2. Test One Thing
- Single variable per test
- Otherwise you don’t know what worked
3. Statistical Rigor
- Pre-determine sample size
- Don’t peek and stop early
- Commit to the methodology
4. Measure What Matters
- Primary metric tied to business value
- Secondary metrics for context
- Guardrail metrics to prevent harm
Hypothesis Framework
Structure
“Because [evidence/insight], we believe [specific change] will [expected impact] by [amount] for [audience]. We’ll measure [primary metric].”
Example
Weak: “Changing the button color might increase clicks.”
Strong: “Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using a contrasting color will increase CTA clicks by 15%+ for new visitors. We’ll measure click-through rate from page view to signup start.”
Test Types
| Type | Description | Traffic Needed |
|---|---|---|
| A/B | Two versions, single change | Moderate |
| A/B/n | Multiple variants | Higher |
| MVT | Multiple changes in combinations | Very high |
| Split URL | Different URLs for variants | Moderate |
Sample Size
Quick Reference
| Baseline | 10% Lift | 20% Lift | 50% Lift |
|---|---|---|---|
| 1% | 150k/variant | 39k/variant | 6k/variant |
| 3% | 47k/variant | 12k/variant | 2k/variant |
| 5% | 27k/variant | 7k/variant | 1.2k/variant |
| 10% | 12k/variant | 3k/variant | 550/variant |
Metrics Selection
Primary Metric
- Single metric that matters most
- Directly tied to hypothesis
- What you’ll use to call the test
Secondary Metrics
- Support primary metric interpretation
- Explain why/how the change worked
Guardrail Metrics
- Things that shouldn’t get worse
- Stop test if significantly negative
Example: Pricing Page Test
- Primary: Plan selection rate
- Secondary: Time on page, plan distribution
- Guardrail: Support tickets, refund rate
Designing Variants
What to Vary
| Category | Examples |
|---|---|
| Headlines/Copy | Message angle, value prop, specificity, tone |
| Visual Design | Layout, color, images, hierarchy |
| CTA | Button copy, size, placement, number |
| Content | Information included, order, amount, social proof |
Best Practices
- Single, meaningful change
- Bold enough to make a difference
- True to the hypothesis
Traffic Allocation
| Approach | Split | When to Use |
|---|---|---|
| Standard | 50/50 | Default for A/B |
| Conservative | 90/10, 80/20 | Limit risk of bad variant |
| Ramping | Start small, increase | Technical risk mitigation |
- Consistency: Users see same variant on return
- Balanced exposure across time of day/week
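Consistent assignment on return visits is usually achieved with deterministic hash-based bucketing, so no stored state is needed. A minimal sketch (not any specific tool's implementation; function and experiment names are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split=(0.5, 0.5)):
    """Hash user + experiment into [0, 1) and map onto the traffic split,
    so a returning user always lands in the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # uniform in [0, 1)
    cumulative = 0.0
    for variant, share in enumerate(split):
        cumulative += share
        if bucket < cumulative:
            return variant
    return len(split) - 1  # guard against float rounding

# Same inputs always map to the same variant:
assert assign_variant("user-123", "cta-test") == assign_variant("user-123", "cta-test")
```

Including the experiment name in the hash key keeps assignments uncorrelated across experiments; the same user can land in control for one test and variant for another.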
Implementation
Client-Side
- JavaScript modifies page after load
- Quick to implement, can cause flicker
- Tools: PostHog, Optimizely, VWO
Server-Side
- Variant determined before render
- No flicker, requires dev work
- Tools: PostHog, LaunchDarkly, Split
Running the Test
Pre-Launch Checklist
- Hypothesis documented
- Primary metric defined
- Sample size calculated
- Variants implemented correctly
- Tracking verified
- QA completed on all variants
During the Test
DO:
- Monitor for technical issues
- Check segment quality
- Document external factors
DON’T:
- Peek at results and stop early
- Make changes to variants
- Add traffic from new sources
The Peeking Problem
Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions. Pre-commit to sample size and trust the process.
Analyzing Results
Statistical Significance
- 95% confidence = p-value < 0.05
- Means <5% chance of seeing an effect this large if there were no true difference
- Not a guarantee—just a threshold
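As a sketch of the underlying arithmetic (normal approximation; real testing tools report this for you), a two-proportion z-test plus a confidence interval for the absolute lift:

```python
from math import sqrt
from statistics import NormalDist

def analyze_ab(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Two-sided z-test for the difference between two conversion rates,
    plus a confidence interval for the absolute lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se_pooled = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pooled
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    # CI for the lift uses the unpooled standard error
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z_crit = NormalDist().inv_cdf(0.5 + confidence / 2)
    lift = p_b - p_a
    return p_value, (lift - z_crit * se, lift + z_crit * se)

# 5.0% control vs 6.0% variant at 10k users each:
p_value, ci = analyze_ab(500, 10_000, 600, 10_000)
print(round(p_value, 4), [round(x, 4) for x in ci])
```

If the confidence interval excludes zero, the result is significant at that level; if it also sits above the MDE, the effect is worth shipping.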
Analysis Checklist
- Reach sample size? If not, result is preliminary
- Statistically significant? Check confidence intervals
- Effect size meaningful? Compare to MDE, project impact
- Secondary metrics consistent? Support the primary?
- Guardrail concerns? Anything get worse?
- Segment differences? Mobile vs. desktop? New vs. returning?
Interpreting Results
| Result | Conclusion |
|---|---|
| Significant winner | Implement variant |
| Significant loser | Keep control, learn why |
| No significant difference | Need more traffic or bolder test |
| Mixed signals | Dig deeper, maybe segment |
Documentation
Document every test with:
- Hypothesis
- Variants (with screenshots)
- Results (sample, metrics, significance)
- Decision and learnings
Common Mistakes
Test Design
- Testing too small a change (undetectable)
- Testing too many things (can’t isolate)
- No clear hypothesis
Execution
- Stopping early
- Changing things mid-test
- Not checking implementation
Analysis
- Ignoring confidence intervals
- Cherry-picking segments
- Over-interpreting inconclusive results
Task-Specific Questions
- What’s your current conversion rate?
- How much traffic does this page get?
- What change are you considering and why?
- What’s the smallest improvement worth detecting?
- What tools do you have for testing?
- Have you tested this area before?
Related Skills
- page-cro: For generating test ideas based on CRO principles
- analytics-tracking: For setting up test measurement
- copywriting: For creating variant copy
