Use A/B tests on a sequence step

Learn how to A/B test a step in your campaign to optimize performance and achieve better results.

Learning Objective

By the end of this guide, you'll know how to create A/B tests for sequence steps, follow best practices for meaningful testing, analyze results to determine winners, and apply winning variations to future leads, all to maximize your campaign performance through data-driven optimization.

Why This Matters

Guessing what resonates with your audience is risky. A/B testing removes the guesswork by letting you compare two versions of the same email side-by-side with real data. By testing elements like subject lines, email copy, calls-to-action, or images, you discover exactly what drives replies and engagement.

The benefits:

  • Increase reply rates by 20-40% by identifying what messaging works best

  • Understand your audience deeply through their behavior, not assumptions

  • Build a library of proven messaging that you can reuse across campaigns

  • Continuously improve every step of your sequence based on real performance data

A/B testing transforms your outreach from guesswork to science, ensuring every campaign gets better than the last.

Prerequisites

Before setting up an A/B test:

  • Your sequence is complete – Have at least one email step ready to test

  • You have enough leads – Aim for at least 100 leads to get statistically meaningful results (50 per variation)

  • You know what to test – Decide which specific element you want to test (subject line, intro, CTA, etc.)

  • Your campaign isn't launched yet – A/B tests should be set up before launching (though you can add them to active campaigns with new leads)

What Is A/B Testing?

A/B testing (also called split testing) means creating two versions of the same email:

  • Version A – Your control (original version)

  • Version B – Your variation (modified version)

lemlist randomly sends Version A to 50% of your leads and Version B to the other 50%. You then compare performance metrics (open rates, reply rates, click rates) to determine which version performs better.

Example:

  • Version A subject line: "Quick question about [Company]"

  • Version B subject line: "Idea for improving your [Department]"

After 100 leads receive each version, you analyze which subject line generated more replies. The winner becomes your go-to approach.
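
lemlist handles the split for you, but if it helps to see the concept, here is a minimal Python sketch of a random 50/50 assignment. The lead names and seed are made up for illustration; this is not lemlist's internal implementation.

```python
import random

def split_leads(leads, seed=None):
    """Shuffle the leads and split them into two equal-sized groups (A and B)."""
    rng = random.Random(seed)
    shuffled = list(leads)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

leads = [f"lead_{i}" for i in range(200)]           # 200 hypothetical leads
version_a, version_b = split_leads(leads, seed=42)
print(len(version_a), len(version_b))               # 100 100
```

Because assignment is random, each version sees a comparable mix of leads, so any difference in results can reasonably be attributed to the email itself.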

Key Rules for Effective A/B Testing

Test one element at a time

Only change ONE thing between Version A and Version B. If you change the subject line AND the email body AND the CTA, you won't know which change caused the performance difference.

Good test:

  • Version A: Subject line "Quick question"

  • Version B: Subject line "Thought for your team"

  • Everything else identical

Bad test:

  • Version A: Subject line "Quick question" + short email + calendar link CTA

  • Version B: Subject line "Thought for your team" + long email + reply-based CTA

  • Too many variables—you won't know what caused the difference

Test each step of your sequence

Don't just A/B test Step 1. Test every step in your sequence to optimize the entire flow. Each step serves a different purpose and may respond differently to various approaches.

Example sequence testing:

  • Step 1: Test subject lines

  • Step 2: Test follow-up angles (referencing previous email vs. new value)

  • Step 3: Test CTAs (direct ask vs. soft question)

Ensure sufficient sample size

Small sample sizes produce unreliable results. Aim for at least 50-100 leads per variation (100-200 total) before drawing conclusions.

If Version A gets 2 replies from 10 leads (20% reply rate) and Version B gets 1 reply from 10 leads (10% reply rate), the difference could be random chance. With 100 leads each, patterns become clearer.
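
To see why sample size matters, here is a rough Python sketch that puts a 95% confidence interval (Wilson score) around each reply rate. This is generic statistics, not a lemlist feature, and the numbers simply mirror the example above.

```python
import math

def wilson_interval(replies, sends, z=1.96):
    """Approximate 95% confidence interval for a reply rate."""
    p = replies / sends
    denom = 1 + z**2 / sends
    center = (p + z**2 / (2 * sends)) / denom
    half = z * math.sqrt(p * (1 - p) / sends + z**2 / (4 * sends**2)) / denom
    return center - half, center + half

# 10 leads per version: the intervals overlap almost completely
print(wilson_interval(2, 10))    # ~ (0.06, 0.51)
print(wilson_interval(1, 10))    # ~ (0.02, 0.40)

# 100 leads per version at the same rates: much tighter, far less overlap
print(wilson_interval(20, 100))  # ~ (0.13, 0.29)
print(wilson_interval(10, 100))  # ~ (0.06, 0.17)
```

With 10 leads, a 20% vs. 10% reply rate tells you almost nothing; with 100 leads per version, the same rates start to look like a genuine difference.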

Let the test run its full course

Don't call a winner after the first day. Wait until most leads have progressed through the step and had a chance to respond. For email steps, wait at least 5-7 days after the last send before analyzing results.

Core Lesson: Step-by-Step Workflow

Phase 1: Set Up the A/B Test

Step 1: Open your campaign sequence

Go to Campaigns and select your campaign.

Screenshot

Then click Sequence at the top of the campaign.

Screenshot

Step 2: Choose the step to test

Decide which step you want to A/B test. This is usually your first email (Step 1), but you can test any step in the sequence.

Step 3: Open the step editing menu and create the A/B test

Click the email step you want to test, then click the three dots (⋮) menu on the step and select A/B test this step.

Screenshot

lemlist automatically duplicates your email, creating two versions side by side: Version A and Version B.

Phase 2: Configure Your Test Variations

Step 4: Choose your starting point for Version B

When lemlist creates Version B, you have two options:

Option 1: Start with a copy of Version A

Version B begins as an identical copy of Version A. You then modify only the element you want to test.

💡 Best for: Testing small changes like subject lines, one paragraph, or CTA buttons.

Option 2: Start with a blank template

Version B starts empty, and you build it from scratch.

💡 Best for: Testing completely different approaches (short vs. long email, text-only vs. image-heavy, etc.).

When prompted, choose whether to prefill Version B with the current message or not.

Screenshot

Step 5: Edit Version A (if needed) and Version B

Review Version A (your control) and make any final adjustments, then switch to Version B and modify only the one element you're testing.

Screenshot

Common elements to test:

Subject lines:

  • Version A: "Quick question about [Company]"

  • Version B: "Noticed something about [Company]"

Email length:

  • Version A: 3 short paragraphs (~75 words)

  • Version B: 1 short paragraph (~40 words)

Call-to-action:

  • Version A: "Would you be open to a quick call?"

  • Version B: "Mind if I send over some ideas?"

Personalization level:

  • Version A: Generic value proposition

  • Version B: Specific reference to their LinkedIn post or company news

Email tone:

  • Version A: Formal and professional

  • Version B: Casual and conversational

Step 6: Save your test

Once both versions are configured, save the step. lemlist will now randomly split your leads 50/50 between Version A and Version B when this step sends.

Phase 3: Launch and Monitor

Step 7: Launch your campaign

Go to the Review or Launch section and launch your leads as usual.

lemlist automatically splits leads between the two versions. You don't need to do anything manually—the system handles distribution.

Step 8: Monitor early performance (optional)

While the test is running, you can check preliminary results in the Analytics section. Look for metrics like:

  • Open rate (for subject line tests)

  • Reply rate (for content/CTA tests)

  • Click rate (for link placement tests)

💡 Don't jump to conclusions early: Wait for statistically significant data before deciding on a winner.

Phase 4: Analyze Results and Choose a Winner

Step 9: Review A/B test performance

Once most leads have received the step (wait at least 5-7 days after the last send), open your campaign reporting view and review the A/B test breakdown for each version.

Screenshot

Step 10: Compare key metrics

Focus on the metric that matters most for your test:

Testing subject lines? → Compare open rates

Testing email copy or CTA? → Compare reply rates

Testing link placement? → Compare click rates

Example results:

  • Version A: 100 sends, 45 opens (45%), 8 replies (8% reply rate)

  • Version B: 100 sends, 52 opens (52%), 12 replies (12% reply rate)

Winner: Version B (higher open and reply rates)

Step 11: Determine if the difference is significant

A small difference (e.g., 8% vs. 9% reply rate) might be random variation. A large difference (e.g., 8% vs. 15% reply rate) is likely meaningful.

Rule of thumb:

  • Difference < 20% (relative): Could be random chance; test longer or with more leads

  • Difference > 30% (relative): Likely a real performance difference; safe to choose a winner
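
If you want a more rigorous check than the rule of thumb, a standard two-proportion z-test works on raw reply counts. The sketch below is plain Python with hypothetical numbers; it is not a built-in lemlist feature.

```python
import math

def two_proportion_z_test(replies_a, sends_a, replies_b, sends_b):
    """Two-sided z-test comparing two reply rates; returns (z, p_value)."""
    p_a, p_b = replies_a / sends_a, replies_b / sends_b
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 8% vs. 16% reply rate with 100 leads per version: promising, not yet conclusive
print(two_proportion_z_test(8, 100, 16, 100))    # z ≈ 1.74, p ≈ 0.08

# Same rates with 200 leads per version: very likely a real difference
print(two_proportion_z_test(16, 200, 32, 200))   # z ≈ 2.46, p ≈ 0.01
```

A p-value under 0.05 is the conventional bar for "probably not random chance." As the example shows, the same reply rates become far more convincing as the sample grows, which is why letting the test run its full course matters.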

Phase 5: Apply the Winning Version

Step 12: Select the winning version

Once you've identified the winner, go back to your sequence and open the A/B test step. Click the three dots (⋮) next to the version you want to keep, then select Choose step variation A (or the equivalent option for Version B). This applies the selected variation to all future leads.

Screenshot

⚠️ Warning: Once you select a winner, this decision is permanent and cannot be changed. Make sure you're confident in your choice before confirming.

Step 13: Apply to newly imported leads

When you select a winner, lemlist automatically uses that version for all leads imported to the campaign after this point.

Leads who already received Version A or Version B are not affected. They keep their original version in the campaign history.

What Happens When You Select a Winner

After selecting a winner:

Future leads receive only the winning version – No more 50/50 split

Past leads remain unchanged – Their history and analytics stay intact

The losing version is archived – You can still view it but it won't send

The decision is permanent – You cannot switch back or re-enable the losing version

💡 Best practice: Document why the winning version performed better (e.g., "Version B's specific pain point resonated more than Version A's generic value prop"). This insight helps future campaigns.

Practical Application / Real-Life Examples

Example 1: SaaS Company Tests Subject Lines

A SaaS company targeting marketing directors ran an A/B test on Step 1 subject lines.

Version A: "Quick question about [Company]" Version B: "Your [Department]'s biggest challenge?"

Results (200 leads, 100 per version):

  • Version A: 38% open rate, 7% reply rate

  • Version B: 51% open rate, 11% reply rate

Winner: Version B

Why it worked: Version B created curiosity by mentioning a "challenge," making recipients more likely to open and engage.

Application: They applied Version B to all future leads and tested variations of the "challenge" angle in Step 2.

Example 2: Agency Tests Email Length

A lead generation agency tested whether short or long emails performed better for cold outreach.

  • Version A: Long email (150 words, 4 paragraphs, detailed value prop)

  • Version B: Short email (50 words, 2 paragraphs, direct question)

Results (300 leads, 150 per version):

  • Version A: 42% open rate, 5% reply rate

  • Version B: 44% open rate, 13% reply rate

Winner: Version B

Why it worked: Short emails felt more personal and less "salesy," increasing reply likelihood.

Application: They applied the short format to all future campaigns and tested different short CTAs in follow-up tests.

Example 3: B2B Sales Team Tests Personalization Depth

A B2B sales team tested generic vs. highly personalized intros in their first email.

  • Version A: Generic value prop ("We help companies like yours improve sales efficiency...")

  • Version B: Specific reference ("Saw your recent post about scaling your SDR team—here's an idea...")

Results (150 leads, 75 per version):

  • Version A: 35% open rate, 6% reply rate

  • Version B: 48% open rate, 18% reply rate

Winner: Version B

Why it worked: Specific personalization proved the sender did research, building trust and increasing reply likelihood.

Application: They trained their team to find specific LinkedIn posts or company news for every lead and made Version B their template.

Troubleshooting

Issue: Both versions have nearly identical performance

Root cause: The element you tested may not significantly impact performance, or your sample size is too small

Fix:

  • Increase lead volume and let the test run longer

  • Consider testing a more impactful element (e.g., a subject line usually has a bigger impact than font color)

  • If results stay identical after 200+ leads, accept that this element doesn't matter much and move on

Issue: I accidentally selected the wrong winner

Root cause: Once a winner is selected, the decision is permanent

Fix:

  • You cannot undo the selection

  • If critical, duplicate the campaign, set up the A/B test again with the correct winner, and migrate leads

  • For future tests, double-check results before selecting a winner

Issue: My A/B test results don't show in analytics

Root cause: Analytics may not have loaded yet, or the test hasn't collected enough data

Fix:

  • Refresh the analytics page

  • Wait until at least 50-100 leads per version have received the step

  • Check that your campaign has actually sent emails (not paused or stuck in review)

Issue: I want to test more than two variations (A/B/C testing)

Root cause: lemlist's A/B testing currently supports two variations only

Fix:

  • Run multiple A/B tests sequentially: Test A vs. B first, then test the winner vs. C

  • Or create separate campaigns for each variation and compare results manually

  • This takes longer but allows testing multiple approaches

Issue: One version is getting way more sends than the other

Root cause: Distribution should be 50/50, but technical issues or campaign pauses can skew it

Fix:

  • Check that your campaign isn't paused mid-test

  • Verify that leads are being imported evenly over time (not in one giant batch followed by a long pause)

  • Make sure you create the A/B test before launching leads. If the test is created after launch, all leads end up in Version A.

Optimization Tips

Start with high-impact elements: Test subject lines first (biggest impact on open rates), then test email copy (biggest impact on reply rates), then test smaller elements like CTAs.

Test throughout your sequence: Don't just optimize Step 1. Test each step independently to maximize the entire sequence's performance.

Document your learnings: Keep a spreadsheet of all A/B tests run, what you tested, and what won. Over time, you'll build a playbook of what works for your audience.

Use insights across campaigns: If Version B wins in one campaign, apply that insight to similar campaigns immediately. Don't re-test the same thing repeatedly.

Test bold differences: Small tweaks (e.g., "Hi" vs. "Hey") rarely produce meaningful results. Test significantly different approaches (formal vs. casual, long vs. short) for clearer winners.

Wait for statistical significance: Don't declare a winner after 20 leads. Wait for at least 100 leads per version to ensure results are reliable.

Prioritize reply rate over open rate: Opens are good, but replies drive business. If testing email content, focus on reply rate as your success metric.

Test one step at a time: Don't A/B test multiple steps simultaneously in the same campaign—it complicates analysis. Finish one test, apply the winner, then move to the next step.

Combine winners for compound impact: If Step 1 Version B gets 30% better results and Step 2 Version B gets 25% better results, applying both winners compounds your overall campaign performance significantly.
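
As a back-of-the-envelope illustration (the lifts below are hypothetical, and multiplying them is an approximation, since real sequence dynamics are messier):

```python
step1_lift = 1.30   # hypothetical: Step 1 winner performs 30% better
step2_lift = 1.25   # hypothetical: Step 2 winner performs 25% better
combined = step1_lift * step2_lift
print(f"Combined lift: {combined:.3f}x ({(combined - 1) * 100:.1f}% better overall)")
# Combined lift: 1.625x (62.5% better overall)
```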
