founder learnings! part 8. A/B test math interpretation - I love stuff like this:

Two members of our team (Fletcher Ehlers and Marie-Louise Brunet) ran a test recently that decreased click-through rate (CTR) by over 10%: they added a warning telling users they'd need to log in if they clicked.

However, instead of hurting conversions like you'd think, it actually increased them. As in: fewer users clicked through, but overall, more users ended up finishing the flow.

Why? Selection bias & signal vs. noise. By adding friction, we filtered out low-intent users: those who would have clicked but bounced at the next step. The ones who still clicked knew what they were getting into, making them far more likely to convert. Fewer clicks, but higher quality clicks.

Here's a visual representation of the A/B test results. You can see how the click-through rate (CTR) dropped after adding friction (fewer clicks), but the total number of conversions increased. This highlights the power of understanding selection bias: removing low-intent users improved the quality of clicks, leading to better overall results.
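To make the arithmetic behind this concrete, here is a minimal sketch with made-up funnel numbers (the post only reports the direction of the changes, not the exact rates). A lower CTR still wins on total conversions whenever the relative gain in click-to-conversion rate outweighs the relative loss in clicks.

```python
# Hypothetical numbers to illustrate the "fewer but better clicks" effect;
# the post only reports a >10% CTR drop, so these figures are made up.

visitors = 10_000

# Control: no login warning
control_ctr = 0.20                 # 20% of visitors click through
control_click_to_convert = 0.15    # many low-intent clickers bounce at login

# Variant: login warning adds friction
variant_ctr = 0.17                 # CTR drops by 15% relative
variant_click_to_convert = 0.22    # remaining clickers are higher intent

control_conversions = visitors * control_ctr * control_click_to_convert   # 300
variant_conversions = visitors * variant_ctr * variant_click_to_convert   # 374

print(f"Control: {control_conversions:.0f} conversions")
print(f"Variant: {variant_conversions:.0f} conversions")
# The variant wins on total conversions even though its CTR is lower,
# because the click-to-conversion rate improved more than the CTR fell.
```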
Benefits of A/B Testing
Explore top LinkedIn content from expert professionals.
Summary
A/B testing, also known as split testing, is a methodical way of comparing two different versions of a webpage, email, or other marketing elements to determine which performs better based on user engagement or conversion goals. This data-driven approach helps businesses refine strategies and make confident decisions.
- Filter out low-quality leads: Use A/B testing to identify elements that attract high-intent users and reduce wasted clicks or low-engagement traffic.
- Test small changes: Experiment with variations like copy, design, or call-to-action wording to uncover which subtle tweaks yield significant engagement improvements.
- Align with audience needs: Tailor tests to focus on your audience's preferences and behaviors, ensuring your messaging resonates and drives meaningful actions.
-
We ran 2 A/B tests with our homepage interactive demo. One test was inconclusive, but one showed a 33–50% higher CTR.

The goal was to prove whether demos segmented by persona perform better than a generic overview demo.

For some background, last year we experimented with a persona homepage demo and saw:
• +45% lift in folks who submitted our book a demo form
• 6.3x improvement in number of MQLs
• Roughly 2x lift in demo completion

This year, we wanted to repeat but with our new native A/B testing. Below is a breakdown of the two tests we ran.

Test #1: Segmented demos format (select persona upfront)
→ List of roles (sales, marketing, product)
→ Buttons to choose your role

Test #2: Overview demos format
→ A short 13-step demo
→ A demo with a short opening that goes into a longer checklist

Each test ran for 1.5 weeks. Each demo received between 850–1,130 visitors. Success was measured by in-demo CTR ("Book a Demo" or "Start Free" CTA).

Results:
▪︎ Segmented demos had an average CTR of 25% (format didn't matter)
▪︎ Short overview demo had an average CTR of 18%
▪︎ Long checklist demo had an average CTR of 15%

According to our 2025 State of the Interactive Product Demo, a 25% CTR is almost in the top 10% of Navattic demos (the top 10% had a 28% CTR).

Takeaways:
▪︎ What didn't matter: the layout of the segmentation
▪︎ What did matter: letting users self-select their role before starting a demo

If your product works for multiple personas, try testing a demo segmented by role, use case, or industry. Tomorrow I'll share more about how you can run similar tests with our new native A/B testing.
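As a side note on reading results like these: with roughly 1,000 visitors per demo, it helps to know how large a CTR difference a test of this size can reliably detect. Here is a rough back-of-the-envelope sketch using the standard two-proportion power approximation; the visitor count, baseline CTR, and power target are assumptions for illustration, not Navattic's methodology.

```python
from math import sqrt

# Back-of-the-envelope minimum detectable effect (MDE) for a two-arm CTR test.
# Assumptions (for illustration only): ~1,000 visitors per arm, baseline CTR
# around 18%, 95% confidence (two-sided), and 80% power.
n_per_arm = 1_000
baseline_ctr = 0.18
z_alpha = 1.96   # two-sided 95% confidence
z_beta = 0.84    # 80% power

# Normal-approximation formula for the smallest absolute difference the test
# can reliably detect at this sample size.
mde = (z_alpha + z_beta) * sqrt(2 * baseline_ctr * (1 - baseline_ctr) / n_per_arm)
print(f"Minimum detectable difference: ~{mde:.1%} absolute")
# Roughly 4.8 percentage points: a 7-point gap like 25% vs 18% clears the bar,
# while a 3-point gap like 18% vs 15% is likely too small to call at this
# sample size.
```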
-
👀 Lessons from the Most Surprising A/B Test Wins of 2024 📈

Reflecting on 2024, here are three surprising A/B test case studies that show how experimentation can challenge conventional wisdom and drive conversions:

1️⃣ Social proof gone wrong: an eCommerce story
🔬 The test: An eCommerce retailer added a prominent "1,200+ Customers Love This Product!" banner to their product pages, thinking that highlighting the popularity of items would drive more purchases.
✅ The result: The variant with the social proof banner underperformed by 7.5%!
💡 Why it didn't work: While social proof is often a conversion booster, the wording may have created skepticism, or users may have seen the banner as hype rather than valuable information.
🧠 Takeaway: By removing the banner, the page felt more authentic and less salesy.
⚡ Test idea: Test removing social proof; overuse can backfire by making users question the credibility of your claims.

2️⃣ "Ugly" design outperforms sleek
🔬 The test: An enterprise IT firm tested a sleek, modern landing page against a more "boring," text-heavy alternative.
✅ The result: The boring design won by 9.8% because it was more user friendly.
💡 Why it worked: The plain design aligned better with users' needs and expectations.
🧠 Takeaway: Think function over flair. This test serves as a reminder that a "beautiful" design doesn't always win; it's about matching the design to your audience's needs.
⚡ Test idea: Test functional designs of your pages to see if clarity and focus drive better results.

3️⃣ Microcopy magic: a SaaS example
🔬 The test: A SaaS platform tested two versions of their primary call-to-action (CTA) button on their main product page: "Get Started" vs. "Watch a Demo".
✅ The result: "Watch a Demo" achieved a 74.73% lift in CTR.
💡 Why it worked: The more concrete, instructive CTA clarified the action and the benefit of taking it.
🧠 Takeaway: Align wording with user needs to clarify the process and make taking action feel less intimidating.
⚡ Test idea: Test your copy. Small changes can make a big difference by reducing friction or perceived risk.

🔑 Key takeaways
✅ Challenge assumptions: Just because a design is flashy doesn't mean it will work for your audience. Always test alternatives, even if they seem boring.
✅ Understand your audience: Dig deeper into your users' needs, fears, and motivations. Insights about their behavior can guide more targeted tests.
✅ Optimize incrementally: Sometimes small changes, like tweaking a CTA, can yield significant gains. Focus on areas with the least friction for quick wins.
✅ Choose data over ego: These tests show that the "prettiest" design or "best practice" isn't always the winner. Trust the data to guide your decision-making.

🤗 By embracing these lessons, 2025 could be your most successful #experimentation year yet.

❓ What surprising test wins have you experienced? Share your story and inspire others in the comments below ⬇️

#optimization #abtesting
-
We made a $50,000 mistake and it was right on our website.

For a while, we had transparent pricing listed on our site. I thought it was the right move: save time, build trust, pre-qualify people. It works for everyone else, right?

Yeah... not really.

We ran an A/B test. One version with pricing. One version without. Same team, same process, everything else equal.

And here's what happened:
→ Lead volume stayed the same
→ Close rates stayed the same
→ But the quality of leads got way better

Way more serious prospects. Way fewer "just browsing" conversations.

When you throw a price up front, people anchor on the number. Not the value. Not the problem you're solving. Just the sticker shock. Especially when you sell a premium, customized service, base pricing doesn't tell the real story anyway.

Taking pricing off didn't lose us leads. It just filtered out the noise and let us actually have real conversations about value first.

Moral of the story: don't assume what works for someone else will work for you. A/B test it. If your leads feel stuck on price... maybe it's because you're leading with it.
-
Are you still making marketing decisions based on gut feelings? It's time to A/B test. It should be at the heart of your strategy. Here's why you should make this a priority:

1. Data-Driven
In an era where every click, view, and engagement can be tracked, relying on guesswork is not just outdated, it's inefficient. A/B testing provides concrete data on what resonates with your audience and what doesn't. This empowers you to make decisions based on real-world feedback, not just hypotheses. Whether it's ad copy, email subject lines, or landing page layouts, A/B testing removes the guesswork, allowing you to refine your marketing efforts based on solid data.

2. Audience Alignment
Every audience is unique, and understanding the specific preferences of your target market is crucial. A/B testing allows you to tailor your content, design, and messaging to more closely align with your audience's expectations and interests. By testing different variations, you gain insights into the nuances of what captures their attention, engages them, and prompts them to act.

3. Increased Conversions
Ultimately, the goal of any marketing campaign is to convert potential leads into customers. A/B testing is instrumental in optimizing every element of your campaign for higher conversion rates. By continuously testing and implementing the more successful variations, you incrementally improve the effectiveness of your marketing efforts, leading to more conversions and, ultimately, better ROI.
-
What's Working for You? (How you can test to see if you are right!)

One common method to find out which product offering, or which email outreach style, is doing better is to perform an A/B test. The premise of the test is simple: obtain feedback or observe behaviors of customers who are exposed to either product A or product B, and see if there is a clear difference in preferences.

Let us consider the example of Marketing LLC, who wanted to see which email style was resonating more with their potential clients. After conducting the required background research on their Ideal Client Profile (ICP), they decided to test their email styles using the A/B testing method.

They sent out 300 emails of Style A to one group and 300 emails of Style B to another group. The groups were randomly selected from their ICP list and the content of the emails was very similar; only the subject line and first two sentences differed.

Observation & Proportions:
- 100, or 33%, of Style A emails were opened.
- 120, or 40%, of Style B emails were opened.
- The total (pooled) open rate was 220 out of 600, or about 37%.

Clearly the numbers show that Style B had a higher open rate. However, it is essential to test this statistically before deciding whether to go with Style B or Style A for sending future emails to ICPs. We can use a two-sample test of proportions at a 95% confidence level to check, with statistical significance, that Style B is better.

Actual Test:
* Pooled proportion p* = 220/600 ≈ 0.367
* Std. Error Sp = sqrt(0.367 x 0.633 x (1/300 + 1/300)) ≈ 0.039
* Test Z-value = (0.400 – 0.333)/0.039 ≈ 1.70
* One-sided 95% critical Z-value = 1.645 (for a two-sided test it would be 1.96)

Since the Test Z-value is greater than 1.645, we can conclude with 95% confidence (one-sided) that emails sent using Style B were doing better. Note that the result is close to the cutoff, so a larger follow-up sample is a sensible next step before switching over entirely.

Actionable Insights from A/B Testing:
1. Deep Dive: Analyze the elements of Style B that contributed to the higher open rates. This could include the subject line, tone, or specific keywords.
2. Limit Variables: When conducting A/B tests, focus on one or two variables at a time to isolate the impact of each change.
3. Scale Up: Increase the volume of emails following Style B to further validate the results and reach a larger audience within your ICP.
4. Content Quality: Ensure that the content of the email is compelling and relevant. An opened email is just the first step; the content must result in engagement and conversions.
5. Continuous Testing: Regularly perform A/B tests to keep refining your email strategies. Market dynamics and customer preferences can change over time.
6. Segmentation: Segment your ICP further to tailor email styles to different sub-groups, for personalization and relevance.
7. Feedback Loop: Collect feedback from recipients to understand their preferences and pain points, to improve future email campaigns.

#PostItStatistics #DataScience

Follow Dr. Kruti or Analytics TX, LLC on LinkedIn (Click "Book an Appointment" to register for the workshop!)
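Here is a minimal Python sketch of the pooled two-proportion z-test worked through above, using the same counts (100/300 opens for Style A, 120/300 for Style B). Only the standard library is used.

```python
from math import sqrt, erf

# Two-proportion z-test (pooled), matching the worked example above:
# Style A: 100 opens out of 300; Style B: 120 opens out of 300.
opens_a, n_a = 100, 300
opens_b, n_b = 120, 300

p_a = opens_a / n_a                          # ~0.333
p_b = opens_b / n_b                          # 0.400
p_pool = (opens_a + opens_b) / (n_a + n_b)   # ~0.367

se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# One-sided p-value: chance of a z this large if the two styles were equal
p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))

print(f"z = {z:.2f}, one-sided p-value = {p_value:.3f}")
# z ≈ 1.69, p ≈ 0.045 < 0.05, so Style B's higher open rate is
# statistically significant at the one-sided 95% level.
```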
-
#1 thing brands often miss when growing their email list: not A/B testing email capture pop-ups.

A/B testing is critical for maximizing the performance of your email list growth strategy. Too many brands set up a single pop-up and forget about it.

Why is it important? Testing allows you to understand what resonates most with your audience, so you can optimize for higher engagement. What works for one brand or audience might not work for yours, and small tweaks in copy, images, or offers can make a huge difference in your email capture rate.

What should you A/B test?
- Copy: Does a playful tone or more straightforward messaging convert better?
- Images: Should you highlight a product on a clean white background, or use a lifestyle image that shows your product in use?
- Offer: What motivates your audience more: 15% off their first purchase, or a free gift with their order?

What metrics should you analyze?
Email capture rate: Track how many visitors are opting in based on the different versions of your pop-ups. This is your primary metric for understanding what's working.

Here are a few examples of well-executed email capture pop-ups below. Have you A/B tested your brand's email capture pop-up yet?
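For teams wiring up a split like this by hand rather than relying on a pop-up tool's built-in testing feature, the mechanics are: bucket each visitor into a variant deterministically, count impressions and sign-ups per variant, and compare capture rates. A minimal sketch follows; the variant names, test key, and simulated opt-in rates are all hypothetical.

```python
import hashlib
import random
from collections import defaultdict

# Minimal illustration of splitting pop-up traffic and tracking capture rate
# per variant. Variant names and simulated opt-in rates are made up.
VARIANTS = ["playful_copy_15_off", "straightforward_copy_free_gift"]

impressions = defaultdict(int)
signups = defaultdict(int)

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor so they always see the same variant."""
    digest = hashlib.md5(f"popup-test-1:{visitor_id}".encode()).hexdigest()
    variant = VARIANTS[int(digest, 16) % len(VARIANTS)]
    impressions[variant] += 1
    return variant

def record_signup(variant: str) -> None:
    signups[variant] += 1

# Simulated traffic, just to show how the numbers roll up.
random.seed(42)
for visitor_id in range(5_000):
    variant = assign_variant(str(visitor_id))
    simulated_optin_rate = 0.040 if variant == VARIANTS[0] else 0.055
    if random.random() < simulated_optin_rate:
        record_signup(variant)

for variant in VARIANTS:
    rate = signups[variant] / impressions[variant]
    print(f"{variant}: {signups[variant]}/{impressions[variant]} "
          f"opt-ins ({rate:.1%} capture rate)")
```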
-
Asana ran an A/B test to millions of site visitors ⬇️

They wanted to test their hero CTA text. Both versions linked to the same demo unlock form. Single variable split test. One said "View Demo" and one said "See How it Works".

Clarity won out. With "View Demo" we know exactly what we are getting, but "See How it Works" could mean a lot of things.

I have seen dozens of tests like this.

"Try it free" vs. "Start listening". Both linked to a trial. "Try it free" won.

"See it in Action" vs. "Watch demo". Both linked to a live demo. "Watch demo" won.

Testing this in past roles, I found that clear CTA verbiage often provided a 3-5% click-through lift. In one egregious case at Bonjoro, we tested "Try it Free" vs. "Start Connecting" (both linked to our trial form), and "Try it Free" had a 40% better click-through.

From digging into the data, the takeaway is clear: match expectation to reality. CTAs where the next step is exactly what a prospect or customer expects are the best practice.