Incrementality testing is crucial for evaluating the effectiveness of marketing campaigns because it helps marketers determine the true impact of their efforts. Without this testing, it's difficult to know whether observed changes in user behavior or sales were actually caused by the marketing campaign or would have occurred naturally. By measuring incrementality, marketers can attribute changes in key metrics directly to their campaign actions and optimize future strategies based on concrete data.

In this blog from the Expedia Group data science team, a detailed guide is shared on how to measure marketing campaign incrementality through geo-testing. Geo-testing allows marketers to split regions into control and treatment groups to observe the true impact of a campaign. The guide breaks the process down into three main stages:

- The first stage is pre-testing, where the team determines the appropriate geographical granularity: whether to use states, Designated Market Areas (DMAs), or zip codes. They then strategically select a subset of available regions and assign them to control and treatment groups. It's crucial to validate these selections with statistical tests to ensure that the regions are comparable and the split is sound.
- The second stage is the test itself, where the marketing intervention is applied to the treatment group. During this phase, the team must closely monitor business performance, collect data, and address any issues that arise.
- The third stage is post-test analysis. Rather than immediately measuring the campaign's lift, the team recommends waiting out a "cooldown" period to capture any delayed effects. This waiting period also allows the control and treatment groups to converge again, confirming that the campaign's impact has ended and that the model hasn't decayed.
This structure helps calculate incremental Return on Ad Spend (iROAS), answering questions like "How do we measure the sales directly driven by our marketing efforts?" and "Where should we allocate future marketing spend?" The blog serves as a valuable reference for those looking for more technical insights, including the software tools used in this process.

#datascience #marketing #measurement #incrementality #analysis #experimentation

– – –

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- YouTube: https://lnkd.in/gcwPeBmR

https://lnkd.in/gWKzX8X2
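The iROAS arithmetic behind that question is simple once the geo-test has produced a counterfactual. A minimal sketch, with made-up figures (the function name and the dollar amounts are mine, not from the blog):

```python
# Hedged sketch of the incremental Return on Ad Spend calculation.
# All figures are hypothetical.

def iroas(treatment_revenue: float, counterfactual_revenue: float,
          incremental_spend: float) -> float:
    """Incremental revenue per extra dollar of ad spend.

    The counterfactual is what the treatment regions would have earned
    without the campaign, typically estimated from the control regions.
    """
    lift = treatment_revenue - counterfactual_revenue
    return lift / incremental_spend

# Treatment markets earned $500k; the control-based counterfactual says
# they would have earned $430k anyway; the campaign cost $20k.
print(iroas(500_000, 430_000, 20_000))  # → 3.5
```

An iROAS above 1.0 means the campaign returned more incremental revenue than it cost, which is what guides the "where should we allocate future spend?" decision.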
Using Analytics to Measure Campaign Effectiveness
Summary
Using analytics to measure campaign effectiveness involves assessing the true impact of marketing efforts by analyzing data to determine what drives desired outcomes, such as sales or customer engagement. This approach helps businesses make data-driven decisions, improve resource allocation, and refine future strategies.
- Conduct incremental testing: Use geo experiments or control groups to identify the specific impact of your campaigns by comparing treated and non-treated markets over time.
- Analyze conversion paths: Break down how customers interact with your campaigns to identify cross-channel interactions and adjust strategies for better returns.
- Focus on actionable metrics: Track key performance indicators like incremental ROI, new-to-brand sales, and lifetime customer value to refine your future campaigns.
Ever wondered exactly how your customers interact with your campaigns? Or how different ad formats work together to drive conversions? With the new Conversion Path Report (currently in beta), you're about to get that clarity.

This report breaks down the customer journey step-by-step. For example:
👉 Display (Sponsored Brands) > Sponsored Products > Purchase

For the first time, brands can see:
▪ What percentage of sales come from specific ad paths
▪ New-to-brand sales, showing how many first-time customers each path brings in

Now imagine this: a customer's journey starts with a Display ad, moves to Sponsored Products, and ends with a purchase. The Conversion Path Report doesn't just track this, it helps you:
➤ Identify which campaign types are driving the most influence
➤ Spot weak links in your ad strategy
➤ Shift resources to the paths delivering the best ROI

Here's how to put it to work:

𝗙𝗶𝗻𝗱 𝗮𝗻𝗱 𝗙𝗶𝘅 𝗜𝗻𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝘁 𝗣𝗮𝘁𝗵𝘀
→ Analyze paths with low purchases and compare spend for those campaign types. Then:
- Redirect resources to better-performing paths.
- Experiment with restructuring weaker campaigns.

𝗕𝗼𝗼𝘀𝘁 𝗥𝗲𝗽𝗲𝗮𝘁 𝗖𝘂𝘀𝘁𝗼𝗺𝗲𝗿 𝗘𝗻𝗴𝗮𝗴𝗲𝗺𝗲𝗻𝘁
→ Calculate the percentage of repeat customers within each path.
→ Double down on campaigns that retarget loyal customers to maximize repeat revenue.

𝗥𝗲𝗳𝗶𝗻𝗲 𝗟𝗶𝗳𝗲𝘁𝗶𝗺𝗲 𝗩𝗮𝗹𝘂𝗲 (𝗟𝗧𝗩) 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀
→ Use repeat customer data to refine campaigns that target high-LTV customers.
→ Experiment with ad formats to drive both new acquisitions and long-term retention.

For example: if a path has 1,000 total purchases but 700 are new-to-brand, you've got 300 repeat customers. Divide that by total purchases (1,000), and you'll see 30% of customers are loyal repeat buyers. If you're focused on lifetime value (LTV), this is gold.

The report is still in beta, and yes, it's missing some crucial metrics like ACoS, CPC breakdowns, and conversion rates.
But even as it stands, it’s a game-changer for sellers looking to optimize ad strategies and uncover growth opportunities. Got questions? DM me—I’d love to hear how you’re using this!
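The repeat-buyer math above is a one-liner once you have the path totals. A minimal sketch using the post's own example numbers (the helper name is mine):

```python
# Repeat-customer share from Conversion Path Report totals:
# new-to-brand (NTB) purchases are first-time buyers; the rest are repeats.

def repeat_share(total_purchases: int, ntb_purchases: int) -> float:
    """Fraction of a path's purchases made by repeat customers."""
    repeat_customers = total_purchases - ntb_purchases
    return repeat_customers / total_purchases

# The post's example path: 1,000 purchases, 700 of them new-to-brand.
print(f"{repeat_share(1000, 700):.0%}")  # → 30%
```

Running this per path lets you rank paths by repeat share before deciding where to shift retargeting budget.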
-
How we use geo experiments to measure marketing incrementality for our clients.

Cookie tracking is dead, and click-based attribution is flawed. We need a better way to measure marketing effectiveness. Hello, geo experiments 👋 You may hear them called "Matched Market Tests" or "MMTs." By increasing or decreasing ad spend in a test market, we can measure the LIFT in conversions/revenue compared to the control. Even better, geo experiments can be run on digital-native or offline campaigns:
- Digital: paid search, social, display, etc.
- Offline: direct mail, linear TV, OOH, etc.

1.) What you need to run a geo test:
- Budget. This is the ad spend to put against your campaigns in the test.
- Test and control markets.
- Time to run the test (2-6 weeks, depending on market sizes and budget).

2.) 𝗖𝗵𝗼𝗼𝘀𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝘁𝗲𝘀𝘁 𝗺𝗮𝗿𝗸𝗲𝘁𝘀:
- You want to find markets that behave similarly. These are called "matched markets" or "market pairs".
- You'll need historical data: conversions or revenue by geo (city/DMA) by day.
- You can pull this from your analytics tool (e.g., GA, Adobe).
- This data is analyzed with statistics to find matched markets (sample libraries below).
- These market pairs are then selected for testing. For example, Boston and Philadelphia might be a good "pair".

3.) Some good open-source libraries (links in the comments):
If you or your data team has Python/R experience, you can discover market pairs yourself using one of the libraries below.
- GeoX by Google
- GeoLift by Meta

Our data science team at Power uses a proprietary model evolved from the libraries above. We also use synthetic controls in some cases. This is more advanced but can provide a better match by combining fractional control markets against the test market. Synthetic controls can also reduce the test period and budget requirements.

4.) Budget and timing:
- We want the test to be statistically significant.
- Markets and ad budgets within those markets need to be large enough to detect a lift.

5.) Running the test:
- During the test, we'll suppress ad spend (holdout test) or increase spend (growth/scale test).
- During the test and post-test periods, we'll measure the lift.
- This can give us an incremental ROAS (iROAS) for that channel or tactic.
- We often look at iCAC as well for acquisition channels.

This may teach us that TOF Meta ads have an iROAS of 3.5 while BOF branded Google search ads have an iROAS of 1.0. Based on these results, we may shift ad budgets from BOF to TOF tactics.

6.) Wash, rinse, repeat.
- By using a testing calendar, we'll typically run multiple tests across the year.
- This helps us zero in on which channels and tactics drive the best lift.
- We'll periodically re-run tests to validate whether the lift coefficient has changed.

Has your team used an experiment-led model to measure marketing effectiveness (incrementality)?

#marketinganalytics #growthmarketing #incrementality