Methods For Evaluating The Impact Of Advertising On Behavior


Summary

Understanding the effectiveness of advertising campaigns is crucial for businesses aiming to influence consumer behavior. Methods for evaluating advertising impact, such as incrementality testing and media mix modeling, help identify whether campaigns drive meaningful behavioral change or merely take credit for purchases that would have happened anyway.

  • Test incrementality: Use experiments like geo-testing or holdout tests to compare groups exposed to ads with groups that are not, providing clarity on the actual impact of your advertising efforts.
  • Use multiple methods: Combine approaches like media mix modeling, multi-touch attribution, and experiments to gain a comprehensive understanding of your marketing impact over time.
  • Account for variables: Incorporate techniques such as statistical testing, data validation, and time-based adjustments to ensure reliable and actionable insights from your advertising evaluation.
  • Pan Wu

    Senior Data Science Manager at Meta

    49,018 followers

    Incrementality testing is crucial for evaluating the effectiveness of marketing campaigns because it helps marketers determine the true impact of their efforts. Without this testing, it's difficult to know whether observed changes in user behavior or sales were actually caused by the marketing campaign or would have occurred naturally. By measuring incrementality, marketers can attribute changes in key metrics directly to their campaign actions and optimize future strategies based on concrete data.

    In this blog post, the data science team at Expedia Group shares a detailed guide to measuring marketing campaign incrementality through geo-testing, which splits regions into control and treatment groups to observe a campaign's true impact. The guide breaks the process into three stages:

    - Pre-test: determine the appropriate geographical granularity (states, Designated Market Areas, or zip codes), strategically select a subset of the available regions, and assign them to control and treatment groups. It's crucial to validate these selections using statistical tests to ensure the regions are comparable and the split is sound (a minimal example follows this post).

    - Test: apply the marketing intervention to the treatment group. During this phase, the team must closely monitor business performance, collect data, and address any issues that arise.

    - Post-test analysis: rather than measuring the campaign's lift immediately, wait through a "cooldown" period to capture any delayed effects. This waiting period also allows the control and treatment groups to converge again, confirming that the campaign's impact has ended and that the model hasn't decayed.

    This structure supports calculating incremental return on advertising spend (iROAS), answering questions like "How do we measure the sales directly driven by our marketing efforts?" and "Where should we allocate future marketing spend?" The blog serves as a valuable reference for those looking for more technical insights, including the software tools used in the process. #datascience #marketing #measurement #incrementality #analysis #experimentation

    – – –

    Check out the "Snacks Weekly on Data Science" podcast and subscribe; I explain the concepts discussed in this and future posts in more detail:
    -- Spotify: https://lnkd.in/gKgaMvbh
    -- Apple Podcast: https://lnkd.in/gj6aPBBY
    -- Youtube: https://lnkd.in/gcwPeBmR
    https://lnkd.in/gWKzX8X2
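    To make the pre-test validation step concrete, here is a minimal sketch of the kind of statistical check described in stage one: comparing pre-period revenue between candidate control and treatment geos. The figures and the choice of a simple Welch's t-test are illustrative assumptions; the tooling described in the Expedia post is more involved.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical pre-period daily revenue (in thousands) for candidate geos.
    treatment_geos = np.array([112.0, 98.5, 105.2, 120.1, 96.7])
    control_geos = np.array([108.3, 101.9, 99.4, 118.6, 102.2])

    # Welch's t-test: is there a detectable difference BEFORE the campaign runs?
    t_stat, p_value = stats.ttest_ind(treatment_geos, control_geos, equal_var=False)

    if p_value > 0.05:
        print(f"Groups look comparable (p = {p_value:.3f}); the split is sound.")
    else:
        print(f"Groups differ pre-test (p = {p_value:.3f}); re-select regions.")
    ```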

  • Peter Quadrel

    New Customer Growth for Premium & Luxury Brands | Scale at the Intersection of Finance & AI Powered Advertising | Founder of Odylic Media

    33,524 followers

    Meta, Google, TikTok, and other ad channels are misleading you. Third-party attribution tools like Triple Whale and Northbeam aren't better; they're flawed too. Tracking has always relied on estimated models, not hard numbers. After iOS 14, tracking became harder, leading to a surge in third-party solutions. But these also provide conflicting data, making it tough to find the truth.

    So, what is the truth? The only reliable way to measure your marketing efforts is through incrementality tests. These tests answer the question, "What if this channel or ad never existed?" By showing ads to one group and withholding them from another, you can measure the true impact on revenue and profit.

    For example, if you're running Facebook ads and selling on Shopify and Amazon, incrementality tests reveal how Facebook ads impact Amazon sales. Without the initial Facebook touchpoint, an Amazon purchase might never have happened, even though traditional attribution wouldn't show this. This is why ROAS and third-party attribution aren't accurate: they rely on models that can be thwarted by privacy settings and cross-channel purchases.

    By running incrementality tests, you discover the true impact of your marketing efforts. We ran a 14-day Meta holdout test and found that zip codes shown ads generated 50% more Amazon revenue than those not shown ads, despite the ads sending traffic to Shopify.

    Now is the perfect time to run these tests. Q3 is calm, free from major holidays that skew results. This is your chance to optimize before Q4. If your brand generates seven figures annually, this should be a top priority to grow profits in Q4.
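    The readout of a holdout test like the one described can be as simple as comparing revenue per geo between the exposed and withheld groups. A toy sketch, with entirely hypothetical zip-level figures chosen to mirror the 50% result:

    ```python
    # Hypothetical revenue per zip code over a 14-day test window.
    exposed_revenue = [1600.0, 2000.0, 1700.0, 1900.0]   # zips shown Meta ads
    holdout_revenue = [1100.0, 1300.0, 1150.0, 1250.0]   # zips withheld

    avg_exposed = sum(exposed_revenue) / len(exposed_revenue)   # 1800.0
    avg_holdout = sum(holdout_revenue) / len(holdout_revenue)   # 1200.0

    # Incremental lift: extra revenue attributable to the ads.
    lift = (avg_exposed - avg_holdout) / avg_holdout
    print(f"Exposed avg: ${avg_exposed:,.0f}, holdout avg: ${avg_holdout:,.0f}")
    print(f"Measured lift: {lift:.0%}")  # 50% with these toy numbers
    ```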

  • Jacob Ross

    CEO at PebblePost | Make every marketing dollar count

    5,331 followers

    At my first ad tech job, I saw something that still haunts me: vendors celebrating record ROAS while marketers watched customer growth flatline.

    The disconnect? Incrementality: understanding whether your marketing actually changed customer behavior.

    Here's what I mean. Say you spend $1M on ads and track $5M in sales from customers who saw them. Impressive 5x ROAS, right? But what if $4M of those sales would have happened anyway? Your real return isn't 5x, it's 1x. And you might be optimizing toward the wrong channels.

    This is why brands are shifting to incrementality testing. By comparing exposed vs. unexposed audiences, you can:
    - Identify which channels actually drive NEW customers (vs. just claiming credit for existing ones)
    - Spot when high ROAS signals you're over-targeting existing buyers
    - Make smarter budget allocation decisions based on true incremental impact

    This transformation is happening right now. One retail client discovered their highest-ROAS channel was cannibalizing organic sales. So they shifted their budget and drove 40% more new customers *while spending the same amount.*

    With marketing budgets under more scrutiny than ever, we can't afford to chase the wrong metrics. The real question isn't "what happened after my ad?" but "what happened because of my ad?"
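    The ROAS-vs-incrementality arithmetic in the example above fits in a few lines. A sketch using the post's own hypothetical figures, where the baseline would in practice be estimated from a holdout group:

    ```python
    ad_spend = 1_000_000          # $1M campaign spend
    tracked_sales = 5_000_000     # $5M sales attributed to ad viewers
    baseline_sales = 4_000_000    # sales that would have happened anyway
                                  # (estimated from a holdout group)

    roas = tracked_sales / ad_spend
    incremental_sales = tracked_sales - baseline_sales
    iroas = incremental_sales / ad_spend

    print(f"Reported ROAS:    {roas:.1f}x")   # 5.0x -- looks great
    print(f"Incremental ROAS: {iroas:.1f}x")  # 1.0x -- the real return
    ```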

  • Jack Lindberg

    Product & Marketing @ Shalion | Digital Shelf & Retail Media Analytics

    5,239 followers

    Incrementality in Retail Media: Key Insights

    Following up on my previous post about key questions for incrementality models, here's what strong answers look like:

    1. Bias Handling
    Approach: We use propensity score matching and covariate balancing to ensure test and control groups are comparable.
    Why It Matters: These methods create fair comparisons between groups exposed and not exposed to marketing, ensuring accurate assessments.

    2. Core Assumptions
    Approach: Our model assumes SUTVA (Stable Unit Treatment Value Assumption) and no hidden confounders, which we rigorously test.
    Why It Matters: Ensures one customer's behavior doesn't influence another's, enhancing result reliability.

    3. Causal Inference Techniques
    Approach: We apply difference-in-differences, synthetic control methods, and regression discontinuity designs as appropriate (difference-in-differences is illustrated in the sketch after this list).
    Why It Matters: These techniques isolate the true impact of marketing efforts from other variables.

    4. Visual Models
    Approach: We use Directed Acyclic Graphs (DAGs) to map causal relationships and identify confounders, refining them with domain experts.
    Why It Matters: DAGs visualize complex factor interactions, clarifying causal pathways.

    5. Data Granularity
    Approach: We leverage transaction-level data with privacy-preserving techniques and apply ecological inference for aggregated data.
    Why It Matters: Detailed data enables precise incrementality estimates; ecological inference aids insights from group data.

    6. Handling Unusual Data
    Approach: We employ multiple imputation for missing data, robust regression for outliers, and sensitivity analyses for anomalies.
    Why It Matters: These methods address real-world data issues, ensuring data integrity.

    7. Model Validation
    Approach: We perform A/B tests, backtesting, out-of-sample validation, and comparisons with traditional marketing mix models.
    Why It Matters: Validates our model's accuracy and reliability across different scenarios.

    8. Time-Based Adjustments
    Approach: We incorporate Bayesian structural time series models to account for seasonality, trends, and external events.
    Why It Matters: Captures temporal patterns like holiday spikes and market shifts.

    9. Sample Size Requirements
    Approach: We conduct power analyses and use adaptive sampling to balance statistical significance and cost-efficiency.
    Why It Matters: Ensures sufficient data for reliable insights without resource waste.

    10. Model Flexibility
    Approach: Our model utilizes transfer learning to adapt to various campaign types and objectives, from awareness to conversion.
    Why It Matters: Enables consistent measurement across diverse marketing strategies.

    #RetailMedia #Incrementality #MarketingAnalytics #DataScience
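    As a toy illustration of difference-in-differences, one of the causal inference techniques named in point 3, the sketch below removes a shared market trend (estimated from a control group) from the treated group's lift. All figures are invented:

    ```python
    # Hypothetical average weekly sales, pre- and post-campaign.
    treated_pre, treated_post = 100.0, 130.0   # stores with retail media ads
    control_pre, control_post = 100.0, 110.0   # comparable stores, no ads

    # The control trend captures market-wide change (seasonality, pricing, etc.).
    control_trend = control_post - control_pre        # +10
    naive_lift = treated_post - treated_pre           # +30

    # DiD subtracts the shared trend, isolating the campaign's effect.
    did_estimate = naive_lift - control_trend         # +20
    print(f"Naive lift: {naive_lift:+.0f}, DiD estimate: {did_estimate:+.0f}")
    ```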

  • Scott Zakrajsek

    Head of Data Intelligence @ Power Digital + fusepoint | We use data to grow your business.

    10,514 followers

    Triangulation. Why do we need 3 methods to measure the impact of media?

    We use measurement to...
    - Identify what worked in the past
    - Optimize the present
    - Forecast/plan the future

    Unfortunately, there is no single tool that can do everything. But you can use the following methods together:
    1.) Media Mix Modeling (MMM)
    2.) Experiments (Geo Tests, Lift Studies)
    3.) Multi-touch Attribution (MTA)

    Let's break down what each is good for.

    1.) Media Mix Modeling (MMM)
    This considers your media (impressions, spend) and models it against your outcome (revenue, leads, profit). It answers which factors, channels, and tactics impact that outcome (a bare-bones sketch follows this post).
    Pros
    - Holistic, can measure all channels
    - Calculates incrementality
    - Can give you a baseline
    - Can measure lag (ad-stock effect)
    - Privacy-proof
    - Incorporates factors beyond media
    Cons
    - Not granular
    - Can be technically challenging to run
    We use MMM for...
    ✅ Measuring the past
    ❌ Optimizing the present
    ✅ Planning the future

    2.) Experiments
    Geo-tests are the most popular. This method finds similar geographies (city, state, DMA), which allows you to measure the impact of pulsing media up/down/off.
    Pros
    - Statistically accurate
    - Calculates incrementality
    - Privacy-proof
    Cons
    - Time-intensive for many channels
    - Challenging in smaller countries
    - Lost revenue from holdouts
    We use experiments for...
    ❌ Measuring the past
    ✅ Optimizing the present
    ✅ Planning the future

    3.) Attribution (MTA)
    This stitches journeys together at the user level and assigns credit to the channels/campaigns the user engaged with (click, view). Tools like Google Analytics, or even Meta's and Google's internal platforms, use attribution.
    Pros
    - Data is realtime
    - Easy to get the data
    - Visitor/user-level data
    Cons
    - Blind to offline/non-click channels
    - Relies on cookies, not privacy-proof
    - Does not measure incrementality
    We use MTA for...
    ✅ Measuring the past
    ✅ Optimizing the present
    ❌ Planning the future

    So, how do mature brands put this all together (triangulation)?
    1.) Measure the past using MMM and MTA
    - What worked?
    - Which channels were incremental?
    - What is our baseline?
    2.) Use MTA and experiments to optimize the present
    - MTA for campaign-level data in a single platform
    - Experiments to validate the MMM
    3.) Forecast and plan the future
    - MMM to model and scenario plan

    What would you change/add about this approach? #triangulation #measurement #methods
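    For intuition about what an MMM does at its core, here is a deliberately minimal sketch: an ordinary least-squares regression of weekly revenue on channel spend. The data is fabricated, and production MMMs add adstock transforms, saturation curves, seasonality, and regularized or Bayesian fitting, so treat this only as the skeleton of the idea:

    ```python
    import numpy as np

    # Fabricated weekly spend (columns: search, social, tv) and revenue, 8 weeks.
    spend = np.array([
        [10, 20, 5], [12, 18, 5], [15, 25, 0], [9, 22, 10],
        [11, 19, 8], [14, 24, 3], [10, 21, 6], [13, 23, 4],
    ], dtype=float)
    revenue = np.array([150, 148, 160, 155, 152, 161, 153, 158], dtype=float)

    # Intercept column captures baseline (non-media) revenue.
    X = np.column_stack([np.ones(len(spend)), spend])
    coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

    baseline, per_channel = coef[0], coef[1:]
    for name, beta in zip(["search", "social", "tv"], per_channel):
        print(f"{name}: ~{beta:.2f} revenue per unit spend")
    print(f"baseline: ~{baseline:.1f}")
    ```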

  • Ben Dutter

    CSO at Power, Founder of fusepoint. Marketing ROI, incrementality, and strategy for hundreds of brands.

    11,340 followers

    Despite what you may think about me, I actually believe touch-based attribution has a clear place in a good measurement system.

    I like to follow a five-tier hierarchy measurement framework to assess marketing effectiveness and business health (BEATS):
    • [B]usiness metrics
    • [E]xperiments
    • [A]nalyses
    • [T]racking
    • [S]urveys

    They all do different things well, and struggle at others. Ultimately, all marketing measurement tries to decompose the impact of marketing on the overall business. Like I always say, "the single source of truth is the P&L." If your business is struggling, getting cute with sophisticated marketing measurement is a waste of energy and resources. You need to focus on the foundational elements and shift aggressively.

    Experiments are great for proving overall incrementality in a constructed and "safe" scientific manner. But you can't run experiments on everything, and it's very hard to run highly granular experiments.

    Analyses -- such as MMM -- are top-down data aggregators that infer causality. If you lack data saturation, variability, and granularity, it's little better than a regression you can run in Excel. Because of that, MMM is weak at understanding individual ad groups, ads, or keywords. It might be able to split out and estimate impact based on trends, or be informed by tests, but in general the statistical methods just aren't able to detect such minute changes' impact on something as big as revenue.

    Which brings us to tracking -- or traditional digital, deterministic attribution. What is attribution really good at? Granularity. And what do performance marketers crave? Granularity.

    Things I'm happy to use attribution tracking for:
    • Comparing ad A vs. ad B
    • Judging creative engagement metrics
    • Finding new keywords that got clicks
    • Serving as a "baseline" to apply incrementality coefficients to (sketched after this post)
    • Any highly granular, tight, micro optimization

    Some things to watch out for:
    • You shouldn't compare ad A in prospecting vs. ad B in retargeting
    • You shouldn't compare prospecting and retargeting at all
    • You shouldn't compare between funnel stages
    • You shouldn't compare between channels

    And most importantly, never ever EVER use attribution to determine overall budget allocation for the business. That's a fool's errand.

    A good operator knows when to use what tool. It's easy to vilify attribution because it is the source of so much evil in marketing land today, but it still has a solid place in the hierarchy. Just remember to keep it in its place in the pecking order. #mta #mmm #incrementality #attribution
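    One of the bullets above, using attribution as a baseline and applying incrementality coefficients to it, can be sketched in a few lines. The channel names, attributed figures, and coefficients here are hypothetical; in practice the coefficients would come from experiments like geo tests:

    ```python
    # Attributed revenue per channel, straight from the tracking platform.
    attributed_revenue = {
        "paid_search": 500_000,
        "paid_social": 300_000,
        "retargeting": 200_000,
    }

    # Fraction of attributed revenue that experiments showed to be truly
    # incremental (all values hypothetical).
    incrementality_coef = {
        "paid_search": 0.6,
        "paid_social": 0.8,
        "retargeting": 0.2,
    }

    for channel, revenue in attributed_revenue.items():
        incremental = revenue * incrementality_coef[channel]
        print(f"{channel}: attributed ${revenue:,} -> incremental ~${incremental:,.0f}")
    ```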
