Marketing Mix Modeling Insights

Explore top LinkedIn content from expert professionals.

  • View profile for Scott Zakrajsek

    Head of Data Intelligence @ Power Digital + fusepoint | We use data to grow your business.

    10,514 followers

    The biggest blocker to running an accurate media mix model (MMM) isn't the modeling. It's messy data. To run an MMM you need:

    - 2-3 years of daily spend data from ALL channels
    - Conversion data (revenue, leads, transactions)
    - Basic external factors (seasonality, promotions, etc.)

    Yes, you can have more inputs, but that's the basics. The model itself is pretty straightforward too if you're slightly technical and willing to "read the manual": PyMC, Meta's Robyn, and Google's Meridian are all open source and well documented. But lack of clean data is what will grind things to a halt. These are the most common issues we see when running MMMs for mid-market and enterprise brands:

    1. Missing data
    - "We switched platforms last year and lost the exports"
    - "We did a big one-off media buy, can't find the amount"
    - "Our old agency has the Facebook data but won't share it"

    2. Bad data structure
    We want to be able to break down spend by channel, tactic, and funnel stage (at minimum). Your campaigns/ad sets/ads should have a consistent structure.
    - Campaign names like "FB_Prosp_Q1" vs "Meta_Cold_Audience"
    - Conversions are tracked differently across sales channels
    - No way to separate branded vs non-branded search
    - Lumping Search/Display/PMax into one big "Google" bucket

    3. Access issues
    - No one person or team has access to all data or platform logins
    - Data scattered across 15+ platforms
    - Nobody knows who owns what

    So, if you're thinking about running MMM, start by cleaning up and finding all your data. Otherwise, you'll just be paying agencies to organize it for you (if it's even possible). Quick steps to avoid MMM data delays:

    - Create a Google Sheet that lists all your media platforms (current and past)
    - Note who in your org has access
    - Audit campaign names and standardize if needed
    - Start exporting this data (spend, conversions, revenue)
    - Automate the export to a warehouse (if you have the tech/know-how)

    What's your biggest MMM data challenge? Drop it in the comments. We've run models for hundreds of mid-market and enterprise brands. By now, we've seen it all (well, most of it).
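
    A minimal sketch of the kind of data-readiness audit described above (an editorial illustration, not the author's workflow; the file name media_spend.csv and its date/channel/spend columns are hypothetical):

    ```python
    import pandas as pd

    # Hypothetical export: one row per day per channel, columns date, channel, spend.
    df = pd.read_csv("media_spend.csv", parse_dates=["date"])

    # Surface inconsistent channel naming (e.g., "FB_Prosp_Q1" vs "Meta_Cold_Audience").
    print(sorted(df["channel"].str.lower().unique()))

    # Check each channel for gaps in the daily history before feeding it to an MMM.
    full_range = pd.date_range(df["date"].min(), df["date"].max(), freq="D")
    for channel, grp in df.groupby("channel"):
        missing = full_range.difference(grp["date"])
        if len(missing):
            print(f"{channel}: {len(missing)} missing days, e.g. {missing[0].date()}")
    ```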

  • View profile for Ted Lorenzen

    Developing Analytics That Drive Marketing Outcomes @ ScanmarQED

    6,225 followers

    Bayesian solves many problems for marketing mix models. It also creates a few (TANSTAAFL, and all that). Perhaps the most obvious is that the priors selected matter, and selecting priors on MMM parameters (lag, carryover, coefficients) is not always very intuitive for stakeholders. Many Bayesian MMM practitioners will use relatively vague priors as a workaround, but this reduces (in my opinion, substantially reduces) the value Bayesian methods bring to MMM. It is also a bit of a cheat, in that sign-constrained effect estimates are actually a very informative prior but are often treated as 'uninformative.'

    The best practice for using informative priors on those unintuitive parameters is a prior predictive simulation, where priors are selected and then model fitted values are generated on samples pulled from the prior distribution without considering the data-generated likelihood function at all. pymc-marketing put together a nice workbook example of this recently (link in the comments).

    But this does put Bayesian MMM in a bit of a pickle -- if I set a prior based on my current knowledge, then run the prior predictive check and decide my prior must be wrong, can I adjust my prior? It's not really a prior belief anymore, in that I guess I didn't really believe it in the first place, yes? I take this to be a reflection of the lack of intuition around how transformations and coefficients combine to create predictions -- it's hard to keep track of the effects of adstock and saturation across many drivers. But I think it's also true that I don't actually have a prior belief about the carryover parameter. I have a prior belief about how long marketing takes to work, but not about the carryover parameter itself.

    So, in my mind, the prior predictive check is a conversation I have between my understanding/beliefs about how marketing works AND the arithmetic of the model, to help me specify the prior distributions that match my beliefs... and predictive performance (a bit). Sometimes that conversation can bring value to marketing decisions in and of itself -- so put another check in the 'pro' column for Bayesian marketing mix IF you use prior predictive checks.
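
    A minimal sketch of a prior predictive simulation in PyMC (an editorial illustration, not the pymc-marketing workbook the post links to; the single channel, the simulated spend, and all prior values are made-up assumptions):

    ```python
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    n_weeks, max_lag = 104, 8
    spend = rng.gamma(2.0, 5_000, size=n_weeks)  # hypothetical weekly spend, one channel

    # Pre-build a lag matrix so geometric adstock becomes a simple dot product.
    lagged = np.stack([np.roll(spend, l) for l in range(max_lag)], axis=1)
    for l in range(max_lag):
        lagged[:l, l] = 0.0  # drop the wrap-around values introduced by np.roll

    with pm.Model() as mmm:
        decay = pm.Beta("decay", alpha=2, beta=2)             # weekly carryover
        half_sat = pm.Gamma("half_sat", alpha=3, beta=3e-4)   # spend at half saturation
        beta = pm.HalfNormal("beta", sigma=50_000)            # channel effect size
        sigma = pm.HalfNormal("sigma", sigma=20_000)

        weights = decay ** np.arange(max_lag)
        adstocked = pm.math.dot(lagged, weights)
        saturated = adstocked / (adstocked + half_sat)        # Hill-style saturation
        mu = 100_000 + beta * saturated                       # baseline + media effect
        pm.Normal("revenue", mu=mu, sigma=sigma)

        # Simulate revenue from the priors alone, without touching any observed data.
        idata = pm.sample_prior_predictive(draws=500, random_seed=1)

    # Sanity-check the implied revenue paths: are the scale and spread even plausible?
    sim = idata.prior["revenue"]
    print(float(sim.mean()), float(sim.std()))
    ```

    If the simulated revenue is wildly off-scale, that is the "conversation" the post describes: adjust the priors until they match what you actually believe about how marketing works.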

  • View profile for Michael Kaminsky

    Recast Co-Founder | Writes about marketing science, incrementality, and rigorous statistical methods

    13,891 followers

    10 common mistakes I’ve seen MMM consultants make:

    1. Measuring only in-sample accuracy, which encourages over-fitting and bad modeling practices. Modelers should focus on predictive accuracy on data never seen before.
    2. Assuming that marketing performance doesn’t change over time. No one believes that marketing performance is constant over time, so why make that assumption?
    3. Assuming seasonality is additive and doesn’t interact with marketing performance. This will generate nonsensical results like telling you to advertise sunscreen in the winter and not in the summer!
    4. Using automated variable selection to account for multicollinearity. Automatic variable selection methods (including methods like ridge regression and LASSO) don’t make any sense for MMM since they will “randomly” choose one of two correlated variables to get all of the credit.
    5. Assuming that promotions and holidays are independent of marketing performance, rather than directly impacted by it.
    6. Using hard-coded long-time-shift variables to account for “brand effects” that aren’t actually based in reality. By “assuming” long time shifts for certain channels, they can force the model to assign way too much credit to that channel.
    7. Allowing the analyst/modeler to make too many decisions that influence the final results. If the modeler is choosing adstock rates and which variables to include in the model, then your “final” model will not show you the true range of possibilities compatible with your data.
    8. Assuming channels like branded search and affiliates are independent of other marketing activity rather than driven by it.
    9. Only updating infrequently to avoid accountability – if your results are always out of date then no one can hold the model accountable.
    10. Forcing the model to show results that stakeholders want to hear instead of what they need to hear. With a sufficiently complex model, you can make the results say anything. Unfortunately, this doesn’t help businesses actually improve their marketing spend.
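
    A minimal sketch of the check behind mistake #1, evaluating on held-out recent weeks rather than in-sample (an editorial illustration; the file mmm_features.csv and its column names are hypothetical):

    ```python
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_percentage_error

    # Hypothetical weekly design matrix (transformed media + controls) and revenue.
    df = pd.read_csv("mmm_features.csv", parse_dates=["week"]).sort_values("week")
    X, y = df.drop(columns=["week", "revenue"]), df["revenue"]

    # Hold out the most recent quarter; never judge the model only on data it was fit to.
    holdout = 13
    X_train, X_test = X.iloc[:-holdout], X.iloc[-holdout:]
    y_train, y_test = y.iloc[:-holdout], y.iloc[-holdout:]

    model = LinearRegression().fit(X_train, y_train)
    print("in-sample MAPE:", mean_absolute_percentage_error(y_train, model.predict(X_train)))
    print("out-of-sample MAPE:", mean_absolute_percentage_error(y_test, model.predict(X_test)))
    ```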

  • View profile for Thomas Vladeck

    Co-founder of Recast, the most advanced platform to measure marketing effectiveness. Follow me for essays on statistics + marketing.

    5,786 followers

    A huge red flag in MMM: you update your model with new data, and suddenly, everything changes! Channels with stellar ROIs last week are now underperforming. Budget recommendations flip, and nothing feels consistent. When this happens, it generally means your model is not robust.

    Intuitively, you know that small tweaks to data or assumptions should not cause wild swings in a model’s results. If they do, your MMM isn’t robust to new changes and should not be used for decision-making.

    At Recast, we’ve spent years figuring out how to assess model stability and robustness correctly. We use a test we call “Stability Loops”, which simulates weekly model refreshes to ensure that small updates do not create unreasonable swings in a model’s results. A cardinal sin we often see at this stage of the modeling process is to "freeze coefficients" in order to hide model instability. It’s important that you don’t lock everything in place, so your model can dynamically learn from new data and accurately reflect the reality of marketing performance.

    Getting this right is a big deal because every recommendation that your MMM makes is tied to its stability. If your model is unstable, you’re flying blind. If this resonates, check out Episode 5 of How to Build an MMM. We deep-dive into all things model stability and discuss how we test for it at Recast. The link is in the comments! And for more on Recast, check us out here! https://lnkd.in/eBTYYU2j
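
    A rough sketch of the idea behind a stability loop, refitting on an expanding window to mimic weekly refreshes and tracking how much the coefficients move (an editorial illustration using plain linear regression, not Recast's method; the file and column names are hypothetical):

    ```python
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical weekly design matrix (media features + controls) and revenue target.
    df = pd.read_csv("mmm_features.csv", parse_dates=["week"]).sort_values("week")
    X, y = df.drop(columns=["week", "revenue"]), df["revenue"]

    # Simulate weekly refreshes: refit on an expanding window and record the coefficients.
    coef_history = []
    for cutoff in range(len(df) - 12, len(df) + 1):   # the last ~12 "refreshes"
        fit = LinearRegression().fit(X.iloc[:cutoff], y.iloc[:cutoff])
        coef_history.append(fit.coef_)

    coefs = pd.DataFrame(coef_history, columns=X.columns)
    # Large relative swings in a channel's coefficient between refreshes are the red flag.
    print((coefs.std() / coefs.mean().abs()).sort_values(ascending=False))
    ```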

  • View profile for Jen Boland

    Senior Director @DonorVoice | Founder of Satisfyly | 🧠 Innovating How Nonprofits Measure & Motivate Giving

    2,129 followers

    I made my media mix model lie, and then I made it lie again. My PyMC-based MMM had beautiful R-squared scores and impressive MAPEs. It even nailed the train-test splits. But guess what? The results were still completely misleading.

    How could I tell? Because the outputs failed the sniff test. Channels known from real-world experience to drive revenue weren't showing up as impactful, and some minor channels were inflated beyond reality. Good-looking statistical measures don’t guarantee an accurate reflection of your marketing reality, especially if your data isn't telling the whole story.

    Here's what actually went wrong: My model lacked enough meaningful variation—or "signal"—in key marketing channels. Without clear fluctuations in spend and impressions, even sophisticated Bayesian models like PyMC can't accurately infer each channel's true incremental impact. They end up spreading credit randomly or based on spurious correlations.

    Here’s what I do differently now: I always start client engagements with a signal audit. Specifically, this means:
    * Reviewing historical spend patterns and ensuring sufficient spend variation across weeks or regions.
    * Checking for collinearity between channels (e.g., Google Search branded and non-branded), which can cause misleading attribution.
    * Identifying channels stuck in “steady state” spending—these need deliberate experimentation to create fluctuation.

    Once the audit flags weak-signal channels, I run deliberate, controlled lift tests (such as holdout tests or incrementality experiments) to create the necessary data variation. Only after these signal issues are fixed and lift tests integrated do I trust the model:
    * I feed the experimental data into the model.
    * I validate the model against domain knowledge, sanity-checking contributions with known benchmarks and incrementality test results.
    * And only then do I let the model drive budgeting and channel allocation decisions.

    Bottom line: Great statistical fit isn't enough. Your model must pass both statistical tests and practical, real-world "sniff tests."
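
    A minimal sketch of the first two signal-audit checks described above, spend variation and channel collinearity (an editorial illustration; the file weekly_spend.csv, its columns, and the 0.2 cutoff are hypothetical):

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical long-format spend table: one row per week per channel.
    spend = pd.read_csv("weekly_spend.csv", parse_dates=["week"])
    wide = spend.pivot(index="week", columns="channel", values="spend").fillna(0)

    # 1) Variation: channels with a low coefficient of variation are in "steady state"
    #    and give the model almost no signal to learn from.
    cv = wide.std() / wide.mean()
    print("Low-variation channels:\n", cv[cv < 0.2].sort_values())

    # 2) Collinearity: highly correlated channel pairs (e.g., branded vs non-branded
    #    search) make attribution between them unreliable.
    corr = wide.corr().abs()
    mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
    pairs = corr.where(mask).stack().sort_values(ascending=False)
    print("Most collinear channel pairs:\n", pairs.head(5))
    ```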

  • View profile for Jonathan Hershaff

    Data Scientist @ Airbnb | ex-Stripe | Causal Inference | Economist | WhatsTheImpact.com

    7,635 followers

    A model with worse MAPE made better business decisions. I ran two Marketing Mix Models (MMM) on simulated data, where I knew the true ROAS, and found that traditional accuracy metrics could actually mislead budget decisions.

    🔍 The version with less informed priors had better MAPE and MAE but completely failed at ranking ROAS correctly across channels (a critical mistake for budget allocation), while the version with more informed priors correctly rank-ordered the media channels. The challenge? Accuracy in total bookings ≠ correctly allocating incremental impact across media channels. A model that predicts total sales well can still mislead businesses into suboptimal spending decisions.

    In my latest video, I break down:
    ✅ Why traditional error metrics can mislead MMM evaluations
    ✅ How adding informed priors improved ROAS rank ordering—despite worse MAPE
    ✅ Why business decision accuracy matters more than pure prediction accuracy

    🎥 Watch here: https://lnkd.in/e9egnkPk

    How do you evaluate your marketing models beyond just prediction accuracy? Would love to hear your thoughts! 👇

    #datascience #marketinganalytics #MMM #causalinference #bayesianstatistics #marketingmixmodel
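
    A minimal sketch of evaluating rank ordering alongside MAPE on simulated data with known true ROAS (an editorial illustration, not the author's experiment; all numbers are made up):

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical evaluation where the true per-channel ROAS is known (simulated data).
    true_roas = np.array([3.2, 2.1, 1.4, 0.8, 0.3])

    # Estimated ROAS from two candidate MMMs plus their holdout MAPE on total revenue.
    est_a, mape_a = np.array([2.0, 2.6, 1.1, 1.5, 0.4]), 0.06   # "better" MAPE
    est_b, mape_b = np.array([3.5, 1.8, 1.6, 0.7, 0.4]), 0.09   # "worse" MAPE

    for name, est, mape in [("model A", est_a, mape_a), ("model B", est_b, mape_b)]:
        rho, _ = spearmanr(true_roas, est)
        print(f"{name}: MAPE={mape:.0%}, ROAS rank correlation={rho:.2f}")

    # Model A predicts total revenue better, but model B orders the channels correctly,
    # which is what actually matters for reallocating budget.
    ```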

  • View profile for Jack Lindberg

    Product & Marketing @ Shalion | Digital Shelf & Retail Media Analytics

    5,239 followers

    Incrementality in Retail Media: Key Insights

    Following up on my previous post about key questions for incrementality models, here’s what strong answers look like:

    1. Bias Handling
    Approach: We use propensity score matching and covariate balancing to ensure test and control groups are comparable.
    Why It Matters: These methods create fair comparisons between groups exposed and not exposed to marketing, ensuring accurate assessments.

    2. Core Assumptions
    Approach: Our model assumes SUTVA (Stable Unit Treatment Value Assumption) and no hidden confounders, which we rigorously test.
    Why It Matters: Ensures one customer's behavior doesn't influence another's, enhancing result reliability.

    3. Causal Inference Techniques
    Approach: We apply difference-in-differences, synthetic control methods, and regression discontinuity designs as appropriate.
    Why It Matters: These techniques isolate the true impact of marketing efforts from other variables.

    4. Visual Models
    Approach: We use Directed Acyclic Graphs (DAGs) to map causal relationships and identify confounders, refining them with domain experts.
    Why It Matters: DAGs visualize complex factor interactions, clarifying causal pathways.

    5. Data Granularity
    Approach: We leverage transaction-level data with privacy-preserving techniques and apply ecological inference for aggregated data.
    Why It Matters: Detailed data enables precise incrementality estimates; ecological inference aids insights from group data.

    6. Handling Unusual Data
    Approach: We employ multiple imputation for missing data, robust regression for outliers, and sensitivity analyses for anomalies.
    Why It Matters: These methods address real-world data issues, ensuring data integrity.

    7. Model Validation
    Approach: We perform A/B tests, backtesting, out-of-sample validation, and compare with traditional marketing mix models.
    Why It Matters: Validates our model’s accuracy and reliability across different scenarios.

    8. Time-Based Adjustments
    Approach: We incorporate Bayesian structural time series models to account for seasonality, trends, and external events.
    Why It Matters: Captures temporal patterns like holiday spikes and market shifts.

    9. Sample Size Requirements
    Approach: We conduct power analyses and use adaptive sampling to balance statistical significance and cost-efficiency.
    Why It Matters: Ensures sufficient data for reliable insights without resource waste.

    10. Model Flexibility
    Approach: Our model utilizes transfer learning to adapt to various campaign types and objectives, from awareness to conversion.
    Why It Matters: Enables consistent measurement across diverse marketing strategies.

    #RetailMedia #Incrementality #MarketingAnalytics #DataScience
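
    A minimal sketch of one technique from item 3, difference-in-differences on a store-week panel (an editorial illustration; the file store_weeks.csv and its sales/exposed/post/store_id columns are hypothetical):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: 'exposed' stores ran the retail media campaign, 'post' marks
    # weeks after launch, 'sales' is the outcome, 'store_id' identifies each store.
    panel = pd.read_csv("store_weeks.csv")

    # Difference-in-differences: the interaction coefficient is the incremental lift,
    # valid under the parallel-trends assumption (no hidden confounders, as in item 2).
    did = smf.ols("sales ~ exposed * post", data=panel).fit(
        cov_type="cluster", cov_kwds={"groups": panel["store_id"]}
    )
    print(did.params["exposed:post"], did.conf_int().loc["exposed:post"])
    ```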

  • View profile for Henry Innis

    Co-Founder at Mutinex | We're Hiring!

    6,207 followers

    I've seen another great MMM company (one I deeply respect!) talk a fair bit about freezing coefficients lately. And I think this is a topic worth tackling. We see this a whole heap in MMM, and the daily-refresh space happens to be where it's especially rife. The essence of it is this:

    - Many MMM vendors freeze coefficients
    - What that means is they in effect 'freeze' the model as new data comes in
    - That gives you results you are used to, but not results you need

    Transparency counts here. The best thing to do is ask your MMM vendor to supply the coefficients, analyse your ROIs over time properly, and ensure there are stability and perturbation tests being done on top of the usual fit tests/predictive tests/causality tests. I do think this is going to be increasingly important. To give some examples of what I've seen in market:

    - Vendors with frozen coefficients (i.e., the model's coefficients are frozen as new data is introduced)
    - Vendors with weighted coefficients (i.e., the model's coefficients are not time-varying)

    Both of these are bad outcomes. To give marketers the underlying theory here: this assumes that your new creative, your buying optimisations, all the work your team does week to week has little to no effect. That doesn't reflect the real world, and it also makes it hard for you to figure out what's truly working, which completely defeats the point.

    As a great MMM person once told me: "Use data to know, rather than to show".
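
    A rough sketch of a simple perturbation test of the kind the post asks vendors about, jittering the inputs slightly and checking how much the coefficients move (an editorial illustration with plain linear regression, not any vendor's method; file, columns, and the 2% noise level are hypothetical):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical weekly design matrix (media features + controls) and revenue target.
    df = pd.read_csv("mmm_features.csv", parse_dates=["week"])
    X, y = df.drop(columns=["week", "revenue"]), df["revenue"]

    base = LinearRegression().fit(X, y).coef_

    # Perturbation test: multiply the inputs by ~2% noise, refit, and measure how far
    # the coefficients drift. Large swings mean the model is not robust; freezing the
    # coefficients would only hide that instability, not fix it.
    rng = np.random.default_rng(0)
    shifts = []
    for _ in range(50):
        X_pert = X * rng.normal(1.0, 0.02, size=X.shape)
        shifts.append(LinearRegression().fit(X_pert, y).coef_ - base)

    print(pd.Series(np.abs(shifts).mean(axis=0) / np.abs(base), index=X.columns))
    ```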

  • View profile for Pranav Piyush

    CEO @ Paramark | Marketing measurement that CFOs & CMOs trust

    15,447 followers

    MTA vs. MMM. The ultimate debate, and a surprise at the end. If we're going to engage in a principled debate, we first need to lay out some definitions.

    Attribution = the action of regarding something as being CAUSED by a person or thing
    MTA = multi-touch attribution
    MMM = marketing mix modeling

    MTA:
    - Stitch DIGITAL events to one identity (person, sale, or account)
    - Distribute credit based on heuristics or arbitrarily (W, U, time-decay)
    - No correlational or causal analyses, maybe some Markov chains

    MMM:
    - Collect impressions/costs across DIGITAL and ALL OTHER channels (paid, owned, earned, brand, performance, digital, physical...)
    - Multivariate regression analyses to detect correlation
    - Now powered by machine learning to automate/scale

    So, here are my 5 key observations on MTA and MMM:

    1. There's nothing wrong with tracking digital touch points. Behavioral analytics is useful for several jobs to be done. Calling it ATTRIBUTION is where the problem starts. At best, it helps you improve conversion rates on your digital properties.

    2. There are significant gaps in what people call MTA. Here are some examples.
    A. See an ad on LinkedIn (or Meta or TikTok) but don't click immediately because I'm busy. Google the brand and click on the first link. MTA credits "SEO or SEM".
    B. Click an ad on mobile but don't go through the flow. I remember to open up the website on my laptop and complete the action. MTA says Direct.
    C. See a billboard or hear a podcast ad. MTA goes huh???
    I can keep going, but you get the point. MTA is coincidence, if you treat it as attribution.

    3. MMM considers daily impressions/costs across every channel. No UTMs, click IDs, or identity or device resolution. It then builds a statistical model that helps you understand the correlation of each channel/strategy and your business metrics, i.e. when impressions or investments in a channel go up, what was the corresponding increase in your outcomes? Note: I'm not saying MMM is perfect. It has limitations, but compared to "MTA", it's far superior.

    4. MMMs are NOT causal. Anybody denying this is spreading falsehoods. So, strictly speaking, calling MMM attribution is also not right. BUT, MMM has a massive leg up on MTA. Why? Three reasons:
    A. Correlation is better than coincidence.
    B. It equalizes the playing field for all channels. No such thing as a "hard to measure channel".
    C. It reduces subjectivity in assigning credit to channels (relying on stats and not heuristics).

    5. So, if both MTA and MMM are not causal, what's the real answer? In science, the ultimate way to understand cause/effect is experimentation. This needs to be a core competency of marketing teams.

    So, based on all of this, my strong recommendation to every sophisticated team is to build a culture of Experimentation and MMM. For younger teams, double down on Experimentation. That's all you really need. More on this in the coming days.

    So... what do you think I'm missing in my argument here?
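
    A minimal sketch of the regression idea in observation 3, correlating channel spend with outcomes while controlling for seasonality (an editorial illustration; the file weekly_mmm.csv and its revenue/channel columns are hypothetical):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical weekly data with spend per channel plus a week-of-year control.
    df = pd.read_csv("weekly_mmm.csv", parse_dates=["week"])
    df["week_of_year"] = df["week"].dt.isocalendar().week.astype(int)

    # MMM-style multivariate regression: each coefficient is the modeled change in
    # revenue per extra dollar in that channel, holding the other channels and the
    # seasonal control constant. As the post notes, this is correlational, not causal.
    fit = smf.ols(
        "revenue ~ paid_search + paid_social + tv + C(week_of_year)", data=df
    ).fit()
    print(fit.params[["paid_search", "paid_social", "tv"]])
    ```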

  • View profile for Brian Krebs

    Founder of CalPal: Centralizing calendar and time management for independent professionals

    2,861 followers

    "How do I get organizational buy-in on marketing mix modeling (#MMM)?" I get this question all the time. It's one of the areas that I've dedicated the most effort to in the last few years. A model is only as good as the concrete impact it makes. Over time, I've developed a 3-prong strategy. 1. Education Educate all stakeholders about MMM. Don't only focus on the benefits! Be transparent. For example, for some companies, especially larger ones or those in spaces that tend to have sporadic/incomplete data, it takes some real effort to get the input data ready for MMM. I've found that it helps to set the stage with a brief overview of how MMM works, but for most teams, digging into the science isn't necessary. Avoid technical terms where possible, and explain them where necessary. Focus on how MMM will impact the audience. Most people want to know about the part they play in the process (what do I have to do?), the benefits conferred to their jobs (why should I do it?), and the time it will take (when will it happen?). 2. Experimentation By grounding the model in truth, you improve your chances of buy-in considerably. Provide education on how lift tests will be run and converted to priors. Demonstrate how the MMM can be calibrated with priors (show the differences between the outputs of the pre- and post-calibration models and how the latter is closer to the learned truth). I've found that the success of this prong often rests on education (going back to prong 1). If your organization is already sophisticated in terms of experimentation, it is easier to show that the tests they already trust can synergize amazingly with MMM. Barring that, you'll want to ensure that lift testing education is a top priority. Otherwise, you're grounding the MMM in a truth that no one believes in. 3. Actionability MMM gets adopted in few organizations that don't have a strong actionability strategy. Reports and graphs are great, and there are several useful standard ones you get from most open source projects and MMM vendors (contribution, ad stock, saturation curve, etc.), but unless you have experienced MMM practitioners, your teams won't know what to do with them. Shift goalposts away from the model and toward measurable KPI impact. This goes back to prong 2: fostering a culture of experimentation. Don't just use the MMM to create a plan and present it to the CMO! Start by constraining and prioritizing the actions in the plan based on organizational goals and risk tolerance. Design each action as a separate quasi-experiment. Use causal inference (e.g., Google's open source CausalImpact) to measure actual KPI impact. Then, for each, compare the expected (based on the MMM) and actual impact, calibrate the model, refactor the plan based on the calibrated model, and execute the next action. Keep the CMO involved with reports that show clear KPI improvement at each step in the plan. Do you have something that works for you? Please share in the comments!
