10 common mistakes I’ve seen MMM consultants make:

1. Measuring only in-sample accuracy, which encourages over-fitting and bad modeling practices. Modelers should focus on predictive accuracy on data the model has never seen (see the sketch after this list).
2. Assuming that marketing performance doesn’t change over time. No one believes that marketing performance is constant, so why build a model that assumes it is?
3. Assuming seasonality is additive and doesn’t interact with marketing performance. This will generate nonsensical results, like telling you to advertise sunscreen in the winter and not in the summer!
4. Using automated variable selection to account for multicollinearity. Automatic variable selection methods (including methods like ridge regression and LASSO) don’t make sense for MMM, since they will “randomly” choose one of two correlated variables to get all of the credit.
5. Assuming that promotions and holidays are independent of marketing performance, rather than directly impacted by it.
6. Using hard-coded long-time-shift variables to account for “brand effects” that aren’t actually based in reality. By “assuming” long time shifts for certain channels, the modeler can force the model to assign far too much credit to those channels.
7. Allowing the analyst/modeler to make too many decisions that influence the final results. If the modeler is choosing adstock rates and which variables to include in the model, then your “final” model will not show you the true range of possibilities compatible with your data.
8. Assuming channels like branded search and affiliates are independent of other marketing activity rather than driven by it.
9. Updating the model only infrequently to avoid accountability: if your results are always out of date, then no one can hold the model accountable.
10. Forcing the model to show results that stakeholders want to hear instead of what they need to hear. With a sufficiently complex model, you can make the results say anything. Unfortunately, this doesn’t help businesses actually improve their marketing spend.
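To make mistake #1 concrete, here is a minimal sketch of a time-ordered holdout check. The column names ('week', 'sales', one spend column per channel) and the plain linear regression are illustrative assumptions, not part of the original post; a real MMM would add adstock, saturation, and control variables. The point is only that accuracy should be reported on weeks the model never saw.

```python
# Minimal sketch of mistake #1's fix: judge an MMM on data it has never seen.
# Assumes a weekly dataset with hypothetical columns: 'week', 'sales', plus
# one spend column per channel. A plain linear regression stands in for a
# full MMM here.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def mape(actual, predicted):
    """Mean absolute percentage error."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

def holdout_check(df: pd.DataFrame, channels: list[str], holdout_weeks: int = 13):
    """Fit on the earliest weeks, evaluate on the most recent ones."""
    df = df.sort_values("week")
    train, test = df.iloc[:-holdout_weeks], df.iloc[-holdout_weeks:]

    model = LinearRegression().fit(train[channels], train["sales"])

    in_sample = mape(train["sales"], model.predict(train[channels]))
    out_of_sample = mape(test["sales"], model.predict(test[channels]))

    # A large gap between these two numbers is the classic over-fitting signature.
    print(f"In-sample MAPE:     {in_sample:.1f}%")
    print(f"Out-of-sample MAPE: {out_of_sample:.1f}%")
```

A large gap between the two MAPEs is exactly what in-sample metrics alone will never reveal.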
Common Mistakes in Marketing Mix Modeling
Explore top LinkedIn content from expert professionals.
Summary
Marketing mix modeling (MMM) is a technique used to measure the impact of various marketing activities on sales and other business outcomes, but poor execution can lead to significant data misinterpretations and misguided decisions. To avoid costly errors, it's crucial to understand and address common pitfalls in modeling practices.
- Prioritize predictive accuracy: Focus on how well the model predicts outcomes with unseen data instead of relying solely on metrics like in-sample accuracy, which can lead to overfitting and distorted results.
- Ensure data variability: Examine historical marketing spend for fluctuations and address any steady-state spending patterns, as insufficient data variation can result in inaccurate channel impact estimations.
- Validate model stability: Regularly test how small updates in data affect outcomes to ensure consistent and reliable recommendations, preventing erratic shifts in insights.
-
I made my media mix model lie, and then I made it lie again. My PyMC-based MMM had beautiful R-squared scores and impressive MAPEs. It even nailed the train-test splits. But guess what? The results were still completely misleading.

How could I tell? Because the outputs failed the sniff test. Channels known from real-world experience to drive revenue weren't showing up as impactful, and some minor channels were inflated beyond reality. Good-looking statistical measures don’t guarantee an accurate reflection of your marketing reality, especially if your data isn't telling the whole story.

Here's what actually went wrong: my model lacked enough meaningful variation, or "signal," in key marketing channels. Without clear fluctuations in spend and impressions, even a sophisticated Bayesian model built with PyMC can't accurately infer each channel's true incremental impact. It ends up spreading credit randomly or based on spurious correlations.

Here’s what I do differently now: I always start client engagements with a signal audit. Specifically, this means:
* Reviewing historical spend patterns and ensuring sufficient spend variation across weeks or regions.
* Checking for collinearity between channels (e.g., branded and non-branded Google Search), which can cause misleading attribution.
* Identifying channels stuck in “steady state” spending; these need deliberate experimentation to create fluctuation.

Once the audit flags weak-signal channels, I run deliberate, controlled lift tests (such as holdout tests or incrementality experiments) to create the necessary data variation. Only after these signal issues are fixed and the lift tests are integrated do I trust the model:
* I feed the experimental data into the model.
* I validate the model against domain knowledge, sanity-checking contributions with known benchmarks and incrementality test results.
* Only then do I let the model drive budgeting and channel allocation decisions.

Bottom line: great statistical fit isn't enough. Your model must pass both statistical tests and practical, real-world "sniff tests."
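For illustration, here is a minimal sketch of the kind of signal audit described above, assuming a DataFrame with one weekly spend column per channel. The thresholds and function name are assumptions for the example, not prescriptions from the post.

```python
# A minimal signal-audit sketch: flag low-variation channels and channel pairs
# that move together. Assumes `spend` has one weekly spend column per channel.
import pandas as pd

def signal_audit(spend: pd.DataFrame, cv_threshold: float = 0.2,
                 corr_threshold: float = 0.8) -> None:
    # 1. Channels stuck in "steady state": low week-to-week variation
    #    (coefficient of variation below the threshold).
    cv = spend.std() / spend.mean()
    steady_state = cv[cv < cv_threshold]
    print("Low-variation channels (candidates for lift tests):")
    print(steady_state.round(2).to_string())

    # 2. Channel pairs that are highly correlated and will fight over credit.
    corr = spend.corr()
    print("\nHighly correlated channel pairs:")
    cols = list(spend.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if abs(corr.loc[a, b]) > corr_threshold:
                print(f"  {a} vs {b}: r = {corr.loc[a, b]:.2f}")
```

Channels flagged as low-variation or highly correlated are the ones that need deliberate lift tests before any model output about them can be trusted.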
-
You’re making 7-figure decisions off MMMs that weren’t built, validated, or interpreted by real experts, and that should scare you.

Most teams don’t realize this. They trust the slide. The chart. The ROAS curve. They forget that modeling is fragile, and without the right hands on it, dangerously easy to get wrong.

Would you get heart surgery from a guy who says, "I'm not a full-time surgeon, but I’ve read the playbook and watched some YouTube videos"? No? So why are you trusting your growth strategy to someone who built a media mix model as a side gig?

Most in-house teams and agencies mean well. But MMM isn’t something you just figure out. And when it’s not your full-time focus, you miss things like:
• No check for multicollinearity
• No backtesting or posterior predictive checks
• No out-of-sample validation
• No support for impression-based tactics like podcasts or influencer posts
• No adjustments for seasonality, promotions, or macro shifts
• No process for updating the model as your strategy changes

These aren’t just things to scare you. These are the risks of trusting false MMM outputs.

And here’s the part no one wants to say out loud: your agency is probably running your data through Meta’s free Robyn package or Google’s Meridian (which, to be fair, is actually pretty good), then charging you $5,000 to $10,000 to tell you what to cut and what to scale. That’s fine, as long as you understand the results may be confidently wrong and you know the right questions to ask.

If you don't know the right questions, start with these:
• How do you handle multicollinearity between similar channels?
• What does your backtesting or validation process look like?
• Can the model measure tactics like podcasts or influencers?
• What non-media variables are included to control for outside effects?

Media mix modeling isn’t about which open-source package is used. It’s about judgment: knowing which assumptions must be challenged and which blind spots quietly distort the answer. If your model is built by people who don’t do this full time, don’t be surprised when it quietly sends your business in the wrong direction.

Alright, I am closing LinkedIn for the day. I have some freelance heart surgery gigs to prep for. 🩺
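As a starting point for the first question above, here is a generic sketch of a variance inflation factor (VIF) check for multicollinearity between channels. The column layout is an assumption for the example, and this is not any particular vendor's or package's method.

```python
# Generic VIF check: regress each channel's spend on all the others and report
# 1 / (1 - R^2). Assumes `spend` has one spend column per channel.
import numpy as np
import pandas as pd

def vif_table(spend: pd.DataFrame) -> pd.Series:
    """VIF per channel, computed by ordinary least squares on the other channels."""
    vifs = {}
    for col in spend.columns:
        y = spend[col].to_numpy(dtype=float)
        X = spend.drop(columns=col).to_numpy(dtype=float)
        X = np.column_stack([np.ones(len(X)), X])          # add intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1 - resid.var() / y.var()
        vifs[col] = 1 / (1 - r2) if r2 < 1 else float("inf")
    # Rule of thumb: VIFs above roughly 5-10 mean the channels are hard to
    # separate and the model may split credit between them arbitrarily.
    return pd.Series(vifs).sort_values(ascending=False)
```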
-
A huge red flag in MMM: you update your model with new data, and suddenly everything changes! Channels with stellar ROIs last week are now underperforming. Budget recommendations flip, and nothing feels consistent.

When this happens, it generally means your model is not robust. Intuitively, you know that small tweaks to data or assumptions should not cause wild swings in a model’s results. If they do, your MMM isn’t robust to new changes and should not be used for decision-making.

At Recast, we’ve spent years figuring out how to assess model stability and robustness correctly. We use a test we call “Stability Loops”, which simulates weekly model refreshes to ensure that small updates do not create unreasonable swings in a model’s results. A cardinal sin we often see at this stage of the modeling process is to "freeze coefficients" in order to hide model instability. It’s important that you don’t lock everything in place, so your model can dynamically learn from new data and accurately reflect the reality of marketing performance.

Getting this right is a big deal because every recommendation that your MMM makes is tied to its stability. If your model is unstable, you’re flying blind.

If this resonates, check out Episode 5 of How to Build an MMM. We deep-dive into all things model stability and discuss how we test for it at Recast. The link is in the comments! And for more on Recast, check us out here! https://lnkd.in/eBTYYU2j
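A rough sketch of a stability check in this spirit is below. It is not Recast's actual Stability Loops implementation; it simply refits a stand-in linear model on expanding weekly windows (simulating weekly refreshes) and reports how much each channel's coefficient moves between consecutive fits. The column names and the 52-week starting window are assumptions.

```python
# Stability-check sketch: refit on expanding weekly windows and measure how
# much each channel's coefficient swings from one refresh to the next.
import pandas as pd
from sklearn.linear_model import LinearRegression

def stability_loop(df: pd.DataFrame, channels: list[str],
                   start_weeks: int = 52) -> pd.DataFrame:
    df = df.sort_values("week").reset_index(drop=True)
    history = []
    for end in range(start_weeks, len(df) + 1):
        window = df.iloc[:end]                      # simulate a weekly refresh
        model = LinearRegression().fit(window[channels], window["sales"])
        history.append(dict(zip(channels, model.coef_)))
    coefs = pd.DataFrame(history)
    # Week-over-week absolute % change per channel; large mean or max swings
    # suggest the model is too unstable to drive budget decisions.
    return coefs.pct_change().abs().describe().T[["mean", "max"]]
```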
-
Ever watched a market mix model bend reality to fit a senior exec’s hunch? That’s a bad prior at work.

In Bayesian MMM we start with beliefs (priors) and let data update them. Done right, priors guide us toward plausible answers fast. Done wrong, they blindfold the model and force it into bad answers.

Where does it go off the rails? A few examples. First, self-serving priors: an external party bakes in a high TV elasticity so the post-analysis screams “double your GRPs.” Second, internal wish-casting: a BI team hard-codes “brand search drives 40% of sales” because it always has in last-click.

So how can you keep your MMM models honest?
• Interrogate the priors. Ask exactly which distributions are pinned down and why. “Industry benchmarks” without proof is not an answer.
• Stress-test them. Swap in weak priors and compare ROI swings. More than ±20%? Your prior is steering the ship.
• Demand hold-out accuracy. A model that can’t predict next month isn’t worth your budget.

Bad priors are going to be the bane of the modern Bayesian MMM stack. Treat them like any other financial assumption and challenge them until they break or prove themselves. Models should be stable, fast, and subject to scrutiny. Anything less is going to turn MMM into MTA 2.0. And that's bad for everyone.
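To show what the "swap in weak priors" stress test could look like in practice, here is a minimal PyMC sketch (PyMC is the library mentioned earlier on this page) that fits the same simple regression twice, once with tighter priors and once with weak ones, and reports how much each channel's estimate swings. The column names, prior widths, and linear response are illustrative assumptions, not a full MMM.

```python
# Prior stress-test sketch: compare channel coefficients under tight vs weak
# priors. Assumes a DataFrame with a 'sales' column and one spend column per
# channel; prior widths are illustrative only.
import numpy as np
import pandas as pd
import pymc as pm

def fit_channel_betas(df: pd.DataFrame, channels: list[str], prior_sigma: float):
    """Posterior mean coefficients from a simple Bayesian linear regression."""
    X = df[channels].to_numpy(dtype=float)
    y = df["sales"].to_numpy(dtype=float)
    with pm.Model():
        beta = pm.Normal("beta", mu=0.0, sigma=prior_sigma, shape=len(channels))
        intercept = pm.Normal("intercept", mu=y.mean(), sigma=y.std())
        noise = pm.HalfNormal("noise", sigma=y.std())
        pm.Normal("obs", mu=intercept + pm.math.dot(X, beta),
                  sigma=noise, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)
    return idata.posterior["beta"].mean(dim=("chain", "draw")).values

def prior_stress_test(df: pd.DataFrame, channels: list[str]) -> pd.Series:
    tight = fit_channel_betas(df, channels, prior_sigma=0.5)    # "informative"
    weak = fit_channel_betas(df, channels, prior_sigma=10.0)    # weakly informative
    # Relative swing per channel when the prior is relaxed.
    swing = np.abs(weak - tight) / np.maximum(np.abs(tight), 1e-9)
    return pd.Series(swing, index=channels).sort_values(ascending=False)
```

If relaxing the prior moves a channel's estimate by more than the ±20% rule of thumb above, the prior rather than the data is doing the steering.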