I made my media mix model lie, and then I made it lie again.

My PyMC-based MMM had beautiful R-squared scores and impressive MAPEs. It even nailed the train-test splits. But guess what? The results were still completely misleading.

How could I tell? Because the outputs failed the sniff test. Channels known from real-world experience to drive revenue weren't showing up as impactful, and some minor channels were inflated beyond reality. Good-looking statistical measures don't guarantee an accurate reflection of your marketing reality, especially if your data isn't telling the whole story.

Here's what actually went wrong: my model lacked enough meaningful variation, or "signal," in key marketing channels. Without clear fluctuations in spend and impressions, even sophisticated Bayesian frameworks like PyMC can't accurately infer each channel's true incremental impact. They end up spreading credit randomly or based on spurious correlations.

Here's what I do differently now: I always start client engagements with a signal audit (a minimal audit sketch follows this post). Specifically, this means:

* Reviewing historical spend patterns and ensuring sufficient spend variation across weeks or regions.
* Checking for collinearity between channels (e.g., branded vs. non-branded Google Search), which can cause misleading attribution.
* Identifying channels stuck in "steady state" spending; these need deliberate experimentation to create fluctuation.

Once the audit flags weak-signal channels, I run deliberate, controlled lift tests (such as holdout tests or incrementality experiments) to create the necessary data variation. Only after these signal issues are fixed and lift tests integrated do I trust the model:

* I feed the experimental data into the model.
* I validate the model against domain knowledge, sanity-checking contributions with known benchmarks and incrementality test results.
* Only then do I let the model drive budgeting and channel allocation decisions.

Bottom line: great statistical fit isn't enough. Your model must pass both statistical tests and practical, real-world "sniff tests."
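For the spend-variation and collinearity checks, a minimal audit sketch could look like the one below. It assumes weekly spend sits in a CSV with one column per channel; the file name, the 0.2 variation cutoff, and the 0.8 correlation cutoff are illustrative choices, not rules.

```python
import numpy as np
import pandas as pd

# Hypothetical weekly spend data: one row per week, one column per channel.
spend = pd.read_csv("weekly_spend.csv", index_col="week")

# 1. Spend variation: a low coefficient of variation flags "steady state" channels
#    that will need deliberate experimentation to create signal.
cv = spend.std() / spend.mean()
print("Low-variation channels:")
print(cv[cv < 0.2])  # illustrative cutoff, tune to your data

# 2. Collinearity: near-duplicate spend patterns make attribution unstable.
corr = spend.corr()
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)  # upper triangle, no diagonal
pairs = corr.where(mask).stack()
print("Highly correlated channel pairs:")
print(pairs[pairs.abs() > 0.8].sort_values(ascending=False))
```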
Overcoming Challenges in Marketing Mix Model Implementation
Summary
Overcoming challenges in marketing mix model (MMM) implementation starts with understanding that these models analyze how different marketing activities impact business performance. The process often faces obstacles such as insufficient data, the need for deliberate experimentation, and weak organizational alignment, but addressing these issues can unlock valuable insights for smarter budgeting decisions.
- Conduct a data audit: Ensure your historical data includes sufficient variation in marketing spend and clean, structured information across all channels to avoid misleading model outcomes.
- Prioritize experimentation: Use lift tests or geo-based experiments to generate reliable insights and calibrate your model for accurate attribution and decision-making.
- Engage stakeholders early: Educate your team about the MMM process and its impact on their roles, while focusing on actionable strategies to drive measurable business results.
-
"How do I get organizational buy-in on marketing mix modeling (#MMM)?" I get this question all the time. It's one of the areas that I've dedicated the most effort to in the last few years. A model is only as good as the concrete impact it makes. Over time, I've developed a 3-prong strategy. 1. Education Educate all stakeholders about MMM. Don't only focus on the benefits! Be transparent. For example, for some companies, especially larger ones or those in spaces that tend to have sporadic/incomplete data, it takes some real effort to get the input data ready for MMM. I've found that it helps to set the stage with a brief overview of how MMM works, but for most teams, digging into the science isn't necessary. Avoid technical terms where possible, and explain them where necessary. Focus on how MMM will impact the audience. Most people want to know about the part they play in the process (what do I have to do?), the benefits conferred to their jobs (why should I do it?), and the time it will take (when will it happen?). 2. Experimentation By grounding the model in truth, you improve your chances of buy-in considerably. Provide education on how lift tests will be run and converted to priors. Demonstrate how the MMM can be calibrated with priors (show the differences between the outputs of the pre- and post-calibration models and how the latter is closer to the learned truth). I've found that the success of this prong often rests on education (going back to prong 1). If your organization is already sophisticated in terms of experimentation, it is easier to show that the tests they already trust can synergize amazingly with MMM. Barring that, you'll want to ensure that lift testing education is a top priority. Otherwise, you're grounding the MMM in a truth that no one believes in. 3. Actionability MMM gets adopted in few organizations that don't have a strong actionability strategy. Reports and graphs are great, and there are several useful standard ones you get from most open source projects and MMM vendors (contribution, ad stock, saturation curve, etc.), but unless you have experienced MMM practitioners, your teams won't know what to do with them. Shift goalposts away from the model and toward measurable KPI impact. This goes back to prong 2: fostering a culture of experimentation. Don't just use the MMM to create a plan and present it to the CMO! Start by constraining and prioritizing the actions in the plan based on organizational goals and risk tolerance. Design each action as a separate quasi-experiment. Use causal inference (e.g., Google's open source CausalImpact) to measure actual KPI impact. Then, for each, compare the expected (based on the MMM) and actual impact, calibrate the model, refactor the plan based on the calibrated model, and execute the next action. Keep the CMO involved with reports that show clear KPI improvement at each step in the plan. Do you have something that works for you? Please share in the comments!
-
Here's how we help CMOs practically apply incrementality testing.

At fusepoint we run hundreds of incrementality tests and marketing mix models, focused on understanding the true contribution of marketing activities (across paid, owned, and earned) to the overall business, both top and bottom line. During our discovery process, I talk with a lot of executives and marketing practitioners who say things like:

• I get incrementality in theory, but how do I apply it?
• How does an MMM make an impact in my day to day?
• What do I communicate to my team/agency on ad buying?
• How do we actually optimize our campaigns with this insight?

So we've developed a simple process to take advantage of the full feedback loop of MMM + incrementality tests in your daily workflow.

Understand Your Baseline
===================
Run an initial MMM that paints the picture of channel contribution. This helps you understand:

• How much revenue is coming from marketing
• Whether those marketing activities are profitable overall
• Whether you're over- or under-allocated in any priority channels

The outcome of that analysis gives you a series of hypotheses across media allocation (you're not at the point of needing to "optimize" anything in-platform yet).

Run Incrementality Tests
==================
Models are fallible and very easy to get wrong, so we place them lower on the authority hierarchy than proper incrementality tests.

• Prioritize the biggest-spending channels or initiatives
• Consider doing a "full media holdout" across all channels
• Focus on speed-to-impact tests

For our clients this is usually a test on Meta and/or Google right at the get-go. If our MMM comes back showing Meta at a 0.5x iROAS, we'd want to test that urgently to validate the model.

Optimize Budget Allocation
====================
After the tests are run (about 80% of the time they closely match the MMM, but either way you want to update the model with those priors now that you have them), you'll want to change your budgets.

• Usually this is a CUT to channels, since most brands are over-spent
• A lot of the time, that means cutting the biggest "performance" tactic by ~25%
• Identify what iROAS or coefficients you need to be profitable

The biggest lever in the early days of testing is going to be big budget swings. Lots of little tactical in-platform tweaks definitely help, but not as much as massive budget swings.

Apply Incrementality Coefficients
========================
Now that you know the relationship between your deterministic attribution and your incremental contribution, you can come up with incrementality coefficients to give to your media buying teams. For example (see the sketch after this post):

• Platform attribution of Meta: 1x ROAS
• Actual incremental impact of Meta: 2x iROAS
• Incrementality coefficient: 2x

That way you can say, "We need to be at a 4x iROAS to be profitable, which is a 2x in-platform ROAS. Optimize from there."

Repeat Monthly/Quarterly (based on size)
==============================
As above.
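A minimal sketch of that translation, using the hypothetical Meta numbers from the post; the profitability target would come from your own margin math.

```python
# Hypothetical numbers from the Meta example above.
platform_roas = 1.0        # what the ad platform's own attribution reports
incremental_roas = 2.0     # what the lift test / calibrated MMM says is real

# Incrementality coefficient: how much to scale platform-reported ROAS.
coefficient = incremental_roas / platform_roas        # -> 2.0

# Translate a profitability target into an in-platform target buyers can watch.
target_iroas = 4.0                                    # needed to be profitable
target_platform_roas = target_iroas / coefficient     # -> 2.0
print(f"Target: {target_platform_roas:.1f}x in-platform ROAS "
      f"is equivalent to {target_iroas:.1f}x incremental ROAS")
```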
-
You need crucial signals from your ad platforms: price and saturation. At LiftLab, we've developed two methods to get them.

1️⃣ Agile Mix Model

We've pioneered this model. It involves examining all your historical data simultaneously. Some also call it the next-generation or high-frequency marketing mix model. This approach is now becoming a best practice in the industry. This two-stage model focuses on understanding the impact of spending on prices.

- The first signal is CPC changes. Consider search advertising: we know that as you invest more in an auction, the CPC increases. But we need to know to what degree it increases; being merely directional isn't enough anymore.
- The second is saturation. We also need to know how saturated these ad platforms become as you keep spending, since they will ultimately run out of their best prospects to show your ads to.

The mix model is designed to tease those two signals out of your data (a generic saturation sketch follows this post).

2️⃣ Match market tests, or experimentation

The challenge with the Agile Mix Model is that the historical data can sometimes be low-signal data. Consider a recent example: a customer allocated $300 per day on Snapchat. After a few months, they upped it to $375 daily and maintained that for the next two months. That's a very flat line. There's not enough data for anybody's model to work with; I challenge anyone building a model to say they can get an answer from that.

So, what's the move? You could just shrug and move on, or choose to refine your data strategies. Our strategy? Enter experiments. These are often known as randomized controlled trials. In our case, they are geo-based tests or, more specifically, match market tests. That doesn't mean picking one market versus another: you choose a basket of markets for one group and a basket of markets for the other, and we have algorithms to do that.

The result? You make data. We control the environment, fluctuating spend across different markets while accumulating a mountain of evidence on how spending impacts pricing and saturation across different media. What we learn from that experiment is routed back into our model.

#marketingeffectiveness #agilemixmodel #marketing
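To make the saturation signal concrete, here is a generic diminishing-returns sketch, not LiftLab's actual model: it fits a Hill-style curve to made-up (daily spend, incremental conversions) points such as a matched-market test might produce, then reads off the marginal return at the current spend level.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (daily spend, incremental conversions) points from a matched-market test.
spend = np.array([100, 200, 300, 400, 600, 800, 1000], dtype=float)
incr = np.array([40, 72, 95, 110, 128, 138, 143], dtype=float)

def hill(x, vmax, k, s):
    """Hill-style saturation curve: response flattens as spend grows."""
    return vmax * x**s / (k**s + x**s)

params, _ = curve_fit(hill, spend, incr, p0=[150, 300, 1.0], maxfev=10_000)
vmax, k, s = params
print(f"Estimated ceiling ~{vmax:.0f} conversions; half-saturation spend ~${k:.0f}/day")

# Marginal return at current spend tells you whether the next dollar is worth it.
eps = 1.0
marginal = (hill(600 + eps, *params) - hill(600, *params)) / eps
print(f"Marginal conversions per extra $ at $600/day: {marginal:.3f}")
```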
-
The biggest blocker to running an accurate media mix model (MMM) isn't the modeling. It's messy data.

To run an MMM you need:

- 2-3 years of daily spend data from ALL channels
- Conversion data (revenue, leads, transactions)
- Basic external factors (seasonality, promotions, etc.)

Yes, you can have more inputs, but those are the basics. The model itself is pretty straightforward too, if you're slightly technical and willing to read the manual: PyMC, Meta's Robyn, and Google's Meridian are all open source and well-documented.

But lack of clean data is what will grind things to a halt. These are the most common issues we see when running MMMs for mid-market and enterprise brands:

1. Missing data
- "We switched platforms last year and lost the exports"
- "We did a big one-off media buy, can't find the amount"
- "Our old agency has the Facebook data but won't share it"

2. Bad data structure
We want to be able to break down spend by channel, tactic, and funnel stage (at minimum), and your campaigns/ad sets/ads should have a consistent structure. Instead we see:
- Campaign names like "FB_Prosp_Q1" vs "Meta_Cold_Audience"
- Conversions tracked differently across sales channels
- No way to separate branded vs non-branded search
- Search/Display/PMAX lumped into one big "Google" bucket

3. Access issues
- No one person or team has access to all data or platform logins
- Data scattered across 15+ platforms
- Nobody knows who owns what

So, if you're thinking about running MMM, start by cleaning up and finding all your data (a quick validation sketch follows this post). Otherwise, you'll just be paying agencies to organize it for you (if that's even possible).

Quick steps to avoid MMM data delays:
- Create a Google Sheet that lists all your media platforms (current and past)
- Note who in your org has access
- Audit campaign names and standardize if needed
- Start exporting this data (spend, conversions, revenue)
- Automate the export to a warehouse (if you have the tech/know-how)

What's your biggest MMM data challenge? Drop it in the comments. We've run models for hundreds of mid-market and enterprise brands. By now, we've seen it all (well, most of it).
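As an illustration of what "clean" means in practice, here is a minimal validation sketch. It assumes a long-format export in a hypothetical file `mmm_inputs.csv` with columns like date, channel, tactic, funnel_stage, spend, conversions, and revenue, and it flags missing columns and days with no recorded spend per channel.

```python
import pandas as pd

# Hypothetical long-format export: one row per day per channel.
df = pd.read_csv("mmm_inputs.csv", parse_dates=["date"])

required = {"date", "channel", "spend", "conversions", "revenue"}
missing_cols = required - set(df.columns)
assert not missing_cols, f"Missing columns: {missing_cols}"

# Flag gaps: every channel should have a row for every day in the window.
full_range = pd.date_range(df["date"].min(), df["date"].max(), freq="D")
coverage = (
    df.pivot_table(index="date", columns="channel", values="spend", aggfunc="sum")
      .reindex(full_range)
)
gaps = coverage.isna().sum().sort_values(ascending=False)
print("Days with no spend recorded, per channel:")
print(gaps[gaps > 0])
```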