Too many teams treat MMM like it's just a report card. We think it should be a tool that gives you a starting point in a larger system of learning:

Plan → Experiment → Validate → Optimize

A good MMM acts as a hypothesis engine that pushes this system forward by highlighting where your team needs to learn more. That means it shouldn't be just a backwards-looking tool. In addition to helping measure marketing performance, when done right, MMM helps you surface uncertainty in your forecasts and flag channels with high upside but low confidence: the ones that deserve testing next.

Example: you're comparing three budget scenarios. Budget mix B shows the highest potential for conversions, but also comes with much higher uncertainty than budget A or budget C. Why? Because the model is less certain about the performance of a few key channels in that mix. That's not necessarily a problem: it's an opportunity for exploration. Those priority channels in budget mix B become your testing roadmap for incrementality, geolift, holdout, go-dark, or other types of experiments.

We think of marketing budget optimization as an "explore and exploit" problem. We need to keep learning so that we can feed those learnings back into the marketing budget and ultimately drive more profit. This process never stops: since marketing performance is always changing, good marketing measurement is never "done".

For more information, check us out here: https://lnkd.in/e7BKrBf4
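To make the "high upside, low confidence" idea concrete, here is a minimal Python sketch of how such a testing roadmap could be ranked from an MMM's posterior samples. The channel names, numbers, and the 50% interval-width threshold are illustrative assumptions, not part of the post; in practice the samples would come from your model's posterior rather than being simulated.

```python
import numpy as np

# Hypothetical input: posterior samples of incremental conversions (or revenue)
# per channel under a proposed budget mix, shape (n_samples, n_channels).
# In a real workflow these come from the MMM's posterior predictive draws.
rng = np.random.default_rng(42)
channels = ["meta", "google_search", "tv", "podcast"]
posterior_contrib = rng.lognormal(mean=[11.0, 11.2, 10.5, 10.8],
                                  sigma=[0.10, 0.12, 0.60, 0.45],
                                  size=(4000, 4))

mean = posterior_contrib.mean(axis=0)
lo, hi = np.percentile(posterior_contrib, [5, 95], axis=0)
rel_width = (hi - lo) / mean  # width of the 90% interval relative to the mean

# Simple heuristic: channels with both high expected upside and wide intervals
# are the ones worth testing next (incrementality, geolift, holdout, go-dark).
priority = np.argsort(-(mean * rel_width))
for i in priority:
    verdict = "test next" if rel_width[i] > 0.5 else "exploit"
    print(f"{channels[i]:>14}: mean {mean[i]:>9,.0f}, "
          f"90% interval width {rel_width[i]:.0%} -> {verdict}")
```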
How to Understand Media Mix Modeling
Explore top LinkedIn content from expert professionals.
Summary
Media mix modeling (MMM) is a statistical tool that helps marketers understand how different marketing channels contribute to overall business outcomes, such as revenue or conversions. It provides insights into past performance and identifies opportunities for testing and optimizing future campaigns.
- Start with a baseline: Run an initial MMM to identify how each channel impacts revenue and profitability, which sets the stage for building hypotheses and exploring underperforming areas.
- Incorporate real-world validation: Use incrementality tests like holdouts or geolift experiments to confirm the MMM's predictions and refine its data inputs for better accuracy.
- Iterate consistently: Treat MMM as a dynamic tool; regularly update it with new data and experiments to adapt to changing marketing conditions and continuously improve your decision-making.
I made my media mix model lie, and then I made it lie again.

My PyMC-based MMM had beautiful R-squared scores and impressive MAPEs. It even nailed the train-test splits. But guess what? The results were still completely misleading.

How could I tell? Because the outputs failed the sniff test. Channels known from real-world experience to drive revenue weren't showing up as impactful, and some minor channels were inflated beyond reality. Good-looking statistical measures don't guarantee an accurate reflection of your marketing reality, especially if your data isn't telling the whole story.

Here's what actually went wrong: my model lacked enough meaningful variation, or "signal", in key marketing channels. Without clear fluctuations in spend and impressions, even sophisticated Bayesian models like those built in PyMC can't accurately infer each channel's true incremental impact. They end up spreading credit randomly or based on spurious correlations.

Here's what I do differently now: I always start client engagements with a signal audit. Specifically, this means:

* Reviewing historical spend patterns and ensuring sufficient spend variation across weeks or regions.
* Checking for collinearity between channels (e.g., Google Search branded vs. non-branded), which can cause misleading attribution.
* Identifying channels stuck in "steady state" spending; these need deliberate experimentation to create fluctuation.

Once the audit flags weak-signal channels, I run deliberate, controlled lift tests (such as holdout tests or incrementality experiments) to create the necessary data variation. Only after these signal issues are fixed and the lift tests are integrated do I trust the model:

* I feed the experimental data into the model.
* I validate the model against domain knowledge, sanity-checking contributions against known benchmarks and incrementality test results.
* Only then do I let the model drive budgeting and channel-allocation decisions.

Bottom line: great statistical fit isn't enough. Your model must pass both statistical tests and practical, real-world "sniff tests."
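A minimal sketch of what such a signal audit could look like, assuming a weekly spend table with one column per channel. The function name, thresholds, and file name are illustrative choices, not the author's actual tooling.

```python
import pandas as pd

def signal_audit(spend: pd.DataFrame, min_cv: float = 0.2, max_corr: float = 0.8) -> None:
    """Flag weak-signal and collinear channels in a weekly spend table.

    spend    : DataFrame indexed by week, one column per channel.
    min_cv   : coefficient of variation below which a channel is treated
               as "steady state" (threshold is illustrative).
    max_corr : pairwise correlation above which two channels are flagged
               as collinear (threshold is illustrative).
    """
    # 1. Spend variation: flat spend gives the model nothing to learn from.
    cv = spend.std() / spend.mean()
    steady_state = cv[cv < min_cv].index.tolist()
    print("Steady-state channels (need deliberate lift tests):", steady_state)

    # 2. Collinearity: channels that always move together get credit split
    #    arbitrarily (e.g., branded vs. non-branded search).
    corr = spend.corr()
    for i, a in enumerate(corr.columns):
        for b in corr.columns[i + 1:]:
            if abs(corr.loc[a, b]) > max_corr:
                print(f"Collinear pair: {a} vs {b} (r = {corr.loc[a, b]:.2f})")

# Example usage with a hypothetical file:
# spend = pd.read_csv("weekly_spend.csv", index_col="week")
# signal_audit(spend)
```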
-
Here's how we help CMOs practically apply incrementality testing: at fusepoint we run hundreds of incrementality tests and marketing mix models, focused on understanding the true contribution of marketing activities (across paid, owned, and earned) to the overall business, both top and bottom line.

During our discovery process, I talk with a lot of executives and marketing practitioners who say things like:
• I get incrementality in theory, but how do I apply it?
• How does an MMM make an impact in my day to day?
• What do I communicate to my team/agency on ad buying?
• How do we actually optimize our campaigns with this insight?

So we've developed a simple process to take advantage of the full feedback loop of MMM + incrementality tests in your daily workflow.

Understand Your Baseline
========================
Run an initial MMM that helps paint the picture of channel contribution. This helps you understand:
• How much revenue is coming from marketing
• Whether those marketing activities are profitable overall
• Whether you're over- or under-allocated in any priority channels

The outcome of that analysis gives you a series of hypotheses across media allocation (you're not yet at the point where you need to "optimize" anything in-platform).

Run Incrementality Tests
========================
Models are fallible and very easy to get wrong, so we place them lower on the authority hierarchy than proper incrementality tests.
• Prioritize the biggest-spending channels or initiatives
• Consider a "full media holdout" across all channels
• Focus on speed-to-impact tests

For our clients, this is generally a test on Meta and/or Google right at the get-go. If our MMM comes back saying Meta has a 0.5x iROAS, we'd want to test that urgently to validate the model.

Optimize Budget Allocation
==========================
After the tests are run (about 80% of the time they closely match the MMM, but either way you want to update the model with those priors now that you have them), you'll want to change your budgets.
• Usually this is a cut to channels, since most brands are over-spent
• A lot of the time that means cutting the biggest "performance" tactic by roughly 25%
• Identify what iROAS or coefficients you need to be profitable

The biggest lever in the early days of testing is going to be big budget swings. Lots of little tactical in-platform tweaks definitely help, but not as much as massive budget swings.

Apply Incrementality Coefficients
=================================
Now that you know the relationship between your deterministic attribution and your incremental contribution, you can come up with incrementality coefficients to give to your media-buying teams. For example:
• Platform attribution of Meta: 1x ROAS
• Actual incremental impact of Meta: 2x iROAS
• Incrementality coefficient: 2x

That way you can say "we need to be at a 4x iROAS to be profitable, or a 2x in-platform ROAS. Optimize from there." (A small sketch of this arithmetic appears after this post.)

Repeat Monthly/Quarterly (based on size)
========================================
Then run the loop again on a monthly or quarterly cadence, depending on the size of the business.
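The coefficient arithmetic from the "Apply Incrementality Coefficients" step, written out as a small Python sketch. The function names are mine; the numbers are the Meta example from the post.

```python
def incrementality_coefficient(iroas: float, platform_roas: float) -> float:
    """Ratio of measured incremental return to platform-reported return."""
    return iroas / platform_roas

def in_platform_target(required_iroas: float, coefficient: float) -> float:
    """Translate the iROAS needed for profitability into the ROAS number
    the media-buying team should steer to inside the ad platform."""
    return required_iroas / coefficient

# Meta example from the post: platform reports 1x ROAS, lift tests show 2x iROAS.
coef = incrementality_coefficient(iroas=2.0, platform_roas=1.0)    # -> 2.0
target = in_platform_target(required_iroas=4.0, coefficient=coef)  # -> 2.0
print(f"Incrementality coefficient: {coef:.1f}x, in-platform ROAS target: {target:.1f}x")
```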
-
Over the course of my career, I've seen a lot of media mix models. Here's what I've found (the answer might surprise you): a raw media mix model by itself is no more accurate than someone sitting in Excel trying to figure out the right budget allocations.

Why? Because at the end of the day, it's just correlation, not causation. You're essentially creating thousands of random blends of media spend and seeing which one correlates most with business output. But correlation doesn't equal causation, and that's where things can get weird.

I've seen models suggest crazy things:
-> "10x your spend on this channel, it'll be your most profitable!"

But when you run incrementality tests, you find out you actually need to cut that channel in half. Without proper guidance, media mix models can be more dangerous than helpful.

That's why I'm a big fan of combining incrementality testing with media mix modeling. It removes the risk factors associated with correlation on testable channels. You're only dealing with the risk of over-correlation on channels you can't test, like influencers.

The industry is moving in this direction, and it's exciting to see. Three years ago, incrementality testing rarely came up. Now it's in every conversation. But there are still *tons* of companies out there using raw, out-of-the-box media mix models, and their results are all over the place depending on who's behind the wheel.

The future of measurement is about combining incrementality testing with media mix modeling. It's not perfect, but it's a big step towards more accurate and actionable insights.

What's your take? #marketingscience #measurement #marketing
-
MTA vs. MMM. The ultimate debate, and a surprise at the end.

If we're going to engage in a principled debate, we first need to lay out some definitions.

Attribution = the action of regarding something as being CAUSED by a person or thing
MTA = multi-touch attribution
MMM = marketing mix modeling

MTA:
- Stitches DIGITAL events to one identity (person, sale, or account)
- Distributes credit based on heuristics or arbitrarily (W-shaped, U-shaped, time-decay)
- No correlational or causal analysis, maybe some Markov chains

MMM:
- Collects impressions/costs across DIGITAL and ALL OTHER channels (paid, owned, earned, brand, performance, digital, physical...)
- Multivariate regression analysis to detect correlation
- Now powered by machine learning to automate and scale

So, here are my 5 key observations on MTA and MMM:

1. There's nothing wrong with tracking digital touchpoints. Behavioral analytics is useful for several jobs to be done. Calling it ATTRIBUTION is where the problem starts. At best, it helps you improve conversion rates on your digital properties.

2. There are significant gaps in what people call MTA. Here are some examples.
A. I see an ad on LinkedIn (or Meta or TikTok) but don't click immediately because I'm busy. Later I Google the brand and click on the first link. MTA credits "SEO or SEM".
B. I click an ad on mobile but don't go through the flow. I remember to open the website on my laptop and complete the action. MTA says Direct.
C. I see a billboard or hear a podcast ad. MTA goes huh???
I can keep going, but you get the point. MTA is coincidence if you treat it as attribution.

3. MMM considers daily impressions/costs across every channel. No UTMs, click IDs, or identity/device resolution. It then builds a statistical model that helps you understand the correlation between each channel or strategy and your business metrics. I.e., when impressions or investment in a channel goes up, what was the corresponding increase in your outcomes? (A minimal sketch of this kind of regression follows at the end of this post.) Note: I'm not saying MMM is perfect. It has limitations, but compared to "MTA" it's far superior.

4. MMMs are NOT causal. Anybody denying this is spreading falsehoods. So, strictly speaking, calling MMM attribution is also not right. BUT, MMM has a massive leg up on MTA. Why? Three reasons:
A. Correlation is better than coincidence.
B. It equalizes the playing field for all channels. No such thing as a "hard to measure channel".
C. It reduces subjectivity in assigning credit to channels (relying on stats, not heuristics).

5. So, if both MTA and MMM are not causal, what's the real answer? In science, the ultimate way to understand cause and effect is experimentation. This needs to be a core competency of marketing teams.

So, based on all of this, my strong recommendation to every sophisticated team is to build a culture of Experimentation and MMM. For younger teams, double down on Experimentation. That's all you really need. More on this in the coming days.

So... what do you think I'm missing in my argument here?
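For intuition on observation 3, here is a minimal Python sketch of that kind of regression: a geometric adstock transform on each channel's exposure, followed by an ordinary least-squares fit. The synthetic data, decay rate, and two-channel setup are illustrative only; production MMMs add saturation curves, seasonality, controls, and priors.

```python
import numpy as np

def geometric_adstock(x: np.ndarray, decay: float = 0.5) -> np.ndarray:
    """Carry a share of each week's exposure into the following weeks."""
    out = np.zeros_like(x, dtype=float)
    carry = 0.0
    for t, v in enumerate(x):
        carry = v + decay * carry
        out[t] = carry
    return out

# Synthetic weekly data: spend for two channels plus a noisy outcome series
# generated with known effects, so the fit has something to recover.
rng = np.random.default_rng(0)
weeks = 104
spend = rng.uniform(5_000, 20_000, size=(weeks, 2))
X = np.column_stack([geometric_adstock(spend[:, j]) for j in range(2)])
X = np.column_stack([np.ones(weeks), X])          # intercept = baseline sales
y = 50_000 + 1.8 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 5_000, weeks)

# OLS: how much does the outcome move when adstocked exposure moves?
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"baseline={coefs[0]:,.0f}, channel_1={coefs[1]:.2f}, channel_2={coefs[2]:.2f}")
```

Reading the fitted coefficients as causal effects is exactly the trap observations 4 and 5 warn about: on real data they only describe correlation, which is why experiments sit above the model in the authority hierarchy.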
-
"Did MMM completely miss the point?" That's what Professor Fader asked me. And when someone who helped pioneer media mix modeling in the 80s and 90s says that, you pay attention. His point was that the questions many ask of MMM might be too simple. Instead of just looking at whether media spend moves sales up or down, we should ask things like: - What's happening with actual customers? - Are we acquiring net new ones? - Is marketing just shifting when existing customers would buy anyway? Traditional MMM looks at everything in aggregate. Sure, we might see that TV drives sales - but we don't get much insight into customer-level behavior. This hits close to home. We've wrestled with this challenge at Recast, and while we can’t solve the whole granularity problem, we have built ways to get more specific in our recommendations: - Modeling KPIs that split new vs. returning customers - Measuring how marketing changes purchase timing (ie. are there pull-forward or pull-backward effects in the business?) - Using lift tests to verify the causal impact of marketing activity on business outcomes But Prof. Fader's point still stands: we need to think harder about customer behavior, not just total sales. Here's my take: MMM isn't perfect. It won't tell you everything about how to operate your business or how customers make decisions. But as a tool for helping to optimize a marketing mix, to identify under-performing channels and areas for opportunity, it’s a great tool to have in your marketing stack.