Talk about stability in media mix modeling might not go far enough; we should be focused on robustness instead.

Here's what I mean: robustness means your model's core insights don't change dramatically when you make small changes to your underlying assumptions or data. If minor tweaks completely change your results, that's a big warning sign that something is very wrong with your model.

It's actually really easy to hide a fragile model, and many vendors do this, by:
- Only running the model once on a fixed dataset
- Refusing to refresh the model with new data
- "Freezing coefficients" to force consistency

These are all red flags. They mean practitioners are avoiding transparency because they know their models will produce wildly different results with even minor changes.

Recast takes the opposite approach. We update our models weekly, not because we think you should dramatically change your marketing budget every week, but as ongoing evidence of a model's robustness. This deliberately puts our neck on the line every week, so our customers can hold us accountable if their model's robustness and forecasting ability ever come into question.

And don't get me wrong: sometimes we do see instability or a lack of robustness in models, but the Recast system is designed so that we can identify the issue and fix it, rather than sweep it under the rug.

If your MMM vendor resists running their model on updated data, you should ask yourself why. What are they afraid you might discover?

For more on how Recast builds robust MMMs, visit us here: https://lnkd.in/e9epGM74
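A perturbation check like the one described above can be sketched in a few lines. Everything here is illustrative: the toy spend/sales data, the plain no-intercept OLS fit, and the bootstrap resampling are stand-ins for a real MMM pipeline, not Recast's actual method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset (all made up): two years of weekly spend on two channels.
n_weeks = 104
spend = rng.uniform(10, 100, size=(n_weeks, 2))
true_roi = np.array([2.0, 0.5])  # incremental sales per dollar, by channel
sales = spend @ true_roi + rng.normal(0, 5, size=n_weeks)

def fit_roi(X, y):
    """No-intercept OLS: a crude stand-in for a real MMM's ROI estimates."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

base = fit_roi(spend, sales)

# Perturbation test: resample the weeks and refit many times.
# A robust model's ROI estimates should barely move across refits.
perturbed = []
for _ in range(200):
    idx = rng.choice(n_weeks, size=n_weeks, replace=True)
    perturbed.append(fit_roi(spend[idx], sales[idx]))
perturbed = np.array(perturbed)

print("baseline ROI estimates:", np.round(base, 2))
print("bootstrap std (instability):", np.round(perturbed.std(axis=0), 2))
```

If the bootstrap standard deviation is large relative to the ROI estimates themselves, small data changes are swinging the conclusions, which is exactly the fragility the post warns about.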
Importance of Transparency in Marketing Mix Modeling
Summary
Transparency in marketing mix modeling (MMM) is about openly showing how models are built, the assumptions made, and how they adapt to new data, ensuring reliable insights for better marketing decisions.
- Ask for updated models: Make sure your MMM vendor refreshes models regularly to reflect new data and maintain accuracy in predicting what works in your campaigns.
- Demand open reporting: Insist on clear explanations of assumptions, coefficients, and test results to avoid hidden biases and ensure you can trust the insights.
- Validate with real-world tests: Use lift tests and actual marketing results to confirm that the MMM predictions align with what happens in practice.
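To make the lift-test bullet concrete, here is a minimal sketch of checking an MMM-implied lift against an experiment's measurement. All the numbers (the ROI, the test spend, the measured lift, and the 25% tolerance) are hypothetical.

```python
# Hypothetical validation: does the MMM's ROI agree with a lift test?
mmm_roi = 1.8            # model-estimated incremental sales per dollar (assumed)
test_spend = 50_000      # incremental spend during the lift test
measured_lift = 95_000   # incremental sales measured by the experiment

predicted_lift = mmm_roi * test_spend
rel_error = abs(predicted_lift - measured_lift) / measured_lift
print(f"predicted {predicted_lift:,.0f} vs measured {measured_lift:,.0f} "
      f"({rel_error:.0%} gap)")

# A simple acceptance rule; the 25% threshold is a judgment call, not a standard.
if rel_error > 0.25:
    print("MMM and lift test disagree; investigate before trusting the ROIs")
```

When the gap is consistently large across several experiments, that is evidence the model's insights are not grounded in what actually happens in market.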
I've seen another great MMM company (one I deeply respect!) talk a fair bit about freezing coefficients lately, and I think this is a topic worth tackling. We see this a whole heap in MMM, and the daily-refresh space happens to be where it's rife as well. The essence of it is this:
- Many MMM vendors freeze coefficients
- That means they in effect "freeze" the model as new data comes in
- That gives you the results you are used to, but not the results you need

Transparency counts here. The best thing to do is ask your MMM vendor to supply the coefficients, analyse your ROIs over time properly, and ensure there are stability and perturbation tests being done on top of the usual fit tests, predictive tests, and causality tests. I do think this is going to be increasingly important.

To give some examples of what I've seen in market:
- Vendors with frozen coefficients (i.e. the model's coefficients are frozen as new data is introduced)
- Vendors with weighted coefficients (i.e. the model's coefficients are not time-varying)

Both of these are bad outcomes. To give marketers the underlying theory here: freezing assumes that your new creative, your buying optimisations, and all the work your team does week to week has little to no effect. That doesn't reflect the real world, and it also makes it hard for you to figure out what's truly working, which completely defeats the point.

As a great MMM person once told me: "Use data to know, rather than to show."
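A toy illustration of why frozen coefficients hide the effect of week-to-week work: here the (made-up) data has a channel whose true ROI improves mid-year, say from new creative, and a no-intercept OLS stands in for a real MMM. The frozen estimate never sees the improvement; the refit estimate does.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical: a channel's true ROI improves halfway through the year
# (e.g. new creative), from 1.0 to 2.0 incremental sales per dollar.
n_weeks = 104
spend = rng.uniform(10, 100, size=n_weeks)
true_roi = np.where(np.arange(n_weeks) < 52, 1.0, 2.0)
sales = true_roi * spend + rng.normal(0, 5, size=n_weeks)

def roi_estimate(x, y):
    """No-intercept OLS slope: a crude stand-in for an MMM coefficient."""
    return float(np.sum(x * y) / np.sum(x * x))

frozen = roi_estimate(spend[:52], sales[:52])  # fit once, never updated
refit = roi_estimate(spend[52:], sales[52:])   # refit as new data arrives

print(f"frozen ROI estimate: {frozen:.2f}")  # stuck near the old regime
print(f"refit ROI estimate:  {refit:.2f}")   # picks up the shift toward 2.0
```

A vendor reporting the frozen number would keep telling you this channel returns roughly what it always did, no matter what your team changed.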
-
Statistical models are never assumption-free. Whether you hide these assumptions or make them explicit, they're always there shaping your results. Assumptions are critical and can determine how well your model reflects the actual dynamics of your business.

But there's a catch: they need to be set before the model sees any data. If you tweak your assumptions after seeing the results, you may be building a biased model where you just "back into" the assumptions that give you the results you want.

In Episode 3 of How to Build an MMM, we deep-dive into how to configure your media mix model thoughtfully and transparently. For example, when setting priors, the goal is to balance two things: ruling out impossible scenarios and keeping the assumptions uninformative enough to let the data do its job. We also talk about how to translate high-level business knowledge into specific parameters the model can work with, so the results are both rigorous and actionable.

Proper configuration is one of the critical factors that separates a useful MMM from one that's just numbers on a screen. I really enjoyed making this episode and hope you find it useful - link is in the comments!
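One way to make "rule out impossible scenarios while staying weakly informative" concrete is a prior predictive check, done before fitting. This sketch assumes a half-normal prior on a channel's ROI and a made-up weekly spend figure; the prior scale of 3 is purely illustrative, not a recommendation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior on a channel's ROI, set BEFORE the model sees data.
# |Normal(0, 3)| (a half-normal) rules out negative ROI, which is
# impossible here, while staying weakly informative about the magnitude.
draws = np.abs(rng.normal(0.0, 3.0, size=10_000))

# Prior predictive check: do the prior's implications look plausible
# against high-level business knowledge?
weekly_spend = 20_000  # made-up figure
implied_sales = draws * weekly_spend

print(f"95% of prior ROI mass below {np.quantile(draws, 0.95):.1f}")
print(f"median implied incremental sales: {np.median(implied_sales):,.0f}")
```

If the implied sales dwarf the business's total revenue, the prior is too loose; if the data later wants an ROI the prior has effectively ruled out, it was too tight. Either way, the check happens before the results can tempt you to "back into" convenient assumptions.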