Just because "google" shows up in attribution doesn't mean it's what driving buyers to buy. Ask "how did you hear about us" in a free-text required field upon conversion and you'll get the real stuff: -Social media (LinkedIn, Tik Tok, Reddit, Instagram, etc.) -Podcasts (owned, earned, paid) -Communities (Slack, discord, private groups, etc.) -Referrals / Word of Mouth (colleagues, friends, investors, etc.) -3rd party events (e.g. I saw your CEO speak in Belgium last summer) ^^These insights will RARELY or NEVER show up in attribution software. Most B2B companies never ask this question. And most B2B companies don't actually know what's creating their demand. #attribution #revenue #sales #marketing #b2b p.s. This is not meant to be a replacement to digital touchpoint based attribution. It's a different measurement strategy used for a different purpose - to know what buyers report as the most *impactful* touches. p.p.s. Self reported attribution is a *directional* insight that you get directly from customers. Many marketing activities will not get measured by touchpoint based digital attribution and we need another strategy to measure these - podcast, social media, connected TV, Out-of-Home (OOH), referrals, influencer marketing, word of mouth, etc. p.p.p.s. Most companies don't get value from self-reported attribution because they don't use it properly. Require it for all declared intent submissions. Copy it from the lead/contact to opportunity object. Track the results against qualified pipeline and revenue, not just "leads". p.p.p.p.s. Self-reported attribution is 1 of 6 different measurement strategies we use at Passetto to analyze the impact of all GTM Investments. A one-size fits all approach of using touchpoint based digital attribution to measure all Marketing, Sales, and SDR investments is a losing strategy.
Measuring Business Impact
-
"Which channel is working best?" It's the wrong question. And one that could cost you millions. Let's look at a $100M fashion brand, Scott's Shackets. Their performance marketing team is celebrating. - Paid search is crushing. Half the media budget, 4x ROAS. - Display is a 2x ROAS. - Paid social somewhere in between. Successful channels, right? They run a simple test, turning off paid search in randomized geographies for a month. Total revenue drop? Only 10%. Wait...what? Their "4x ROAS" channel was just taking credit for sales that would have happened anyway. Customers who were already going to buy (a sweet shacket of courser) just happened to click a Google ad on their way to purchase. Some portion of that paid search spend is WASTE. In reality: - Some campaigns actually really drive incremental sales - Others harvest existing demand - An attribution model won't tell you the difference Instead of asking: "What's the ROAS of this channel?" Start asking: - "How many of these sales are truly new?" - "What happens when we turn things off?" - "Where are we just buying revenue we already had?" Yes, this is harder than looking at a pretty dashboard with channel-by-channel ROAS. But it's even harder to explain to your board why you're spending millions on marketing but not GROWING the business. So where do you start? - Run regular geo-testing or holdout tests - Measure total business impact, not channel metrics - Accept that some of your highest "ROAS" activities might be wasting money I've seen brands cut their marketing spend by 30% using this approach without a drop in revenue. ♻️ Share this with a marketer who needs it 🔔 Follow me for more updates on Scott's Shackets
-
Your training budget is bleeding money.

Here's why: You're measuring the wrong thing.

Most manufacturers track:
→ Hours in training sessions
→ Certificates earned
→ Courses completed
→ Knowledge tests passed

But here's the brutal truth: Training is a COST until it's applied.

I've seen teams ace Six Sigma exams, then go back to the same wasteful processes.
I've watched operators get certified in TPM, then ignore equipment maintenance schedules.
I've met managers who can recite lean principles but can't eliminate a single bottleneck.

The problem isn't the training. The problem is the gap between learning and doing.

The Real ROI Formula:
Measurable Floor Improvement ÷ Training Cost = Actual ROI

If the numerator is zero, your ROI is zero. No matter how much you spent. No matter how good the training was.

Here's the system that actually works:

STEP 1: Identify Your Losses First
↳ What's costing you money right now?
↳ Downtime? Defects? Delays? Waste?
↳ Quantify the pain before you buy the solution

STEP 2: Map Skills to Losses
↳ Which skills would directly impact these losses?
↳ Root cause analysis for quality issues?
↳ Preventive maintenance for downtime?
↳ Value stream mapping for delays?

STEP 3: Assess Current Capabilities
↳ Who has these skills already?
↳ Where are the gaps in your workforce?
↳ Don't train everyone in everything

STEP 4: Train with a Target
↳ Before any training: "We will apply this to solve X problem"
↳ Set a specific improvement goal
↳ Timeline for implementation

STEP 5: Apply Immediately
↳ The window between learning and doing should be days, not months
↳ Start with a pilot project
↳ Measure the impact

STEP 6: Scale What Works
↳ If it worked on one line, expand it
↳ If it didn't work, understand why
↳ Refine and try again

The shocking reality: Most training fails not because of poor content. It fails because of poor application.

Your operators know what to do. They just don't do what they know.

The question isn't: "What should we learn next?"
The question is: "What have we learned that we're not using yet?"

That podcast on lean you listened to last week? Apply one concept today.
That Six Sigma training from last month? Start a small improvement project tomorrow.

Because untapped knowledge isn't potential. It's waste.

What's one thing your team learned recently that they haven't applied yet?
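To make the formula concrete, here is a tiny hedged illustration; the figures are made up.

```python
# Tiny illustration of the ROI formula: ROI only exists once training
# shows up as a measured improvement on the floor. Figures are invented.
training_cost = 40_000        # e.g., a Six Sigma course for one line's crew
floor_improvement = 0         # annualized savings actually measured after training

roi = floor_improvement / training_cost
print(f"Actual ROI: {roi:.2f}")  # 0.00 -- the spend is still just a cost
```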
-
CX Should Be Measured Like a P&L—Not a Sentiment Score

We keep measuring Customer Experience with smiley faces, stars, and survey scores.

But here’s the reality: If you can’t tie CX to revenue, retention, or cost savings—it’s not strategic.

Too many CX teams report on sentiment. Fewer can show the business impact of improving the experience.

Want a seat at the executive table? Start thinking like a P&L owner:

✅ Reduce onboarding friction → Faster time-to-revenue
✅ Improve digital containment → Lower cost-to-serve
✅ Decrease churn triggers → Higher customer lifetime value

This is how you move from “nice to have” to business critical.

Sentiment is a signal. Value is the outcome.

💬 How are you measuring CX in your org? Can you show the CFO how experience drives ROI?
-
Incrementality testing is crucial for evaluating the effectiveness of marketing campaigns because it helps marketers determine the true impact of their efforts. Without this testing, it's difficult to know whether observed changes in user behavior or sales were actually caused by the marketing campaign or whether they would have occurred naturally. By measuring incrementality, marketers can attribute changes in key metrics directly to their campaign actions and optimize future strategies based on concrete data.

In this blog written by the data science team at Expedia Group, a detailed guide is shared on how to measure marketing campaign incrementality through geo-testing. Geo-testing allows marketers to split regions into control and treatment groups to observe the true impact of a campaign. The guide breaks the process down into three main stages:

- The first stage is pre-testing, where the team determines the appropriate geographical granularity—whether to use states, Designated Market Areas (DMAs), or zip codes. They then strategically select a subset of available regions and assign them to control and treatment groups. It's crucial to validate these selections using statistical tests to ensure that the regions are comparable and the split is sound (see the sketch after this post).

- The second stage is the test itself, where the marketing intervention is applied to the treatment group. During this phase, the team must closely monitor business performance, collect data, and address any issues that may arise.

- The third stage is post-test analysis. Rather than immediately measuring the campaign's lift, the team recommends waiting for a "cooldown" period to capture any delayed effects. This waiting period also allows the control and treatment groups to converge again, confirming that the campaign's impact has ended and ensuring the model hasn't decayed.

This structure helps calculate incremental Return on Advertising Spend, answering questions like "How do we measure the sales directly driven by our marketing efforts?" and "Where should we allocate future marketing spend?" The blog serves as a valuable reference for those looking for more technical insights, including the software tools used in this process.

#datascience #marketing #measurement #incrementality #analysis #experimentation
– – –
Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- YouTube: https://lnkd.in/gcwPeBmR

https://lnkd.in/gWKzX8X2
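A minimal sketch of the stage-one validation described above, assuming you have daily pre-period sales for the proposed control and treatment DMAs. The data are simulated and the single t-test is a simplification of the checks the Expedia team describes.

```python
# Illustrative pre-test check from stage one of a geo test: are the proposed
# control and treatment regions statistically comparable before the campaign?
# Data, group sizes, and the p-value threshold are all made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control_pre   = rng.normal(loc=100_000, scale=8_000, size=30)   # pre-period daily sales, control DMAs
treatment_pre = rng.normal(loc=101_000, scale=8_000, size=30)   # pre-period daily sales, treatment DMAs

t_stat, p_value = stats.ttest_ind(control_pre, treatment_pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.1:
    print("No significant pre-period difference -> split looks sound, proceed to the test.")
else:
    print("Groups differ before the test -> reselect or re-match regions.")
```

After the test and cooldown window, the same grouping is used to compare treatment lift against incremental spend to get the incremental return figure the post mentions.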
-
Most marketers can’t prove their value.

That’s a problem. And it’s our fault.

I just finished reading the AMA’s latest report on marketing skills for 2025 and beyond. One stat jumped off the page:

𝗖𝗠𝗢𝘀 𝘀𝗮𝘆 𝘁𝗵𝗲𝗶𝗿 𝗯𝗶𝗴𝗴𝗲𝘀𝘁 𝗰𝗼𝗺𝗽𝗲𝘁𝗲𝗻𝗰𝘆 𝗴𝗮𝗽𝘀 𝗮𝗿𝗲 𝗶𝗻 𝗱𝗮𝘁𝗮 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝗮𝗻𝗱 𝗽𝗿𝗼𝘃𝗶𝗻𝗴 𝗥𝗢𝗜.

Think about that. The very thing marketing leaders get judged on—showing impact and driving revenue—is the thing many marketers struggle with most.

That’s not a skills gap. T͟h͟a͟t͟’s͟ ͟a͟n͟ ͟e͟x͟i͟s͟t͟e͟n͟t͟i͟a͟l͟ ͟c͟r͟i͟s͟i͟s͟ ͟f͟o͟r͟ ͟o͟u͟r͟ ͟f͟u͟n͟c͟t͟i͟o͟n͟.͟

𝗧𝗵𝗶𝘀 𝗶𝘀 𝗮 𝗟𝗲𝗮𝗱𝗲𝗿𝘀𝗵𝗶𝗽 𝗣𝗿𝗼𝗯𝗹𝗲𝗺

The problem isn’t marketers. It’s how we train them.

We reward creativity but ignore financial acumen.
We track clicks but not revenue.
We hire for storytelling but not statistical thinking.

Meanwhile, CEOs and CFOs are losing patience. If marketing can’t prove its impact, it won’t get the budget or the seat at the table.

That’s why so many marketing teams feel stuck in a cycle of:
🚩 Chasing vanity metrics
🚩 Failing to connect activity to revenue
🚩 Struggling to make the case for investment

𝗧𝗵𝗲 𝗙𝗶𝘅

Marketing needs a mindset shift.

𝗧𝗵𝗲 𝗯𝗲𝘀𝘁 𝘁𝗲𝗮𝗺𝘀 𝗜’𝘃𝗲 𝘀𝗲𝗲𝗻 𝗮𝗿𝗲 𝗱𝗼𝗶𝗻𝗴 𝘁𝗵𝗿𝗲𝗲 𝘁𝗵𝗶𝗻𝗴𝘀 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁𝗹𝘆:

1️⃣ 𝗧𝗲𝗮𝗰𝗵𝗶𝗻𝗴 𝗳𝗶𝗻𝗮𝗻𝗰𝗲: They train marketers to speak the language of business, including pipeline, LTV, CAC, and EBITDA.

2️⃣ 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗱𝗮𝘁𝗮 𝗺𝘂𝘀𝗰𝗹𝗲: They invest in analytics tools AND teach teams how to use them. No more "we don’t do numbers."

3️⃣ 𝗧𝘆𝗶𝗻𝗴 𝗲𝗳𝗳𝗼𝗿𝘁𝘀 𝘁𝗼 𝗼𝘂𝘁𝗰𝗼𝗺𝗲𝘀: They focus on business impact, not marketing output. Less “leads generated,” more “revenue influenced.”

If marketers want more credibility, they need to own this problem.
If leaders want better marketers, they must fix how they develop them.

CMOs, what are you doing to close the gap? Drop your thoughts below. ⤵️

#chiefmarketingofficer #marketing #GTM
-
83% of leaders demand ROI above all else. Yet most track metrics that destroy growth.

I met with a prospect last week who was celebrating 20,000 website visitors.

Problem: Not a single one converted to a lead.

Most marketing leaders are dangerously attached to metrics that look impressive but drive zero leads or revenue. They're not measuring what matters. They're measuring what's easy.

The cost isn't just wasted budget. It's lost revenue growth.

Each stage serves a purpose:

1. Traffic Metrics 🌊
→ Shows movement, not money
→ Clicks and sessions
→ Website traffic trends
→ CTR and bounce rates

2. Awareness KPIs 👀
→ Measures mindshare growth
→ Social engagement depth
→ Brand mention velocity
→ Content consumption time

3. Lead Metrics 💸
→ Actually drives business
→ Qualified leads generated
→ Pipeline contribution
→ Customer acquisition cost

The framework for success is more than just a KPI; it's how each KPI connects to the end goals:

Revenue Connection 🎯
→ Cost per qualified opportunity
→ Pipeline velocity by channel
→ Marketing-influenced revenue

Executive Clarity 👔
→ Clear metrics your CEO understands
→ Example: "Marketing sourced 42% of Q2 pipeline"
→ Impact: Secured additional budget mid-year by channel

Attribution Accuracy 📊
→ Captures the true customer journey
→ Maps touchpoints to conversion
→ Shows what actually drives sales

Leading Indicators ⚡
→ Predicts future revenue
→ Flags opportunities early
→ Guides resource allocation

💥 Actionable takeaways:
1. Audit your dashboards: Sort KPIs by funnel stage
2. Stop mixing metrics: Traffic ≠ Awareness ≠ Revenue
3. Align team goals: Everyone needs to know which metrics matter when

What KPIs do you measure for success? 👇
___
♻️ Share this with a marketing leader drowning in meaningless metrics
➕ Follow Jennelle McGrath for more frameworks that drive real results
-
I met a sales team that tracks 27 different metrics.

But none of them matter.

They measure:
- Calls made
- Emails sent
- Meetings booked
- Demos delivered
- Talk-to-listen ratio
- Response time
- Pipeline coverage

But they all miss the most important number: How often prospects share your content with others.

This hit me yesterday. We analyzed our last 200 deals:

Won deals: Champion shared content with 5+ stakeholders
Lost deals: Champion shared with fewer than 2 people

It wasn't about our:
- Product demos
- Discovery questions
- Pricing strategy
- Negotiation skills

It was about whether our champion could effectively sell for us.

Think about your current pipeline:
Do you know how many people have seen your proposal?
Do you know which slides your champion shared internally?
Do you know who viewed your pricing?

Most sales leaders have no idea. They're optimizing metrics that don't drive decisions.

Look at your CRM right now. I bet it tracks:
✅ When YOU last emailed a prospect
❌ When THEY last shared your content
✅ How many calls YOU made
❌ How many stakeholders viewed your materials
✅ When YOU sent a proposal
❌ How much time they spent reviewing it

We've built dashboards to measure everything except what actually matters.

The real sales metric that predicts closed deals: Internal Sharing Velocity (ISV)

How quickly and widely your champion distributes your content to other stakeholders.

High ISV = Deals close
Low ISV = Deals stall

We completely rebuilt our sales process around this insight:
- Redesigned all content to be shareable, not just readable
- Created spaces where champions could easily distribute information
- Built analytics to measure exactly who engaged with what
- Trained reps to optimize for sharing, not for responses

Result? Win rates up 35%. Sales cycles shortened by 42%. Forecasting accuracy improved by 60%.

Stop obsessing over your activity metrics. Start measuring how effectively your champions sell for you.

If your CRM can't tell you how often your content is shared internally, you're operating in the dark.

And that's why your forecasts are always wrong.

Your move.
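The post doesn't spell out a formula for ISV, so here is one hedged way to operationalize it: unique stakeholders reached divided by days since the content was first shared. The event schema, names, and dates below are hypothetical, not a defined standard.

```python
# Hypothetical sketch of "Internal Sharing Velocity": unique stakeholders who
# viewed shared content, divided by days since the content was first sent.
from datetime import date

share_events = [
    {"deal": "Acme", "viewer": "CFO",         "viewed_on": date(2024, 5, 2)},
    {"deal": "Acme", "viewer": "VP Ops",      "viewed_on": date(2024, 5, 3)},
    {"deal": "Acme", "viewer": "IT lead",     "viewed_on": date(2024, 5, 3)},
    {"deal": "Acme", "viewer": "Procurement", "viewed_on": date(2024, 5, 6)},
]

first_sent = date(2024, 5, 1)
as_of = date(2024, 5, 8)

unique_stakeholders = len({e["viewer"] for e in share_events})
days_elapsed = (as_of - first_sent).days
isv = unique_stakeholders / days_elapsed

print(f"ISV = {unique_stakeholders} stakeholders / {days_elapsed} days = {isv:.2f} per day")
```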
-
I get it. Brand feels intangible, hard to prove, and frustrating to justify in executive meetings and boardrooms. It's been the story of my life for almost twenty years.

So, last week, I shared a brand score framework to hopefully help. I'm sharing it again to provide a little more context to the deliverable. This guide breaks down the why, how, and what next of brand measurement.

Why Is Measuring Brand So Hard?

Most leaders know brand is important. “Oh yeah, brand is the rizz.” But the same people talking about rizz expect immediate results—revenue, efficiency, valuation.

The challenge?
(1) Brand impact is long-term, while execs focus on short-term revenue.
(2) Brand influence on sales is indirect but still real.
(3) Brand must align with financial KPIs or risk losing investment.

Marketing needs a better way to prove brand value.

How Brand Ties to Business Outcomes:

Brand doesn’t just "exist"—it affects acquisition, retention, and pricing power. Here's how to connect it to financial impact:

Increase Branded Search Traffic >>> Lower CAC
Organic Website Traffic Growth >>> Higher inbound pipeline
Social Engagement Growth >>> More efficient sales cycles
Customer Advocacy & Reviews >>> Higher deal velocity & expansion $$
Brand Awareness + PR >>> Higher valuation multiples
Share of Voice & Analyst Coverage >>> Increased inbound interest
NPS >>> Higher retention

Brand-building’s impact compounds over time. Use predictive modeling to show future value. Here are some ideas:

Branded CAC vs. Non-Branded CAC – Show that branded inbound leads cost less over time by comparing CAC trends.
Sales Cycle Compression Model – Measure the reduction in sales cycle duration for accounts exposed to brand content.
Brand Awareness & Future Revenue Impact – Track branded search traffic increases and their correlation to pipeline growth.

Okay... back to the brand score. We want to measure across six weighted categories: Brand Awareness, Brand Trust & Reputation, Brand Differentiation, Brand Engagement, Brand Consistency, and Brand Perceived Value. And it's super important to measure across all six pillars. Check out the image for more context on weighting and what to measure.

How to Calculate Your Brand Score:
(1) Score each category on a 1-10 scale using internal and external data.
(2) Apply weights and calculate a final Brand Score out of 100.
(3) Track progress over time and compare with competitors.

Brand measurement isn’t a "nice to have". It’s the key to unlocking categories and growth.

This is also new for me, so I would love feedback on whether anyone has implemented a version of this.
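A minimal sketch of the score calculation in steps (1)-(2), using the six pillars named above. The weights and 1-10 scores are placeholders; use the weighting from the author's image and your own internal and external data.

```python
# Minimal sketch of the weighted Brand Score. The six pillars come from the
# post; the weights and example scores below are placeholders, not the
# author's actual weighting.
pillars = {
    # pillar:                     (weight, score on a 1-10 scale)
    "Brand Awareness":            (0.20, 6),
    "Brand Trust & Reputation":   (0.20, 7),
    "Brand Differentiation":      (0.15, 5),
    "Brand Engagement":           (0.15, 6),
    "Brand Consistency":          (0.15, 8),
    "Brand Perceived Value":      (0.15, 7),
}

assert abs(sum(w for w, _ in pillars.values()) - 1.0) < 1e-9, "weights must sum to 1"

# Scale each 1-10 score to 100, weight it, and sum to a final score out of 100.
brand_score = sum(weight * score * 10 for weight, score in pillars.values())
print(f"Brand Score: {brand_score:.0f} / 100")
```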
-
A colleague of mine asked a great question recently:

“Our display ads show solid view-through conversions, but how do I know if we’re spending too much or too little? Some of these conversions would happen anyway.”

It’s a question I get a lot — and one that cuts right to the heart of modern measurement. Here’s what I told him:

1. View-through conversions ≠ incrementality. Just because someone saw your ad and later bought doesn’t mean the ad *caused* the sale. Many of those users might have converted anyway. So before increasing spend, it’s critical to know: *What’s the true lift?*

2. Incrementality testing is essential. The best marketers run geo holdouts, sophisticated A/B tests, or randomly selected matched-market experiments. These give you a clean read on whether your display ads are actually *driving* results — or just taking credit.

3. Leading indicators matter too. One sophisticated client of mine uses AI to track marketing metrics as leading indicators of effectiveness: increases in brand measures, branded search activity, and CLV shifts among exposed audiences. These signals tell you if you’re moving in the right direction *before* the conversions show up.

4. Ask better questions, don't just measure more. Don’t settle for surface-level metrics. Align your measurement to business impact. That means understanding how different channels contribute to awareness, consideration, and — most importantly — profitable growth.

Efficiency metrics like CTR or ROAS don’t tell the whole story. The smartest brands go deeper.

Art+Science Analytics Institute | University of Notre Dame | University of Notre Dame - Mendoza College of Business | University of Illinois Urbana-Champaign | University of Chicago | D'Amore-McKim School of Business at Northeastern University | ELVTR | Grow with Google - Data Analytics

#Analytics #DataStorytelling
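As a hedged illustration of point 2, here is a minimal sketch of reading a randomized holdout for display: compare conversion rates between exposed and held-out groups and test the lift. The counts are made up, and a real test would also control for pre-existing differences between the groups.

```python
# Rough sketch of reading a display holdout test: compare conversion rates
# between audiences exposed to the ads and a randomized holdout.
# All counts are invented for illustration.
import numpy as np
from scipy import stats

exposed_conversions, exposed_n = 1_260, 100_000
holdout_conversions, holdout_n = 1_150, 100_000

lift = (exposed_conversions / exposed_n) / (holdout_conversions / holdout_n) - 1

# Chi-squared test on the 2x2 contingency table for a quick significance read.
table = np.array([
    [exposed_conversions, exposed_n - exposed_conversions],
    [holdout_conversions, holdout_n - holdout_conversions],
])
chi2, p_value, _, _ = stats.chi2_contingency(table)

print(f"Incremental lift: {lift:.1%}, p = {p_value:.3f}")
# A big view-through number with ~0% measured lift means the ads are taking
# credit for demand, not creating it.
```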