More than ever, trust is driving growth. Buyers don't just want content; they want knowledge and confidence in the people behind the brand. It's why credibility and authenticity cut through the social media noise, and why video and creators are having their moment. As Tequia Burt of the LinkedIn Marketing Collective says, "Trust has become the most valuable outcome in B2B - and the clearest path to ROI... But measurement gaps are holding many marketers back from scaling their impact."
So how do we measure trust? There's no single metric, and that's okay. Social media metrics are indicators and signals that connect to the bigger picture. You'll need a plan that captures several key performance indicators over time. Duarte Garrido, Rachel Harris, and I wrote about this back in February 2023, and it still stands: https://lnkd.in/evAfYmrJ
In fact, LinkedIn's new B2B Marketing Benchmark report reinforces this and also emphasises video, which has quickly become foundational in B2B marketing. Combining the metrics we've outlined with the Trust Flywheel and Trust Funnel from LinkedIn's framework will help you move your trust measurement forward. How are you tying your social media metrics to trust?
Measuring Brand Awareness
-
Mark Zuckerberg just outlined a future where Meta's AI handles everything from creative generation to campaign optimization to purchase decisions. His vision: businesses connect their bank accounts, state their objectives, and "read the results we spit out." The technical architecture he's describing would fundamentally reshape how advertising technology works. But there's a critical flaw in this approach that creates an opportunity for the next generation of advertising infrastructure. The trust problem isn't just about measurement transparency—though agency executives are rightfully skeptical of platforms "checking their own homework." The deeper issue is institutional knowledge transfer and real-time brand governance. Enterprise brands have decades of learned context about what works, what doesn't, and what could damage their reputation. This isn't just about brand safety filters. It's about nuanced understanding of seasonal messaging, competitive positioning, cultural sensitivities, and customer journey orchestration that can't be reverse-engineered from campaign performance data alone. If AI truly automates the entire advertising stack, brands will need their own AI agents—not just dashboards or approval workflows, but intelligent systems that can negotiate with vendor AI in real-time. Think of it as API-level conversation between two AI systems where the brand's AI has veto power over creative decisions, placement choices, and budget allocation. This creates fascinating technical challenges: How do you architect AI-to-AI communication protocols that maintain brand governance while enabling real-time optimization? How do you build systems that can incorporate institutional knowledge without exposing competitive advantages to vendor platforms? We're talking about building advertising technology that functions more like autonomous diplomatic negotiation than traditional campaign management. 
For platform companies pushing toward full automation, the question becomes whether they're building systems that enterprise clients can actually trust with their brands and budgets. For independent technology builders, there's an opportunity to create the middleware that makes AI-powered advertising actually viable for sophisticated marketers. The future of advertising isn't just about better algorithms—it's about building trust architectures that let those algorithms work together.
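The "veto power" idea above can be made concrete with a toy protocol: a brand-side agent that reviews every proposal from a vendor-side optimizer before it goes live, keeping its institutional rules in-house. This is a minimal sketch of the concept only; every class, field, and rule here is invented for illustration, and no real platform exposes such an interface today.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A vendor AI's proposed action: creative, placement, and budget (illustrative)."""
    creative_tags: set   # e.g. {"humor", "holiday"}
    placement: str       # e.g. "ctv", "mobile_app"
    budget: float

class BrandAgent:
    """Hypothetical brand-side agent. Its rules encode institutional knowledge
    that never leaves the brand's systems, so the vendor platform only sees
    approve/veto decisions, not the policy behind them."""

    def __init__(self, banned_tags, allowed_placements, max_budget):
        self.banned_tags = banned_tags
        self.allowed_placements = allowed_placements
        self.max_budget = max_budget

    def review(self, p: Proposal):
        """Return (approved, reason). The brand AI holds veto power."""
        if p.creative_tags & self.banned_tags:
            return False, "creative violates brand guidelines"
        if p.placement not in self.allowed_placements:
            return False, "placement outside governance policy"
        if p.budget > self.max_budget:
            return False, "budget exceeds delegated authority"
        return True, "approved"

brand = BrandAgent(banned_tags={"politics"},
                   allowed_placements={"ctv", "web_video"},
                   max_budget=50_000)
ok, why = brand.review(Proposal({"humor"}, "mobile_app", 10_000))
# vetoed: "placement outside governance policy"
```

A production version would be a real negotiation loop (the vendor AI revises and resubmits), but the asymmetry is the point: optimization happens on the platform side, governance stays on the brand side.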
-
Here's the new rule of GTM for 2025: it's about TRUST, not DISTRACTION. In 2024 and earlier, most companies were STILL playing the volume game:
More cold emails
More ads
More noise
But here's what I learned building partner programs at WeWork and Amex:
1. Identify Trusted Advocates
Customers are more likely to trust recommendations from voices they already know and respect. Who influences our target audience? Who already has their attention and trust? These could be industry leaders, complementary solution providers, or niche communities. Build partnerships with those who already have a strong connection to your ideal customers.
2. Collaborate to Add Value, Not Noise
Instead of interrupting your audience with another cold email or ad, collaborate with partners to create meaningful, value-driven touchpoints.
- Co-host a webinar addressing a shared customer pain point.
- Develop a joint white paper showcasing both brands' expertise.
- Offer bundled solutions that make life easier for the customer.
3. Leverage Existing Trust to Open Doors
Partners are amplifiers AND bridges. They help you cross the "river of distraction" and reach customers without the noise. A well-placed introduction or co-branded recommendation carries far more weight than another outbound message.
4. Measure the Shift from Interruption to Influence
If trust-building is your new GTM focus, your success metrics need to change too. Track things like:
- Partner-Sourced Leads: leads generated through trusted partner referrals.
- Engagement Rates: how customers interact with co-created content or campaigns.
- Pipeline Velocity: how quickly partner-driven deals progress compared to direct sales efforts.
Breaking through the noise requires genuine relationships. It's no longer about whose voice is the loudest; it's about whose voice your audience already trusts. The future isn't about interruption and distraction. It's about trust.
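The pipeline-velocity comparison in point 4 is easy to operationalize. A minimal sketch, assuming a simple deal log with a `source` field and open/close dates (all field names and figures are illustrative):

```python
from datetime import date

# Toy deal log: partner-sourced vs. direct-sourced deals (invented data)
deals = [
    {"source": "partner", "opened": date(2025, 1, 5),  "closed": date(2025, 2, 4)},
    {"source": "partner", "opened": date(2025, 1, 10), "closed": date(2025, 2, 19)},
    {"source": "direct",  "opened": date(2025, 1, 3),  "closed": date(2025, 3, 14)},
    {"source": "direct",  "opened": date(2025, 1, 20), "closed": date(2025, 3, 21)},
]

def avg_cycle_days(deals, source):
    """Average days from opened to closed for one lead source."""
    spans = [(d["closed"] - d["opened"]).days
             for d in deals if d["source"] == source]
    return sum(spans) / len(spans)

partner_velocity = avg_cycle_days(deals, "partner")  # 35.0 days
direct_velocity = avg_cycle_days(deals, "direct")    # 65.0 days
```

If partner-sourced deals consistently close faster, that gap is a direct, reportable measure of the trust advantage the post describes.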
-
Confessions of a Media Auditor, Part 2.
We are regularly asked by brand leaders to audit their streaming investments. Almost every time, we see the same pattern: platform dashboards look outstanding while brand search, site sessions, and sales remain flat. This isn’t conspiracy — it’s the byproduct of a persistent information gap in streaming buys. Fragmented supply, fuzzy definitions of “CTV,” and incentive-driven metrics combine to tell a story of success that doesn’t match business reality. Too often, this gap is reinforced by agencies — sometimes masquerading as “ad tech platforms” with managed-service offerings — that fail to educate clients on the true variability of inventory. Whether through omission or oversimplification, that lack of clarity amounts to obfuscation. The outcome is predictable: smart brands, lured by the promise of cheap CPMs, eventually conclude that CTV doesn’t work. But before we accept that conclusion, we ask five questions:
1. What inventory are you actually buying?
2. Which devices are your ads truly delivered on?
3. What % of impressions reached your primary audience?
4. Did you budget enough reach to separate signal from noise in your leading indicators?
5. Did you deliver enough frequency to command attention?
Today, let’s start with #1. Here’s a real client report excerpt (redacted). Look closely:
- “RON” and generic labels (e.g., “OTT Sports”) → broad, exchange-based supply where the exact programming is unknown.
- CPMs under $6 → far more likely a mix of mobile/desktop/app video and long-tail FAST channels than true large-screen CTV.
- Attractive CPAs → useful in lower-funnel harvesting, but as a north star for CTV, CPA is a vanity metric. It flatters platforms because it rewards clicks from users already in-market, inflates performance with cheap placements (often mobile/app units), and says little about incremental brand impact.
What to prioritize instead:
➡️ Share of impressions on large-screen CTV (not just “OTT”).
➡️ On-target % against your primary audience. ➡️ Reach & frequency sufficiency for that audience. ➡️ Incremental branded search/site-visit lift (vs. baseline). 👉 If your CTV CPMs are in the single digits, you’re not buying CTV.
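The first three priorities above can be computed directly from a log-level impression file, if your vendor will provide one. A minimal sketch, assuming illustrative field names (`device`, `on_target`, `user`, `cpm`) rather than any specific platform's export schema:

```python
# Toy log-level impression records (invented data; field names are assumptions)
impressions = [
    {"device": "ctv",     "on_target": True,  "user": "u1", "cpm": 18.0},
    {"device": "ctv",     "on_target": False, "user": "u2", "cpm": 16.0},
    {"device": "mobile",  "on_target": True,  "user": "u1", "cpm": 4.5},
    {"device": "desktop", "on_target": False, "user": "u3", "cpm": 5.0},
]

n = len(impressions)
# Share of impressions actually delivered to large-screen CTV devices
large_screen_share = sum(i["device"] == "ctv" for i in impressions) / n   # 0.5
# Share of impressions reaching the primary audience
on_target_pct = sum(i["on_target"] for i in impressions) / n              # 0.5
# Reach (unique users) and average frequency (impressions per reached user)
reach = len({i["user"] for i in impressions})                             # 3
avg_frequency = n / reach
# Blended CPM: single-digit blends usually signal heavy non-CTV mix
avg_cpm = sum(i["cpm"] for i in impressions) / n
```

Even this crude cut exposes the pattern the post describes: a blended CPM dragged down by cheap mobile/desktop units, and a "CTV" buy where only half the impressions hit a large screen.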
-
University rankings often dominate the conversation in higher education, but do they really measure what matters most? After working in this space, I’ve started questioning the metrics that drive decisions. Rankings focus on things like research publications, student placements, and enrollment growth, but they miss the mark on what truly matters - real impact. For instance, research publication numbers look impressive, but what about the real-world applications of that research? Are universities measuring the community challenges addressed through academic projects, or how faculty-student collaborations are solving industry problems? And while student placement stats are important, they don’t tell us whether those jobs actually leverage what graduates learned and lead to long-term satisfaction. Instead of focusing solely on rankings, universities should measure tangible impact - like alumni success, social innovation, and real-world application of research. After all, it’s the institutions making a real difference in communities and industries that should be celebrated, not just the ones at the top of a list. But this goes beyond just universities - how do we all measure success in life, work, and growth? Is it the visible achievements or the impact we’re creating?
P.S. Sharing a few pictures from our institute, KR Mangalam, where we make sure that the impact created is tangible.
-
🧑⚖️ Do Law Firm Rankings Really Reflect Quality of Legal Representation? 🏆
When facing a legal dispute, choosing the right law firm can make all the difference—but most rankings focus on reputation and prestige, not actual results. This creates a major information gap that favors those with insider knowledge. What if there was a more data-driven way to assess law firm performance?
📝 Example: Consider a client choosing between a prestigious firm with excellent marketing and a smaller firm with better actual case outcomes. Traditional rankings might miss this crucial performance difference, but data-driven analysis reveals the true picture.
🔬 To address this challenge, researchers from St. Gallen and MIT, including Robert Mahari, have developed a breakthrough solution: a new ranking system based on litigation outcomes. Their approach helps level the playing field by giving all litigants access to objective performance data.
🔍 Key Findings:
📈 The study analyzes over 310K US civil lawsuits to build an outcome-based ranking that predicts law firm performance
🎯 Traditional reputation-based rankings often fail to predict actual litigation success
⚡ The new system achieves up to 10% higher accuracy in forecasting case outcomes
📊 The approach is specifically designed to reduce information asymmetry, giving clients better data for choosing legal representation
🎓 Conclusion: This research represents a significant step toward democratizing legal services, suggesting that the future of law firm selection lies in verifiable performance metrics rather than historical reputation alone.
🔗 Full Paper: https://lnkd.in/eAaYYnzw
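The core idea of ranking by outcomes rather than reputation can be sketched with a naive win-rate ranking. This is only a toy illustration of the principle (case data and firm names are invented); the paper's actual model is far more sophisticated and corrects for factors a raw win rate ignores.

```python
from collections import defaultdict

# Toy litigation outcomes (invented): firm name, case result
cases = [
    ("Prestige LLP", "loss"), ("Prestige LLP", "win"), ("Prestige LLP", "loss"),
    ("Boutique LLP", "win"),  ("Boutique LLP", "win"), ("Boutique LLP", "loss"),
]

record = defaultdict(lambda: [0, 0])   # firm -> [wins, total cases]
for firm, outcome in cases:
    record[firm][1] += 1
    record[firm][0] += outcome == "win"

# Rank firms by observed win rate, highest first
ranking = sorted(record, key=lambda f: record[f][0] / record[f][1], reverse=True)
# Boutique LLP (2/3 wins) ranks above the more prestigious Prestige LLP (1/3)
```

A real outcome-based ranking must adjust for case difficulty, court, claim type, and opposing counsel, since strong firms attract harder cases; that adjustment is precisely where the research contribution lies.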
-
The cookie is crumbling, but data-driven marketing isn’t going anywhere. 🍪❌
With third-party cookies on their way out, brands that relied heavily on them for targeting and personalization face a big challenge: how to reach the right audience without losing relevance.
👉 The answer lies in first-party data strategies. Instead of borrowing data, brands must now own their customer relationships and collect insights directly through trusted interactions. Some key shifts we’re seeing:
🔹 Value exchange: customers share data when they see clear benefits (personalized offers, loyalty rewards, exclusive content).
🔹 Omnichannel collection: websites, apps, email, and offline touchpoints are now crucial data sources.
🔹 Privacy-first personalization: respecting consent and transparency builds long-term trust while enabling targeted campaigns.
🔹 Smarter tech: CDPs and AI tools help unify, analyze, and activate first-party data for real-time decision-making.
In short, the brands that master first-party data will win in the post-cookie era. It’s not about tracking, it’s about trust.
💡 Question for you: How is your organization preparing for a cookieless future?
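At its simplest, the "unify" step a CDP performs is identity stitching: events from different channels are joined on a shared key into one customer profile. A minimal sketch, assuming events carry an email address and using a hashed, lowercased email as the join key (all field names and data are illustrative, not any particular CDP's schema):

```python
import hashlib

def profile_key(email):
    """Pseudonymous join key: lowercase then hash, so the profile store
    never holds the raw email and case differences still match."""
    return hashlib.sha256(email.lower().encode()).hexdigest()[:12]

# Toy first-party events from different touchpoints (invented data)
events = [
    {"email": "ana@example.com", "channel": "web",   "action": "viewed_pricing"},
    {"email": "Ana@example.com", "channel": "email", "action": "opened_newsletter"},
    {"email": "bo@example.com",  "channel": "app",   "action": "signed_up"},
]

profiles = {}
for e in events:
    p = profiles.setdefault(profile_key(e["email"]), {"channels": set(), "actions": []})
    p["channels"].add(e["channel"])
    p["actions"].append(e["action"])

# Ana's web and email touchpoints stitch into a single profile,
# despite the differing capitalization of her address.
```

Real CDPs stitch on many more signals (device IDs, logins, consent state) and enforce consent before activation, but the join-on-a-durable-key pattern is the foundation.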
-
Charles Darwin's finches and Fortune 500 companies have a lot in common. In the Galapagos, finches with the right beak shape for their environment thrive. Those that can't adapt? They don't survive. The same principle applies to corporate reputation. Companies that struggle to maintain public perception - and trust - struggle to stay in business. They don’t know how to adapt to their reputational surroundings. The problem is most companies don’t even know how to measure reputation accurately - let alone take action based on what they learn. For decades, businesses relied on reputation surveys and rankings to gauge their success - but surveys are limited due to inherent bias, standardized or general questions, and one-word responses that lack nuanced explanations. Surveys show what someone thinks, not the specific actions someone takes. In other words, you can't understand evolution by counting how many finches are on each island. You understand evolution by studying why some survive and others don't. That’s the change we need to make in how we measure corporate reputation: we need to tie our analysis to specific outcomes - like customer and employee satisfaction - to understand why some businesses survive and some do not. Traditional reputation rankings tell you where you stand today. But studying your company's adaptive traits shows you how to win tomorrow. Reputation spans everything a company communicates and the responsibility it demonstrates - which means reputation is not about your rank. It's about understanding which attributes matter in your competitive environment. Stop chasing rankings. Start studying what makes your company adaptable.
-
🔍 A New Lens for Evaluating Research Integrity: The Research Integrity Risk Index (RI2)!
While traditional university rankings often emphasize productivity and citation metrics, the RI2 Index takes a bold step in a new direction: highlighting institutional risks to research integrity.
📉 The RI2 is built on two key components:
1️⃣ Retraction Risk – measures the number of retracted articles (due to fraud, plagiarism, ethical violations, or manipulation) per 1,000 publications over the past two years.
2️⃣ Delisted Journal Risk – captures the share of publications in journals removed from Scopus or Web of Science for breaching publishing standards.
📊 The 2025 analysis (see attached file) reveals that some high-output institutions face serious integrity risks that are completely overlooked by conventional rankings.
⚠️ Important Reminder for Researchers and Academic Leaders: Reputation and visibility in research take years to build—but can be destroyed in a moment by a single act of unethical publishing. Publishing in a questionable journal or being associated with manipulated research not only harms your reputation but can also damage the standing of your entire institution.
✅ Integrity matters. Metrics like RI2 help universities, research managers, and ethics committees detect hidden vulnerabilities and adopt responsible research practices.
📥 The June 2025 RI2 rankings file is available at https://lnkd.in/dHJrrct2. I strongly encourage academic leaders and policymakers to explore it carefully and reflect on how their institution measures up.
Thanks, Lokman Meho, for developing the RI2 ranking and explaining it at: Meho, L. I. (2025). Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global Research. arXiv:2505.06448.
#ResearchIntegrity #RI2 #ResponsibleResearch #ScientificEthics #RetractionRisk #PredatoryJournals #ResearchVisibility #AcademicPublishing #Bibliometrics
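As described, the two RI2 components are straightforward to compute. How the published index weights and combines them is defined in Meho's paper; the composite at the end of this sketch is only an assumed average of crudely normalized components, not the official formula, and all institution numbers are invented.

```python
def retraction_risk(retractions, publications):
    """Component 1: retracted articles per 1,000 publications
    (over the past two years, per the RI2 definition)."""
    return 1000 * retractions / publications

def delisted_journal_risk(delisted_pubs, publications):
    """Component 2: share of output in journals delisted
    from Scopus or Web of Science."""
    return delisted_pubs / publications

# Illustrative institution: 12,000 papers, 30 retractions,
# 600 papers in since-delisted journals (all figures invented)
rr = retraction_risk(30, 12_000)          # 2.5 retractions per 1,000 papers
dr = delisted_journal_risk(600, 12_000)   # 0.05, i.e. 5% of output

# Hypothetical composite: average of components scaled to [0, 1]
# against assumed caps of 10/1,000 and 10% (NOT the published RI2 weighting)
ri2_like = ((rr / 10) + (dr / 0.10)) / 2
```

Even without the official weighting, tracking the two raw components over time gives a research office an early-warning signal that citation-based rankings will never surface.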