Kyoto Governance Innovation Session Recap 3️⃣: What happens to democracy when anyone can call the truth a lie?

The public square is shifting: not just what we believe, but whether we believe anything at all.

AI-driven media tools give us unprecedented power to create, edit, and share content at scale. That is a massive opportunity for journalism, civic engagement, and storytelling. But it also fuels a dangerous dynamic: the Liar's Dividend.

📌 The Liar's Dividend explained: once people know realistic fakes exist, every inconvenient truth can be dismissed as "fake", even when it is real. It's not just misinformation; it's the weaponization of doubt.

Why this matters for executives in media and governance:
🔹 Loss of trust = loss of influence: newsrooms and institutions risk a collapse in public confidence.
🔹 Verification becomes a bottleneck: "evidence" will face higher scrutiny, legal challenges, and longer verification cycles.
🔹 Platforms face accountability pressure: expect rising demands for AI provenance tracking, watermarking, and content authenticity frameworks.
🔹 Polarization will deepen: competing realities harden into "reality silos", reducing consensus on even basic facts.

💡 Key insights to act on now:
1️⃣ Invest in trust infrastructure: content provenance, cryptographic signatures, independent verification networks.
2️⃣ Evolve editorial standards: prepare both to prove what is true and to counter false dismissals of it.
3️⃣ Educate audiences: transparency about verification builds more trust than perfection.
4️⃣ Plan for disinformation crises: treat them as inevitable, not hypothetical.
5️⃣ Forge cross-sector alliances: no single newsroom, company, or regulator can solve this alone.

The next chapter of media isn't just about fighting falsehoods; it's about defending the legitimacy of truth itself. If you're in media leadership, policy, or governance, this is your moment to build the resilience the public will rely on.

Thank you Nathaniel Persily for your amazing session on the future of AI in media and democracy!

💬 How are you preparing for the Liar's Dividend in your strategy?
🔁 Share this with someone shaping the future of information.

#AI #Media #Leadership
With Kyoto University, the Kyoto University Center for Interdisciplinary Studies of Law and Policy (KILAP), and Verena Krawarik.
Building trust through media provenance
Explore top LinkedIn content from expert professionals.
Summary
Building trust through media provenance means using digital tools and technical standards to trace the origin and edit history of media, such as images and videos, so people can confirm that content is authentic. As synthetic content becomes more convincing, knowing where information comes from is key to rebuilding trust in what we see online.
- Adopt verification tools: Use features like content credentials and verification badges to help audiences identify authentic media and trustworthy profiles.
- Prioritize transparency: Clearly show when media has been created or modified by AI, making it easier for users to understand the origins and edits of digital content.
- Build trust systems: Shift from relying on personal perception to technical systems, such as cryptographic proofs and provenance frameworks, that confirm what's real (a minimal sketch follows this list).
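To make that last point concrete, here is a minimal sketch of what a provenance record can look like: a small manifest that binds claims about a file's origin to a hash of its exact bytes, signed by the publisher. This is an illustrative toy, not the C2PA format; the field names and the `make_manifest` helper are hypothetical, and the widely used Python `cryptography` package (with Ed25519 standing in for certificate-based signing) is assumed.

```python
# Toy provenance manifest, loosely inspired by the idea behind C2PA.
# Field names and helpers are illustrative, not any real standard.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(media: bytes, source: str, tool: str) -> dict:
    """Bind claims about origin to the exact bytes via a content hash."""
    return {
        "source": source,  # who published it
        "tool": tool,      # what captured or edited it
        "content_sha256": hashlib.sha256(media).hexdigest(),
    }

def sign_manifest(key: Ed25519PrivateKey, manifest: dict) -> bytes:
    """Sign a canonical serialization so signer and verifier hash the same bytes."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    photo = b"...raw image bytes..."
    manifest = make_manifest(photo, source="Example Newsroom", tool="camera-firmware-1.2")
    print(manifest)
    print("signature:", sign_manifest(key, manifest).hex()[:32], "...")
```

Anyone holding the publisher's public key can re-serialize the manifest, verify the signature, and re-hash the file; if either check fails, the record or the content has been altered. Real frameworks layer certificate chains, edit histories, and standardized embedding on top of this core idea.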
Advancements in AI have made it increasingly difficult to distinguish between what is real and what is not, and inauthenticity is a growing problem. Given this week is International Fraud Awareness Week, I wanted to highlight a few ways our teams are working to protect members from inauthentic interactions on LinkedIn.

1. Detecting and removing fake accounts - Fake accounts are the root of a lot of harm on the internet and often use AI-generated profile images to disguise themselves and do harm through scams and fraud. Last year, our teams collaborated with Professor Hany Farid of the University of California, Berkeley to develop a new approach for detecting a common type of AI-generated profile photo with 99.6% accuracy. Earlier this year, we shared an update on the work and introduced a new concept for a model that can detect AI-generated profile images produced by a variety of different generative algorithms. You can read the latest on our research and approach here: https://lnkd.in/J8dskW

2. Helping foster content authenticity and transparency - Digital information is a critical part of our everyday lives, and we want our members to be able to accurately identify AI-generated or AI-edited images and videos. A few months ago, LinkedIn started rolling out Content Credentials, the technical standard from the Coalition for Content Provenance and Authenticity (C2PA). Content Credentials show up as a "Cr" icon on images and videos that contain C2PA metadata. When you click the icon, you can trace the origin of the media, including the source and history of the content and whether it was created or edited by AI. You can learn more about LinkedIn's adoption of the C2PA standard here: https://lnkd.in/eGRhdcEc

3. Features to signal trust - Building trust is the first step to any opportunity, whether it's a new job or a new connection. LinkedIn's verification feature allows members to display verified information, including identity, workplace, and educational institution. A verification badge can give others more confidence to interact because specific information has been confirmed, which builds credibility with your audience. When you see a verification badge on LinkedIn, you can use that information to make informed decisions about the people, companies, and jobs you interact with. Learn more about the latest on LinkedIn verification: https://lnkd.in/e3Q_FeqY

While bad actors are relentless in their efforts, I am so proud to be part of this mission-driven team continuing to disrupt their plans, equip our members with more tools, and keep LinkedIn safe, trusted, and professional.
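For readers curious what those Content Credentials look like at the byte level: C2PA metadata is embedded in the file itself as JUMBF boxes (in JPEGs, inside APP11 segments), whose labels include the ASCII string "c2pa". The sketch below is only a crude presence probe based on that observation; it proves nothing about validity, since real verification means parsing the manifest and checking signatures and trust lists with tooling such as the open-source c2patool. Treat the byte patterns here as an assumption about typical files, not a spec-level parser.

```python
# Crude probe for embedded C2PA Content Credentials: scans a file for the
# JUMBF box type ("jumb") and the "c2pa" manifest-store label. Presence is
# only a hint; this does NOT parse boxes or validate any signatures.
import sys

def probably_has_content_credentials(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "may contain" if probably_has_content_credentials(path) else "no sign of"
        print(f"{path}: {verdict} C2PA metadata")
```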
-
Fighting disinformation is a key challenge today. There's too much focus on watermarking fake content. Here's a better approach 👇

On Monday, BBC News released its first article featuring "How we verified this", a new provenance feature that verifies the origin of a piece of content (link in comments).

✅ This helps readers distinguish between trustworthy content and media that could have been altered or faked.

The BBC's R&D team has been working on Content Credentials since 2019, joining forces with Adobe to create C2PA, an open standard for media provenance. Since then, media and technology companies such as OpenAI, the NYT, Meta, Microsoft, and Synthesia have signed up to C2PA.

While public discourse tends to focus on watermarking fake content, the BBC is doing the opposite, and I completely agree with their approach!

🔎 Verifying trusted media sources is far more useful than trying to tag the fake. Why?

👉 C2PA's cryptographic binding breaks if the content is edited in any way. This makes tagging trusted media much more effective, because the reader can be certain that content signed by the BBC has not been tampered with. It is the exact opposite of the consensus approach of tagging fake content and hoping bad actors don't strip the watermark...

A big win for fighting disinformation and a true testament to the public good of the BBC. All major publications should adopt C2PA so we know which media to trust!
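The "binding breaks" point above is easy to demonstrate. In this minimal sketch (assuming the Python `cryptography` package, with a bare Ed25519 key standing in for the certificate-based signing C2PA actually uses), changing even one byte of signed content makes verification fail:

```python
# Why a cryptographic binding is tamper-evident: any edit to the signed
# bytes invalidates the signature, so readers can detect alteration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
public_key = key.public_key()

published_image = b"pixels of the image as the newsroom published it"
signature = key.sign(published_image)

def check(content: bytes) -> str:
    try:
        public_key.verify(signature, content)
        return "VERIFIED: bytes match what the publisher signed"
    except InvalidSignature:
        return "BROKEN: content was altered after signing"

print(check(published_image))                         # VERIFIED
print(check(published_image.replace(b"p", b"P", 1)))  # BROKEN after a one-byte edit
```

This is why tagging trusted content works: a forger cannot edit the media and still produce a valid signature without the publisher's private key.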
-
Trust has always been the glue of any functioning society, but historically it was rooted in direct human perception: we trusted what we could see, hear, feel, and verify with our own senses, as well as the reputation and consistency of others.

The digital era already strained this: when most interactions moved online, we lost our full sensory toolkit and leaned almost entirely on visual perception (the image, the video, the text) to decide what's real. It worked because we assumed photos don't lie, videos show what happened, and a "realistic" look signals authenticity.

Generative AI breaks that last pillar. When you can't trust your eyes alone, because anything can be synthetically created to look "real", the burden shifts from perception to verification. So the new trust model is:
• Not what you see is what you get, but what you can prove is what you can trust.
• Not your senses, but the systems you rely on: provenance, credentials, reputation, technical proofs.
• Not a passive act, but an active practice: constant checking, validating, and re-checking.

In this sense, the big shift isn't that trust is new; it's that its foundation is moving from our senses to our systems. We've never had to outsource trust to technology at this scale before. That's what's fundamentally different now.

#TrustInTheDigitalAge #ContentAuthenticity #VerifyDontTrust #SeeingIsNotBelieving #ProvenanceMatters #visualcontent #visualtech