Building Trust in AI Applications

Explore top LinkedIn content from expert professionals.

  • View profile for Bertalan Meskó, MD, PhD

    The Medical Futurist, Author of Your Map to the Future, Global Keynote Speaker, and Futurist Researcher

    359,048 followers

    BREAKING! The FDA just released this draft guidance, titled Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, which aims to provide industry and FDA staff with a Total Product Life Cycle (TPLC) approach for developing, validating, and maintaining AI-enabled medical devices. Even in its draft stage, the guidance is important: it gives more detailed, AI-specific instructions on what regulators expect in marketing submissions and on how developers can control AI bias.

    What's new in it?
    1) It requests clear explanations of how and why AI is used within the device.
    2) It requires sponsors to provide adequate instructions, warnings, and limitations so that users understand the model's outputs and scope (e.g., whether further tests or clinical judgment are needed).
    3) It encourages sponsors to follow standard risk-management procedures and stresses that misunderstanding or incorrect interpretation of the AI's output is a major risk factor.
    4) It recommends analyzing performance across subgroups to detect potential AI bias (e.g., different performance in underrepresented demographics).
    5) It recommends robust testing (e.g., sensitivity, specificity, AUC, PPV/NPV) on datasets that match the intended clinical conditions (a sketch of such subgroup testing follows below).
    6) It recognizes that AI performance may drift (e.g., as clinical practice changes); sponsors are therefore advised to maintain ongoing monitoring, identify performance deterioration, and enact timely mitigations.
    7) It discusses AI-specific security threats (e.g., data poisoning, model inversion/stealing, adversarial inputs) and encourages sponsors to adopt threat modeling and testing (fuzz testing, penetration testing).
    8) It proposes public-facing FDA summaries (e.g., 510(k) Summaries, De Novo decision summaries) to foster user trust and better understanding of the model's capabilities and limits.
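
    To make points 4 and 5 concrete, here is a minimal sketch of how a team might compute the named metrics (sensitivity, specificity, PPV/NPV, AUC) per demographic subgroup. The DataFrame columns, the 0.5 threshold, and the scikit-learn usage are illustrative assumptions, not requirements from the guidance.

    ```python
    # Sketch: per-subgroup performance report (assumed columns: y_true, y_score, subgroup).
    import pandas as pd
    from sklearn.metrics import confusion_matrix, roc_auc_score

    def subgroup_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
        rows = []
        for name, g in df.groupby("subgroup"):
            y_true = g["y_true"]
            y_pred = (g["y_score"] >= threshold).astype(int)
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
            rows.append({
                "subgroup": name,
                "n": len(g),
                "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
                "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
                "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
                "npv": tn / (tn + fn) if (tn + fn) else float("nan"),
                "auc": roc_auc_score(y_true, g["y_score"]) if y_true.nunique() > 1 else float("nan"),
            })
        return pd.DataFrame(rows)

    # Large gaps between subgroup rows would flag potential bias worth investigating.
    ```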

  • View profile for Simon Philip Rost

    Chief Marketing Officer | GE HealthCare | Digital Health & AI | LinkedIn Top Voice

    42,791 followers

    No Trust, No Transformation. Period. AI is becoming ready for the healthcare frontlines, but without trust it stays in the demo room. At every conference, from HIMSS, HLTH Inc., and the Society for Imaging Informatics in Medicine (SIIM) to yesterday's HLTH Europe Transformation Summit, the tech dazzles. AI, cloud, interoperability... all are ready to take the stage. And yet one thing lingers in every room: TRUST. We celebrate the breakthroughs and innovation, but quietly wonder: Will clinicians actually adopt this? Will patients accept it? It's unmistakable: if we don't solve the trust gap, digital tools remain at the demo stage instead of becoming adopted solutions.

    This World Economic Forum & Boston Consulting Group (BCG) white paper was mentioned yesterday at the health transformation summit by Ben Horner and was heavily discussed during our round-table conversation there. It lays out a bold vision for building trust in health AI, and it couldn't come at a more urgent time. Healthcare systems are under pressure, and AI offers real promise. But without trust, that promise risks falling flat.

    Here are some of the key points, summarized by AI from the report "Earning Trust for AI in Health":
    • Today's regulatory frameworks are outdated: they were built for static devices, not evolving AI systems.
    • AI governance must evolve: through regulatory sandboxes, life-cycle monitoring, and post-market surveillance.
    • Technical literacy is key: many health leaders don't fully understand AI's risks or capabilities. That must change.
    • Public–private partnerships are essential: to co-develop guidelines, test frameworks, and ensure real-world impact.
    • Global coordination is lacking: diverging regulations risk limiting access and innovation, especially in low-resource settings.

    Why it matters: AI will not transform healthcare unless we embed trust, transparency, and accountability into every layer, from data to IT deployment. That means clinicians/HCPs need upskilling, regulators need new tools, and innovators must be part of the solution, not just the source of disruption. The real innovation? Building systems that are as dynamic as the technology itself. Enjoy the read and let me know your thoughts.

  • View profile for Don Collins

    Data Analytics That Creates Impact, Not Burnout | Your Work Should Matter

    16,014 followers

    Anyone can ship a chart. Trusted analysts earn influence. Trust isn't a vibe. It's observable. Here are 20 signs of a data analyst you can trust 👇

    1. They document their methodology transparently
    ↳ Every stakeholder can follow their analytical journey
    2. They admit when they don't know something
    ↳ "I need to investigate this further" builds more trust than guessing
    3. They validate data quality before sharing insights
    ↳ Trust starts with clean, verified information
    4. They communicate uncertainty honestly
    ↳ Express confidence levels and margin of error upfront (a minimal sketch of this follows after the list)
    5. They follow up on previous recommendations
    ↳ Track whether their insights actually drove results
    6. They explain their assumptions clearly
    ↳ Make their thinking process completely visible
    7. They anticipate data limitations
    ↳ Proactively address what the analysis cannot prove
    8. They use consistent definitions across reports
    ↳ Ensure metrics mean the same thing every time
    9. They provide multiple scenarios when forecasting
    ↳ Present best case, worst case, and most likely outcomes
    10. They cite their data sources religiously
    ↳ Full transparency on where every number originates
    11. They avoid cherry-picking favorable results
    ↳ Present complete findings, even when inconvenient
    12. They explain complex concepts in simple terms
    ↳ Technical accuracy doesn't require technical jargon
    13. They provide actionable next steps
    ↳ Never leave stakeholders wondering "what do we do now?"
    14. They seek feedback and incorporate it genuinely
    ↳ Show they value others' perspectives and domain expertise
    15. They standardize their reporting formats
    ↳ Consistency reduces cognitive load for decision-makers
    16. They proactively flag potential data issues
    ↳ Alert stakeholders to collection problems or anomalies
    17. They maintain the confidentiality of sensitive data
    ↳ Respect data privacy and security protocols religiously
    18. They provide training on how to interpret their outputs
    ↳ Empower others to use insights correctly
    19. They collaborate with domain experts
    ↳ Combine analytical skills with business knowledge
    20. They respond promptly to questions about their work
    ↳ Accessibility builds confidence in their expertise

    Trust isn't about being perfect. It's about being transparent, reliable, and genuinely committed to accuracy. Which trust-building practice do you prioritize most as a data analyst?

    ♻️ Repost to help your network build trusted analytics practices
    🔔 Follow for daily insights on building credibility through data
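
    As a concrete example of sign #4, here is a minimal sketch of reporting an estimate with a margin of error instead of a bare number. The sample data, the bootstrap approach, and the 95% level are illustrative assumptions.

    ```python
    # Sketch: communicate uncertainty by pairing a point estimate with a bootstrap CI.
    import numpy as np

    rng = np.random.default_rng(42)
    conversions = rng.binomial(1, 0.12, size=1_000)   # stand-in for real outcome data

    point_estimate = conversions.mean()
    boot_means = [rng.choice(conversions, size=conversions.size, replace=True).mean()
                  for _ in range(2_000)]
    low, high = np.percentile(boot_means, [2.5, 97.5])

    print(f"Conversion rate: {point_estimate:.1%} "
          f"(95% CI {low:.1%} to {high:.1%}, n={conversions.size})")
    ```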

  • View profile for Richard van der Blom

    Helping B2B Sales & Marketing Teams Turn LinkedIn into a Lead Generation & Business Growth Engine | Social Selling Expert | International Keynote Speaker | 4x Investor

    253,448 followers

    Most creators think more likes = more reach. Wrong. I analyzed 1.8M posts to uncover the 11 signals that ACTUALLY boost your LinkedIn visibility. (Spoiler: white space and daily posting aren't on the list.)

    The signals that matter:

    👥 NEW FOLLOWERS + CONNECTION REQUESTS
    When someone follows you AND turns on notifications after reading your post? LinkedIn marks you as trustworthy. That's algorithmic gold.

    ⏳ DWELL TIME + "SEE MORE" CLICKS
    Forget white space tricks. Substance keeps people reading. When they click to expand your post, that curiosity signal is worth 10 empty likes.

    🔄 CROSS-PLATFORM + PRIVATE SHARES
    Your post shared in WhatsApp groups or Slack channels? Those are high-trust signals that scream "too valuable not to share."

    🌐 2ND/3RD DEGREE ENGAGEMENT
    Growth doesn't come from your inner circle. When strangers engage and interact with multiple posts? LinkedIn says you're consistently relevant.

    💬 SAVES, COMMENTS, ELEMENT CLICKS
    Yes, likes still count. But saves? Comments? Clicks on polls or carousels? These carry 3-5x more weight.

    You know what? You're probably optimizing for vanity metrics while the algorithm rewards trust signals.

    Want the complete breakdown of all 11 positive signals (plus the negative ones killing your reach)? Get the Algorithm Insights Report 2025: https://lnkd.in/eZMq8w_F

    ✨ FREE updates included - the first one drops early October with Q3 algorithm changes. Because the algorithm evolves faster than your strategy, and staying ahead isn't optional anymore. Which signal surprised you most?

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    219,264 followers

    𝗧𝗵𝗲 United Nations 𝗱𝗿𝗼𝗽𝗽𝗲𝗱 𝗮 𝗻𝗲𝘄 𝗿𝗲𝗽𝗼𝗿𝘁 𝗼𝗻 𝗔𝗜 𝗮𝗻𝗱 𝗵𝘂𝗺𝗮𝗻 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁: ⬇️

    While the world chases the next frontier model or AGI milestone, the UN cuts deeper: human development has flatlined (especially in the global South). Progress stalled. Inequality is rising. Trust is crumbling. No real bounce-back since Covid. And right in the middle of that, AI shows up. AI could drive a new era. Or it could deepen the cracks. It all comes down to how societies choose to use AI to empower people — or fail to.

    𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 14 𝗸𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀 𝘁𝗵𝗮𝘁 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁 𝘁𝗼 𝗺𝗲: ⬇️
    1. Most AI systems today are designed in cultures that don't reflect the majority world.
    → ChatGPT answers are most aligned with very high HDI countries. That's a problem.
    2. The real risk isn't AI superintelligence. It's "so-so AI."
    → Tools that destroy jobs without improving productivity are quietly eroding economies from the inside.
    3. Every person is becoming an AI decision-maker.
    → The future isn't shaped by OpenAI or Google alone. It's shaped by how we all choose to use this tech, every day.
    4. AI hype is costing us agency.
    → The more we believe it will solve everything, the less we act ourselves.
    5. People expect augmentation, not replacement.
    → 61% believe AI will "enhance" their jobs. But only if policy and incentives align.
    6. The age of automation skipped the global South. The age of augmentation must not.
    → Otherwise, we widen the digital divide into a chasm.
    7. Augmentation helps the least experienced workers the most.
    → From call centers to consulting, AI boosts performance fastest at the entry level.
    9. Narratives matter.
    → If all we talk about is risk and control, we miss the transformative potential to reimagine development.
    10. Wellbeing among young people is collapsing.
    → And yes, digital tools (including AI) are a key driver. Especially in high HDI countries.
    11. Human connections are becoming more valuable. Not less.
    → As machines get better at faking it, the real thing becomes rarer — and more needed.
    12. Assistive AI is quietly revolutionizing inclusion.
    → Tools like sign language translation and live captioning are expanding access — but only if they're accessible.
    13. AI benchmarks must change.
    → We need to measure "how AI advances human development," not just how well it performs on tests.
    14. The new divide is not just about access. It's about how countries "use" AI.
    → Complement vs. compete. Empower vs. automate.

    According to the UN, the old question was: "What can AI do?" The better question is: "What will we choose to do with it?" More in the comments and report below. Enjoy.

    𝗜 𝗲𝘅𝗽𝗹𝗼𝗿𝗲 𝘁𝗵𝗲𝘀𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁𝘀 — 𝗮𝗻𝗱 𝘄𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗺𝗲𝗮𝗻 𝗳𝗼𝗿 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀 — 𝗶𝗻 𝗺𝘆 𝘄𝗲𝗲𝗸𝗹𝘆 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿. 𝗬𝗼𝘂 𝗰𝗮𝗻 𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝗵𝗲𝗿𝗲 𝗳𝗼𝗿 𝗳𝗿𝗲𝗲: https://lnkd.in/dbf74Y9E

  • View profile for Roger Dooley

    Keynote Speaker | Author | Marketing Futurist | Forbes CMO Network | Friction Hunter | Neuromarketing | Loyalty | CX/EX | Brainfluence Podcast | Texas BBQ Fan

    25,757 followers

    AI knows LOTS about you. And it's about to set the prices YOU, personally, pay...

    One of the early movers in AI pricing is Delta Airlines. They plan to expand AI-personalized pricing from 3% to 20% of tickets by year's end. Their president told investors: "We will have a price that's available on that flight, on that time, to you, the individual."

    Customer reaction: "Wait, WHAT?" Translation: the algorithm has calculated how much you're likely to pay. Profit-wise, it's working. It's producing "amazingly favorable unit revenues." But what about the customers on the other side of these transactions? It seems like a zero-sum game.

    Delta's AI knows you. Your credit score. Purchase history. Loyalty status. That discount you almost clicked. How many times you checked the price. Whether you're on an iPhone or Android. Lots more.

    Here's the psychology they're missing: we're hardwired for fairness. Nobel winner Daniel Kahneman showed people will actually reject profitable deals if they feel unfair. They'll even pay extra to punish companies they perceive as predatory. When customers find out they paid more because AI analyzed their "willingness to pay," trust dies. This isn't yield management, where everyone understands prices vary by timing and open capacity. This is weaponized information asymmetry that makes used car dealers look transparent. (More on that in my Forbes CMO Network article, linked in comments.)

    The irony? Short-term revenue gains could trigger long-term loyalty collapse. Customers who feel manipulated don't just leave. They tell everyone why they left.

    What's your take: Is AI-personalized pricing the future of commerce or a trust-destroying mistake? Is there a right way to do this?

    #CustomerPsychology #AIpricing #CustomerExperience #PricingStrategy

  • View profile for Matt Wood

    CTIO, PwC

    75,344 followers

    𝔼𝕍𝔸𝕃 field note (2 of 3): Finding the benchmarks that matter for your own use cases is one of the biggest contributors to AI success. Let's dive in.

    AI adoption hinges on two foundational pillars: quality and trust. Like the dual nature of a superhero, quality and trust play distinct but interconnected roles in ensuring the success of AI systems. This duality underscores the importance of rigorous evaluation. Benchmarks, whether automated or human-centric, are the tools that allow us to measure and enhance quality while systematically building trust. By identifying the benchmarks that matter for your specific use case, you can ensure your AI system not only performs at its peak but also inspires confidence in its users.

    🦸♂️ Quality is the superpower — think Superman — able to perform remarkable feats like reasoning and understanding across modalities to deliver innovative capabilities. Evaluating quality involves tools like controllability frameworks to ensure predictable behavior, performance metrics to set clear expectations, and methods like automated benchmarks and human evaluations to measure capabilities. Techniques such as red-teaming further stress-test the system to identify blind spots.

    👓 But trust is the alter ego — Clark Kent — the steady, dependable force that puts the superpower in the right place at the right time and ensures those powers are used wisely and responsibly. Building trust requires measures that ensure systems are helpful (meeting user needs), harmless (avoiding unintended harm), and fair (mitigating bias). Transparency through explainability and robust verification processes further solidifies user confidence by revealing where a system excels and where it isn't ready yet.

    For AI systems, one cannot thrive without the other. A system with exceptional quality but no trust risks indifference or rejection: a collective "shrug" from your users. Conversely, all the trust in the world without quality reduces the potential to deliver real value.

    To ensure success, prioritize benchmarks that align with your use case, continuously measure both quality and trust, and adapt your evaluation as your system evolves. You can get started today: map use case requirements to benchmark types, identify critical metrics (accuracy, latency, bias), set minimum performance thresholds (aka exit criteria), and choose complementary benchmarks (for better coverage of failure modes, and to avoid over-fitting to a single number). A sketch of what such exit criteria can look like follows below. By doing so, you can build AI systems that not only perform but also earn the trust of their users, unlocking long-term value.
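
    As a minimal illustration of the "get started today" steps, here is a sketch of an evaluation plan with per-metric exit criteria and a simple pass/fail gate. The metric names, thresholds, and measured values are illustrative assumptions, not a prescribed framework.

    ```python
    # Sketch: exit criteria as data, checked against measured benchmark results.
    from dataclasses import dataclass

    @dataclass
    class ExitCriterion:
        metric: str
        threshold: float
        higher_is_better: bool = True

        def passed(self, value: float) -> bool:
            return value >= self.threshold if self.higher_is_better else value <= self.threshold

    eval_plan = [
        ExitCriterion("accuracy", 0.92),
        ExitCriterion("subgroup_accuracy_gap", 0.05, higher_is_better=False),  # bias proxy
        ExitCriterion("p95_latency_ms", 800, higher_is_better=False),
    ]

    measured = {"accuracy": 0.94, "subgroup_accuracy_gap": 0.03, "p95_latency_ms": 950}

    for criterion in eval_plan:
        value = measured[criterion.metric]
        status = "PASS" if criterion.passed(value) else "FAIL"
        print(f"{criterion.metric}: {value} ({status}, threshold {criterion.threshold})")
    ```

    Complementary criteria like these make the "collective shrug" failure mode visible early: a system can pass on accuracy yet still fail its gate on bias or latency.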

  • View profile for Okan YILDIZ

    Global Cybersecurity Leader | Innovating for Secure Digital Futures | Trusted Advisor in Cyber Resilience

    71,470 followers

    🛡️ Advanced Threat Modeling: Methodologies & Implementation Strategies

    Threat modeling is one of the most powerful yet underutilized practices in cybersecurity. As systems grow more complex and interconnected, the ability to anticipate, analyze, and mitigate threats before they materialize is critical for building resilient architectures. That's why I created this guide: Advanced Threat Modeling: Methodologies and Implementation Strategies for Security Architects.

    📌 What's inside?
    • Fundamentals & Core Principles → Systematic, attacker-focused, risk-prioritized approaches
    • Methodologies Deep-Dive → STRIDE, PASTA, DREAD, Attack Trees
    • Practical Techniques → Data Flow Diagrams (DFDs), trust boundaries, STRIDE-per-element analysis (a small sketch follows below)
    • Integration with DevSecOps → Threat Model as Code, validation with security testing
    • Tool Comparisons → OWASP Threat Dragon, Microsoft TMT, IriusRisk, ThreatModeler
    • Case Studies → Financial services & healthcare implementations
    • Future Trends → AI-enhanced modeling, supply chain focus, cloud-native approaches

    💡 Key takeaway: Threat modeling isn't just a security exercise — it's a business enabler. Done right, it reduces vulnerabilities, lowers remediation costs, and embeds security into the development lifecycle.

    👉 Download the full paper and let's discuss: How are you integrating threat modeling into your DevSecOps pipelines?

    #ThreatModeling #CyberSecurity #DevSecOps #RiskManagement #Architecture #ApplicationSecurity #InfoSec #SecurityArchitect
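
    To illustrate the STRIDE-per-element and "Threat Model as Code" ideas named above, here is a minimal sketch that declares system elements as data and enumerates candidate threats per element. The element names and the simplified applicability map are assumptions for illustration, not the guide's own template.

    ```python
    # Sketch: STRIDE-per-element threat enumeration from a declarative model.
    from dataclasses import dataclass

    STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
              "Denial of service", "Elevation of privilege"]

    # Simplified applicability per element type (real methodologies are more nuanced).
    APPLICABLE = {
        "external_entity": {"Spoofing", "Repudiation"},
        "process": set(STRIDE),
        "data_store": {"Tampering", "Repudiation", "Information disclosure", "Denial of service"},
        "data_flow": {"Tampering", "Information disclosure", "Denial of service"},
    }

    @dataclass
    class Element:
        name: str
        kind: str                      # one of the APPLICABLE keys
        crosses_trust_boundary: bool = False

    elements = [
        Element("patient_portal_api", "process", crosses_trust_boundary=True),
        Element("records_db", "data_store"),
        Element("clinician_user", "external_entity"),
    ]

    for e in elements:
        for threat in sorted(APPLICABLE[e.kind]):
            note = "  [crosses trust boundary: prioritize]" if e.crosses_trust_boundary else ""
            print(f"{e.name}: {threat}{note}")
    ```

    Keeping the model as plain data is what makes the "Threat Model as Code" integration with DevSecOps pipelines possible: it can be versioned, diffed, and validated in CI alongside the system it describes.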

  • View profile for Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,498,336 followers

    🤝 How Do We Build Trust Between Humans and Agents?

    Everyone is talking about AI agents: autonomous systems that can decide, act, and deliver value at scale. Analysts estimate they could unlock $450B in economic impact by 2028. And yet… most organizations are still struggling to scale them. Why? Because the challenge isn't technical. It's trust.

    📉 Trust in AI has plummeted from 43% to just 27%. The paradox: AI's potential is skyrocketing while our confidence in it is collapsing.

    🔑 So how do we fix it? My research and practice point to clear strategies:
    • Transparency → Agents can't be black boxes. Users must understand why a decision was made.
    • Human Oversight → Think co-pilot, not unsupervised driver. Strategic oversight keeps AI aligned with values and goals.
    • Gradual Adoption → Earn trust step by step: first verify everything, then verify selectively, and only at maturity allow full autonomy, with checkpoints and audits. (A small sketch of this idea follows below.)
    • Control → Configurable guardrails, real-time intervention, and human handoffs ensure accountability.
    • Monitoring → Dashboards, anomaly detection, and continuous audits keep systems predictable.
    • Culture & Skills → Upskilled teams who see agents as partners, not threats, drive adoption.

    Done right, this creates what I call Human-Agent Chemistry — the engine of innovation and growth. According to research, the results are measurable:
    📈 65% more engagement in high-value tasks
    🎨 53% increase in creativity
    💡 49% boost in employee satisfaction

    👉 The future of agents isn't about full autonomy. It's about calibrated trust — a new model where humans provide judgment, empathy, and context, and agents bring speed, precision, and scale. The question is: will leaders treat trust as an afterthought, or as the foundation for the next wave of growth?

    What do you think — are we moving too fast on autonomy, or too slow on trust?

    #AI #AIagents #HumanAICollaboration #FutureOfWork #AIethics #ResponsibleAI
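
    As a minimal sketch of the "gradual adoption" strategy above: an agent action is auto-approved only when the current autonomy stage and a confidence threshold allow it; otherwise it is handed off to a human reviewer. The stage names, thresholds, and action fields are illustrative assumptions, not a specific product's API.

    ```python
    # Sketch: calibrated trust via staged autonomy with human handoff.
    from dataclasses import dataclass

    # Minimum confidence required for auto-approval at each adoption stage.
    STAGE_THRESHOLDS = {
        "verify_everything": 1.01,            # nothing auto-approves
        "verify_selectively": 0.90,
        "full_autonomy_with_audit": 0.0,      # everything auto-approves, but is logged
    }

    @dataclass
    class AgentAction:
        description: str
        confidence: float

    def route(action: AgentAction, stage: str, review_queue: list) -> str:
        if action.confidence >= STAGE_THRESHOLDS[stage]:
            return f"auto-approved (audit-logged): {action.description}"
        review_queue.append(action)           # human handoff
        return f"sent to human reviewer: {action.description}"

    queue = []
    print(route(AgentAction("refund $40 to customer 123", 0.95), "verify_selectively", queue))
    print(route(AgentAction("close supplier account", 0.70), "verify_selectively", queue))
    ```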

  • View profile for Kuba Szarmach

    Advanced AI Risk & Compliance Analyst @Relativity | Curator of AI Governance Library | CIPM AIGP | Sign up for my newsletter of curated AI Governance Resources (1,700+ subscribers)

    17,343 followers

    🧩 What if someone gave you a step-by-step guide for auditing AI systems — with every legal reference already mapped out? That's what Dr. Gemma Galdon Clavell, PhD has done with her AI Auditing Checklist, published under the EDPB's Support Pool of Experts Programme. And I can't overstate how helpful this resource is.

    💡 Why does it matter? We keep saying audits are central to trustworthy AI, but very few tools actually show how to do one in practice. This checklist breaks down every stage of the audit, mapping it directly to GDPR and AI Act requirements. It's not just theory. It's actionable.

    Here's what makes it stand out:
    ✅ A clear structure across pre-processing, in-processing, and post-processing stages
    ✅ Templates for model cards, system maps, and documentation (a minimal model-card sketch follows below)
    ✅ Legal hooks for every question: articles, recitals, and chapters already linked
    ✅ Detailed prompts for testing bias, fairness, and accountability
    ✅ Guidance on adversarial audits when internal access isn't available

    Reading this, I finally felt like the gap between regulatory intent and practical implementation was being bridged. If you're tasked with AI governance, compliance, or procurement, this checklist is worth bookmarking. Have you tried using a structured checklist like this in your audit work? What's helped you most?

    #ResponsibleAI #AICompliance #GDPR #AIAudits #AIGovernance

    Did you like this post? Connect or Follow 🎯 Jakub Szarmach, AIGP, CIPM
    Want to see all my posts? Ring that 🔔
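
    Since the checklist calls for model cards as audit artifacts, here is a minimal, hypothetical model-card skeleton of the kind an auditor might collect. The field names are common model-card elements chosen for illustration, not the exact template from the EDPB document.

    ```python
    # Sketch: a model card captured as structured data, exportable for an audit file.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelCard:
        name: str
        intended_use: str
        out_of_scope_uses: list = field(default_factory=list)
        training_data: str = ""
        evaluation_data: str = ""
        metrics: dict = field(default_factory=dict)            # overall and per-subgroup
        known_limitations: list = field(default_factory=list)
        legal_references: list = field(default_factory=list)   # e.g., GDPR / AI Act articles

    card = ModelCard(
        name="triage-risk-scorer-v2",
        intended_use="Prioritize incoming cases; a human makes the final decision.",
        metrics={"auc_overall": 0.88, "auc_gap_across_subgroups": 0.04},
        known_limitations=["Not validated on non-English inputs"],
    )
    print(json.dumps(asdict(card), indent=2))
    ```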
