Link between knowledge and trust in breakthrough tech

Explore top LinkedIn content from expert professionals.

Summary

The link between knowledge and trust in breakthrough technology—such as artificial intelligence and other emerging systems—refers to how people's understanding and confidence in these innovations shape their adoption and impact across society. In simple terms, trust grows when people feel informed and assured that technology is transparent, safe, and aligns with their values.

  • Build understanding: Make it a priority to educate people about how new technologies work and discuss both their potential and their limitations in everyday language.
  • Encourage transparency: Demand clear explanations and accountability from organizations developing or deploying advanced systems to help users feel secure and informed.
  • Promote inclusive dialogue: Invite a range of stakeholders—such as employees, consumers, and regulators—into the conversation to address concerns and ensure technology serves broader societal interests.

Summarized by AI based on LinkedIn member posts
  • Jan Beger

    Global Head of AI Advocacy @ GE HealthCare

    This paper presents findings from a global survey of 48,000 people across 47 countries on public trust, attitudes, and use of AI technologies in 2025.
    1️⃣ Over half of people are wary of trusting AI, with greater skepticism in advanced economies; yet 72% still accept AI use.
    2️⃣ Two-thirds of people regularly use AI, but most lack formal training and report limited knowledge, especially in advanced economies.
    3️⃣ AI benefits like improved efficiency and decision-making are widely experienced, but so are harms including misinformation, job loss, and reduced human connection.
    4️⃣ 70% of respondents support stronger AI regulation, but only 43% believe existing laws are adequate, reflecting a governance gap.
    5️⃣ Employees report both performance benefits and inappropriate AI use at work, often without proper oversight or training.
    6️⃣ 83% of students use AI regularly, but many over-rely on it and hide their usage, risking critical skill development.
    7️⃣ Emerging economies show higher AI literacy, trust, and benefit realization than advanced economies, positioning them for faster innovation.
    8️⃣ Trust in AI has declined since 2022 despite rising use, driven by growing awareness of AI risks and system limitations.
    9️⃣ Trust is highest when AI systems are seen as safe, well-governed, and used by credible institutions like universities and healthcare providers.
    🔟 The paper proposes four pathways (knowledge, motivation, uncertainty reduction, and institutional trust) for promoting trusted AI adoption.
    ✍🏻 Nicole Gillespie, Steven Lockey, Tabi Ward, Alexandria Macdade, Gerard Hassed. Trust, attitudes and use of artificial intelligence: A global study 2025. University of Melbourne and KPMG. 2025. DOI: 10.26188/28822919

  • Khurshed Dordi

    CEO Coach | Business Growth Advisor | GCC Expert | Ex-MD & CEO / COO | Author

    AI won’t fail because of technology. It’ll fail because of culture.
    Every week, there’s a breakthrough: copilots, agents, real-time personalisation. Tech is moving at lightning speed. But here’s the gap:
    👉 75% of knowledge workers already use AI. Yet 60% of leaders admit their company has no clear plan.
    👉 Almost every enterprise invests, but only 1% feel “mature.”
    I spoke to a founder who rolled out an AI assistant. The tool worked flawlessly. Three months later, adoption was stuck below 20%. Why? Nobody knew when to use it. Some feared it made them look replaceable. Others worried they’d be blamed if the AI got it wrong. That’s not a tech issue. That’s a trust issue.
    Leaders must close three gaps:
    🔹 Speed vs. Readiness – Tech evolves in months; people need clarity and trust to adapt.
    🔹 Capability vs. Comfort – Tools feel useless without context and training.
    🔹 Access vs. Alignment – Adoption sticks only when AI links to people’s goals and growth.
    Research shows: orgs that learn with AI are 2× better at navigating uncertainty. And adoption scales fastest when leaders make it clear that employees won’t be penalised for AI mistakes.
    AI adoption is no longer about implementation. It’s about belief. And belief starts at the top.
    💡 CEO question: Are you giving your people tools, or the belief, to thrive with AI?
    #AIAdoption #Leadership #DigitalTransformation

  • Emilio Planas

    Strategic thinker and board advisor shaping alliances and innovation to deliver real-world impact, influence, and economic value.

    What if embracing AI wasn’t about how much we know but how much we imagine?
    The recent analysis by Chiara Longoni, Gil Appel and Stephanie M. Tully reveals a counterintuitive dynamic: individuals with lower AI literacy often show more enthusiasm for AI adoption than those with deeper technical understanding. This paradox, rooted in the sense of “magic” that AI evokes in less informed users, has implications far beyond marketing: it touches how cultural narratives, institutional trust and economic realities shape global AI engagement.
    Initial enthusiasm can drive adoption, but it also introduces long-term challenges in managing expectations and ethical risks, especially when perception is driven more by fascination than understanding.
    Around the world, openness to AI is shaped not just by knowledge but by trust in institutions, cultural attitudes toward innovation and the political context in which technology is introduced. In high-trust environments, even limited literacy may support adoption. In others, skepticism may persist regardless of technical skill.
    AI literacy is not static. As people interact with AI at home, at work or in education, their perceptions evolve. Fascination may give way to pragmatism or concern. Higher literacy often brings sharper awareness of job displacement, data extraction or algorithmic bias: not disinterest, but informed caution.
    Trust in developers also matters. Users may respond differently to AI from global players like OpenAI or Google versus regional or state actors like Huawei or Mistral. Perceptions vary depending on who is building the system and under what governance.
    Media narratives, from utopian promises to dystopian fears, further shape global perceptions. Hype cycles can widen the gap between what AI seems to do and what it actually delivers, especially in regions with limited access to advanced systems.
    Economic incentives complicate the equation. Freelancers or entrepreneurs may adopt AI quickly for productivity regardless of literacy. Meanwhile, highly literate professionals in vulnerable sectors may resist, not due to fear but due to clarity about what’s at stake.
    As AI becomes embedded in decisions from hiring to healthcare, long-term receptivity will hinge not on mystique but on legitimacy rooted in transparency, fairness and user trust. Because in a fragmented world, AI will not be trusted for how magical it appears but for how responsibly it is made to serve.
    #aiadoption #trust #technologyethics #geopolitics #futureofwork #digitalliteracy #publicpolicy
    Harvard Business Review Chiara Longoni Gil Appel Stephanie Tully

  • Creus Moreira Carlos

    Founder and CEO WISeKey.com NASDAQ:WKEY and SEALSQ.com NASDAQ:LAES | Best-selling Author | Former Cybersecurity UN Expert

    We are entering an era where the boundaries between technologies are dissolving. Artificial intelligence, quantum computing, blockchain, biotechnology, and space technologies are converging into powerful ecosystems that transcend traditional sectors. This convergence promises breakthroughs with the potential to transform industries, redefine economies, and extend human capabilities. Yet, it also raises profound questions: how can societies build trust in systems that are increasingly complex, interconnected, and opaque?
    The fusion of AI and quantum computing will accelerate problem-solving power to levels previously unimaginable. The combination of blockchain and IoT can create tamper-proof ecosystems for connected devices, ensuring transparency and accountability. The integration of space infrastructure with advanced telecommunications is enabling global connectivity, bridging digital divides, and strengthening resilience in critical infrastructures. When orchestrated thoughtfully, these synergies unlock innovations that no single technology could deliver alone.
    But with great convergence comes great risk. As technologies intertwine, vulnerabilities multiply. A flaw in one layer can cascade across entire ecosystems. Citizens, businesses, and governments are increasingly asking: who controls the data? How is it being used? Can we trust the algorithms that shape decisions about our health, our finances, or our freedoms?
    Trust has become the true currency of the digital age. Without it, adoption falters, innovation stalls, and society resists. The erosion of trust is visible in the skepticism toward AI systems, in data breaches that undermine confidence in institutions, and in geopolitical battles over technological sovereignty.
    To navigate this new reality, trust must be designed into technology from the start, not patched on as an afterthought. Transparency is essential to ensure that algorithms and infrastructures are explainable, auditable, and accountable. Security must be embedded through post-quantum cryptography, secure chips, and zero-trust architectures to safeguard the next generation of networks. Ethics must guide innovation, integrating human values to guarantee fairness, dignity, and inclusivity. Governance must evolve toward global cooperation, preventing fragmentation and creating standards that transcend borders.
    The convergence of technologies has the potential to build a future of unprecedented opportunity. But the foundation of this future must be trust: trust in systems, trust in institutions, and ultimately, trust between people. Technology alone cannot create trust. Trust is a social contract: earned through transparency, maintained through accountability, and strengthened through shared values. If trust becomes the cornerstone of convergence, technology will not only advance but will also elevate humanity.
    Transhumancode.com

  • Simon Philip Rost

    Chief Marketing Officer | GE HealthCare | Digital Health & AI | LinkedIn Top Voice

    No Trust, No Transformation. Period.
    AI is approaching readiness for the healthcare frontlines. But without trust, it stays in the demo room.
    At every conference, from HIMSS, HLTH Inc., and the Society for Imaging Informatics in Medicine (SIIM) to yesterday’s HLTH Europe Transformation Summit, the tech dazzles. AI, cloud, interoperability... all are ready to take the stage. And yet, one thing lingers in every room: TRUST.
    We celebrate the breakthroughs and innovation, but quietly wonder: Will clinicians actually adopt this? Will patients accept it? It’s unmistakable: if we don’t close the trust gap, digital tools stay stuck at the demo stage instead of becoming adopted solutions.
    This World Economic Forum & Boston Consulting Group (BCG) white paper was mentioned by Ben Horner at yesterday’s health transformation summit and was heavily discussed during our round-table conversation there. It lays out a bold vision for building trust in health AI, and it couldn’t come at a more urgent time. Healthcare systems are under pressure, and AI offers real promise. But without trust, that promise risks falling flat.
    Here are some of the key points, summarized by AI, from the report “Earning Trust for AI in Health”:
    • Today’s regulatory frameworks are outdated: They were built for static devices, not evolving AI systems.
    • AI governance must evolve: Through regulatory sandboxes, life-cycle monitoring, and post-market surveillance.
    • Technical literacy is key: Many health leaders don’t fully understand AI’s risks or capabilities. That must change.
    • Public–private partnerships are essential: To co-develop guidelines, test frameworks, and ensure real-world impact.
    • Global coordination is lacking: Diverging regulations risk limiting access and innovation, especially in low-resource settings.
    Why it matters: AI will not transform healthcare unless we embed trust, transparency, and accountability into every layer, from data to IT deployment. That means clinicians and other healthcare professionals need upskilling, regulators need new tools, and innovators must be part of the solution, not just the source of disruption.
    The real innovation? Building systems that are as dynamic as the technology itself.
    Enjoy the read and let me know your thoughts…

  • James Barry, MD, MBA

    AI Critical Optimist | Experienced Physician Leader | Keynote Speaker | Co-Founder NeoMIND-AI and Clinical Leaders Group | Pediatric Advocate | Quality Improvement | Patient Safety

    I frequently read Jan Beger here. He grasps the bigger picture of AI in healthcare. His recent article is deep and thought-provoking, getting at the essence of patient trust in healthcare physically, emotionally, and mentally (https://lnkd.in/gNXqtteg). How will AI change trust?
    He highlights: “there remains no shared understanding of what trust in AI actually means in clinical care.” “Trust in healthcare is, at its core, a relational experience—an act of opening oneself to another with the belief, or at least the hope, that they will respond with care.”
    TRUST is THE foundational element for a meaningful, beneficial patient-clinician relationship. Clinicians’ trust in AI will be shaped by their prior experiences with technology and their perception of how reliable, understandable, and controllable the AI is. Most clinicians, rightfully so, are skeptical of yet another tech innovation that claims to be a reliable solution to a healthcare problem. We have seen too many breakthroughs become burdens.
    “Optimal trust” is where both humans and AI (or AI developers) recognize their limitations and acknowledge that both can make mistakes. Mutual humility will become the foundation of a safer, more resilient healthcare system.
    Patients’ trust is deeply human, resting on emotional vulnerability. Trust will hinge on transparency, on understanding how and when AI is used, and on confidence in its safety and efficacy. Studies show that patients want to know a fair bit about the AI tool: manufacturer, training data, accuracy, effectiveness, reliability, limitations, and risks (doi: 10.1371/journal.pdig.0000826).
    Trust in AI differs from trust in people. Unlike trust between humans, which is relational and emotional, trust in AI can be structural, brittle, and cold. Silent updates, inconsistent behavior, and lack of explainability can erode trust, even in systems that once felt reliable. While AI interactions may foster a sense of trust, genuine trust will come from design based on sound ethical principles, responsible implementation and evaluation, and a clear moral compass with substantial accountability, rather than from artificial displays of empathy and sycophancy.
    Every patient encounter with a healthcare system or clinician requires a level of vulnerability. Someone has access not just to your body, but to your uncertainty, fear, pain, and your private story. Trust makes that surrender of yourself as a patient possible. Trust is not abstract. It is forged in moments:
    🟢 A nurse’s reassuring smile when discussing a child’s recovery.
    🟢 The gentle placement of a doctor’s hand on a shoulder while delivering devastating news.
    🟢 The unspoken sense that she “sees me, not just my symptoms.”
    Trust must be earned, co-constructed, and grounded in partnership. As AI becomes more integrated into healthcare, we must realize that no algorithm can truly recognize the sacred vulnerability of another human being.
    #UsingWhatWeHaveBetter
