Trust and power in digital aid systems

Summary

Trust and power in digital aid systems concerns how people’s confidence and control are balanced when digital tools, such as artificial intelligence and data systems, are used to deliver aid and make critical decisions. These discussions explore the importance of transparency, ethical design, and community involvement in ensuring that digital aid systems serve those in need without creating risks or shifting power unfairly.

  • Build transparency: Make digital aid systems open about how they work and what decisions they make, so everyone involved feels informed and included.
  • Prioritize accountability: Set clear rules and provide oversight for how data is collected and used to protect people’s rights and prevent misuse.
  • Invite community voices: Actively involve local communities in the design and ongoing management of digital aid tools, so solutions address real needs and build trust.
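
To make the first two bullets concrete, here is a minimal sketch in Python of one way a digital aid system could log each automated decision in a tamper-evident, append-only trail. Everything here (the DecisionRecord fields, the AuditLog class) is an illustrative assumption, not a schema taken from any of the posts below:

    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One auditable entry per automated aid decision (hypothetical schema)."""
        case_id: str          # pseudonymous case identifier, never raw personal data
        model_version: str    # the exact model or ruleset that produced the decision
        inputs_summary: dict  # the features the system actually used
        outcome: str          # e.g. "eligible" or "referred_for_human_review"
        explanation: str      # plain-language reason shown to the affected person

    class AuditLog:
        """Append-only log: each entry's hash covers the previous entry's hash,
        so any later tampering breaks the chain and is detectable."""
        def __init__(self):
            self.entries = []
            self._last_hash = "0" * 64

        def append(self, record: DecisionRecord) -> str:
            payload = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "record": asdict(record),
                "prev_hash": self._last_hash,
            }
            entry_hash = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            self.entries.append((payload, entry_hash))
            self._last_hash = entry_hash
            return entry_hash

        def verify(self) -> bool:
            """Recompute the whole chain; False means the log was altered."""
            prev = "0" * 64
            for payload, stored_hash in self.entries:
                recomputed = hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest()
                if payload["prev_hash"] != prev or recomputed != stored_hash:
                    return False
                prev = stored_hash
            return True

    log = AuditLog()
    log.append(DecisionRecord(
        case_id="case-0412", model_version="triage-v1.3",
        inputs_summary={"household_size": 4}, outcome="eligible",
        explanation="Meets income and household-size criteria.",
    ))
    assert log.verify()  # any later edit to an entry would fail this check

Publishing such a log, or at least its hashes, is one way to make "open about how they work" auditable rather than aspirational.
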
  • Jan Beger

    Global Head of AI Advocacy @ GE HealthCare

    84,919 followers

    This paper examines how trust is built or challenged among patients and healthcare professionals using AI-based triage systems in Swedish primary care.
    1️⃣ Trust relies on patients’ ability and willingness to provide accurate information during AI-guided symptom reporting.
    2️⃣ Some patients exaggerate symptoms to gain attention, driven by fears the AI might dismiss their concerns.
    3️⃣ Patients’ digital skills and prior experience with similar tools influenced how effectively they used the AI application.
    4️⃣ Concerns about how symptom data is used and stored shaped how openly patients interacted with the AI system.
    5️⃣ AI outputs must align with healthcare professionals’ clinical reasoning, especially in complex or nuanced cases.
    6️⃣ Experienced professionals were more skeptical of AI suggestions, using them as checks rather than guides, unlike less experienced peers.
    7️⃣ The AI’s rigid, symptom-focused questioning often failed to capture patient complexity, limiting trust and utility.
    8️⃣ Emotional responses, especially in vulnerable situations, shaped user trust more than cognitive evaluations alone.
    9️⃣ Professional oversight was critical: healthcare workers acted as a safeguard against potential AI errors or oversights.
    🔟 Both groups emphasized the need for clear roles, responsibilities, and guidelines for interpreting and acting on AI-generated information.
    ✍🏻 Emilie Steerling, Petra Svedberg, Per Nilsen, Elin Siira, Jens Nygren. Influences on trust in the use of AI-based triage—an interview study with primary healthcare professionals and patients in Sweden. Frontiers in Digital Health. 2025. DOI: 10.3389/fdgth.2025.1565080

  • Neeraj S.

    Improving AI adoption by 10x | Co-Founder Trust3 AI 🤖

    24,347 followers

    AI without trust is like a supercar without brakes. Powerful but dangerous. (Originally posted on Trust3 AI.)

    Consider this split:

    Without Trust Layer:
    → Black box decisions
    → Unknown biases
    → Hidden agendas
    → Unchecked power

    With Trust Layer:
    → Transparent processes
    → Verified outcomes
    → Ethical guardrails
    → Human oversight

    The difference matters because:
    - AI touches everything
    - Decisions affect millions
    - Stakes keep rising
    - Trust determines adoption

    What we need:
    → Clear audit trails
    → Explainable outputs
    → Value alignment
    → Democratic control

    Remember: Power without accountability? That's not innovation. That's danger.

    The future needs both:
    → AI advancement
    → Trust infrastructure

    Which side are you building for?
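
    One way to read the "with trust layer" column above is as a wrapper around a raw model. Below is a minimal sketch of that idea; the function names, the confidence threshold, and the escalation label are illustrative assumptions, not anything from Trust3 AI:

        from typing import Callable, Tuple

        def trust_layer(predict: Callable[[dict], Tuple[str, float]],
                        explain: Callable[[dict], str],
                        confidence_floor: float = 0.8):
            """Wrap a raw model so no bare black-box decision escapes:
            every output carries an explanation, and low-confidence
            cases are escalated to a human reviewer."""
            def guarded(case: dict) -> dict:
                label, confidence = predict(case)
                result = {
                    "decision": label,
                    "confidence": confidence,
                    "explanation": explain(case),  # explainable output, always attached
                }
                if confidence < confidence_floor:  # human oversight on uncertain cases
                    result["decision"] = "escalate_to_human_review"
                return result
            return guarded

        # Toy usage with stand-in model functions:
        triage = trust_layer(
            predict=lambda case: ("eligible", 0.65),
            explain=lambda case: "Income below threshold; household size 4.",
        )
        print(triage({"income": 900, "household_size": 4}))
        # confidence 0.65 < 0.8, so the decision comes back escalated for review

    The design choice is the point: the explanation and the escalation path live in the wrapper, so they cannot be skipped by any individual model behind it.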

  • Afua Bruce

    Author, The Tech That Comes Next | tech + strategy + impact | Executive Advisor | Board Member | Keynote Speaker

    7,236 followers

    Local governments are grappling with how to embrace AI in their work and in their communities. The agencies that recognize AI as one of many tools (with its own limits) and effectively engage with constituents will be better able to use AI in ways that support strong systems in their communities. The City of Long Beach is undertaking a process to implement these concepts.

    1️⃣ They started by identifying their responsibility and being transparent about their limitations: "We don’t have all the answers yet. But we’re not shying away from the tough questions. Our commitment is to approach this work with transparency, build trust, and continuously refine our engagement strategies as we learn."

    2️⃣ Then, they went to hear what their community members had to say. The City developed a survey about Gen AI use and distributed it both online and in community spaces such as libraries and neighborhood centers. From the City: "A clear takeaway from the survey was the presence of an information and trust gap between the City and residents when it comes to AI...Without this clarity, it is difficult for residents to feel confident that AI is being used responsibly, ethically, and in alignment with community values."

    3️⃣ Next, the City built a strategy that addresses what it learned from the community. An initiative now outlines concrete steps to close the information and trust gap, and the City launched a series of five free community workshops on navigating the new digital age (offering food and refreshments, hosted in ADA-accessible spaces, with interpretation services available).

    Aside from seeing some of the values articulated in The Tech That Comes Next clearly put into action here, this process excites me because it highlights that tech and digital interaction with communities presents an opportunity to build or break trust.

    More on Long Beach's work: https://lnkd.in/eWVcFtuX

    CC: Małgorzata (Małgosia) Rejniak

    #PublicInterestTech #AI #CivicTech #CommunityBuilding

  • Meenakshi (Meena) Das

    CEO at NamasteData.org | Advancing Human-Centric Data & Responsible AI

    16,099 followers

    After a week of calls pushing back on a big market research firm’s analysis strategy for immigrant-related opportunities, and then disagreeing with a well-established publication’s article on identity, I need to end the week on a better note.

    What if our commitments to data and AI meant pushing for deeper trust, even in our smallest actions? What if they could shift power back into the hands of communities?

    I believe our biggest strength (individually and collectively, as you and me) lies in committing to actions that can rethink the relationship between power, trust, and technology:
    ● trust: that is, imagining data systems that don't just extract information but invite participation, collaboration, and shared ownership.
    ● power: that is, picturing communities not as subjects of analysis but as architects of the questions, the data, and the solutions.
    ● technology: that is, envisioning, say, AI tools that challenge systemic inequities by shining a light on them and giving us the tools to act.

    Yes, there is more hope than we realize when someone asks, "How do we build with communities instead of for them?" The future of trust and technology must be written by those who dream and imagine better systems, and then work to create them. So, dream and imagine, a few times over + 1.

    #nonprofits #nonprofitleadership #community

  • Emily Springer, PhD

    Cut-the-hype AI Expert | Delivering AI value by putting people 1st | Responsible AI Strategist | Building confident staff who can sit, speak, & LEAD the AI table | UNESCO AI Expert Without Borders & W4Ethical AI

    5,211 followers

    🚨 The rise of "artificial humanitarianism": a warning for the future of refugee aid.

    A powerful new piece from MERIP exposes how digital technologies (biometrics, blockchain, predictive analytics, and even humanitarian robots) are reshaping the way aid is delivered to refugees. But at what cost?

    💡 Key takeaway: The shift toward "data-driven humanitarianism" is not just about efficiency; it’s about power. The same digital systems used to "help" refugees are also fueling militarization, surveillance, and profit-making at their expense.

    👉 This is why we must push for AI ethics, accountability, and stronger data protections in humanitarian aid. Refugees are not test subjects for experimental tech; their rights and dignity must come first.

    🔹 In Jordan’s Za’atari refugee camp, Syrian refugees must scan their irises just to buy food. Their transactions are logged on a blockchain, tying their survival to digital systems they have little control over.

    🔹 The UN’s biometric identity system (BIMS) has now expanded globally, with over 37 million refugees registered. It tracks life events from marriage to death, raising major concerns about data privacy, surveillance, and coercion.

    🔹 Tech companies and governments are capitalizing on humanitarian crises to develop AI-driven predictive models, autonomous aid delivery, and expansive data collection systems. But who benefits? With weak data protection laws, anonymized refugee data can be sold, shared, and exploited long after the crisis ends.

    🔹 Meanwhile, robots and AI are being positioned as the future of humanitarian response, raising troubling questions about algorithmic bias, consent, and decision-making. Are we really comfortable with AI dictating who gets aid and who doesn’t?

    Check it out for yourself: https://lnkd.in/eG5uEdDC

    What are your thoughts on AI in humanitarian work? Should digital tools play a larger role, or do they introduce new risks? Let’s discuss! ⬇️

    #AIethics #ResponsibleAI #DigitalHumanitarianism

  • Alex Bendersky

    Head of Innovation | Digital Health Product Strategy | Scaling AI, Data & Value-Based Care Solutions

    17,280 followers

    Continuing to explore trust in AI:

    ➡️ Trust as a regulatory factor: Trust is critical for adopting AI, influencing the willingness to accept AI-driven decisions and share tasks with it, while distrust limits usage.
    ➡️ Dimensions of trust: Trust in AI encompasses technical elements such as accuracy, transparency, and safety, and non-technical elements such as ethical and legal compliance.
    ➡️ Challenges of trust: AI’s complexity, unpredictability, lack of transparency, biases, and privacy concerns create barriers to trust, often resulting in resistance.
    ➡️ Trust metrics and measurement: Trust in AI can be evaluated using frameworks that focus on explainability, transparency, fairness, accountability, and robustness (a small worked example follows below).
    ➡️ Building trust: Strategies for increasing trust include improving AI’s transparency, documenting its processes, addressing ethical concerns, and designing systems that integrate empathy and privacy.
    ➡️ Distrust factors: Key contributors to distrust include surveillance, manipulation, and concerns over human autonomy and dignity, along with fears about unpredictable futures.
    ➡️ Equity in trust: Ensuring equitable trust in AI involves addressing biases and creating systems that do not disproportionately affect marginalized groups.
    ➡️ Future directions: Researchers need to develop robust frameworks to measure trust, integrate cultural diversity into AI designs, and establish ethical guidelines to ensure trustworthy systems.

    Exposure and experience will lead to greater trust.
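
    The trust-metrics item above is easier to act on with a concrete measure. As one hedged illustration (the choice of metric and the toy numbers are mine, not the post’s), demographic parity difference reports the gap in favorable-outcome rates across groups:

        from collections import defaultdict

        def demographic_parity_difference(decisions, groups):
            """Gap between the highest and lowest rate of favorable (1) outcomes
            across groups; 0.0 means every group is approved at the same rate."""
            totals, favorable = defaultdict(int), defaultdict(int)
            for outcome, group in zip(decisions, groups):
                totals[group] += 1
                favorable[group] += outcome
            rates = {g: favorable[g] / totals[g] for g in totals}
            return max(rates.values()) - min(rates.values()), rates

        # Toy example: eight approval decisions across two groups
        gap, rates = demographic_parity_difference(
            decisions=[1, 0, 1, 1, 0, 1, 0, 0],
            groups=["A", "A", "A", "A", "B", "B", "B", "B"],
        )
        print(rates)  # {'A': 0.75, 'B': 0.25}
        print(gap)    # 0.5: a gap this large would flag the system for an equity review

    A single number never settles the fairness question, but tracking one like this over time is a practical entry point to the equity and accountability themes above.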
