AI Governance Practices

Explore top LinkedIn content from expert professionals.

  • View profile for Shea Brown
Shea Brown is an Influencer

    AI & Algorithm Auditing | Founder & CEO, BABL AI Inc. | ForHumanity Fellow & Certified Auditor (FHCA)

    21,951 followers

New York State DFS is looking for comments on a proposed circular letter that outlines proper risk management for AI systems and external data used in insurance underwriting. The "Proposed Insurance Circular Letter" addresses the use of Artificial Intelligence Systems (AIS) and External Consumer Data and Information Sources (ECDIS) in insurance underwriting and pricing. The key points include:

💡 Purpose and Background: The DFS aims to foster innovation and responsible technology use in the insurance sector. It acknowledges the benefits of AIS and ECDIS but also highlights potential risks, such as reinforcing systemic biases that lead to unfair or discriminatory outcomes.

💡 Definitions and Scope: AIS refers to machine-based systems that perform functions akin to human intelligence, such as reasoning and learning, and that are used in insurance underwriting or pricing. ECDIS includes data used to supplement or proxy traditional underwriting and pricing, but excludes specific traditional data sources such as MIB Group exchanges, motor vehicle reports, or criminal history searches.

💡 Management and Use: Insurers are expected to develop and manage their use of ECDIS and AIS in a manner that is reasonable and aligned with their business model.

💡 Fairness Principles: Insurers must ensure that ECDIS and AIS are not based on and do not use protected class information, do not result in unfair discrimination, and comply with all applicable laws and regulations.

💡 Data Actuarial Validity: The data used must adhere to generally accepted actuarial practices, demonstrating a significant, rational, and non-discriminatory relationship between the variables used and the risk insured.

💡 Unfair and Unlawful Discrimination: Insurers must establish that underwriting or pricing guidelines derived from ECDIS and AIS do not result in unfair or unlawful discrimination, including by performing comprehensive assessments and regular testing (see the illustrative sketch below).

💡 Governance and Risk Management: Insurers are required to have a corporate governance framework that provides oversight, including board and senior management oversight, formal policies and procedures, documentation, and internal control mechanisms.

💡 Third-Party Vendors: Insurers remain responsible for ensuring that tools, ECDIS, or AIS developed or deployed by third-party vendors comply with all applicable laws and regulations.

💡 Transparency and Disclosure: Insurers must disclose their use of ECDIS and AIS in underwriting and pricing.

📣 Feedback Request: The Department is seeking feedback on the circular letter by March 17, 2024, and encourages stakeholders to contribute to the proposed guidance.

#ai #insurance #aigovernance #airiskmanagement Jeffery Recker, Dr. Benjamin Lange, Borhane Blili-Hamelin, PhD, Kenneth Cherrier
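To make the "regular testing" expectation concrete, here is a minimal Python sketch of one common screening step: comparing favorable-outcome rates across groups via an adverse impact ratio. This is illustrative only; the 0.8 review threshold borrows the US EEOC "four-fifths" heuristic rather than anything the DFS letter prescribes, and the group labels and data are hypothetical.

```python
# Illustrative fairness screen: adverse impact ratio of approval rates.
# Assumptions: group labels, sample data, and the 0.8 threshold are
# hypothetical choices, not DFS requirements.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    rates = {g: a / t for g, (a, t) in counts.items() if t > 0}
    ref_rate = rates[reference_group]
    return {g: r / ref_rate for g, r in rates.items()}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for group, ratio in adverse_impact_ratios(sample, "A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

Run at regular intervals on real underwriting outcomes, a check like this would feed the documentation and internal controls the circular letter asks for; a flagged ratio is a prompt for deeper actuarial review, not proof of unlawful discrimination.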

  • View profile for Saeed Al Dhaheri
Saeed Al Dhaheri is an Influencer

UNESCO co-Chair | AI Ethicist | International Arbitrator | Thought Leader | Certified Data Ethics Facilitator | Author | LinkedIn Top Voice | Global Keynote Speaker & Masterclass Leader | Generative AI • Foresight

    24,382 followers

AI's Impact Is Only As Strong As the Trust Built Upon Robust Governance

As we race toward an AI-powered future, where cities are intelligent, services personalized, and economies more efficient, there's one foundational truth we can't ignore: Advanced AI outcomes = strong AI governance foundations. Without public trust, the most sophisticated AI systems will stall at the edge of public resistance, compliance obstacles, and regulatory uncertainty.

🏛️ Governments and corporate organizations must lead by example:
- Embed ethics, transparency, accountability, and oversight into every stage of the AI lifecycle. Investing in these capabilities is as important as investing in the technology itself!
- Build trust not just through technology, but through responsible design, deployment, and engagement.
- Recognize that trust is not a given; it is earned.

In the AI economy, it's not just about innovation. It's about sustainable innovation rooted in trust.

#AI #TrustInAI #AIGovernance #ResponsibleAI #DigitalTrust #FuturesThinking #PublicSectorInnovation #CorporateLeadership #EthicalAI #AIforGood

  • View profile for Ross McCulloch

    Helping charities deliver more impact with digital, data & design - Follow me for insights, advice, tools, free training and more.

    22,833 followers

Charity Leaders & AI: Where Do We Start? 🤖

I've spent the last few years helping charities embed digital (and increasingly AI) into their core mission. AI was today's topic on the Third Sector Lab x SCVO Digital Senior Leaders Programme with me, John Fitzgerald and Maddie Stark. Here are the questions charity leaders need to ask, plus a few practical ways to move the conversation from hype to strategy 👇

The Big Questions We Need to Ask ❓
- Where is AI already affecting our mission, positively or negatively?
- How empowered (or anxious) do our staff and volunteers feel about AI?
- Which parts of our work could AI actually improve (reach, impact, efficiency)?
- Do we understand the risks (data, ethics, trust)? How will we keep our values central?
- Who else in our network is experimenting with AI, and what are they learning?

Five Practical Steps for AI-Ready Leaders
1️⃣ AI Impact Mapping 🗺️ Bring your team together. Map every touchpoint where AI could play a role, from fundraising and supporter comms to governance and frontline services. Pinpoint where the real wins and risks are for your charity.
2️⃣ Staff & Volunteer Pulse Check 🩺 Run a session where people role-play different AI scenarios. What opportunities and anxieties bubble up? (Be ready for honest feedback!) Use it to shape your AI literacy and support plans.
3️⃣ Debate Real-World AI Use Cases 👥 Share case studies: the good, the bad, and the complex. Chatbots for helplines? Automated grant application sorting? Data-driven supporter segmentation? Debate, don't sell, the practicalities and ethical red lines.
4️⃣ Risk & Governance Tabletop 🎲 Role-play as trustees, comms, digital leads and service staff: respond to a data breach caused by AI usage, or to staff concerns about AI bias in recruitment. Work out who needs to be in the room when things go wrong, and what new protocols may be needed.
5️⃣ Quickfire AI Experiment 🧪 Have your team test a popular AI tool: draft a donor email, summarise a board paper, generate a campaign image. Use Copilot, ChatGPT, Perplexity, Claude, Gemini or whatever tool is most relevant to your needs. Compare notes: what worked, what failed, and where was human oversight crucial?

Make Space for Messy Conversations 🪢
- Is AI use visible, or happening "off the books"?
- What would success, or failure, with AI look like for us next year?
- How can we work across the sector for stronger, more ethical approaches?
- What are the values we refuse to compromise on, no matter what shiny AI tool we see?

Don't Forget: Make It Actionable 💪
- Finish your next senior team meeting with a commitment
- Run a staff survey on AI
- Pilot a small AI project
- Join or create a sector AI peer group

If you've taken baby steps, had a tough internal debate, failed spectacularly, or just want to share a handy resource, I want to hear about it in the comments 👇

  • View profile for Meenakshi (Meena) Das
Meenakshi (Meena) Das is an Influencer

    CEO at NamasteData.org | Advancing Human-Centric Data & Responsible AI

    16,101 followers

Earlier this week, my friend and project partner Michelle shared 9 questions you can take from the AI Equity Project (attached here are the 9 questions she shared). I would like to take it a step further and talk about your allyship actions (+ useful resources tagged here). Because allyship must be purposeful, meaningful, and powerful, here are 5 ways you can be an ally to the ideas of equitable, responsible, beneficial AI in the nonprofit sector today:

● Advocate within your circles: Your voice matters. If you're on a board or part of a philanthropic organization, you have the power to influence. Encourage decision-makers to embed AI equity in funding guidelines and project criteria. Use the resources linked here to make your case.
● Build bridges: Introduce your staff and sector peers to tech partnerships or consultants who prioritize ethics and inclusivity in AI. Your network can be a bridge to equitable innovation.
● Become a fiscal ally: Support research and capacity-building for nonprofits embracing equitable AI. Consider becoming a fiscal sponsor for research projects focused on AI equity, or funding scholarships for nonprofit professionals attending AI training. To learn more about the AI Equity Project's sponsorship details, message me for a copy.
● Boost the signal: Sharing resources is a powerful way to spread knowledge and foster understanding. Share resources like this report or the ones below, and invite people to present to your team, board, or community. Education is the first step to transformation and to in-depth conversations about the future.
● Center community voices: Support your team and sector peers by asking better questions and including diverse voices in AI evaluations. Help them find ways to elevate marginalized perspectives in their decision-making processes.

Meaningful change requires all of us to act together. As the last AI post for the year, tagging projects I have been involved in or followed closely this year:
● GivingTuesday's AI Readiness Report: https://lnkd.in/dSXz92_t
● Donor Perceptions about AI (a project by Nathan Chappell, MBA, MNA, CFRE and Cherian Koshy): https://lnkd.in/dnpXdrYt
● AI Equity Project (thanks to the Giving Compass team for their partnership in this year's work on this project): https://lnkd.in/gX8AX-eZ

Also tagging some people/groups you should follow when thinking about AI (they all make valuable tools, resources, courses, and more): the GivingTuesday team, Fundraising.AI, Tim Lockie, Brandolon Barnett, Anne Murphy, Beth Kanter, Rachel Kimber, MPA, MS, Joanna Drew, John Kenyon, David Norris… who else am I missing? If you are interested in finding someone in particular in this AI space, send me a message, and I promise to make the connections (social media tagging is not my best game).

Let's continue building an equitable AI ecosystem, one step at a time! #nonprofits

  • View profile for Sharat Chandra

    Blockchain & Emerging Tech Evangelist | Startup Enabler

    46,210 followers

#DPI: Digital Public Infrastructure can drive a sustainable increase in #revenue collection and build trust in government. India's adoption of digital public infrastructure has helped reduce the country's income tax return processing time.

Trust in government and government effectiveness have a reciprocal relationship. Trust is enhanced when political institutions are strong and governments implement policies and initiatives that are aligned with the public interest and improve people's daily lives. And governments can be effective only when their citizens trust them enough to comply with laws, thereby creating the space for reforms.

Of course, trust in government needs more than just robust digital platforms. But the building of India's digital platform infrastructure has laid some of the foundations for increasing trust by creating an inclusive platform for citizens to transact digitally and by empowering users to have more control over their data.

Good digital infrastructure can create trust between any two counterpart actors by introducing tamperproof components for identity, #payments, and #security, which allows citizens and businesses to be certain of the #identity of their counterpart and of the legitimacy of the transaction. This reduces explicit and implicit costs for citizens when they interact with their government, and for businesses in their transactions with individuals, other businesses, and the government.

- From Kamya Chandra, Tanushka Vaid, and Pramod Varma's article in the International Monetary Fund's September 2024 F&D (Finance & Development) edition
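To illustrate the "tamperproof components" idea, here is a minimal sketch of a tamper-evident transaction record using an HMAC from Python's standard library. This is a simplification under stated assumptions: real DPI stacks rely on PKI and digital signatures rather than a shared secret, and the payload fields are invented.

```python
# Tamper-evidence sketch: any change to the signed record invalidates the tag.
# Assumptions: a shared-secret HMAC stands in for real digital signatures;
# the transaction fields are hypothetical.
import hashlib
import hmac
import json

SECRET = b"demo-key-not-for-production"

def sign(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(record: dict, tag: str) -> bool:
    return hmac.compare_digest(sign(record), tag)

txn = {"payer": "citizen-123", "payee": "agency-7", "amount": 250}
tag = sign(txn)
print(verify(txn, tag))   # True: the record is intact
txn["amount"] = 9999
print(verify(txn, tag))   # False: tampering is detected
```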

  • View profile for Omer Tene

    Partner, Goodwin

    14,913 followers

AI vendor management. One of the most pressing challenges companies face these days is vetting and contracting with AI vendors. This comes up in two contexts: (a) vetting solutions from AI vendors that your company considers adopting, and (b) vetting solutions from AI vendors that your vendors have adopted. Where do you start?

***
The second question comes up a lot in a GDPR or state privacy law context. Your vendors (processors / service providers) are required by law and/or contract to notify you when they start using a new subprocessor. Consider that companies with hundreds of vendors now get thousands of such notices. “We started using ChatGPT.” “We’re now using GitHub Copilot.” What’s a GC or CPO to do with such notices? In many cases they haven’t even approved the use of the same tools internally themselves....

***
When the EU AI Act comes into force, under Article 10(6a) of the Parliament’s draft, the obligations of an AI provider could flow down to deployers of AI: “Where the provider cannot comply with the obligations laid down in this Article because that provider does not have access to the data and the data is held exclusively by the deployer, the deployer may, on the basis of a contract, be made responsible for any infringement of this Article.” All the more reason for companies to *closely* vet the solutions they’re implementing.

***
And the new draft CCPA regs on risk assessments require “[a] service provider or contractor ... [to] cooperate with the business in its conduct of a risk assessment pursuant to Article 10...” The regs focus on such risk assessments for automated decision-making as well as the use of data for training AI. https://lnkd.in/ejk4fYgQ

***
There are a few useful checklists out there. The attached was created by Amber Nicole Ezzell from FPF based on convos with more than 30 experts: https://lnkd.in/e8E_t64W This one from PwC is usefully role-based: https://lnkd.in/eZwVAZmB And this one from CNIL provides insight into a regulator’s approach: https://lnkd.in/ewgBrXKj

***
I suggest going back to basics: what are the risks to the company’s PII and IP? Is there potential for bias or discrimination? Are there accountability mechanisms, including audit and log trails? Can you ensure explainability?

***
From a contractual perspective: look closely at the definitions of customer data, usage data and confidential information.

***
I also recommend consulting with outside counsel. From our vantage point, we see how many companies across industry sectors, both vendors and deployers, cope with and respond to these complex challenges.
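The "back to basics" questions lend themselves to simple per-vendor tracking. Below is a minimal sketch, assuming invented field names and a made-up vendor; the question list paraphrases the post and does not reproduce the FPF, PwC, or CNIL checklists.

```python
# Per-vendor tracking of the diligence questions from the post.
# Assumptions: the VendorAssessment structure and the example vendor
# "ExampleAI Inc." are hypothetical.
from dataclasses import dataclass, field

QUESTIONS = [
    "Are risks to the company's PII and IP identified and mitigated?",
    "Has the potential for bias or discrimination been assessed?",
    "Are accountability mechanisms (audit and log trails) in place?",
    "Can the vendor ensure explainability?",
    "Do contract definitions cover customer data, usage data and confidential information?",
]

@dataclass
class VendorAssessment:
    vendor: str
    answers: dict = field(default_factory=dict)  # question -> bool

    def open_items(self) -> list:
        # Questions not yet answered "yes" remain open diligence items.
        return [q for q in QUESTIONS if not self.answers.get(q)]

assessment = VendorAssessment("ExampleAI Inc.")  # hypothetical vendor
assessment.answers[QUESTIONS[0]] = True
print(f"{assessment.vendor}: {len(assessment.open_items())} open items")
for q in assessment.open_items():
    print(" -", q)
```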

  • View profile for Zoe Amar FCIM

Director, Zoe Amar Digital | Co-author, Charity Digital Skills Report | Co-chair, Charity AI Task Force | Chair, The Charity Digital Code of Practice | Writer, Third Sector | Trustee | Podcaster at Starts at The Top

    9,499 followers

I'm excited to share our AI checklist for charity trustees and leaders. Nick Scott and I have developed this free resource to help boards, and those who work with them, start the conversation about #ArtificialIntelligence, review progress and plan for the future, whatever stage you are at.

The checklist has been a big team effort and we are so grateful to all the charities and organisations who have helped shape it. We've had some great feedback from charities of different sizes and causes, including Hospice UK, Wikimedia UK and Christian Aid. Thank you to everyone who participated in the user testing, including the Charity Commission for England and Wales. There is so much happening in AI at the moment that it's really important we all learn together.

We hope you find the checklist useful and would love to hear what you think of it. https://lnkd.in/eUBgGW7i

Alongside the checklist, and as requested by the organisations we tested it with, we've published a blog for anyone who is new to AI, covering what AI is, how charities are using it and how to take your first steps with it. I'll put the link to this, along with our launch webinar today, in the comments.

#Charities #TrusteesWeek #CharityAIChecklist

  • View profile for Dr.Dinesh Chandrasekar (DC)

Chief Strategy Officer & Country Head, Centific AI | Nasscom Deep Tech, Telangana AI Mission & HYSEA - Mentor & Advisor | Alumni of Hitachi, GE & Citigroup | Frontier AI Strategist | A Billion $ before ☀️ Sunset

    31,171 followers

#AiDays2025 Round Table: #Community Sourcing for Low-Resource Languages

In an era where AI is fast shaping the contours of our digital future, the VISWAM.AI initiative stands as a timely and transformational one. Their mission to build community-sourced Large Language Models (LLMs), grounded in India's rich linguistic and cultural diversity, is not just pioneering: it is redefining how inclusive and ethical AI should be built. By anchoring their work in community participation, linguistic preservation, and ethical co-creation, VISWAM.AI offers a people-first approach to AI, moving beyond data extraction to cultural stewardship. Their ambition to mobilize 1 lakh (100,000) community interns to collect data from underrepresented geographies across India is both bold and brilliant. This isn't just about building better AI; it's about building equity, agency, and cultural resilience through AI.

1. Linguistic Equity by Design: In India, where linguistic hegemony often privileges English and Hindi, AI systems risk reinforcing this imbalance. The solution? Intentional design. Allocate equal engineering and validation effort to low-resource languages. Ethical AI must be built on informed consent, community ownership, and fair compensation, because data is not just input; it is identity and heritage.

2. Decentralized Internship Model: By decentralizing AI development, we bridge the urban-rural digital divide. This model should focus on capacity building through training in ethics and digital literacy; inclusivity by involving women, Dalit and Adivasi youth; and localized platforms using mobile-first tools in native languages. Partnerships with Swecha, local NGOs, and institutions serve as trust bridges to ensure mentorship and sustainability.

3. Tools for Low-Resource Languages: Many Indian languages are oral-first, with complex dialects and sparse corpora. Community-driven solutions, like collecting voice datasets from folklore and crowdsourcing annotation, are key. Elders, poets, and storytellers become linguistic technologists, preserving not just language but legacy. (A sketch of a consent-first data record follows below.)

4. Trust & Transparency: Bias in AI is structural. To mitigate it: include diverse dialects and accents in training, conduct bias testing and community validation, and promote explainable AI with local-language dashboards and storytelling.

What's Next?
- A living white paper on ethics, governance, and technical guidelines
- A roadmap for the internship program, with toolkits and impact metrics
- Collaboration with literary and linguistic organizations to enrich model depth

VISWAM.AI is planting seeds for an AI movement rooted in language justice, data sovereignty, and community wisdom. Let's co-create systems that don't just understand our languages, but respect our voices.

DC* Chaitanya Chokkareddy Kiran Chandra Ramesh Loganathan Centific
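As referenced in point 3, here is a minimal sketch of what a consent-first record for community-sourced voice data could look like. Every field name and the gating rule are assumptions for illustration, not VISWAM.AI's actual schema.

```python
# Consent-first voice-data record: a sample enters the training corpus only
# if consent, compensation, and community sign-off are all recorded.
# Assumptions: all field names and example values are hypothetical.
from dataclasses import dataclass

@dataclass
class VoiceSample:
    sample_id: str
    language: str             # e.g. a low-resource language name
    dialect: str
    audio_path: str
    transcript: str
    consent_given: bool       # informed consent recorded before collection
    consent_scope: str        # uses the contributor agreed to
    contributor_paid: bool    # fair-compensation flag
    community_approved: bool  # community ownership / validation sign-off

def ready_for_training(s: VoiceSample) -> bool:
    return s.consent_given and s.contributor_paid and s.community_approved

sample = VoiceSample("s-001", "Gondi", "Adilabad", "audio/s-001.wav",
                     "<transcript>", True, "LLM training only", True, True)
print(ready_for_training(sample))  # True
```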

  • View profile for Giles Lindsay (CITP FIAP FBCS FCMI)

    CIO | CTO | NED | Digital Growth & Innovation Leader | AI & ESG Advocate | Value Creation | Business Agility Thought Leader | Agile Leader | Author | Mentor | Keynote Speaker | Global CIO200 | World100 CTO | CIO100 UK

    8,985 followers

🔹𝗔𝗜 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲: 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝘆 𝗶𝗻 𝟮𝟬𝟮𝟱🔹

In my latest blog, "𝗔𝗜 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲: 𝗛𝗼𝘄 𝗟𝗲𝗮𝗱𝗲𝗿𝘀 𝗖𝗮𝗻 𝗠𝗮𝗻𝗮𝗴𝗲 𝗥𝗶𝘀𝗸𝘀 𝗮𝗻𝗱 𝗨𝗻𝗹𝗼𝗰𝗸 𝗩𝗮𝗹𝘂𝗲", I explore why governance is not just a safeguard but a leadership responsibility, and how leaders can act now to manage risks and unlock sustainable value.

AI is powering decisions, shaping outcomes, and driving business results. But without governance, the risks can outweigh the rewards. From frozen bank accounts to deepfake scams, we have seen what happens when oversight is missing. AI governance is not a technical detail. It is a leadership responsibility.

💡𝗪𝗵𝘆 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗠𝗮𝘁𝘁𝗲𝗿𝘀:
✅ Protects trust with customers, regulators, and employees
✅ Ensures AI is fair, transparent, and explainable
✅ Reduces bias, compliance gaps, and reputational harm
✅ Creates long-term value by aligning innovation with ethics

🔍𝗙𝗼𝘂𝗿 𝗣𝗶𝗹𝗹𝗮𝗿𝘀 𝗳𝗼𝗿 𝗟𝗲𝗮𝗱𝗲𝗿𝘀 𝘁𝗼 𝗔𝗻𝗰𝗵𝗼𝗿 𝗢𝗻:
1️⃣ Transparency: Make AI decisions explainable in plain terms
2️⃣ Accountability: Keep clear ownership, never shift blame to "the algorithm"
3️⃣ Fairness: Audit regularly to test for bias and inequality
4️⃣ Security: Guard against data leaks, misuse, or manipulation

📌𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗔𝗰𝘁𝗶𝗼𝗻𝘀 𝘁𝗼 𝗧𝗮𝗸𝗲 𝗡𝗼𝘄:
✔️ Map every AI system in use and rank its significance (see the sketch below)
✔️ Create cross-functional oversight to avoid blind spots
✔️ Run audits on data, bias, and security
✔️ Train teams to understand how AI works and where it is applied
✔️ Engage regulators and stakeholders to stay ahead of compliance

🌍𝗥𝗲𝗮𝗹-𝗪𝗼𝗿𝗹𝗱 𝗘𝘅𝗮𝗺𝗽𝗹𝗲𝘀:
🏥 Healthcare: AI supports diagnoses, but doctors remain accountable
🛒 Retail: Bias audits prevent recommendation engines from reinforcing old patterns
🏦 Finance: Oversight committees review all AI deployments before launch

The leadership opportunity is clear. AI governance is not about slowing progress. It is about creating trusted innovation that lasts.

🔗 𝗙𝘂𝗹𝗹 𝗯𝗹𝗼𝗴 𝗽𝗼𝘀𝘁 𝗵𝗲𝗿𝗲: https://lnkd.in/g7yAtCY9

What is the toughest AI governance challenge you face right now? Share your thoughts below.

#Leadership #AI #Governance #Ethics #RiskManagement #ExecutiveLeadership #BusinessAgility
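As flagged in the first practical action, here is a minimal sketch of an AI system inventory with a crude significance ranking. The scoring rubric (decision impact × data sensitivity + autonomy) and the example systems are invented for illustration; they are not from the blog post.

```python
# AI system inventory with a simple risk ranking.
# Assumptions: the rubric and the example entries are hypothetical.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    owner: str             # a named accountable person, never "the algorithm"
    decision_impact: int   # 1 (low) .. 5 (affects customers' rights or money)
    data_sensitivity: int  # 1 (public data) .. 5 (special-category data)
    autonomy: int          # 1 (human decides) .. 5 (fully automated)

    @property
    def risk_score(self) -> int:
        return self.decision_impact * self.data_sensitivity + self.autonomy

inventory = [
    AISystem("credit-decisioning", "Head of Lending", 5, 5, 4),
    AISystem("marketing-copy-assistant", "CMO", 2, 1, 2),
]
# Highest-risk systems surface first for oversight attention.
for s in sorted(inventory, key=lambda s: s.risk_score, reverse=True):
    print(f"{s.risk_score:>3}  {s.name} (owner: {s.owner})")
```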

  • View profile for Andres Lehtmets

    Bridging Innovation & Regulation in Insurance | InsurTech, Open Insurance, FIDA, AI Act | Regulatory Strategy & Advice | Training & Public Speaking

    11,788 followers

Attention, #insurance professionals! The global insurance watchdog, IAIS, has just launched a public consultation on the draft Application Paper on the supervision of artificial intelligence. The application paper covers four broad sections:

1. Governance and accountability: the need to integrate AI into risk management systems, provide human oversight of AI risks, and considerations around the use of third parties.
2. Robustness, safety and security: issues related to the robustness, safety and security of AI systems.
3. Transparency and explainability: the need for AI outcomes to be explainable and tailored to the needs of different stakeholders.
4. Fairness, ethics and redress: the need for fairness by design, monitoring of outcomes and adequate redress mechanisms. This section also highlights the need for supervisors and insurers to consider the broad societal impacts of granular risk pricing on the principle of risk pooling.

Feedback on the document is invited by 17 February 2025. Don't miss this opportunity to share your insights!
__________
👉 For the latest InsurTech regulatory and policy news, subscribe to my insurtech4good.com newsletter.
♻️ Re-share this to help your colleagues stay ahead in InsurTech.
