AI Safety and Risk Management
Explore top LinkedIn content from expert professionals.
-
Our colleagues at UNESCO and HumaneIntelligence have just released a step-by-step Red Teaming Playbook to test generative AI systems for bias, harm, and vulnerabilities — especially those that impact women and girls. The guide aims to empower non-technical communities — civil society, policymakers, and educators — to conduct their own Red Teaming exercises and address technology-facilitated gender-based violence (TFGBV).

Some key stats:
🔹 89% of ML engineers report finding Gen AI vulnerabilities (Aporia, 2024)
🔹 96% of deepfake videos are non-consensual; nearly all target women
🔹 73% of women journalists report online violence; many self-censor
🔹 Some girls experience TFGBV as early as age 9

The Playbook makes Red Teaming:
🔹 Accessible (no coding needed)
🔹 Flexible (in-person, online, or hybrid)
🔹 Actionable (can inform policy, AI design, and ethics reviews)

Full document: https://lnkd.in/eeCwVxui

Acknowledgements: Dr. Rumman Chowdhury, Theodora Skeadas, Sarah A., Lakshmi Dhanya

Our previous and related input:
UNESCO Week, AI Competency Frameworks - https://lnkd.in/eekBssZ7
UN Global Digital Compact (GDC) - https://lnkd.in/emurU3nj
Sovereign Public AI - https://lnkd.in/eMy9PvVZ
AI in Science, R&D - https://lnkd.in/eHmmRU8u
OECD Hiroshima AI Process - https://lnkd.in/erP6GB2T
OECD Repository of Assistive AI - https://lnkd.in/eiWij8j2
OECD Catalogue of Trustworthy AI Tools - https://lnkd.in/epiQaQtk
Paris Declaration - https://lnkd.in/eaNw58eX
Washington Hearings - https://lnkd.in/ej-fM_jr
NIST - https://lnkd.in/eunedRvd
PCAST - https://lnkd.in/eANDE_FF

#ai #ethics #policy
-
"This paper focuses on developing a conceptual blueprint for AI insurance that addresses unintended outcomes resulting directly from an AI system's normal operation, where outputs fall within the declared scope but diverge from intended behaviour. Such failures are already silently embedded in existing insurance portfolios, neither affirmatively covered nor excluded, and thus remain unpriced and unmanaged. We argue that dedicated AI insurance is necessary to quantify, price, and transfer these risks, while simultaneously embedding market-based incentives for safer and more secure AI deployment. The paper makes four contributions. First, we identify the core underwriting challenges, including the lack of historical loss data, the dynamic nature of model behaviour, and the systemic potential for correlated failures, and propose mechanisms for risk transfer and pricing, such as parametric triggers, usage-based coverage, and bonus-malus schemes. Second, we examine market structures that may shape the development of AI insurance and highlight technical enablers that support the quantification and pricing of AI risk. Third, we examine the interplay between insurance, AI model risk management, and assurance. We argue that without insurance, assurance services risk becoming box-ticking exercises, whereas underwriters, who directly bear the cost of claims, have strong incentives to demand rigorous testing, monitoring, and validation. In this way, insurers can act as guardians of effective AI governance, shaping standards for risk management and incentivising trustworthy deployment. Finally, we relate AI insurance to adjacent coverage lines, such as cyber and technology errors and omissions." Lukasz Szpruch Agni Orfanoudaki Carsten Maple Matthew Wicker Yoshua Bengio Kwok Yan Lam Marcin Detyniecki AXA
-
Here's my 2024 LinkedIn Rewind, by Coauthor.studio:

2024 marked a pivotal moment for AI in insurance - moving from theoretical discussions to practical implementation. The launch of the AI Act, major strategic partnerships like AXA-Mistral AI, and real productivity gains of 38% in insurance operations showed that AI's impact is now measurable and meaningful.

Through 85+ episodes of the "Inno and Tech" podcast on Bretagne5 and extensive conference engagements, three key transformations emerged:
🔹 The integration of AI moved from pilot projects to strategic implementation, with major insurers showing clear productivity gains while carefully navigating governance requirements
🔹 Large language models evolved from general tools to specialized insurance applications, particularly in underwriting, claims processing, and customer service
🔹 The focus shifted from pure automation to augmented intelligence, with human expertise remaining central to decision-making

Most impactful posts from 2024:
"La pyramide des salaires en France" ("The salary pyramid in France")
Data-driven analysis of salary distribution providing valuable market insights
https://lnkd.in/es9nbjZy
"Breaking news // C'est officiel, le texte de l'IA Act" ("It's official, the text of the AI Act")
Breaking down the EU's landmark AI regulation and implementation timeline
https://lnkd.in/eBWUUAXh
"Joue-la comme Ikea!" ("Play it like Ikea!")
Analysis of strategic partnerships reshaping insurance through AI
https://lnkd.in/eBkqaHU4

Looking ahead: 2025 will be shaped by the practical implementation of the AI Act, evolving partnerships between traditional insurers and AI companies, and a continued focus on responsible innovation. The Paris AI Summit in February will be particularly significant for setting the global direction of AI governance.

To everyone working to thoughtfully integrate AI into insurance: the real work of balancing innovation with responsibility is just beginning.

--
Get your 2024 LinkedIn Rewind at https://lnkd.in/eWcyDN3E
-
🤖 The Gendered Impact of AI: Why Women—Especially from Marginalised Backgrounds—Are Most at Risk

As artificial intelligence continues to reshape the world of work, one thing is becoming increasingly clear: the effects will not be felt equally. A new report from the United Nations' International Labour Organization and Poland's NASK reveals that roles traditionally held by women—particularly in high-income countries—are almost three times more likely to be disrupted by generative AI than those held by men.

📉 9.6% of female-held jobs are at high risk of transformation, compared to just 3.5% of male-held roles. Why? Many of these jobs are in administration and clerical work—sectors where AI can automate routine tasks efficiently. But while AI may not eliminate these roles outright, it is radically reshaping them, threatening job security and career progression for many women.

This risk is not theoretical. Back in 2023, researchers at OpenAI—the company behind ChatGPT—examined the potential exposure of different occupations to large language models like GPT-4. The results were striking: around 80% of the US workforce could have at least 10% of their work tasks impacted by generative AI. While they were careful not to label this a prediction, the message was clear: AI's reach is widespread and accelerating.

🌍 An intersectional lens shows even deeper inequities. Women from marginalised communities—especially women of colour, older women, and those with lower levels of formal education—face heightened vulnerability:
They are overrepresented in lower-paid, more automatable roles, with limited access to training or advancement.
They often lack the tools, networks, and opportunities to adapt to digital shifts.
And they face greater risks of bias within the AI systems themselves, which can reinforce inequality in recruitment and promotion.

Meanwhile, roles being augmented by AI—like those in tech, media, and finance—are still largely male-dominated, widening the gender and racial divide in the AI economy. According to the World Economic Forum, 33.7% of women are in jobs being disrupted by AI, compared to just 25.5% of men.

📢 As AI moves from buzzword to business reality, we need more than technical solutions—we need intentional, inclusive strategies. That means designing AI systems that reflect the full diversity of society, investing in upskilling programmes that reach everyone, and ensuring the benefits of AI are distributed fairly.

The question on my mind is - if AI is shaping the future of work, who's shaping AI?

#AI #FutureOfWork #EquityInTech #GenderEquality #Intersectionality #Inclusion #ResponsibleTech
-
Can Insurance Employ AI That Is Both Powerful and Fair?

Artificial intelligence is rapidly reshaping how insurance companies process claims, detect fraud, and manage risk. But to be effective and fair, AI must be developed and deployed with careful attention to data quality, model transparency, and ethical use. AI systems are only as good as the data they are trained on, and if that data is biased or incomplete, the outcomes will reflect and even amplify those problems.

In a conversation filled with lived experience, John Standish, Co-Founder and Chief Innovation and Compliance Officer at Charlee.ai, laid out a powerful and pragmatic vision for how artificial intelligence must be built for the insurance industry. Having transitioned from a long and substantial career in law enforcement and insurance fraud investigations to the world of InsurTech, John offers rare dual expertise: a regulator's scrutiny and a technologist's curiosity. His perspectives cut through hype and buzzwords and land squarely in the domain of real-world consequences, compliance, and human-centered innovation.

John underscored the importance of domain-specific AI models trained on relevant, clean, and unbiased data. He cautioned against using generic models and stressed the need for explainability, transparency, and regulatory compliance in all AI-driven decisions. The conversation illuminated a crucial point: AI isn't a magic fix for outdated processes—it's a force multiplier for organizations willing to rethink their foundational data strategies and workflows. For the insurance industry, embracing this challenge is not just a matter of innovation, but of survival in a rapidly changing digital landscape.

#technology #innovation #frauddetection #claimsmanagement #artificialintelligence #insurance #insurtech

Look for the full YouTube episode in the comments.
-
This week, I learned about a new kind of bias—one that can impact underrepresented people when they use AI tools. 🤯

New research by Oguz A. Acar, PhD et al. found that when members of stereotyped groups—such as women in tech or older workers in youth-dominated fields—use AI, it can backfire. Instead of being seen as strategic and efficient, their AI use is framed as "proof" that they can't do the work on their own. (https://lnkd.in/gEFu2a9b)

In the study, participants reviewed identical code snippets. The only difference? Some were told the engineer wrote the code with AI assistance. When reviewers thought AI was involved, they rated the engineer's competence 9% lower on average. And here's the kicker: that _competence penalty_ was twice as high for women engineers. AI-assisted code from a man got a 6% drop in perceived competence. The same code from a woman? A 13% drop.

Follow-up surveys revealed that many engineers anticipated this penalty and avoided using AI to protect their reputations. The people most likely to fear competence penalties? Disproportionately women and older engineers. And they were also the least likely to adopt AI tools.

And I'm concerned this bias extends beyond engineering roles. If your organization is encouraging AI adoption, consider the hidden costs to marginalized and underestimated colleagues. Could they face extra scrutiny? Harsher performance reviews? Fewer opportunities?

In this week's 5 Ally Actions newsletter, I'll explore ideas for combating this bias and creating more meritocratic and inclusive workplaces in this new world of AI. Subscribe and read the full edition on Friday at https://lnkd.in/gQiRseCb

#BetterAllies #Allyship #InclusionMatters #Inclusion #Belonging #Allies #AI 🙏
-
A few thoughts from the NAIC's first-ever Health Insurance AI/ML Survey (May 2025) that every carrier & broker exec should know:

AI is already mainstream. 93 insurers responded—and 84% say they're using AI/ML today (not just piloting) across individual, group and student plans.

Where the deployments are live:
Utilization management – 71%
Prior-auth approvals – 68% (but only 12% let AI suggest denials; the tech is being steered toward speed, not stonewalling—a signal regulators will like)
Disease-management outreach – 61%
Fraud detection – 50% on claims and 51% on provider fraud
Digital sales/quoting tools – 45%

Third-party tech is stronger than in-house. 55% embed external AI components, 15% outsource everything, and just 10% build fully in-house.

Production-level deployment is real. For claim-coding analytics alone, 62% of models are already live in production.

Product innovation is still in the early stages. Only 28% of carriers use AI/ML for product pricing and plan design.

Human-in-the-loop remains critical. Every individual major medical insurer that uses AI to negotiate out-of-network claims keeps a human reviewer in the chain (100%).

Governance is maturing—but unevenly. Most companies test for drift, bias, and fairness, yet 14% admit their models still infer sensitive traits like race, highlighting a regulatory grey zone.

The survey shows health-plan AI is past the hype cycle—already embedded in core workflows that impact pricing, access and member experience. Expect regulators at both state and federal levels to sharpen their focus on rules and frameworks: strong governance, transparent audit trails and clear human override protocols will become table stakes faster than many expect.

Link to the full report in the comments.
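For readers wondering what the bias and fairness testing the survey describes can look like in practice, here is a minimal Python sketch of one common screening check, the disparate impact ratio on approval decisions across two groups. The four-fifths threshold, function names, and toy data are illustrative assumptions, not anything prescribed by the NAIC.

```python
# Minimal sketch of a disparate impact screen on approval decisions
# (e.g., prior-auth outcomes). Data and threshold are illustrative.

def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of approval rates; values well below 1.0 warrant review."""
    return approval_rate(group_a) / approval_rate(group_b)

# Toy audit data: True = approved.
group_a = [True, True, False, True, False]  # 60% approved
group_b = [True, True, True, True, False]   # 80% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths rule of thumb
    print("Flag for human review and deeper fairness testing.")
```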
-
⚖️ The New York State Department of Financial Services (NYDFS) recently issued a Proposed Insurance Circular Letter on the use of Artificial Intelligence (AI) and External Consumer Data and Information Sources (ECDIS) in Insurance Underwriting and Pricing.

Keep in mind this is not a law; a circular helps interpret the law. It applies to financial institutions, and thus to their suppliers. Since New York City is the headquarters of the financial sector, this circular has the potential to be the NY 500 of artificial intelligence. It's certainly a micro-trend in law to take seriously.

AI in insurance isn't new; reports say it's the second most popular technology applied to underwriting, so there are certainly a few concerned with this alert.

Basically, the proposed circular mandates that insurers may use ECDIS or AI systems for underwriting or pricing only if the data source or model neither uses nor is based on any protected class under the New York Insurance Law. Protected-class data refers to data that can lead to discrimination, even indirectly, such as certain family names. Insurers are expected to test for discriminatory impact on these protected classes even when they do not collect any data on those classes - a complex and potentially challenging requirement.

Action Items:
📌 Insurers need to establish effective governance frameworks for the use of ECDIS and AI systems. The board of directors and senior management must play an active role in this.
📌 Detailed quantitative testing obligations must be met. This includes specific metrics and extends to third-party AI tools.
📌 Insurers need to audit and test third-party data and tools to ensure they are compliant from a regulatory and actuarial standpoint.

This micro-trend is already well established in Colorado for those who follow insurtech law, so Board Members should take this seriously. If you're not talking AI already, it's time to start.

#NYDFS #AI #Insurtech #Regulations #InsuranceUnderwriting
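Testing for discriminatory impact without collecting protected-class data usually means auditing on a small labeled or inferred sample. As a hedged sketch of one such screen (not a method the circular prescribes), the snippet below checks how strongly an external ECDIS feature correlates with a binary protected-class indicator; the variable names, toy data, and threshold are all illustrative assumptions.

```python
# Minimal sketch: screen an external data feature for proxy discrimination
# by measuring its point-biserial correlation with a protected class on a
# labeled audit sample. Data and threshold are illustrative assumptions.

import statistics

def point_biserial(feature: list[float], protected: list[int]) -> float:
    """Correlation between a numeric feature and a binary class label."""
    mean1 = statistics.mean(f for f, p in zip(feature, protected) if p == 1)
    mean0 = statistics.mean(f for f, p in zip(feature, protected) if p == 0)
    sd = statistics.pstdev(feature)          # population std dev of the feature
    p1 = sum(protected) / len(protected)     # share of the protected group
    return (mean1 - mean0) / sd * (p1 * (1 - p1)) ** 0.5

# Toy audit sample: an ECDIS score and a protected-class indicator.
score     = [0.9, 0.8, 0.85, 0.3, 0.2, 0.25, 0.7, 0.35]
protected = [1,   1,   1,    0,   0,   0,    1,   0]

r = point_biserial(score, protected)
print(f"Point-biserial correlation: {r:.2f}")
if abs(r) > 0.3:  # illustrative screening threshold
    print("Feature may proxy a protected class; escalate for actuarial review.")
```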
-
Attention all insurers and #InsurTech companies active or planning to enter the #AI space – this is for you!

Understanding the EU's #AIAct and its implications is crucial. It's already in force, with key provisions set to apply early next year. However, interpreting this cross-sectoral regulation within the #insurance context still involves significant uncertainty.

To help clarify, the European AI Office has launched a multi-stakeholder consultation on defining AI systems and identifying prohibited AI practices under the AI Act. This is a critical moment to reflect on your business practices, identify areas of ambiguity, and share your insights with the Commission. As a former regulator, I assure you—this feedback isn't going into a black box. It's a valued contribution that can shape regulatory clarity.

Why am I emphasizing this in the insurance context? Because these two areas are currently raising concerns, at least based on my recent discussions with industry stakeholders. Insurance has unique characteristics, and there's a need to ensure that the industry clearly understands the scope of prohibited practices. Some stakeholders have noted that certain common practices could unintentionally align with some prohibited practices. The other area is the definition of AI systems – this is foundational to the AI Act. Some insurers question whether traditional statistical models like GLMs fall under the Act's scope; in fact, views on this vary across the market.

Take this opportunity to contribute. Your input can help shape regulatory clarity. You'll thank me later!
__________
👉 Want to stay ahead of similar consultations and AI Act impacts on insurance? Subscribe to my insurtech4good.com newsletter!
♻️ Reshare this to help it reach other innovators who might have meaningful contributions to provide.
-
We called it progress. Turns out, it's a wedge.

When it comes to AI, women are underrepresented, disproportionately impacted, use it less, and trust it less. No wonder the World Economic Forum predicts it will take 134 years to close the AI gender gap.

How did we create yet another gap 🙄 before AI even got off the ground? Because we haven't closed the previous gaps. Women make up less than 22% of AI professionals globally. In technical roles, that number drops even lower.

The gap shows up in models, machines, and money.

#️⃣ Data bias: AI models trained on biased data reinforce gender stereotypes, like women linked to nurses, men to CEOs. I read an early study by UNESCO where Llama 2 and ChatGPT were asked to make up stories about women and men. In stories about men, words like "treasure," "woods," "sea," and "adventurous" dominated, while women were more often described with "garden," "love," "gentle," and "husband." Oh, and women were described in domestic roles 4X more often than men.

⚙️ Product design: Virtual assistants are often female by default—submissive, helpful, and programmable. We've seen design flaws like this before, like facial recognition systems that tend to perform worst on Black women compared to white men.

💲 Funding: Women-led AI startups receive a fraction of the VC funding that male-led ones do. In fact, only 4% of AI startups are led by women.

Then there's disproportionate impact. 80% of jobs will be affected in some way by AI, and 57% of the jobs susceptible to disruption are held by women, compared to 43% held by men. If women are anxious, it's because we should be. Women are 1.5X more likely than men to need to move into new occupations due to AI.

But we're not anxious about AI just because of its impact on work and jobs; we also don't TRUST it. We know AI algorithms perpetuate bias, and we also know we're more subject to online harm like deepfakes, cyber threats, and fraud. Then there are bigger questions around psychological safety, an altered sense of reality, and social isolation in an increasingly digital world.

Sounds like AI is sexist. A literal threat to women -- our livelihood, our social being, our online safety and privacy, our kids. But I don't want to throw it away for all that...

...it's that the most powerful technology claiming to shape our future is being built and deployed by a homogeneous few. This isn't about responsible AI, this is about representation, impact, and responsible humans deciding what to DO with AI.

Listen to my conversation with Adriana O'Kain on Mercer's AI-volution podcast, Closing the AI Gender Gap:
🎙️ Spotify: https://lnkd.in/geyp2Scn
Apple: https://lnkd.in/g5FamDEJ

#FutureOfWork #DigitalDivide #EthicalTech #InclusiveDesign #AI #EquityInTech #HRTech #WomenInTech