AI Bias Issues

Explore top LinkedIn content from expert professionals.

  • View profile for Tarika Barrett, Ph.D.
    Tarika Barrett, Ph.D. is an Influencer

    Chief Executive Officer at Girls Who Code

    89,818 followers

    Robert Williams, a Black man, was wrongly arrested after facial recognition technology misidentified him as the suspect in a 2018 shoplifting case. Now, he has been awarded $300K from the city of Detroit. According to The Guardian, the software incorrectly matched Williams’ driver’s license photo to a suspect with a similar complexion, leading to the arrest. “My wife and young daughters had to watch helplessly as I was arrested for a crime I didn’t commit, and by the time I got home from jail, I had already missed my youngest losing her first tooth,” says Williams. “The scariest part is that what happened to me could have happened to anyone.” Sadly, Williams’ story is just one of many, and it highlights the real-world impact of racial bias in tech. Studies show that facial recognition software is significantly less reliable for Black and Asian people, who, according to the National Institute of Standards and Technology, are 10 to 100 times more likely to be misidentified by this technology than their white counterparts. The institute also found that these algorithms struggle to distinguish facial structures among people with darker skin tones. There are real consequences to algorithmic bias, and the only way to truly mitigate these harms is to ensure that those developing AI technology prioritize the needs of all communities. That’s why we champion diversity, equity, and inclusion at Girls Who Code. We all deserve a tech industry that reflects our increasingly diverse world. https://bit.ly/3WfNOyt

  • View profile for Justine Juillard

    VC Investment Partner @ Critical | Co-Founder of Girls Into VC @ Berkeley | Neuroscience & Data Science @ UC Berkeley | Advocate for Women in VC and Entrepreneurship

    43,803 followers

    Facial recognition software used to misidentify dark-skinned women 47% of the time. Until Joy Buolamwini forced Big Tech to fix it. In 2015, Dr. Joy Buolamwini was building an art project at the MIT Media Lab. It was supposed to use facial recognition to project the face of an inspiring figure onto the user’s reflection. But the software couldn’t detect her face. Joy is a dark-skinned woman. And to be seen by the system, she had to put on a white mask. She wondered: Why? She launched Gender Shades, a research project that audited commercial facial recognition systems from IBM, Microsoft, and Face++. The systems could identify lighter-skinned men with 99.2% accuracy. But for darker-skinned women, the error rate jumped as high as 47%. The problem? AI was being trained on biased datasets: over 75% male, 80% lighter-skinned. So Joy introduced the Pilot Parliaments Benchmark, a new training dataset with diverse representation by gender and skin tone. It became a model for how to test facial recognition fairly. Her research prompted Microsoft and IBM to revise their algorithms. Amazon tried to discredit her work. But she kept going. In 2016, she founded the Algorithmic Justice League, a nonprofit dedicated to challenging bias in AI through research, advocacy, and art. She called it the Coded Gaze, the embedded bias of the people behind the code. Her spoken-word film “AI, Ain’t I A Woman?”, which shows facial recognition software misidentifying icons like Michelle Obama, has been screened around the world. And her work was featured in the award-winning documentary Coded Bias, now on Netflix. In 2019, she testified before Congress about the dangers of facial recognition. She warned that even if accuracy improves, the tech can still be abused. For surveillance, racial profiling, and discrimination in hiring, housing, and criminal justice. To counter it, she co-founded the Safe Face Pledge, which demands ethical boundaries for facial recognition. No weaponization. No use by law enforcement without oversight. After years of activism, major players (IBM, Microsoft, Amazon) paused facial recognition sales to law enforcement. In 2023, she published her best-selling book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.” She advocated for inclusive datasets, independent audits, and laws that protect marginalized communities. She consulted with the White House ahead of Executive Order 14110 on “Safe, Secure, and Trustworthy AI.” But she didn’t stop at facial recognition. She launched Voicing Erasure, a project exposing bias in voice AI systems like Siri and Alexa. Especially their failure to recognize African-American Vernacular English. Her message is clear: AI doesn’t just reflect society. It amplifies its flaws. Fortune calls her “the conscience of the AI revolution.” 💡 In 2025, I’m sharing 365 stories of women entrepreneurs in 365 days. Follow Justine Juillard for daily #femalefounder spotlights.
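
    The heart of the Gender Shades method is easy to sketch: instead of reporting one aggregate accuracy number, score the system separately for every intersection of skin tone and gender. Below is a minimal illustration of that disaggregated audit in Python; the `audit_results.csv` file and its column names are assumptions for demonstration, not Buolamwini's actual code or data.

    ```python
    # Minimal sketch of a Gender Shades-style disaggregated audit.
    # Assumes a hypothetical CSV with one row per test image and columns:
    # skin_tone ("lighter"/"darker"), gender ("female"/"male"),
    # true_label, predicted_label.
    import pandas as pd

    df = pd.read_csv("audit_results.csv")

    # Aggregate accuracy can look excellent while subgroups fail badly,
    # so compute accuracy for every skin-tone x gender intersection.
    df["correct"] = df["true_label"] == df["predicted_label"]
    by_group = df.groupby(["skin_tone", "gender"])["correct"].agg(
        accuracy="mean", n="size"
    )
    by_group["error_rate"] = 1 - by_group["accuracy"]
    print(by_group.sort_values("error_rate", ascending=False))
    ```

    On an audit like the one described above, such a table would show near-zero error for lighter-skinned men alongside error rates approaching 47% for darker-skinned women, a gap the single aggregate number completely hides.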

  • View profile for Vilas Dhar

    President, Patrick J. McGovern Foundation ($1.5B) | Global Authority on AI, Governance & Social Impact | Board Director | Shaping Leadership in the Digital Age

    55,525 followers

    AI systems built without women's voices miss half the world and actively distort reality for everyone. On International Women's Day - and every day - this truth demands our attention. After more than two decades working at the intersection of technological innovation and human rights, I've observed a consistent pattern: systems designed without inclusive input inevitably encode the inequalities of the world we have today, incorporating biases in data, algorithms, and even policy. Building technology that works requires our shared participation as the foundation of effective innovation. The data is sobering: women represent only 30% of the AI workforce and a mere 12% of AI research and development positions according to UNESCO's Gender and AI Outlook. This absence shapes the technology itself. And a UNESCO study on Large Language Models (LLMs) found persistent gender biases: female names were disproportionately linked to domestic roles, while male names were associated with leadership and executive careers. UNESCO's @women4EthicalAI initiative, led by the visionary and inspiring Gabriela Ramos and Dr. Alessandra Sala, is fighting this pattern by developing frameworks for non-discriminatory AI and pushing for gender equity in technology leadership. Their work extends the UNESCO Recommendation on the Ethics of AI, a powerful global standard centering human rights in AI governance. The decision before us is whether AI will replicate today's inequities or help us build something better. Examine your AI teams and processes today. Where are the gaps in representation affecting your outcomes? Document these blind spots, set measurable inclusion targets, and build accountability systems that outlast good intentions. The technology we create reflects who creates it - and gives us a path to a better world. #InternationalWomensDay #AI #GenderBias #EthicalAI #WomenInAI #UNESCO #ArtificialIntelligence The Patrick J. McGovern Foundation Mariagrazia Squicciarini Miriam Vogel Vivian Schiller Karen Gill Mary Rodriguez, MBA Erika Quada Mathilde Barge Gwen Hotaling Yolanda Botti-Lodovico
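
    Findings like the name-to-role associations above typically come from probing models with controlled sentence templates. As a rough illustration of how such a probe works, the sketch below uses the Hugging Face fill-mask pipeline to compare the probability a masked language model assigns to "he" versus "she" across occupations; the model choice and templates are illustrative assumptions, not the UNESCO study's protocol.

    ```python
    # Minimal sketch of a template-based gender-association probe.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    for occupation in ["nurse", "engineer", "CEO", "homemaker"]:
        # Ask the model to fill the pronoun slot, restricted to "he"/"she",
        # and compare the scores it assigns for each occupation.
        results = fill(f"[MASK] worked as a {occupation}.",
                       targets=["he", "she"])
        scores = {r["token_str"]: r["score"] for r in results}
        print(f"{occupation:>10}  he={scores.get('he', 0.0):.3f}  "
              f"she={scores.get('she', 0.0):.3f}")
    ```

    A systematic skew — "she" dominating for caregiving roles and "he" for executive ones — is the pattern the study describes, measured here in miniature.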

  • View profile for Robert F. Smith

    Founder, Chairman and CEO at Vista Equity Partners

    234,051 followers

    The Embedded Bias series by STAT sheds light on hidden biases — especially #racial and #gender biases — woven into the technologies and algorithms within our healthcare system. STAT, which is a media company that focuses on health, medicine and life sciences, dives deep into how these #biases affect patient care and outcomes. These critical insights show that bias in technology often reinforces the very health disparities we’re working to eliminate. The first part of this series exposes how racial biases in diagnostic tools can lead to inaccurate assessments for Black patients. The second reveals how gender biases in clinical trials have left women’s health concerns under-researched and under-treated. In later parts of the series, we see how bias in AI-driven healthcare solutions risks worsening disparities if not carefully checked. This investigation is a powerful reminder that transformative technology can still reflect and exacerbate existing societal inequities. If we’re not intentional about rooting out these biases, we risk further marginalizing communities already struggling to access quality care. https://bit.ly/3ZW14uD

  • View profile for Shelley Zalis
    Shelley Zalis is an Influencer
    326,968 followers

    AI is only as good as the data we feed it. When it takes 30 prompts to get AI to picture a scientist as a woman, we have a problem. Bias in, bias out. If we want technology to reflect our world, we must train AI to be inclusive—amplifying the voices, faces, and ideas of women and people of color. It’s about building a future that works for everyone. ➡️ AI misidentified darker-skinned women up to 34.7% of the time, compared to a 0.8% error rate for lighter-skinned men. ➡️ In 2020, only 14% of authors of AI-related research papers were women. ➡️ AI-driven hiring platforms can reflect and perpetuate anti-Black biases. ➡️ As of 2018, women comprised only 22% of AI professionals worldwide. The future of AI is being built now—and if we don’t course-correct, we risk reinforcing the same biases. It’s time to ensure AI works for everyone. 👉 Jay Flores

  • View profile for Dr. Poornima Luthra
    Dr. Poornima Luthra is an Influencer

    Author | Educator | Equity & Inclusion Researcher | TEDx Speaker | Thinkers50 Radar Class of 2023 | Board Chair & Member

    19,621 followers

    Did you know that about 50% of people do not use AI for work? Ever wondered why? You might be thinking that it must be a lack of training on how to use AI, or perhaps skepticism about AI tools themselves. While these are indeed plausible explanations, recent research (link to article in the comments 👇🏽) shows that a key factor is actually the fear of being perceived as less competent for using AI - what the researchers refer to as the “competency penalty”. How did they arrive at this conclusion? They ran an experiment in which 1,026 engineers reviewed identical Python code. The only difference was whether reviewers believed it was written with AI assistance or not. The result? “The results were striking. When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average, despite reviewing identical work. This wasn’t about code quality—ratings of the code itself remained similar whether AI was involved or not. The penalty targeted the perceived ability of the person who wrote it.” ‼️What is more alarming is that the competence penalty was more than twice as severe for female engineers, who faced a 13% reduction compared to 6% for male engineers, further exacerbating inequalities in the technology sector. “When reviewers thought a woman had used AI to write code, they questioned her fundamental abilities far more than when reviewing the same AI-assisted code from a man.” What is also extremely concerning is who was imposing these penalties! “Engineers who hadn’t adopted AI themselves were the harshest critics. Male non-adopters were particularly severe when evaluating female engineers who used AI, penalizing them 26% more harshly than they penalized male engineers for identical AI usage.” Could this be the reason you don’t use AI for work? What would it take to motivate employees to consider using AI? The authors have some suggestions in the article. What do you think? #MondayMotivation
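
    The design is worth making concrete: because the code under review is identical and only the "AI-assisted" label varies, any gap in ratings is attributable to the label itself. Here is a minimal sketch of that comparison; the `ratings.csv` file and its column names are hypothetical stand-ins, not the researchers' materials.

    ```python
    # Minimal sketch of a "competency penalty" estimate: identical work,
    # randomized "AI-assisted" label, compare mean competence ratings.
    # Assumed columns: rating (numeric), ai_label (bool),
    # engineer_gender ("female"/"male").
    import pandas as pd

    df = pd.read_csv("ratings.csv")

    def penalty_pct(sub: pd.DataFrame) -> float:
        """Percent drop in mean rating when reviewers believe AI was used."""
        with_ai = sub.loc[sub["ai_label"], "rating"].mean()
        without_ai = sub.loc[~sub["ai_label"], "rating"].mean()
        return 100 * (without_ai - with_ai) / without_ai

    print(f"overall penalty: {penalty_pct(df):.1f}%")
    for gender, sub in df.groupby("engineer_gender"):
        print(f"{gender} engineers: {penalty_pct(sub):.1f}%")
    ```

    On data matching the study's findings, the overall figure would land near 9%, splitting into roughly 13% for female and 6% for male engineers.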

  • View profile for Peter Slattery, PhD
    Peter Slattery, PhD is an Influencer

    MIT AI Risk Initiative | MIT FutureTech

    64,210 followers

    "This report developed by UNESCO and in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem. The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, currently, women reportedly make up only 29% of researchers in the field of science and development (R&D),1 while this drops to 12% in specific AI research positions.2 Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces.3 Moreover, only 30% of professionals in the AI sector are women,4 and the gender gap increases further in leadership roles, with only 18% of in C-Suite positions at AI startups being held by women.5 Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities. The study also reveals a concerning lack of genderdisaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles,6 in contrast to a 15% gap in general ICT professions.7 Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly when dealing with female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (such as race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."

  • View profile for Jamira Burley
    Jamira Burley is an Influencer

    Former Executive at Apple + Adidas | LinkedIn Top Voice 🏆 | Education Champion | Social and Community Impact Strategist | Speaker | Former UN Advisor

    18,930 followers

    We've already seen how AI can be weaponized against communities of color; just look at its use in criminal justice, where algorithms like COMPAS have falsely labeled Black defendants as high-risk at nearly twice the rate of white defendants. Are we ready for that same flawed technology to become the backbone of our education system? The Minnesota Spokesman-Recorder's powerful piece "AI in Schools: Revolution or Risk for Black Students" asks this exact question. At a glance, AI in classrooms sounds promising: personalized learning, reduced administrative burdens, and faster feedback. However, for Black students, the reality is more complicated. Bias baked into the algorithm: from grading to discipline, AI tools are often trained on data that reflects society's worst prejudices. The digital divide is still very real: nearly 1 in 4 Black households with school-age children has no access to high-speed internet at home. Whose perspective shaped the tech? A lack of Black developers and decision-makers means many AI systems fail to recognize or respond to our students' lived experiences. And yet, the rollout is happening—fast. One in four educators plans to expand their use of AI this year alone, often without meaningful policy guardrails. We must ask: Who is this tech designed to serve—and at whose expense? This article is a must-read for anyone in education, tech, or equity work. Let's make sure the "future of learning" doesn't repeat the mistakes of the past. #AI #GlobalEducation #publiceducation #CommunityEngagement #equity #Youthdevelopment #AIinEducation #DigitalJustice #EquityInTech #EdTechWithIntegrity Read the article here: https://lnkd.in/g9U7za_k
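
    The COMPAS statistic cited above is a disparity in false positive rates: among defendants who did not go on to reoffend, how often each group was labeled high-risk. A minimal sketch of that computation follows, with an assumed table of scores and outcomes standing in for the actual COMPAS data.

    ```python
    # Minimal sketch of a false-positive-rate disparity check, in the
    # spirit of the COMPAS analysis. `risk_scores.csv` is a hypothetical
    # file with boolean columns high_risk and reoffended plus a race column.
    import pandas as pd

    df = pd.read_csv("risk_scores.csv")

    # Restrict to people who did NOT reoffend: any high-risk label here
    # is a false positive.
    non_reoffenders = df[~df["reoffended"]]
    false_positive_rate = non_reoffenders.groupby("race")["high_risk"].mean()
    print(false_positive_rate)
    ```

    A Black false positive rate near twice the white rate in this table is precisely the pattern the post describes.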

  • View profile for Sharon Peake, CPsychol
    Sharon Peake, CPsychol is an Influencer

    IOD Director of the Year - EDI ‘24 | Management Today Women in Leadership Power List ‘24 | Global Diversity List ‘23 (Snr Execs) | D&I Consultancy of the Year | UN Women CSW67-69 participant | Accelerating gender equity

    29,536 followers

    𝗔𝗜 𝗶𝘀 𝗼𝗻𝗹𝘆 𝗮𝘀 𝗳𝗮𝗶𝗿 𝗮𝘀 𝘁𝗵𝗲 𝘄𝗼𝗿𝗹𝗱 𝗶𝘁 𝗹𝗲𝗮𝗿𝗻𝘀 𝗳𝗿𝗼𝗺. Artificial Intelligence isn’t created in a vacuum - it’s trained on data that reflects the world we’ve built. And that world carries deep, historic inequities. If the training data includes patterns of exclusion, such as who gets promoted, who gets paid more, whose CVs are ‘successful’, then AI systems learn those patterns and replicate them. At scale and at pace. We’re already seeing the consequences: 🔹Hiring tools that favour men over women 🔹Voice assistants that misunderstand female voices 🔹Algorithms that promote sexist content more widely and more often This isn’t about a rogue line of code. It’s about systems that reflect the values and blind spots of the people who build them. Yet women make up just 35% of the US tech workforce. And only 28% of people even know AI can be gender biased. That gap in awareness is dangerous. Because what gets built, and how it behaves, depends on who’s in the room. So what are some practical actions we can take? Tech leaders: 🔹 Build systems that are in tune with women’s real needs 🔹 Invest in diverse design and development teams 🔹 Audit your tools and data for bias 🔹 Put ethics and gender equality at the core of AI development, not as an afterthought Everyone else: 🔹 Don’t scroll past the problem 🔹 Call out gender bias when you see it 🔹 Report misogynistic and sexist content 🔹 Demand tech that works for all women and girls This isn’t just about better tech. It is fundamentally about fairer futures. #GenderEquality #InclusiveTech #EthicalAI Attached in the comments is a helpful UN article.
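
    To make "systems learn those patterns and replicate them" concrete, here is a minimal, self-contained sketch: a classifier trained on synthetically biased historical hiring decisions learns a negative weight on the gender feature even though skill is the only legitimate signal. All data here is simulated for illustration; nothing is drawn from a real hiring system.

    ```python
    # Minimal sketch of "bias in, bias out": a model fit to historically
    # biased hiring decisions reproduces the penalty. Data is simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    skill = rng.normal(size=n)             # the only legitimate signal
    is_woman = rng.integers(0, 2, size=n)  # illustrative binary flag
    # Historical labels encode a penalty on women at equal skill.
    hired = skill - 0.8 * is_woman + rng.normal(scale=0.5, size=n) > 0

    X = np.column_stack([skill, is_woman])
    model = LogisticRegression().fit(X, hired)
    print("learned weights [skill, is_woman]:", model.coef_[0])
    # The negative is_woman weight shows the model reproduces the
    # historical penalty rather than correcting it.
    ```

    This is why auditing tools and data for bias, as the post urges, has to be an explicit step: a model trained on the past will faithfully automate the past by default.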

  • View profile for Piyu Dutta
    Piyu Dutta is an Influencer
    12,280 followers

    𝗪𝗼𝗺𝗲𝗻 𝗮𝗿𝗲 𝗻𝗲𝗮𝗿𝗹𝘆 𝟯𝗫 𝗺𝗼𝗿𝗲 𝗹𝗶𝗸𝗲𝗹𝘆 𝘁𝗼 𝗹𝗼𝘀𝗲 𝘁𝗵𝗲𝗶𝗿 𝗷𝗼𝗯𝘀. As per a recent report published by the UN, in high-income countries women are 3X more likely than men to hold jobs at high risk of automation. Worldwide, in formal workplaces more women than men hold jobs like admin assistant, secretary, bank teller, data entry staff, and customer service. For millions of women, these are not just job roles; they are lifelines. They provide women their first step into the formal economy, a source of financial independence, a stable income that keeps families afloat, and a dignified path to self-worth. When these roles vanish, women lose access to the job market, lose their agency, and risk losing their security. When we talk about AI coming for jobs, it is mostly women who are going to be swept aside by automation. Agreed, AI will boost productivity. That's the promise. But as Ray Dalio warns, we may be entering a “great deleveraging” where tech outpaces our ability to transition people fast enough. Especially for women. The threat from AI runs deeper than this. First, 𝘄𝗼𝗺𝗲𝗻 𝗮𝗿𝗲 𝗺𝗶𝘀𝘀𝗶𝗻𝗴 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝘃𝗲𝗿𝘆 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝗶𝗲𝘀 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗔𝗜. Women are underrepresented in STEM, data science, and AI development. 𝗪𝗵𝗲𝗻 𝘄𝗼𝗺𝗲𝗻 𝗮𝗿𝗲 𝘂𝗻𝗱𝗲𝗿𝗿𝗲𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗲𝗱 𝗶𝗻 𝗔𝗜 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁, 𝘁𝗵𝗲𝗶𝗿 𝗽𝗲𝗿𝘀𝗽𝗲𝗰𝘁𝗶𝘃𝗲𝘀, 𝗻𝗲𝗲𝗱𝘀 𝗮𝗻𝗱 𝘁𝗵𝗲𝗶𝗿 𝗲𝘅𝗽𝗲𝗿𝘁𝗶𝘀𝗲 𝗮𝗿𝗲 𝗻𝗼𝘁 𝗲𝗺𝗯𝗲𝗱𝗱𝗲𝗱 𝗶𝗻𝘁𝗼 𝘁𝗵𝗲 𝘁𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆. The resulting products are ones that don't serve half the population well. Or worse, actively disadvantage them. STEM and AI-related fields are where the highest-paying, most secure and most influential jobs are being created. With fewer women in these roles, the gender wealth gap widens. Then there's the perennial hidden bias of the recruitment process that women have to fight. Some AI recruitment tools already filter out resumes with “women-coded” language. In developing countries, the danger runs equally deep: limited digital access means the gap widens even more. Women lack access to training, tools, and a chance to compete. Therefore, with the automation of manual, repetitive jobs, it is not simply about who gets hired or who keeps a job. It is about who gets to participate in the economy of the future. The brunt of this will be borne mostly by women. 𝗪𝗵𝗲𝗻 𝘄𝗼𝗺𝗲𝗻 𝗮𝗿𝗲 𝗲𝘅𝗰𝗹𝘂𝗱𝗲𝗱 𝗳𝗿𝗼𝗺 𝗔𝗜-𝗱𝗿𝗶𝘃𝗲𝗻 𝘁𝗿𝗮𝗻𝘀𝗶𝘁𝗶𝗼𝗻𝘀, 𝘁𝗵𝗲𝘆 𝗿𝗶𝘀𝗸 𝗹𝗼𝘀𝗶𝗻𝗴 𝘁𝗵𝗲𝗶𝗿 𝘃𝗼𝗶𝗰𝗲. 𝗜𝗳 𝘄𝗲 𝗮𝗿𝗲 𝗻𝗼𝘁 𝗶𝗻𝘁𝗲𝗻𝘁𝗶𝗼𝗻𝗮𝗹 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗯𝗲𝗴𝗶𝗻𝗻𝗶𝗻𝗴, 𝘄𝗲 𝘄𝗶𝗹𝗹 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗽𝗲𝗿𝗽𝗲𝘁𝘂𝗮𝘁𝗲 𝗶𝗻𝗲𝗾𝘂𝗮𝗹𝗶𝘁𝘆, 𝘄𝗲 𝘄𝗶𝗹𝗹 𝗔𝗨𝗧𝗢𝗠𝗔𝗧𝗘 𝗜𝗧 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗰𝗼𝗺𝗶𝗻𝗴 𝗳𝘂𝘁𝘂𝗿𝗲. #womenintech #futureofwork #AI #digitalinclusion Anthropia Margareth Goldenberg
