Impact of algorithms on race and gender

Explore top LinkedIn content from expert professionals.

Summary

The impact of algorithms on race and gender refers to how automated decision-making systems, like those used in facial recognition, healthcare, and education, can unintentionally reinforce racial and gender biases present in their training data. This means technology has the potential to perpetuate or even amplify existing societal injustices if built or used carelessly.

  • Demand diverse input: Encourage teams developing AI and algorithms to include people from different backgrounds to help prevent biases from being baked into technology.
  • Insist on oversight: Advocate for regular audits and transparent reviews of algorithmic systems to catch and address unfair treatment based on race or gender before it's scaled up.
  • Call for better standards: Support policies and guidelines that require technology companies to use representative datasets and prioritize fairness in automated decision-making, especially in sensitive areas like healthcare, hiring, and education.
Summarized by AI based on LinkedIn member posts
  • View profile for Tarika Barrett, Ph.D.
    Tarika Barrett, Ph.D. is an Influencer

    Chief Executive Officer at Girls Who Code

    89,820 followers

    Robert Williams, a Black man, was wrongly arrested after facial recognition technology misidentified him as the suspect in a 2018 shoplifting case. Now, he has been awarded $300K from the city of Detroit. According to The Guardian, the software incorrectly matched Williams’ driver’s license photo to a suspect with a similar complexion, leading to the arrest. “My wife and young daughters had to watch helplessly as I was arrested for a crime I didn’t commit, and by the time I got home from jail, I had already missed my youngest losing her first tooth,” says Williams. “The scariest part is that what happened to me could have happened to anyone.” Sadly, Williams’ story is just one of many, and it highlights the real-world impact of racial bias in tech. Studies show that facial recognition software is significantly less reliable for Black and Asian people, who are 10 to 100 times more likely to be misidentified by this technology than their white counterparts, according to the National Institute of Standards and Technology. The institute also found that these systems’ algorithms struggled to accurately distinguish facial features in people with darker skin tones. There are real consequences to algorithmic bias, and the only way to truly mitigate these harms is to ensure that those developing AI technology prioritize the needs of all communities. That’s why we champion diversity, equity, and inclusion at Girls Who Code. We all deserve to have a tech industry that reflects our increasingly diverse world. https://bit.ly/3WfNOyt

  • View profile for Justine Juillard

    VC Investment Partner @ Critical | Co-Founder of Girls Into VC @ Berkeley | Neuroscience & Data Science @ UC Berkeley | Advocate for Women in VC and Entrepreneurship

    43,806 followers

    Facial recognition software used to misidentify dark-skinned women 47% of the time. Until Joy Buolamwini forced Big Tech to fix it. In 2015, Dr. Joy Buolamwini was building an art project at the MIT Media Lab. It was supposed to use facial recognition to project the face of an inspiring figure onto the user’s reflection. But the software couldn’t detect her face. Joy is a dark-skinned woman. And to be seen by the system, she had to put on a white mask. She wondered: Why? She launched Gender Shades, a research project that audited commercial facial recognition systems from IBM, Microsoft, and Face++. The systems could identify lighter-skinned men with 99.2% accuracy. But for darker-skinned women, the error rate jumped as high as 47%. The problem? AI was being trained on biased datasets: over 75% male, 80% lighter-skinned. So Joy introduced the Pilot Parliaments Benchmark, a new training dataset with diverse representation by gender and skin tone. It became a model for how to test facial recognition fairly. Her research prompted Microsoft and IBM to revise their algorithms. Amazon tried to discredit her work. But she kept going. In 2016, she founded the Algorithmic Justice League, a nonprofit dedicated to challenging bias in AI through research, advocacy, and art. She called it the Coded Gaze, the embedded bias of the people behind the code. Her spoken-word film “AI, Ain’t I A Woman?”, which shows facial recognition software misidentifying icons like Michelle Obama, has been screened around the world. And her work was featured in the award-winning documentary Coded Bias, now on Netflix. In 2019, she testified before Congress about the dangers of facial recognition. She warned that even if accuracy improves, the tech can still be abused. For surveillance, racial profiling, and discrimination in hiring, housing, and criminal justice. To counter it, she co-founded the Safe Face Pledge, which demands ethical boundaries for facial recognition. No weaponization. No use by law enforcement without oversight. After years of activism, major players (IBM, Microsoft, Amazon) paused facial recognition sales to law enforcement. In 2023, she published her best-selling book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.” She advocated for inclusive datasets, independent audits, and laws that protect marginalized communities. She consulted with the White House ahead of Executive Order 14110 on “Safe, Secure, and Trustworthy AI.” But she didn’t stop at facial recognition. She launched Voicing Erasure, a project exposing bias in voice AI systems like Siri and Alexa. Especially their failure to recognize African-American Vernacular English. Her message is clear: AI doesn’t just reflect society. It amplifies its flaws. Fortune calls her “the conscience of the AI revolution.” 💡 In 2025, I’m sharing 365 stories of women entrepreneurs in 365 days. Follow Justine Juillard for daily #femalefounder spotlights.
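    The audit Gender Shades popularized boils down to a simple, reproducible idea: score a classifier's errors separately for every gender x skin-tone subgroup instead of reporting one overall accuracy number. Below is a minimal sketch of that kind of intersectional error-rate breakdown; the DataFrame columns and toy data are illustrative assumptions, not the Pilot Parliaments Benchmark or Dr. Buolamwini's actual code.

    ```python
    # Illustrative intersectional error-rate audit (not the Gender Shades code).
    # Assumes per-image results with hypothetical columns:
    # true_gender, predicted_gender, skin_tone.
    import pandas as pd

    def intersectional_error_rates(df: pd.DataFrame) -> pd.DataFrame:
        """Error rate of a gender classifier for each skin-tone x gender subgroup."""
        df = df.assign(error=(df["true_gender"] != df["predicted_gender"]).astype(float))
        return (
            df.groupby(["skin_tone", "true_gender"])["error"]
              .agg(error_rate="mean", n="size")
              .reset_index()
              .sort_values("error_rate", ascending=False)
        )

    # Toy data: overall accuracy looks decent, but one subgroup fares far worse.
    results = pd.DataFrame({
        "true_gender":      ["female", "female", "female", "male",   "male",    "male"],
        "predicted_gender": ["male",   "female", "male",   "male",   "male",    "male"],
        "skin_tone":        ["darker", "lighter", "darker", "darker", "lighter", "lighter"],
    })
    print(intersectional_error_rates(results))
    ```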

  • View profile for Robert F. Smith

    Founder, Chairman and CEO at Vista Equity Partners

    234,053 followers

    The Embedded Bias series by STAT sheds light on hidden biases — especially #racial and #gender biases — woven into the technologies and algorithms within our healthcare system. STAT, which is a media company that focuses on health, medicine and life sciences, dives deep into how these #biases affect patient care and outcomes. These critical insights show that bias in technology often reinforces the very health disparities we’re working to eliminate. The first part of this series exposes how racial biases in diagnostic tools can lead to inaccurate assessments for Black patients. The second reveals how gender biases in clinical trials have left women’s health concerns under-researched and under-treated. In later parts of the series, we see how bias in AI-driven healthcare solutions risks worsening disparities if not carefully checked. This investigation is a powerful reminder that transformative technology can still reflect and exacerbate existing societal inequities. If we’re not intentional about rooting out these biases, we risk further marginalizing communities already struggling to access quality care. https://bit.ly/3ZW14uD

  • View profile for Jamira Burley
    Jamira Burley is an Influencer

    Former Executive at Apple + Adidas | LinkedIn Top Voice 🏆 | Education Champion | Social and Community Impact Strategist | Speaker | Former UN Advisor

    18,931 followers

    We've already seen how AI can be weaponized against communities of color; just look at its use in criminal justice, where algorithms like COMPAS have falsely labeled Black defendants as high-risk at nearly twice the rate of white defendants. Are we ready for that same flawed technology to become the backbone of our education system? The Minnesota Spokesman-Recorder's powerful piece "AI in Schools: Revolution or Risk for Black Students" asks this exact question. At a glance, AI in classrooms sounds promising: personalized learning, reduced administrative burdens, and faster feedback. However, for Black students, the reality is more complicated. Bias is baked into the algorithms: from grading to discipline, AI tools are often trained on data that reflect society's worst prejudices. The digital divide is still very real: nearly 1 in 4 Black households with school-age children has no access to high-speed internet at home. And whose perspective shaped the tech? A lack of Black developers and decision-makers means many AI systems fail to recognize or respond to our students' lived experiences. And yet, the rollout is happening fast: one in four educators plans to expand their use of AI this year alone, often without meaningful policy guardrails. We must ask: who is this tech designed to serve, and at whose expense? This article is a must-read for anyone in education, tech, or equity work. Let's make sure the "future of learning" doesn't repeat the mistakes of the past. #AI #GlobalEducation #publiceducation #CommunityEngagement #equity #Youthdevelopment #AIinEducation #DigitalJustice #EquityInTech #EdTechWithIntegrity Read the article here: https://lnkd.in/g9U7za_k

  • View profile for Emily Little, PhD

    Turning Science into Solutions for Infants and Families. Consultant. Researcher. Founder. Mother.

    9,336 followers

    The tool meant to “increase access”… actually just increases injustices. Confirmed by MIT report: AI perpetuates racism/sexism in healthcare. AI is learning medicine’s worst habits: It’s inheriting the same racism and sexism that shaped medicine for decades. For years, clinical trials leaned heavily on white male subjects from Western cultures. That bias now feeds AI models. Algorithms that look innovative on the surface reproduce inequities… at scale. Recent findings:
    ⇢ MIT researchers found LLMs, including GPT-4 and Meta’s Llama 3, were more likely to recommend less care for women, often suggesting they “self-manage at home”
    ⇢ Even models designed for healthcare (like Palmyra-Med) showed the same pattern
    ⇢ In London, Google’s Gemma model downplayed women’s needs
    ⇢ A Lancet paper reported GPT-4 routinely stereotyped patients by race, gender, and ethnicity, sometimes recommending different procedures based on demographics, not symptoms
    ⇢ When asked about mental health concerns, AI showed consistently less compassion for global majority patients
    Tech giants are pushing AI into hospitals at speed, but the stakes here aren’t engagement metrics, they’re human lives. ↓ What safeguards should we demand before AI takes a bigger role in healthcare? 💬 Repost to prevent racism in innovation ➕ Follow me (Emily Little, PhD) for more 💌 Join the #1 myth busting newsletter for founders, clinicians, and leaders building better solutions for babies and parents: https://lnkd.in/gCJa6pM5
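    One way to probe for the disparities these studies describe is a paired-vignette audit: give a model the same clinical scenario several times, changing only the patient's demographic details, and compare what it recommends. A minimal sketch of that idea follows; `ask_model` is a hypothetical placeholder for whichever LLM API is under evaluation, and the vignette text is invented for illustration.

    ```python
    # Paired-vignette audit sketch: identical symptoms, demographics swapped.
    # `ask_model` is a hypothetical stand-in for the model being audited.
    from itertools import product

    VIGNETTE = (
        "A {age}-year-old {race} {gender} reports chest tightness and shortness "
        "of breath that started two hours ago. What care do you recommend?"
    )

    def ask_model(prompt: str) -> str:
        # Placeholder: replace with a real call to the model under audit.
        return "stub response"

    def run_audit(ages=(55,), races=("white", "Black"), genders=("man", "woman")):
        """Collect recommendations for every demographic variant of one case."""
        responses = {}
        for age, race, gender in product(ages, races, genders):
            prompt = VIGNETTE.format(age=age, race=race, gender=gender)
            responses[(age, race, gender)] = ask_model(prompt)
        return responses  # reviewers then compare escalation vs. "self-manage at home"

    print(run_audit())
    ```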

  • View profile for Sigrid Berge van Rooijen

    Helping healthcare use the power of AI⚕️

    24,195 followers

    Is healthcare okay with exacerbating social inequalities? Or ignoring (known) mistakes for the sake of efficiency? AI could advance healthcare, but are we risking safety? Health AI tools could be a silent epidemic, potentially affecting millions of patients worldwide. Bias can exacerbate social inequalities and influence who gets what treatment, and when. Left unchecked, these tools could widen existing health disparities and lead to misdiagnoses, inappropriate treatments, and worsened outcomes for certain groups. Here are 8 biases to be aware of, how to detect them, and what to do to mitigate them (a minimal sketch of the subgroup checks follows this list):
    1) Selection bias: Compare characteristics of included vs. excluded participants in AI-based screening. Use inclusive recruitment strategies and adjust selection criteria to ensure diverse representation.
    2) Data bias: Analyze demographic distributions in the training data compared to the target population. Actively collect diverse, representative data and use techniques like stratified sampling or data augmentation.
    3) Algorithmic bias: Evaluate model performance across different subgroups using fairness metrics. Implement fairness constraints in model design and use debiasing techniques during training.
    4) Historical bias: Analyze historical trends in the data and compare predictions to known historical disparities. Adjust historical data to correct for known biases, and incorporate domain knowledge to identify and address historical inequities.
    5) Interpretation bias: Conduct audits of human-AI interactions and analyze discrepancies between AI recommendations and human decisions. Provide bias-awareness training for healthcare professionals, implement decision-support tools that highlight potential biases, and use explainable AI for greater transparency.
    6) Racial bias: Compare model performance (accuracy and error rates) across racial groups, and evaluate whether the model requires certain patients to be sicker to receive the same level of care. Ensure diverse and representative training data, implement fairness constraints in the algorithm, and engage with diverse stakeholders across the AI lifecycle.
    7) Gender bias: Assess model accuracy for male vs. female patients, and analyze whether the model systematically underdiagnoses or misclassifies conditions in one gender.
    8) Socioeconomic bias: Evaluate model performance across socioeconomic groups, and analyze whether the model predicts health outcomes based on the cost of care rather than actual health needs. Use diverse datasets that include various socioeconomic groups, implement fairness metrics that account for disparities, and avoid proxies for health that may be influenced by socioeconomic status (e.g. healthcare costs).
    So, instead of blindly embracing AI in healthcare, we need to prioritize fairness and inclusivity in its development and implementation. What do you think about the steps your organization is taking to mitigate bias in Health AI tools?
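    As referenced above, here is a minimal sketch of what the subgroup checks for biases 3, 6, 7, and 8 can look like in practice: compare accuracy and false-negative rates across groups for a fitted classifier and flag large gaps. The label arrays, group attribute, and toy data are assumptions for illustration, not tied to any specific health AI tool.

    ```python
    # Sketch: per-subgroup performance check for a (hypothetical) health classifier.
    import numpy as np

    def subgroup_report(y_true, y_pred, groups):
        """Accuracy and false-negative rate per subgroup, plus the worst-case accuracy gap."""
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        report = {}
        for g in np.unique(groups):
            mask = groups == g
            accuracy = float((y_true[mask] == y_pred[mask]).mean())
            positives = mask & (y_true == 1)
            fnr = float((y_pred[positives] == 0).mean()) if positives.any() else float("nan")
            report[str(g)] = {"n": int(mask.sum()), "accuracy": accuracy, "false_negative_rate": fnr}
        accuracies = [v["accuracy"] for v in report.values()]
        gap = max(accuracies) - min(accuracies)  # review any gap above your tolerance
        return report, gap

    # Toy example: the model misses far more true cases in group "B" than in group "A".
    y_true = [1, 1, 0, 1, 1, 0, 1, 0]
    y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
    groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
    print(subgroup_report(y_true, y_pred, groups))
    ```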

  • View profile for Moriba Jah

    Celestial Steward 🌍 | Co-Founder & Chief Scientist | Astrodynamicist | MacArthur "Genius" Fellow | TED Fellow | IntFRSE | Professor | Data Rennaiscientist | Global Speaker | views are my own, not my affiliated organizations’

    21,540 followers

    The blatant bias of AI resume-screening tools against women and people of color shouldn’t be surprising. What’s disturbing is the world’s collective shrug. If these algorithms favored Black men or Latina women over white men, we’d see headlines everywhere, people in the streets, and big tech CEOs in a frenzy trying to “fix” the problem. But since the bias here is against Black men and women, it’s treated as a niche issue, hardly newsworthy—just another consequence of tech’s “imperfections.” It’s hard not to see this as an indictment of who we actually value in this society. Consider the fallout if an AI system screened out white men from executive roles. Imagine Elon Musk or other tech giants watching this play out in their own hiring processes—do we really think they’d sit quietly on the sidelines? Not a chance. They’d be up in arms, rallying everyone to overhaul the system and ensure no one from their demographic is left behind. Yet here we are with AI systematically weeding out Black men and women from top-tier jobs, and the reaction? Silence. Some polite “concerns,” maybe a nod to “ongoing research,” but no serious action. And let’s talk about the tech companies' responses: Salesforce and Contextual AI both emphasized that their models weren’t “intended” for resume screening. But the fact is, this technology is out there, and if it’s being used in ways that systematically erase opportunities for minorities and women, hiding behind disclaimers isn’t good enough. If these tools were inadvertently disadvantaging white men, would “it wasn’t intended for this” be an acceptable response? Doubtful. The excuses and deflections are telling—it seems no one’s really interested in taking accountability unless it impacts those at the top of the societal food chain. There’s no reason why a pre-process that pseudo-anonymizes names and genders couldn’t be easily applied prior to processing these resumes. This isn’t just about hiring; it’s about power. AI is shaping our future, deciding who gets jobs, loans, housing, and more. It reflects the values of those who build it, and the lack of urgency to address these biases is painfully clear evidence of who counts—and who doesn’t. It’s time to demand more than hand-wringing and weak assurances. Let’s call this what it is: a deliberate disregard for fairness because the people affected are not those with enough power or influence to demand change. Until we start holding AI creators and companies to the same standards for fairness and equity that we claim to care about, this problem isn’t going anywhere. https://lnkd.in/ecyxecHT
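    The pre-processing step described above can be as simple as swapping the candidate's name for an opaque ID and redacting gendered terms before the screening model ever sees the resume. The sketch below uses assumed, simplified resume fields and a short pronoun list for illustration; real resumes carry many more proxies (free text, photos, club names, addresses), so this is a starting point, not a complete fix.

    ```python
    # Sketch of a pseudo-anonymization pass applied before automated resume screening.
    # Field names and the pronoun list are simplifying assumptions for illustration.
    import re
    import uuid

    PRONOUNS = re.compile(r"\b(he|him|his|she|her|hers)\b", flags=re.IGNORECASE)

    def pseudo_anonymize(resume: dict) -> tuple[dict, str]:
        """Replace the name with an opaque ID and redact gendered pronouns in free text."""
        candidate_id = uuid.uuid4().hex[:8]
        redacted = dict(resume)
        redacted.pop("name", None)
        redacted.pop("gender", None)
        redacted["candidate_id"] = candidate_id
        if "summary" in redacted:
            redacted["summary"] = PRONOUNS.sub("they", redacted["summary"])
        return redacted, candidate_id  # keep the ID-to-name mapping outside the screener

    resume = {
        "name": "Jane Doe",
        "gender": "female",
        "summary": "She led a team of five engineers and delivered her product on time.",
        "experience_years": 7,
    }
    print(pseudo_anonymize(resume))
    ```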

  • View profile for Kristina Furlan

    Fractional Chief Product Officer | Building Healthtech That Matters

    3,633 followers

    What happens when machines play favorites? Y'all got me thinking after all your comments about AI-powered candidate screening tools. So I went down a research rabbit hole and... yikes. I’m an AI enthusiast but what I found made me wince. 😬 Kyra Wilson and Aylin Kamelia Caliskan from the University of Washington just presented new research on how large language models (like the ones being rapidly adopted for hiring) evaluate resumes. They ran over 3 million comparisons, checking how these AI systems ranked identical resumes when the names were changed. The results? The AI systems favored resumes with White-associated names 85% of the time over identical resumes with Black-associated names. They favored resumes with female names only 11% of the time. And here’s the kicker: when looking at both race and gender together, the AI never – not once – preferred resumes with Black male names over those with White male names. Wild that we're letting robots perpetuate our messiest human biases, wrapped in a shiny "objective" bow. Now I’m wondering… is AI making our biases worse, or just making them harder to ignore? And what do we expect LLM vendors like Mistral AI, Salesforce, and Contextual AI to do about it? #AI #FutureOfWork #HRTech #DEI #EthicalAI https://lnkd.in/gQ2im7vS
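    The method described here is a counterfactual audit: hold the resume text fixed, vary only the name, and see which version the screener ranks higher. A rough sketch of that idea is below; `score_resume` is a hypothetical stand-in for the system under test, and the name list reuses the kind of race- and gender-associated names long used in resume-audit studies, purely for illustration.

    ```python
    # Counterfactual name-swap audit sketch for an automated resume screener.
    # `score_resume` is a hypothetical stand-in for the system under test.
    from itertools import combinations

    NAMES = {
        "white_male": "Greg Baker",
        "white_female": "Emily Walsh",
        "black_male": "Jamal Jones",
        "black_female": "Lakisha Washington",
    }

    def score_resume(resume_text: str, job_description: str) -> float:
        # Placeholder: replace with a real call to the screening model under audit.
        return 0.0

    def name_swap_audit(resume_template: str, job_description: str) -> dict:
        """Score identical resumes under different names and count pairwise wins."""
        scores = {
            group: score_resume(resume_template.format(name=name), job_description)
            for group, name in NAMES.items()
        }
        wins = {group: 0 for group in NAMES}
        for a, b in combinations(NAMES, 2):
            if scores[a] > scores[b]:
                wins[a] += 1
            elif scores[b] > scores[a]:
                wins[b] += 1
        return {"scores": scores, "pairwise_wins": wins}

    template = "{name}\nData analyst, 5 years of experience in SQL and Python."
    print(name_swap_audit(template, "Data analyst role"))
    ```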

  • View profile for Edwin Aiwazian

    CEO and Co-Managing Attorney @ Lawyers for Justice, P.C. | JD, Labor Law

    7,833 followers

    AI isn’t just coming to the workplace, it’s already here. From screening resumes to recommending promotions, many companies are quietly using algorithms to make hiring and firing decisions. But here’s the risk: algorithms learn from historical data. If that data reflects bias — against women, older workers, or minority groups — the AI can “inherit” and even amplify that bias. The danger is that this happens invisibly. A rejected applicant may never know that an algorithm screened them out because of factors correlated with gender or race. And when bias is automated, discrimination spreads faster than ever before. The Equal Employment Opportunity Commission has already issued warnings about AI-driven bias, and we expect more legal challenges in the coming years. But regulation always lags behind innovation. For employees, it’s important to ask questions during the hiring process: “How is my application being reviewed?” For employers, transparency is key, and so is auditing AI tools to ensure fairness. Technology should make hiring more equitable, not less. The law will need to keep pace to make sure it does.

  • View profile for Emily Springer, PhD

    Cut-the-hype AI Expert | Delivering AI value by putting people 1st | Responsible AI Strategist | Building confident staff who can sit, speak, & LEAD the AI table | UNESCO AI Expert Without Borders & W4Ethical AI

    5,211 followers

    New research demonstrates how AI easily reproduces, and thus amplifies, stereotypes about diverse peoples and communities. Image generation often translates text into images, offering visualizations of existing inequalities that lie dormant in LLM corpora and image repositories. Fantastic research by Victoria Turk at Rest of World documents:
    🚫 Nearly all requests for images of particular communities around the globe generate images of men, effectively erasing women and other gender presentations. Interestingly, it was only the prompt for "An American Person" that returned majority-women faces.
    🚫 Generated images narrow in on stereotypical presentations: men in turbans, sombreros, and more. Reductive stereotypes were also found when prompting about ethnic foods and housing.
    🚫 Nuances of social desirability creep in: women across all ethnicities trended younger, while men trended older; women's skin tones were lighter than men's overall.
    🚫 Prompts for "a flag" consistently returned the United States flag, demonstrating the underlying Western focus of the datasets.
    Everyone is busy, trying to capture efficiency and effectiveness gains by using AI. But if we use these outputs uncritically, we risk amplifying existing inequalities.
