How AI tools erase women's contributions

Summary

AI tools can unintentionally erase women’s contributions by amplifying biases in data and decision-making, leading to discrimination in hiring, recognition, and representation. This happens because algorithms often reflect and reinforce existing inequalities, particularly when women are underrepresented in technology, AI development, and dataset creation.

  • Audit algorithms regularly: Push for independent reviews of AI systems to ensure they don’t disadvantage women or perpetuate unfair stereotypes in workplaces and society.
  • Champion inclusive datasets: Advocate for training AI with data that accurately represents women of diverse backgrounds to limit bias and improve fair outcomes.
  • Support transparent policies: Encourage organizations to develop clear guidelines and accountability structures so that gender bias is addressed as AI shapes crucial decisions.
Summarized by AI based on LinkedIn member posts
  • Justine Juillard

    VC Investment Partner @ Critical | Co-Founder of Girls Into VC @ Berkeley | Neuroscience & Data Science @ UC Berkeley | Advocate for Women in VC and Entrepreneurship

    43,806 followers

    Facial recognition software used to misidentify dark-skinned women 47% of the time. Until Joy Buolamwini forced Big Tech to fix it.

    In 2015, Dr. Joy Buolamwini was building an art project at the MIT Media Lab. It was supposed to use facial recognition to project the face of an inspiring figure onto the user’s reflection. But the software couldn’t detect her face. Joy is a dark-skinned woman. And to be seen by the system, she had to put on a white mask. She wondered: Why?

    She launched Gender Shades, a research project that audited commercial facial recognition systems from IBM, Microsoft, and Face++. The systems could identify lighter-skinned men with 99.2% accuracy. But for darker-skinned women, the error rate jumped as high as 47%. The problem? AI was being trained on biased datasets: over 75% male, 80% lighter-skinned. So Joy introduced the Pilot Parliaments Benchmark, a new training dataset with diverse representation by gender and skin tone. It became a model for how to test facial recognition fairly. Her research prompted Microsoft and IBM to revise their algorithms. Amazon tried to discredit her work. But she kept going.

    In 2016, she founded the Algorithmic Justice League, a nonprofit dedicated to challenging bias in AI through research, advocacy, and art. She called it the Coded Gaze, the embedded bias of the people behind the code. Her spoken-word film “AI, Ain’t I A Woman?”, which shows facial recognition software misidentifying icons like Michelle Obama, has been screened around the world. And her work was featured in the award-winning documentary Coded Bias, now on Netflix.

    In 2019, she testified before Congress about the dangers of facial recognition. She warned that even if accuracy improves, the tech can still be abused. For surveillance, racial profiling, and discrimination in hiring, housing, and criminal justice. To counter it, she co-founded the Safe Face Pledge, which demands ethical boundaries for facial recognition. No weaponization. No use by law enforcement without oversight. After years of activism, major players (IBM, Microsoft, Amazon) paused facial recognition sales to law enforcement.

    In 2023, she published her best-selling book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.” She advocated for inclusive datasets, independent audits, and laws that protect marginalized communities. She consulted with the White House ahead of Executive Order 14110 on “Safe, Secure, and Trustworthy AI.”

    But she didn’t stop at facial recognition. She launched Voicing Erasure, a project exposing bias in voice AI systems like Siri and Alexa. Especially their failure to recognize African-American Vernacular English. Her message is clear: AI doesn’t just reflect society. It amplifies its flaws. Fortune calls her “the conscience of the AI revolution.”

    💡 In 2025, I’m sharing 365 stories of women entrepreneurs in 365 days. Follow Justine Juillard for daily #femalefounder spotlights.
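The core of the Gender Shades method is simple but powerful: report error rates per demographic subgroup rather than one aggregate accuracy number. A minimal sketch of that audit loop might look like this (the sample format and `predict` function are illustrative placeholders, not the actual benchmark code):

```python
from collections import defaultdict

def disaggregated_error_rates(samples, predict):
    """Compute per-subgroup error rates for a classifier under audit.

    samples: iterable of (image, true_label, subgroup) tuples, where
             subgroup combines gender and skin type, e.g. "darker_female",
             mirroring how Gender Shades grouped the Pilot Parliaments
             Benchmark.
    predict: the model under audit; returns a predicted label.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, true_label, subgroup in samples:
        totals[subgroup] += 1
        if predict(image) != true_label:
            errors[subgroup] += 1
    # Report an error rate per subgroup, not just overall accuracy:
    # a single aggregate figure can hide a large gap between groups.
    return {g: errors[g] / totals[g] for g in totals}
```

The point of the disaggregation is that a system can score well overall while failing badly on the subgroup with the least representation in its training data.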

  • Dr. Poornima Luthra

    Author | Educator | Equity & Inclusion Researcher | Tedx Speaker | Thinkers50 Radar Class of 2023 | Board Chair & Member

    19,622 followers

    Did you know that about 50% of people do not use AI for work? Ever wondered why? You might be thinking that it must be a lack of training on how to use AI, or perhaps skepticism about AI tools themselves. While these are indeed plausible (and possible) explanations, recent research (link to article in the comments 👇🏽) shows that a key factor is actually the fear of being perceived as less competent for using AI - what the researchers refer to as the “competency penalty”.

    How did they arrive at this conclusion? They ran an experiment in which 1,026 engineers reviewed identical Python code. The only difference was whether reviewers believed it was written with AI assistance or not. The result? “The results were striking. When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average, despite reviewing identical work. This wasn’t about code quality—ratings of the code itself remained similar whether AI was involved or not. The penalty targeted the perceived ability of the person who wrote it.”

    ‼️ What is more alarming is that the competence penalty was more than twice as severe for female engineers, who faced a 13% reduction compared to 6% for male engineers, further exacerbating inequalities in the technology sector. “When reviewers thought a woman had used AI to write code, they questioned her fundamental abilities far more than when reviewing the same AI-assisted code from a man.”

    What is also extremely concerning is who was imposing these penalties! “Engineers who hadn’t adopted AI themselves were the harshest critics. Male non-adopters were particularly severe when evaluating female engineers who used AI, penalizing them 26% more harshly than they penalized male engineers for identical AI usage.”

    Could this be the reason you don’t use AI for work? What would it take to motivate employees to consider using AI? The authors have some suggestions in the article. What do you think? #MondayMotivation
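The experiment's core measurement, comparing competence ratings for identical work under different attribution labels, could be computed roughly like this (the field names and data layout are assumptions for illustration, not the study's actual schema):

```python
from statistics import mean

def competence_penalty(reviews, gender):
    """Estimate the relative 'competency penalty' from review data.

    reviews: list of dicts with keys 'gender' (of the code's supposed
             author), 'ai_label' (True if reviewers were told AI
             assisted), and 'rating' (competence score). Since the
             code under review is identical in both conditions, any
             rating gap reflects the attribution label, not quality.
    Returns the fractional drop in mean rating when AI use is
    attributed, for authors of the given gender.
    """
    ai = [r["rating"] for r in reviews
          if r["gender"] == gender and r["ai_label"]]
    no_ai = [r["rating"] for r in reviews
             if r["gender"] == gender and not r["ai_label"]]
    return (mean(no_ai) - mean(ai)) / mean(no_ai)
```

Computing the penalty separately per gender is what surfaces the asymmetry the post describes: the same drop in ratings can be twice as large for one group as for another.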

  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,215 followers

    "This report, developed by UNESCO in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem.

    The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, women currently make up only 29% of researchers in research and development (R&D),1 while this drops to 12% in specific AI research positions.2 Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces.3 Moreover, only 30% of professionals in the AI sector are women,4 and the gender gap increases further in leadership roles, with only 18% of C-suite positions at AI startups being held by women.5

    Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities.

    The study also reveals a concerning lack of gender-disaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles,6 in contrast to a 15% gap in general ICT professions.7

    Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly when dealing with female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (such as race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."

  • Piyu Dutta
    12,280 followers

    Women are nearly 3X more likely to lose their jobs. As per a recent report published by the UN, in high-income countries women are 3X more likely than men to hold jobs at high risk of automation. Worldwide, in formal workplaces more women than men hold jobs like admin assistant, secretary, bank teller, data entry staff, and customer service. For millions of women, these are not just job roles; they are lifelines. They provide women:
    - their first step into the formal economy.
    - a source of financial independence.
    - a stable income that keeps families afloat.
    - and a dignified path to self-worth.

    When these roles vanish, women lose access to the job market, lose their agency, and run the risk of losing their security. When we talk about AI coming for your job, it is mostly women who will be swept up by automation. Agreed, AI will boost productivity. That's the promise. But as Ray Dalio warns, we may be entering a “great deleveraging” where tech outpaces our ability to transition people fast enough. Especially women.

    The threat from AI runs deeper than this. First, women are missing from the very industries AI is building. Women are underrepresented in STEM, data science, and AI development as such. When women are underrepresented in AI development, their perspectives, needs, and expertise are not embedded into the technology. The resulting products are ones that don't even serve half the population well. Or worse, actively disadvantage them. STEM and AI-related fields are where the highest-paying, most secure, and most influential jobs are being created. With fewer women in these roles, the gender wealth gap widens.

    Then there's the perennial hidden bias of the recruitment process that women have to fight. Some AI recruitment tools already filter out resumes with “women-coded” language. In developing countries, the danger runs equally deep: limited digital access means the gap widens even more. Women lack access to training, tools, and a chance to compete.

    Therefore, with the automation of manual, repetitive jobs, it is not simply about who gets hired or who keeps a job. It is about who gets to participate in the economy of the future. The brunt of this will be borne mostly by women. When women are excluded from AI-driven transitions, they risk losing their voice. If we are not intentional from the beginning, we will not just perpetuate inequality, we will AUTOMATE IT for the coming future. #womenintech #futureofwork #AI #digitalinclusion Anthropia Margareth Goldenberg

  • Moriba Jah

    Celestial Steward 🌍 | Co-Founder & Chief Scientist | Astrodynamicist | MacArthur "Genius" Fellow | TED Fellow | IntFRSE | Professor | Data Rennaiscientist | Global Speaker | views are my own not affiliated organizations

    21,540 followers

    The blatant bias of AI resume-screening tools against women and people of color shouldn’t be surprising. What’s disturbing is the world’s collective shrug.

    If these algorithms favored Black men or Latina women over white men, we’d see headlines everywhere, people in the streets, and big tech CEOs in a frenzy trying to “fix” the problem. But since the bias here is against Black men and women, it’s treated as a niche issue, hardly newsworthy—just another consequence of tech’s “imperfections.” It’s hard not to see this as an indictment of who we actually value in this society.

    Consider the fallout if an AI system screened out white men from executive roles. Imagine Elon Musk or other tech giants watching this play out in their own hiring processes—do we really think they’d sit quietly on the sidelines? Not a chance. They’d be up in arms, rallying everyone to overhaul the system and ensure no one from their demographic is left behind. Yet here we are with AI systematically weeding out Black men and women from top-tier jobs, and the reaction? Silence. Some polite “concerns,” maybe a nod to “ongoing research,” but no serious action.

    And let’s talk about the tech companies’ responses: Salesforce and Contextual AI both emphasized that their models weren’t “intended” for resume screening. But the fact is, this technology is out there, and if it’s being used in ways that systematically erase opportunities for minorities and women, hiding behind disclaimers isn’t good enough. If these tools were inadvertently disadvantaging white men, would “it wasn’t intended for this” be an acceptable response? Doubtful. The excuses and deflections are telling—it seems no one’s really interested in taking accountability unless it impacts those at the top of the societal food chain.

    There’s no reason why a pre-process that pseudo-anonymizes names and genders couldn’t be easily applied prior to processing these resumes.

    This isn’t just about hiring; it’s about power. AI is shaping our future, deciding who gets jobs, loans, housing, and more. It reflects the values of those who build it, and the lack of urgency to address these biases is painfully clear evidence of who counts—and who doesn’t. It’s time to demand more than hand-wringing and weak assurances. Let’s call this what it is: a deliberate disregard for fairness because the people affected are not those with enough power or influence to demand change. Until we start holding AI creators and companies to the same standards for fairness and equity that we claim to care about, this problem isn’t going anywhere. https://lnkd.in/ecyxecHT
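The pseudo-anonymization pre-process the post describes could, in its simplest form, look something like the sketch below. The name handling and word list here are minimal illustrations; a production redactor would use a proper named-entity-recognition model, locale-aware name lists, and far more careful substitution.

```python
import re

# Hypothetical substitution table; illustrative only.
GENDERED_TERMS = {
    "she": "they", "he": "they", "her": "their", "his": "their",
    "mr": "mx", "mrs": "mx", "ms": "mx",
    "chairwoman": "chairperson", "chairman": "chairperson",
}

def pseudo_anonymize(resume_text, candidate_name):
    """Strip the candidate's name and common gendered terms from a
    resume before it reaches a screening model."""
    # Replace every occurrence of each part of the candidate's name.
    for part in candidate_name.split():
        resume_text = re.sub(re.escape(part), "[CANDIDATE]",
                             resume_text, flags=re.IGNORECASE)

    # Swap gendered words, respecting word boundaries and case.
    def swap(match):
        word = match.group(0)
        repl = GENDERED_TERMS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl

    pattern = r"\b(" + "|".join(GENDERED_TERMS) + r")\b"
    return re.sub(pattern, swap, resume_text, flags=re.IGNORECASE)
```

For example, `pseudo_anonymize("Jane Smith led her team.", "Jane Smith")` yields `"[CANDIDATE] [CANDIDATE] led their team."`. Even a crude filter like this removes the most obvious gender signals before scoring; the harder problem is proxy signals (women's colleges, sports, "women-coded" verbs) that simple substitution cannot catch.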

  • Shelley Zalis
    326,976 followers

    AI is only as good as the data we feed it. When it takes 30 prompts to get AI to picture a scientist as a woman, we have a problem. Bias in, bias out. If we want technology to reflect our world, we must train AI to be inclusive—amplifying the voices, faces, and ideas of women and people of color. It’s about building a future that works for everyone.

    ➡️ AI misidentified darker-skinned women up to 34.7% of the time, compared to a 0.8% error rate for lighter-skinned men.
    ➡️ In 2020, only 14% of authors of AI-related research papers were women.
    ➡️ AI-driven hiring platforms can reflect and perpetuate anti-Black biases.
    ➡️ As of 2018, women comprised only 22% of AI professionals worldwide.

    The future of AI is being built now—and if we don’t course-correct, we risk reinforcing the same biases. It’s time to ensure AI works for everyone. 👉 Jay Flores

  • Sharon Peake, CPsychol

    IOD Director of the Year - EDI ‘24 | Management Today Women in Leadership Power List ‘24 | Global Diversity List ‘23 (Snr Execs) | D&I Consultancy of the Year | UN Women CSW67-69 participant | Accelerating gender equity

    29,537 followers

    AI is only as fair as the world it learns from.

    Artificial Intelligence isn’t created in a vacuum - it’s trained on data that reflects the world we’ve built. And that world carries deep, historic inequities. If the training data includes patterns of exclusion, such as who gets promoted, who gets paid more, whose CVs are ‘successful’, then AI systems learn those patterns and replicate them. At scale and at pace.

    We’re already seeing the consequences:
    🔹 Hiring tools that favour men over women
    🔹 Voice assistants that misunderstand female voices
    🔹 Algorithms that promote sexist content more widely and more often

    This isn’t about a rogue line of code. It’s about systems that reflect the values and blind spots of the people who build them. Yet women make up just 35% of the US tech workforce. And only 28% of people even know AI can be gender biased. That gap in awareness is dangerous. Because what gets built, and how it behaves, depends on who’s in the room.

    So what are some practical actions we can take?

    Tech leaders:
    🔹 Build systems that are in tune with women’s real needs
    🔹 Invest in diverse design and development teams
    🔹 Audit your tools and data for bias
    🔹 Put ethics and gender equality at the core of AI development, not as an afterthought

    Everyone else:
    🔹 Don’t scroll past the problem
    🔹 Call out gender bias when you see it
    🔹 Report misogynistic and sexist content
    🔹 Demand tech that works for all women and girls

    This isn’t just about better tech. It is fundamentally about fairer futures. #GenderEquality #InclusiveTech #EthicalAI Attached in the comments is a helpful UN article.
