This week, I learned about a new kind of bias—one that can impact underrepresented people when they use AI tools. 🤯 New research by Oguz A. Acar, PhD et al. found that when members of stereotyped groups—such as women in tech or older workers in youth-dominated fields—use AI, it can backfire. Instead of being seen as strategic and efficient, their AI use is framed as “proof” that they can’t do the work on their own. (https://lnkd.in/gEFu2a9b) In the study, participants reviewed identical code snippets. The only difference? Some were told the engineer wrote it with AI assistance. When they thought AI was involved, they rated the engineer’s competence 9% lower on average. And here’s the kicker: that _competence penalty_ was twice as high for women engineers. AI-assisted code from a man got a 6% drop in perceived competence. The same code from a woman? A 13% drop. Follow-up surveys revealed that many engineers anticipated this penalty and avoided using AI to protect their reputations. The people most likely to fear competence penalties? Disproportionately women and older engineers. And they were also the least likely to adopt AI tools. And I’m concerned this bias extends beyond engineering roles. If your organization is encouraging AI adoption, consider the hidden costs to marginalized and underestimated colleagues. Could they face extra scrutiny? Harsher performance reviews? Fewer opportunities? In this week's 5 Ally Actions newsletter, I'll explore ideas for combatting this bias and creating more meritocratic and inclusive workplaces in this new world of AI. Subscribe and read the full edition on Friday at https://lnkd.in/gQiRseCb #BetterAllies #Allyship #InclusionMatters #Inclusion #Belonging #Allies #AI 🙏
How AI Fails to Represent Women
Explore top LinkedIn content from expert professionals.
Summary
How-ai-fails-to-represent-women refers to the ways artificial intelligence systems reinforce old stereotypes, overlook diverse experiences, or misrepresent women due to biased data, design, or team composition. These issues can prevent women from being seen as capable and limit opportunities, while also impacting how AI technologies serve everyone.
- Promote diverse voices: Make sure women and other underrepresented groups are actively involved in building and testing AI to reduce blind spots and create more inclusive systems.
- Regularly check for bias: Schedule frequent audits of AI tools to spot and address patterns that unfairly disadvantage women or misrepresent their skills and contributions.
- Expand training data: Use data that accurately reflects the full range of women’s roles and experiences so AI systems don’t reinforce limited or outdated stereotypes.
-
-
It’s been 2 years. Did LLMs get better at not regurgitating gender bias? ➡️ Two years ago, I asked ChatGPT to tell me 100-word stories about traditionally gender stereotyped jobs and collected 10 samples per job on different days. I repeated this analysis to see if the models have gotten better at filtering biases out. 📈 The chart shows change, but this change is making the one-sided progress with gender stereotypes even more pronounced. As you can see in the chart, ChatGPT used only female characters again for stereotypically female jobs - nurses, preschool teachers and secretaries. Whereas in stereotypically male jobs - detective and firefighter - there is now more female representation and the female representation for CEO is through the roof. 👫 Gender asymmetry manifests even stronger now in ChatGPT output - it’s O.K. for women to be like men, but it’s a lot less acceptable for men to be associated with the feminine. 🤖 To those who believe that ChatGPT just represents things as they are - there are two things to consider: ➡️ When you train a model on online conversations, you inevitably ingest all the bigotry of the internet with it. It’s not necessarily the truth that goes in, it’s the most prevalent opinion. ChatGPT used only female nurse characters, but 13% of all nurses are male and 40% of anesthetist nurses are male. Clearly there could be a nurse Jack or Mario. ➡️ AI that has such profound reach and influence needs to assume the responsibility of stopping the propagation of social biases and bringing change by representing a more equitable world. Or we will keep hearing "What kind of man is a nurse?!" (© Meet the Fockers). ‼️It needs to be acknowledged that making AI more equitable is a hard problem. Attempts to correct for bias could lead to new forms of bias and overcorrection, like 90% of CEOs being female. The finer details of what constitutes a "more equitable world" can be subjective and vary across cultures and ideologies. But this is a hard problem worth solving. 📝On a positive note, ChatGPT did get better with storytelling: 2023: “Ms. Smith was a beloved preschool teacher. Every day she greeted her students with a warm smile and a hug. Her classroom was filled with laughter and excitement as the children learned through play. “ 2025: “Ms. Ellie knelt down to tie a shoe, her hands gently guiding the little fingers. "Thank you," Jamie said, beaming. "You're welcome," she replied, her heart swelling. The classroom buzzed with the sound of crayons on paper, laughter echoing as blocks tumbled and stories were told.” #ux #uxresearch #userresearch #userexperienceresearch #data #datascience #ai
-
AI systems built without women's voices miss half the world and actively distort reality for everyone. On International Women's Day - and every day - this truth demands our attention. After more than two decades working at the intersection of technological innovation and human rights, I've observed a consistent pattern: systems designed without inclusive input inevitably encode the inequalities of the world we have today, incorporating biases in data, algorithms, and even policy. Building technology that works requires our shared participation as the foundation of effective innovation. The data is sobering: women represent only 30% of the AI workforce and a mere 12% of AI research and development positions according to UNESCO's Gender and AI Outlook. This absence shapes the technology itself. And a UNESCO study on Large Language Models (LLMs) found persistent gender biases - where female names were disproportionately linked to domestic roles, while male names were associated with leadership and executive careers. UNESCO's @women4EthicalAI initiative, led by the visionary and inspiring Gabriela Ramos and Dr. Alessandra Sala, is fighting this pattern by developing frameworks for non-discriminatory AI and pushing for gender equity in technology leadership. Their work extends the UNESCO Recommendation on the Ethics of AI, a powerful global standard centering human rights in AI governance. Today's decision is whether AI will transform our world into one that replicates today's inequities or helps us build something better. Examine your AI teams and processes today. Where are the gaps in representation affecting your outcomes? Document these blind spots, set measurable inclusion targets, and build accountability systems that outlast good intentions. The technology we create reflects who creates it - and gives us a path to a better world. #InternationalWomensDay #AI #GenderBias #EthicalAI #WomenInAI #UNESCO #ArtificialIntelligence The Patrick J. McGovern Foundation Mariagrazia Squicciarini Miriam Vogel Vivian Schiller Karen Gill Mary Rodriguez, MBA Erika Quada Mathilde Barge Gwen Hotaling Yolanda Botti-Lodovico
-
Last week, as I was excited to head to #Afrotech, I participated in the viral challenge where people ask #ChatGPT to create a picture of them based on what it knows. The first result? A white woman. As a Black woman, this moment hit hard—it was a clear reminder of just how far AI systems still need to go to truly reflect the diversity of humanity. It took FOUR iterations for the AI to get my picture right. Each incorrect attempt underscored the importance of intentional inclusion and the dangers of relying on systems that don’t account for everyone. I shared this experience with my MBA class on Innovation Through Inclusion this week. Their reaction mirrored mine: shock and concern. It reminded us of other glaring examples of #AIbias— like the soap dispensers that fail to detect darker skin tones, leaving many of us without access to something as basic as hand soap. These aren’t just technical oversights; they reflect who is (and isn’t) at the table when AI is designed. AI has immense power to transform our lives, but if it’s not inclusive, it risks amplifying the very biases we seek to dismantle. 💡 3 Ways You Can Encourage More Responsible AI in Your Industry: 1️⃣ Diverse Teams Matter: Advocate for diversity in the teams designing and testing AI technologies. Representation leads to innovation and reduces blind spots. 2️⃣ Bias Audits: Push for regular AI audits to identify and address inequities. Ask: Who is the AI working for—and who is it failing? 3️⃣ Inclusive Training Data: Insist that the data used to train AI reflects the full spectrum of human diversity, ensuring that systems work equitably for everyone. This isn’t just about fixing mistakes; it’s about building a future where technology serves us all equally. Let’s commit to making responsible AI a priority in our workplaces, industries, and communities. Have you encountered issues like this in your field? Let’s talk about what we can do to push for change. ⬇️ #ResponsibleAI #Inclusion #DiversityInTech #Leadership #InnovationThroughInclusion