Techno-Patriarchy: How AI Is Misogyny's New Clothes

Gender discrimination is baked into artificial intelligence by design, and it serves the interests of tech bros. In my day job, I support our clients using AI to accelerate the discovery of new drugs and materials, so I can see the benefits of this technology for people and the planet. But there is a dark side too. Here is how tech:
- Disregards women's needs and experiences when developing AI solutions
- Deflects accountability for automating and amplifying online harassment
- Purposely reinforces gender stereotypes
- Operationalises menstrual surveillance
- Sabotages women's businesses and activism

I substantiate each of these points with real examples and their impact on women's lives. Fortunately, not all is doom and gloom. Because insanity is doing the same thing and expecting a different outcome, I also share what we need to start doing differently to develop AI that works for women too.

#EthicalAI #InclusiveAI #MisogynisticAI #BiasedAI #Patriarchy #InclusiveTech #WomenInTech #WomenInBusiness
How tech reinforces gender hierarchies
Summary
"How tech reinforces gender hierarchies" explores the ways technology, especially artificial intelligence, can unintentionally or deliberately deepen existing gender and social inequalities. This happens when biased data, a lack of diversity in development, and unchecked algorithms shape everything from hiring systems to workplace automation, often to the disadvantage of women and marginalized groups.
- Question existing systems: Regularly audit your technology platforms, data sources, and algorithms to uncover hidden gender bias and address it before it becomes embedded in your organization.
- Diversify development teams: Bring people from varied genders, backgrounds, and perspectives into the design and coding process to help challenge assumptions and create fairer tech solutions.
- Promote equitable policies: Push for transparency in AI decision-making and intentionally design processes that support gender parity, from recruitment to advancement and workplace culture.
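The auditing advice above can be made concrete. Below is a minimal Python sketch of a disparate-impact check using the "four-fifths rule" common in US hiring-bias screening; the counts and function names are illustrative, not from any real system:

```python
def selection_rate(selected, total):
    """Fraction of applicants from a group who were selected."""
    return selected / total

def disparate_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the reference group's.
    Values below 0.8 fail the common 'four-fifths' screening rule."""
    return group_rate / reference_rate

# Hypothetical screening counts, for illustration only.
women_rate = selection_rate(30, 200)   # 30 of 200 women advanced
men_rate = selection_rate(50, 200)     # 50 of 200 men advanced
ratio = disparate_impact_ratio(women_rate, men_rate)
flagged = ratio < 0.8  # True here: this pipeline deserves a closer audit
```

A ratio below 0.8 is not proof of discrimination, but it is a widely used trigger for a deeper audit of the data and model behind a selection process.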
-
🤖 The Gendered Impact of AI: Why Women—Especially from Marginalised Backgrounds—Are Most at Risk

As artificial intelligence continues to reshape the world of work, one thing is becoming increasingly clear: the effects will not be felt equally. A new report from the United Nations' International Labour Organization and Poland's NASK reveals that roles traditionally held by women—particularly in high-income countries—are almost three times more likely to be disrupted by generative AI than those held by men.

📉 9.6% of female-held jobs are at high risk of transformation, compared to just 3.5% of male-held roles. Why? Many of these jobs are in administration and clerical work—sectors where AI can automate routine tasks efficiently. But while AI may not eliminate these roles outright, it is radically reshaping them, threatening job security and career progression for many women.

This risk is not theoretical. Back in 2023, researchers at OpenAI—the company behind ChatGPT—examined the potential exposure of different occupations to large language models like GPT-4. The results were striking: around 80% of the US workforce could have at least 10% of their work tasks impacted by generative AI. While they were careful not to label this a prediction, the message was clear: AI's reach is widespread and accelerating.

🌍 An intersectional lens shows even deeper inequities. Women from marginalised communities—especially women of colour, older women, and those with lower levels of formal education—face heightened vulnerability:
- They are overrepresented in lower-paid, more automatable roles, with limited access to training or advancement.
- They often lack the tools, networks, and opportunities to adapt to digital shifts.
- And they face greater risks of bias within the AI systems themselves, which can reinforce inequality in recruitment and promotion.
Meanwhile, roles being augmented by AI—like those in tech, media, and finance—are still largely male-dominated, widening the gender and racial divide in the AI economy. According to the World Economic Forum, 33.7% of women are in jobs being disrupted by AI, compared to just 25.5% of men.

📢 As AI moves from buzzword to business reality, we need more than technical solutions—we need intentional, inclusive strategies. That means designing AI systems that reflect the full diversity of society, investing in upskilling programmes that reach everyone, and ensuring the benefits of AI are distributed fairly.

The question on my mind is: if AI is shaping the future of work, who's shaping AI?

#AI #FutureOfWork #EquityInTech #GenderEquality #Intersectionality #Inclusion #ResponsibleTech
-
How might #AI reinforce gender #bias? And can we leverage AI to mitigate gender bias too? These are the questions I'd like to address in this week's #sundAIreads in honor of #InternationalWomensDay. The reading I chose for this is an interview in UN Women with Zinnya del Villar, Director of Data, Technology, and Innovation at the Data-Pop Alliance. The interview addresses the following questions:

1️⃣ What is AI gender bias and why does it matter?
AI gender bias is "when the AI treats people differently on the basis of their gender, because that's what it learned from the biased data it was trained on." As Zinnya del Villar points out, "These biases can limit opportunities and diversity, especially in areas like decision-making, hiring, loan approvals, and legal judgments."

2️⃣ What is the result of gender bias in AI applications?
❌ It can reinforce stereotypes, e.g., when voice assistants default to female voices, or when text-to-image generators gravitate toward men for executive roles and women for service positions.
❌ It can also lead to disparate impact, e.g., when medical products trained on biased data work better for men than for women, or when recruiting systems automatically filter out applications based on gender.

3️⃣ How can gender bias in AI applications be reduced?
Zinnya del Villar emphasizes that gender bias in AI applications must be tackled on multiple fronts:
✅ At the level of the developers: "AI systems should be created by diverse development teams made up of people from different genders, races, and cultural backgrounds. This helps bring different perspectives into the process and reduces blind spots that can lead to biased AI systems."
✅ At the level of the data: "This means actively selecting data that reflects different social backgrounds, cultures and roles, while removing historical biases, such as those that associate specific jobs or traits with one gender."

4️⃣ How can AI mitigate gender bias and drive better decisions?
As Zinnya del Villar points out, AI can surface and help evaluate the impact of gender bias. It can also help assess the gender impact of laws and propose relevant reforms.

5️⃣ How can AI improve women's safety and stop digital abuse?
The article lists several #AI applications that were developed specifically with women's safety in mind, e.g., chatbots that provide anonymous support for victims of sexual abuse or AI-powered algorithms that limit the spread of non-consensual intimate images.

The interview concludes with five concrete suggestions for how to make AI more inclusive:
✅ Using diverse and representative training data
✅ Improving the transparency of algorithms in AI systems
✅ Making AI development and research teams more diverse and inclusive
✅ Adopting strong ethical frameworks for AI systems
✅ Integrating gender-responsive policies in developing AI systems

The full interview with Zinnya del Villar can be found here: https://bit.ly/4bybzHW
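The first suggestion, using diverse and representative training data, is often implemented by rebalancing a skewed dataset before training. A minimal sketch, where the record format and function name are hypothetical:

```python
import random

def balance_by_gender(records, key="gender", seed=0):
    """Downsample every gender group to the size of the smallest one,
    so that no group dominates the training data."""
    groups = {}
    for record in records:
        groups.setdefault(record[key], []).append(record)
    smallest = min(len(group) for group in groups.values())
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    balanced = []
    for group in groups.values():
        balanced.extend(rng.sample(group, smallest))
    return balanced

# Hypothetical skewed dataset: three male-labelled records, one female-labelled.
data = [{"gender": "m", "text": "resume A"},
        {"gender": "m", "text": "resume B"},
        {"gender": "m", "text": "resume C"},
        {"gender": "f", "text": "resume D"}]
balanced = balance_by_gender(data)  # one record per gender remains
```

Downsampling is only one option; upsampling, reweighting, or collecting more data from underrepresented groups are common alternatives when discarding records is too costly.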
-
💼 Organisations and Advisory Boards shaping the future of work must ensure AI strategies don't leave half the talent pool behind.

AI is redefining the workforce. However, without deliberate action, we risk reinforcing the same gender biases that have long plagued the tech industry and senior leadership positions. According to the Gender Parity in the Intelligent Age report:
📉 Women are more likely to be in jobs disrupted by GenAI
🔧 Women are less likely to benefit from AI augmentation
📊 Women are underrepresented in STEM and AI leadership roles
📈 However, the gap is narrowing, offering a window for positive change.

This aligns with an article I wrote for SmartCompany about how tech is still dominated by a 'bro culture' that drives talented women away. From biased hiring systems to a lack of inclusive leadership, the barriers are structural and persistent. 🧱 If we don't fix the foundations, AI risks automating exclusion rather than opportunity.

Advisory boards and executive teams have a powerful opportunity (and responsibility) to challenge this trajectory by:
👩💼 Embedding gender equity into AI design, policy and leadership
🤖 Using AI to elevate, not replace, undervalued roles traditionally held by women
🧠 Bringing inclusive thinking to strategic decision-making
🚀 Making gender parity part of every future-of-work conversation

AI should be a force for progress, not a mirror of our old mistakes. How is your organisation or advisory board actively driving gender-inclusive innovation?
-
Higher education is increasingly adopting AI-driven tools: chatbots for applicant queries, systems for verifying qualifications, and algorithms for shortlisting candidates. While these technologies promise efficiency, they also carry the risk of embedding and obscuring systemic biases. Misogyny, racism, and classism are already embedded in our systems; add AI to the mix, and those biases don't disappear, they get scaled.

In her latest book, The New Age of Sexism, Laura Bates exposes how emerging technologies, including AI, are reinforcing and amplifying existing gender inequalities. She delves into how AI systems, often trained on biased data, can perpetuate harmful stereotypes and discrimination against women and marginalised groups.

Consider these scenarios:
- A chatbot providing less comprehensive information to female or international applicants, reflecting historical underrepresentation in training data.
- A document verification system disproportionately flagging certificates from certain countries as suspicious.
- An admissions algorithm favouring candidates from traditionally privileged backgrounds (by something as simple as giving primacy to A-levels), inadvertently penalising those with non-linear educational paths.

These are not hypothetical concerns. For instance:
- The UK government's AI system for detecting welfare fraud was found to exhibit bias against individuals based on age, disability, marital status, and nationality.
- Research by Joy Buolamwini revealed that facial recognition systems have higher error rates for darker-skinned women, highlighting the intersection of racial and gender biases in AI technologies.

If your institution is integrating AI into recruitment or admissions, it's crucial to ask:
- Who developed and trained (and continues to train) the model?
- What data was used, and does it reflect diverse populations?
- How are biases identified and mitigated?
- Who is accountable for the decisions made by these systems?
Automation doesn't eliminate bias; it often conceals it behind a façade of objectivity. We must critically assess and address the implications of AI in our institutions to ensure equity and fairness. As Laura Bates emphasises, it's not about fixing the individuals affected by these systems but about fixing the systems themselves. How is your institution approaching the integration of AI in a way that promotes inclusivity and mitigates bias? Or, has no one thought about it yet?
-
"Quantifying Gender Bias in Generative AI Outputs: A Comparative Analysis of Multiple Platforms and Languages" - this was the topic of my Quantitative Research Paper as part of my #EDBA studies.

In my recent research for my Executive Doctorate in Business Administration at École des Ponts Business School, I explored the pervasive gender bias present in generative AI outputs. This study involved testing three leading AI models—ChatGPT, Claude, and Gemini—using 10 carefully crafted prompts covering five thematic categories (Leadership, Science, Gendered Professions, Government, and Technology) across five languages, resulting in a total of 750 responses.

💡 Here are the key findings:
🚹 Masculine Bias: In gendered languages, over 75% of responses for leadership roles defaulted to masculine portrayals. For example, descriptions of CEOs and government officials overwhelmingly featured male characteristics.
🚺 Feminine Bias: Traditionally feminine roles like nursing exhibited a 100% feminine bias across all models and languages.
🌎 Language Impact: Gendered languages demonstrated stronger gender biases compared to English. Even in English, a relatively gender-neutral language, biases persisted.
💻 Comparative Analysis of AI Platforms: While all platforms showed some level of gender bias, there were notable differences:
· #Claude demonstrated more balanced results in some categories, showing a higher proportion of neutral responses compared to the other platforms.
· #ChatGPT showed strong biases in certain themes, particularly in leadership and government categories.
· #Gemini often exhibited the strongest biases among the three platforms, with the highest proportion of gendered responses across most themes.

Implications: These findings illustrate how generative AI not only reflects but potentially amplifies societal biases.
The absence of feminine representations in leadership and government roles is particularly concerning, as it reinforces outdated stereotypes and presents obstacles to women's advancement in these fields. This research serves as a stark reminder of the need for diverse voices in AI development. By addressing these biases head-on, we can work towards creating more equitable AI systems that better represent all individuals. #GenderBias #AIResearch #DiversityInTech #AIethics #EDBA Tamas David-Barrett Dr. Saman Sarbazvatan Ivo Haase WOMEN IN TECH ® Global Saniye Gülser Corat Chiara Corazza
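A study like this needs a way to label each generated response as masculine, feminine, or neutral. One simple approach is pronoun counting; the sketch below is an English-only illustration of the idea, not the paper's actual method:

```python
MASCULINE = {"he", "him", "his"}
FEMININE = {"she", "her", "hers"}

def classify_response(text):
    """Label a generated response by the gendered pronouns it uses.
    A crude English-only heuristic for illustration; a real study would
    also need gendered noun and adjective forms in gendered languages."""
    words = text.lower().split()
    m = sum(w.strip(".,!?") in MASCULINE for w in words)
    f = sum(w.strip(".,!?") in FEMININE for w in words)
    if m > f:
        return "masculine"
    if f > m:
        return "feminine"
    return "neutral"

labels = [classify_response(t) for t in [
    "He led the company through his toughest year.",
    "She updated her patients' charts.",
    "The engineer fixed the bug.",
]]
```

Aggregating such labels per model, language, and theme is what yields comparisons like the platform breakdown above.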
-
📢 AI: A modern-day magician, or a sneaky sexist?

I've been noticing gender bias creeping into AI-generated content. Just recently, I was seeking a light-hearted joke to share with my mum friends. Asking Google Gemini for a "relatable" joke for mums, I was met with a classic "dishwashing" punchline. When I requested a less sexist option, the AI swapped one chore for another. 🙅♀️

Another example happened while planning my son's 4th birthday party. I wanted to create superhero names and powers for the kids (18 of them!), so I turned to AI for inspiration. While the names were creative, the superpowers assigned were starkly divided along gender lines. Boys were bestowed with strength, bravery, and speed, while girls were given powers of kindness, empathy, and beauty. 💥

These are just two examples, but they highlight a larger issue. AI learns from the data we feed it, and if that data is biased, the AI will be too. We need to be critical consumers of AI-generated content and challenge these stereotypes.

This could have profound implications for careers. If these tools are biased, they can perpetuate inequality by excluding qualified candidates from certain backgrounds. For example, AI-powered screening tools might overlook applications with traditionally female-coded names or experiences, leading to a less diverse talent pool. If AI consistently presents limited role models or reinforces gender stereotypes in career guidance, it can restrict opportunities for individuals to explore their full potential.

Will this stop me using Gen AI? No, I used it to help me polish this post of course(!), and overall I think it's a wonderful tool. But you do have to treat it as a tool: maintain your very own critical thinking and human judgment. What are your thoughts? Have you encountered similar biases in AI?

#AI #genderbias #equality #careertech #legaltech
-
For tech companies, product developers, and executives: we have to ask ourselves, are we truly considering the societal impact of the tools we build and deploy? Reading Safiya Umoja Noble's "Algorithms of Oppression" has really opened my eyes to how the algorithms we create are far from neutral. They don't just reflect existing social values; they actively reinforce and deepen them. What we see online is deeply shaped by historical inequalities, commercial interests, and design choices that often overlook or exclude marginalized voices. Remember the UN Women campaign from 2013? They showed real Google autocomplete results where typing "women should" brought up suggestions like "women should stay at home" or "women shouldn't have rights." These weren't edited; they were genuine reflections of what millions of people were searching. As someone who spends a lot of time online and advocates for gender justice, I constantly see how these systems influence what stories gain visibility, what content gets suppressed, and who is targeted. The algorithm is much more than just lines of code. It represents power, it drives profit, and it profoundly shapes how we experience the world around us. It is not enough to fight sexism only in the offline world. We must also confront how it is being programmed into the very digital tools we use every single day. This is a critical conversation for all of us in the tech space. 📚 Reading: Algorithms of Oppression by Safiya Umoja Noble 📷 Campaign: UN Women, 2013 #ai #tech #feminism #safety
-
In today's tech-savvy world, businesses are increasingly turning to artificial intelligence to streamline their recruitment processes. AI promises efficiency and objectivity, but it's crucial to acknowledge and address the gender bias that can inadvertently creep into these systems.

While AI-powered recruiting tools can enhance efficiency by sifting through vast pools of applicants, they can also unintentionally reinforce existing gender disparities. This issue arises from the historical data that AI models are trained on, which may contain gender biases present in past hiring decisions. As a result, AI algorithms can inadvertently perpetuate stereotypes and discriminatory practices.

One of the primary challenges is that algorithms pick up subtle linguistic cues as proxies for gender. For instance, in 2018 it was reported that an experimental AI recruiting tool (reportedly developed at Amazon) penalised résumés containing the word "women's" (as in "women's chess club"), while comparable phrases like "men's" (as in "men's soccer team") carried no such penalty. This example highlights the importance of scrutinizing the training data to mitigate gender biases.

To combat gender discrimination in AI-driven recruiting, organizations must take proactive steps:
1. Diverse Data Sets: Ensure that AI algorithms are trained on diverse and balanced data sets to minimize bias.
2. Continuous Monitoring: Regularly review the algorithm's output for signs of bias and fine-tune it accordingly.
3. Transparency and Accountability: Promote transparency in the recruiting process by clearly communicating the role of AI and ensuring that humans have oversight.
4. Ethical AI Training: Invest in training AI models to recognize and eliminate gender bias in all its forms.
5. Diverse Hiring Panels: Encourage diverse hiring panels to evaluate candidates, providing different perspectives in the decision-making process.
6. Feedback Loops: Establish feedback mechanisms for candidates to report any perceived biases during the application process.

By taking these measures, companies can harness the power of AI while fostering an inclusive and diverse workplace. While AI may not be perfect, it has the potential to transform the recruiting landscape by reducing bias, as long as we remain vigilant and proactive in addressing these issues.

The integration of AI in corporate recruiting holds immense promise, but it also carries the risk of perpetuating gender biases. Companies must be committed to rooting out these biases and creating a fair and inclusive hiring environment. By doing so, they can harness the true potential of AI-driven recruiting and contribute to a more equitable workforce.

Henry C ©️
#AIRecruiting #GenderBias #DiversityandInclusion #EthicalAI #CorporateRecruiting #InclusiveHiring #EqualOpportunity #AIinHR
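The continuous-monitoring step can include paired probes like the one that exposed the 2018 résumé case: score two texts that differ only in a gendered phrase and flag any gap. A toy sketch, where the scoring function is a deliberately biased stand-in rather than a real model:

```python
def paired_probe(score_fn, template, variants_a, variants_b):
    """Score paired texts that differ only in one phrase and return the
    score gaps; a consistent nonzero gap suggests the model keys on gender."""
    return [score_fn(template.format(a)) - score_fn(template.format(b))
            for a, b in zip(variants_a, variants_b)]

# Toy scorer that (badly) penalises the word "women's" -- the kind of
# learned behaviour the 2018 case exposed. Not a real model.
def toy_score(resume_text):
    return 1.0 - 0.3 * ("women's" in resume_text)

gaps = paired_probe(
    toy_score,
    "Captain of the {} chess club; 5 years of Python experience.",
    ["men's"],
    ["women's"],
)
# A positive gap flags the gendered phrase as influencing the score.
```

In practice, the probe would run over many templates and phrase pairs, and the gaps would feed the monitoring dashboard rather than a one-off check.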
-
My kids asked me today, "Why is there an International Women's Day?" Well, it's all about raising awareness of ongoing inequalities!

The graphic below is from the paper "A Gender Perspective on Artificial Intelligence and Jobs: The Vicious Cycle of Digital Inequality" by Estrella Gómez-Herrera (University of the Balearic Islands) and Sabine T. Köszegi (Technische Universität Wien), published by Bruegel, Aug. 2020. The paper focuses on the impact of AI on labour markets, highlighting how gender stereotypes and gendered work segregation, on the one hand, and digitalization and automation, on the other, are entangled and result in a vicious cycle of digital inequality (see graphic below).

Digital gender inequality stems from societal stereotypes, leading to fewer women in STEM and ICT; this is worsened by workplace challenges such as retaining female talent, due to issues in representation, remuneration, and promotion within technology fields. Consequently, AI systems are often developed by predominantly male teams, potentially overlooking the diverse needs of users and reinforcing gender stereotypes, embedding discriminatory practices and gender biases into AI systems.

[See also:
- "When Good Algorithms Go Sexist: Why and How to Advance AI Gender Equity", by Genevieve Smith & Ishita Rustagi, 2021: https://lnkd.in/ggU-Vr9c
- Berkeley Haas Center for Equity, Gender & Leadership, Bias in AI: Examples Tracker, https://lnkd.in/gPVgu4E5]

The paper also shows how AI may worsen gender disparities in the workforce. For example, to measure the impact of AI on occupations and gender-based risks, the paper uses the Routine Task Intensity (RTI) framework, which shows that women are more likely than men to perform routine or codifiable tasks across all sectors and occupations, putting them at a higher risk of job displacement due to automation.
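For reference, the Routine Task Intensity measure is conventionally defined in the Autor and Dorn formulation; this is the standard definition, stated here as an assumption, since the Bruegel paper's exact operationalisation may differ in detail:

RTI_k = ln(T_R,k) − ln(T_A,k) − ln(T_M,k)

where T_R,k, T_A,k, and T_M,k are the routine, abstract, and manual task inputs of occupation k. A higher RTI means more routine-intensive work, and therefore higher exposure to automation; the gendered finding above follows from women being concentrated in high-RTI occupations.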
The paper concludes with key policy recommendations: - Addressing gender stereotypes and inequalities in society by revising communication practices, educational content, and professional environments. - Increasing exposure of women/girls to digital technologies, integrating ICT into compulsory education curricula, and providing incentives for STEM participation. - Combating occupational segregation by increasing the number of women in the AI workforce, enhancing transparency in recruitment, promoting visibility for women in AI, and addressing the distribution of unpaid childcare and housework. - Addressing inequalities in technology access and reproduction by conducting algorithmic audits to identify sources of gender bias, examining the gendering of digital assistants, and investing in closing the digital skills gender gap. Link to full report: https://lnkd.in/gbQmQrbd