"This report developed by UNESCO and in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem. The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, currently, women reportedly make up only 29% of researchers in the field of science and development (R&D),1 while this drops to 12% in specific AI research positions.2 Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces.3 Moreover, only 30% of professionals in the AI sector are women,4 and the gender gap increases further in leadership roles, with only 18% of in C-Suite positions at AI startups being held by women.5 Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities. The study also reveals a concerning lack of genderdisaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. 
It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask larger disparities in specific fields such as AI. For example, there is a reported 44% gender gap in software development roles,6 in contrast to a 15% gap in general ICT professions.7

Furthermore, the report identifies significant risks for women arising from bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly on female voices and faces, increasing the risk of exclusion and discrimination in access to services and technologies. Women are also disproportionately likely to be victims of AI-enabled online harassment.

The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (related to race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."
Algorithms shaping gender realities
Summary
Algorithms-shaping-gender-realities refers to how automated systems and artificial intelligence can reinforce, amplify, or challenge gender biases in areas such as hiring, social media, and access to opportunities. These algorithms quietly influence which voices are heard and whose expertise is seen, often resulting in persistent inequalities for women and marginalized genders across digital platforms and workplaces.
- Question algorithm impact: Regularly ask who might be overlooked or unfairly represented by the systems and tools your organization uses.
- Advocate for transparency: Push for clear explanations of how algorithms work and support efforts to make their decision-making processes more open and fair.
- Support inclusive design: Encourage and invest in diverse teams to build and monitor AI solutions that consider a wide range of gender perspectives.
-
I work in AI. And I worry it’s the biggest risk to women in history. I'm not talking about AI writing assistants. I mean the systems quietly embedding some of the most harmful ideas about women—often without anyone noticing. After hearing Laura Bates speak about her new book today, 'The New Age of Sexism: How the AI Revolution is Reinventing Misogyny', I left the room equally enraged and shaken. But I'm not going to look away. I’m in my 40s. I’ve spent over two decades in male-dominated industries. Like many women, I’ve experienced everyday sexism (and less everyday)—often so normalised I didn’t realise it at the time. Sexism is scaling faster and deeper than ever, but it's covert. Because this time, the bias isn’t only cultural—it’s algorithmic, scalable, and invisible—until it’s not. And what's most scary is that I work in AI and I didn't realise the scale—so what chance does everyone else have? Here’s what we’re already seeing:
– AI girlfriend and chatbot apps—downloaded by hundreds of millions—encourage submissive behaviour and compliance by design. What does this mean for behaviour in the real world?
– Images of real women used to create synthetic avatars or s** robots
– New profiles of teenage boys on social platforms shown extreme misogynistic content within minutes of joining
– AI hiring tools filtering out CVs for words like “netball”—even when anonymised—because they don’t match male-coded patterns of success
This is happening now. It’s shaping how women are seen, heard, and valued—online, at work, and in life—by hundreds of millions of people today. How many tomorrow? As someone who advises on AI in the workplace, I know this tech has enormous potential to meet many of our challenges today. It can surface hidden patterns of discrimination. It can improve access to credit, jobs, and healthcare. It can actually support inclusion—if we build it with care.
But without ethical guardrails, it will replicate and accelerate the very inequalities we should be solving—which it is doing right now. Yes, we need international regulation to hold companies profiting from misogynistic systems accountable. Yes, we need more women in AI (80% of AI firms are male-led), and we need women-led businesses to be invested in to the same level as men's. That's going to take a while. Here's what I'm going to do:
✅ Challenge myself, my team and my clients to ask: Who might this tool overlook? Who is it really serving?
✅ Push for transparency, fairness, and safety by design
✅ Support more women to shape, lead, and fund the future of AI
✅ Do everything I can to lobby for regulation
To the men reading this: you are a huge part of the solution. Boys are being shaped by algorithms—but they listen more to men they know and trust. 📚 Have you read 'The New Age of Sexism' by Laura Bates? What did you think? https://lnkd.in/eTYMZ6MM Thank you Laura.
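The "netball" example above describes a concrete mechanism: a screening model trained on historically male-skewed hiring data can learn to penalise female-coded words even after names are removed, because those words act as proxies for gender. A minimal Python sketch of that mechanism, with entirely invented weights (no real system's parameters are shown here):

```python
# Toy weights a screening model might learn from biased historical hiring
# data. All values are invented for illustration: past hires skewed male,
# so male-coded extracurriculars pick up positive weight and female-coded
# ones negative weight, even though neither is job-relevant.
LEARNED_WEIGHTS = {
    "python": 1.0,        # genuinely job-relevant skill
    "leadership": 0.9,    # genuinely job-relevant skill
    "rugby": 0.8,         # male-coded proxy, overrepresented among past hires
    "netball": -0.7,      # female-coded proxy, underrepresented among past hires
}

def score_cv(tokens):
    """Sum learned weights over CV tokens; unseen tokens contribute 0."""
    return sum(LEARNED_WEIGHTS.get(token, 0.0) for token in tokens)

# Two anonymised CVs with identical job-relevant skills:
cv_a = ["python", "leadership", "rugby"]
cv_b = ["python", "leadership", "netball"]
# score_cv(cv_a) outranks score_cv(cv_b) purely because of the sport mentioned.
```

Removing names does nothing here: the gender signal rides on proxy tokens, which is why auditing what a model has learned matters more than redacting the input.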
-
The blatant bias of AI resume-screening tools against women and people of color shouldn’t be surprising. What’s disturbing is the world’s collective shrug. If these algorithms favored Black men or Latina women over white men, we’d see headlines everywhere, people in the streets, and big tech CEOs in a frenzy trying to “fix” the problem. But since the bias here is against Black men and women, it’s treated as a niche issue, hardly newsworthy—just another consequence of tech’s “imperfections.” It’s hard not to see this as an indictment of who we actually value in this society. Consider the fallout if an AI system screened out white men from executive roles. Imagine Elon Musk or other tech giants watching this play out in their own hiring processes—do we really think they’d sit quietly on the sidelines? Not a chance. They’d be up in arms, rallying everyone to overhaul the system and ensure no one from their demographic is left behind. Yet here we are with AI systematically weeding out Black men and women from top-tier jobs, and the reaction? Silence. Some polite “concerns,” maybe a nod to “ongoing research,” but no serious action. And let’s talk about the tech companies' responses: Salesforce and Contextual AI both emphasized that their models weren’t “intended” for resume screening. But the fact is, this technology is out there, and if it’s being used in ways that systematically erase opportunities for minorities and women, hiding behind disclaimers isn’t good enough. If these tools were inadvertently disadvantaging white men, would “it wasn’t intended for this” be an acceptable response? Doubtful. The excuses and deflections are telling—it seems no one’s really interested in taking accountability unless it impacts those at the top of the societal food chain. There’s no reason why a pre-process that pseudo-anonymizes names and genders couldn’t be easily applied prior to processing these resumes. This isn’t just about hiring; it’s about power. 
AI is shaping our future, deciding who gets jobs, loans, housing, and more. It reflects the values of those who build it, and the lack of urgency to address these biases is painfully clear evidence of who counts—and who doesn’t. It’s time to demand more than hand-wringing and weak assurances. Let’s call this what it is: a deliberate disregard for fairness because the people affected are not those with enough power or influence to demand change. Until we start holding AI creators and companies to the same standards for fairness and equity that we claim to care about, this problem isn’t going anywhere. https://lnkd.in/ecyxecHT
-
⭐ Today we unveil key insights from our upcoming Outlook Study on #AI and #Gender. Our findings reveal that current AI #policyframeworks often overlook gender considerations. Notably, the Global Index on Responsible AI (GIRAI) indicates that gender equality is one of the lowest-scoring areas in government frameworks. Out of 138 countries assessed, only 24 mention gender in AI, and a mere 18 address it significantly. The study also highlights the specific #risks women face from biased AI systems, such as recruitment #algorithms favoring male candidates. According to MIT’s AI Risk Repository, issues affecting women are predominantly categorized under Discrimination and Toxicity, emphasizing biases that lead to stereotyping and marginalization. Moreover, the lack of gender-disaggregated data hampers our ability to assess the effectiveness of interventions. Our study combines in-depth analysis, real-world examples, and actionable policy recommendations to expose how biases disproportionately affect women, revealing systemic barriers to gender equality in AI. Through the implementation of the Recommendation on the #Ethics of AI and the W4EAI network, UNESCO is committed to driving meaningful change. To foster a more inclusive AI landscape, we must confront these challenges head-on and advocate for diversity and equitable outcomes for all. Let's work together to create a better future!
-
AI technologies are scaling the exploitation of women’s images, monetising stolen content, and erasing the origins of these photos and their creators. This issue extends beyond deepfakes; AI models are trained on images of real women, frequently without their knowledge or consent. Contests like Miss AI are profiting from generating "AI contestants" based on the images of very real people. This is a critical moment to reflect on how we regulate and develop AI systems to ensure they are ethical and responsible. Consent, transparency, and accountability must be core principles in the development of AI tools. An important story by Jason Koebler and Emanuel Maiberg highlights this issue in WIRED https://lnkd.in/eKqfU2Um
-
AI systems built without women's voices miss half the world and actively distort reality for everyone. On International Women's Day - and every day - this truth demands our attention. After more than two decades working at the intersection of technological innovation and human rights, I've observed a consistent pattern: systems designed without inclusive input inevitably encode the inequalities of the world we have today, incorporating biases in data, algorithms, and even policy. Building technology that works requires our shared participation as the foundation of effective innovation. The data is sobering: women represent only 30% of the AI workforce and a mere 12% of AI research and development positions according to UNESCO's Gender and AI Outlook. This absence shapes the technology itself. And a UNESCO study on Large Language Models (LLMs) found persistent gender biases - where female names were disproportionately linked to domestic roles, while male names were associated with leadership and executive careers. UNESCO's @women4EthicalAI initiative, led by the visionary and inspiring Gabriela Ramos and Dr. Alessandra Sala, is fighting this pattern by developing frameworks for non-discriminatory AI and pushing for gender equity in technology leadership. Their work extends the UNESCO Recommendation on the Ethics of AI, a powerful global standard centering human rights in AI governance. Today's decision is whether AI will transform our world into one that replicates today's inequities or helps us build something better. Examine your AI teams and processes today. Where are the gaps in representation affecting your outcomes? Document these blind spots, set measurable inclusion targets, and build accountability systems that outlast good intentions. The technology we create reflects who creates it - and gives us a path to a better world. #InternationalWomensDay #AI #GenderBias #EthicalAI #WomenInAI #UNESCO #ArtificialIntelligence The Patrick J. 
McGovern Foundation, Mariagrazia Squicciarini, Miriam Vogel, Vivian Schiller, Karen Gill, Mary Rodriguez, MBA, Erika Quada, Mathilde Barge, Gwen Hotaling, Yolanda Botti-Lodovico
-
AI is Shaping the Future—Will Women Be Left Behind? I just returned from the Philippines, where 51% of Coursera learners are women—one of the only countries where women outnumber men on our platform. An inspiring milestone! But here’s the challenge: 📉 Globally, women are being left behind in AI. A Harvard Business School study found that women are:
- Less likely to learn AI
- Less likely to use AI tools
- Less likely to build AI technologies
If we don’t act now, GenAI will widen the gender gap instead of closing it. That’s why we partnered with AI experts like Jules White, Merve Hickok, and Barbara Oakley to create a new playbook: “Closing the Gender Gap in GenAI Skills.”
- Women struggle to find time for AI learning → Flexible, AI-powered learning increases completion rates.
- Women don’t see AI’s relevance → Real-world applications (healthcare, business, education) drive engagement.
- Women lack confidence to advance → Mentorship and structured career paths are key.
Download the playbook to access the full insights and solutions: 🔗 Closing the Gender Gap in GenAI Skills: https://lnkd.in/gQ4_j8sU 📸 (Posting this alongside a photo of the incredible women I work with—because representation matters!) How can we make AI learning more inclusive? Let’s discuss in the comments. #GenAI #WomenInAI #DiversityInTech #FutureOfWork
-
🚀 𝐍𝐞𝐰 𝐑𝐞𝐩𝐨𝐫𝐭: 𝐖𝐡𝐲 𝐆𝐞𝐧𝐝𝐞𝐫 𝐃𝐚𝐭𝐚 𝐢𝐬 𝐄𝐬𝐬𝐞𝐧𝐭𝐢𝐚𝐥 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐅𝐮𝐭𝐮𝐫𝐞 𝐨𝐟 𝐀𝐈! 🤖📊 AI is shaping our world—but without gender-sensitive data, it risks perpetuating bias and deepening inequalities. The latest Friedrich-Ebert-Stiftung report, "𝐆𝐞𝐧𝐝𝐞𝐫 𝐃𝐚𝐭𝐚: 𝐖𝐡𝐚𝐭 𝐢𝐬 𝐢𝐭 𝐚𝐧𝐝 𝐰𝐡𝐲 𝐢𝐬 𝐢𝐭 𝐢𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐟𝐮𝐭𝐮𝐫𝐞 𝐨𝐟 𝐀𝐈 𝐒𝐲𝐬𝐭𝐞𝐦𝐬?", by Payal Arora & Weijie Huang, explores the critical role of gender data in ensuring fair, ethical, and inclusive AI development. 𝐖𝐡𝐲 𝐝𝐨𝐞𝐬 𝐭𝐡𝐢𝐬 𝐦𝐚𝐭𝐭𝐞𝐫?
📌 AI systems rely on data for decision-making, but biased datasets reinforce stereotypes and exclude diverse experiences.
📌 Gender data is key to addressing inequalities in healthcare, employment, and education, where AI-driven decisions can disproportionately disadvantage women and marginalized groups.
📌 Barriers to gender data collection—including binary norms, lack of transparency, and structural discrimination—must be dismantled to create equitable AI systems.
📢 𝐖𝐡𝐚𝐭 𝐜𝐚𝐧 𝐛𝐞 𝐝𝐨𝐧𝐞? 🔎 The report calls for algorithmic transparency, inclusive data frameworks, cross-sectoral collaboration, and feminist ethics to ensure AI systems are designed with fairness and equity in mind. If we want AI to serve all of society, we must start with inclusive and representative data. This report is a must-read for anyone working at the intersection of gender, AI, and social justice! 📖 You can download the paper from the post. #GenderData #AIandInclusion #EthicalAI #SocialJustice #DataForEquality #ArtificialIntelligence #FeministAI #InclusiveTech