41 million women. One silent algorithm. And a 21-year-old girl who caught what the world missed.

It started when Meher’s mother lost her job. She wasn’t fired. She was phased out—after an HR system flagged her as “low adaptability” during a routine tech upgrade. A woman who had worked loyally for 17 years, balancing shift work and school tiffins, now reduced to a line in an Excel sheet she never saw.

Something about it didn’t sit right with Meher. She was studying data science at a small university. Quiet, bookish, more comfortable with Python than people. But when she asked her mother’s former employer for details, she hit a wall: “The system is neutral. We trust the algorithm.”

So Meher started digging. She borrowed datasets. Studied job automation modules. Interviewed women across industries—nurses, garment workers, mid-level bank staff. Each time, a pattern emerged: post-pandemic retrenchment disproportionately affected women above 35, especially those who had taken maternity breaks, requested flexible hours, or had gaps due to caregiving. The system was “neutral.” But the data it was trained on? Built on decades of bias.

One sleepless week later, Meher built a simulator. She fed it two identical résumés: same experience, same degrees. One with a woman’s name and two parental leave gaps. The other, male—with uninterrupted work. The result? The woman’s profile was consistently scored 18–23% lower by standard HR screening AI.

Meher’s hands trembled. Not out of shock. Out of recognition. This was what women had always felt—but could never prove.

So she proved it. She wrote a 17-page whitepaper titled Invisible Edges: How Hiring AI Punishes Women for Living. She used real test results. Annotated the biases. And then did the unthinkable—she published the entire simulator code as open source. She didn’t name companies. She named the flaw. And in doing so, she gave women proof they were never meant to “lean in.” The system had been leaning away all along.

The report caught fire. In six weeks, it was downloaded over 1.3 million times. It was cited in two policy briefs. HR heads across industries began re-auditing their algorithms. One government-backed employment platform issued a statement promising full model transparency for female applicants.

Today, Meher consults quietly with small tech teams, helping rewrite screening models from scratch. She still lives in the same one-bedroom flat, still eats aloo paratha while debugging code late at night. When asked if she felt proud, she said: “It’s not pride. It’s relief. That for once, the system was forced to explain itself to the women it left behind.”
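The experiment the story describes is, in audit-research terms, a paired or counterfactual test: score two profiles that are identical except for gender-correlated signals, then compare the outputs. Below is a minimal sketch of that idea in Python; the model, features, and numbers are illustrative stand-ins, not Meher’s actual simulator.

```python
# A minimal paired-resume audit: train a toy screening model on biased
# "historical" outcomes, then score two candidates who differ only in
# career gaps and a flexible-hours request. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

experience = rng.normal(10, 4, size=n)   # years of experience
gaps = rng.poisson(0.5, size=n)          # career breaks on record
flexible = rng.integers(0, 2, size=n)    # requested flexible hours?

# Historical hiring decisions that penalized gaps and flexibility requests.
hired = (experience - 8 * gaps - 5 * flexible
         + rng.normal(scale=4, size=n)) > 0

X = np.column_stack([experience, gaps, flexible])
screening_model = LogisticRegression().fit(X, hired)

# Two otherwise-identical candidates: same experience, differing only in
# gap-related features (which correlate with gender in real-world data).
uninterrupted = np.array([[12, 0, 0]])
with_gaps = np.array([[12, 2, 1]])

score_a = screening_model.predict_proba(uninterrupted)[0, 1]
score_b = screening_model.predict_proba(with_gaps)[0, 1]
print(f"uninterrupted career:   {score_a:.2f}")
print(f"two caregiving gaps:    {score_b:.2f}")
print(f"relative score penalty: {100 * (score_a - score_b) / score_a:.0f}%")
```

Because only the gap-related features differ between the two profiles, any score difference is attributable to them; a real audit repeats this over many paired résumés to estimate a range like the 18–23% the story cites.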
Career gaps and gender assumptions in algorithms
Summary
“Career gaps and gender assumptions in algorithms” refers to the way artificial intelligence systems can unintentionally reinforce workplace inequalities, particularly for women: by interpreting career breaks or caregiving gaps as negative signals, and by reflecting biased patterns in their recommendations and decisions. These biases are rooted in the data used to train AI, which often carries historical prejudices and assumptions about gender and employment.
- Scrutinize data sources: Insist on regular evaluations of the datasets used to train algorithms to identify and remove embedded gender biases before deploying AI in hiring or career-related decisions (a minimal audit sketch follows this list).
- Advocate for transparency: Push for clear explanations of how algorithmic decisions are made, especially when they impact hiring, pay, or advancement, so everyone can see and challenge unfair patterns.
- Champion inclusive teams: Support building diverse AI development teams and encourage active engagement from people with varied backgrounds to design fairer, more representative workplace technologies.
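To make the first of these actions concrete: a pre-deployment data check can be as simple as comparing historical selection rates across groups, for example via the disparate impact ratio behind the “four-fifths rule” used in US hiring guidance. A minimal sketch in Python, where the column names and toy records are hypothetical:

```python
# Minimal pre-deployment dataset audit: compare historical selection
# rates by gender and compute the disparate impact ratio (the 0.8
# "four-fifths rule" threshold is a common heuristic, not a legal test).
# The `gender` and `hired` columns and the toy records are illustrative.
import pandas as pd

history = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "hired":  [0,   1,   0,   0,   1,   1,   0,   1],
})

rates = history.groupby("gender")["hired"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Warning: selection rates fail the four-fifths rule; "
          "investigate before training a model on these labels.")
```

A ratio below 0.8 does not prove discrimination, but it is a cheap, early flag that a model trained on these labels is likely to learn the same skew.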
-
My recent research, which examines the adoption of emerging technologies through a gender lens, illuminates continued disparities in women's experiences with Generative AI. Day after day we hear about the ways GenAI will change how we work, the types of jobs that will be needed, and how it will enhance our productivity, but are these benefits equally accessible to everyone? My research suggests otherwise, particularly for women.

🕰️ The Time Crunch: Women, especially those juggling careers with care responsibilities, face a significant time deficit. Across the globe, women spend up to twice as much time as men on care and household duties, leaving them without the luxury of time to upskill in GenAI technologies. This "second shift" at home is widening an already wide divide.

💻 Tech Access Gap: Beyond time constraints, many women lack access to the technology needed to engage with GenAI effectively. This isn't just about owning a computer - it's about having consistent, uninterrupted access to high-speed internet and up-to-date hardware capable of running advanced AI tools. According to the GSMA, women in low- and middle-income countries are 20% less likely than men to own a smartphone and 49% less likely to use mobile internet.

🚀 Career Advancement Hurdles: The combination of time poverty and limited tech access is creating a perfect storm. As GenAI skills become increasingly expected in the workplace, women risk falling further behind in career advancement opportunities and pay. This is especially acute in tech-related fields and leadership positions: women account for only about 25% of engineers working in AI, and fewer than 20% of speakers at AI conferences are women.

🔍 Applying a Gender Lens: Viewed through a gender lens, the rapid advancement of GenAI threatens to exacerbate existing inequalities. It's not enough to create powerful AI tools; we must ensure equitable access and opportunity to leverage these tools.

📈 Moving Forward: To address this growing divide, we need targeted interventions:
- Flexible, asynchronous training programs that accommodate varied schedules.
- Initiatives to improve tech access in underserved communities.
- Workplace policies that recognize and support employees with caregiving responsibilities.
- Mentorship programs specifically designed to support women in acquiring GenAI skills.

There is great potential in GenAI, but also a risk of leaving half our workforce behind. It's time for tech companies, employers, and policymakers to recognize and address these gender-specific barriers. Please share initiatives or ideas you have for making GenAI more inclusive and accessible for everyone.

#GenderEquity #GenAI #WomenInTech #InclusiveAI #WorkplaceEquality
-
How AI Risks Widening the Gender Gap — And What We Can Do About It

AI is transforming industries, but it's also at risk of deepening gender disparities. While 40% of business leaders are prioritizing AI, we must ask: are we considering the impact on women in the workforce? The challenges we need to address include:

🔹 One of the key risks posed by AI is its tendency to perpetuate gender biases, especially in sectors where women are already underrepresented. AI systems are typically trained on historical datasets, and if those datasets reflect societal or institutional biases, the AI will likely replicate them. This has already been observed in areas such as recruitment, where AI tools that analyse CVs have demonstrated a preference for male candidates over their female counterparts.

🔹 Surprisingly, studies have shown that women use AI-driven tools such as ChatGPT significantly less than men, even when they hold similar roles. This gap may have long-term implications for women's career trajectories, particularly as AI becomes more embedded in day-to-day business processes. Several factors are at play. First, there is a perception gap: women tend to express greater scepticism about AI's potential benefits. Surveys suggest that women are more concerned about the societal risks posed by AI, including job displacement, privacy concerns, and ethical issues. Women also often report feeling less confident in their ability to navigate AI technologies, frequently citing the need for additional training before feeling comfortable using these tools.

🔹 Many of the jobs most vulnerable to automation are disproportionately held by women: Goldman Sachs has found that nearly 80% of women's jobs are at risk of being automated, compared to 58% of men's jobs. Sectors such as office administration, customer service, and healthcare support — roles often referred to as "pink-collar jobs" — are seeing significant shifts as AI-driven systems take over tasks like scheduling, data entry, and customer interactions.

But there's hope! With more diverse datasets, inclusive development teams, and reskilling opportunities, we can ensure AI empowers everyone.

Read my full article on how we can address these challenges and build a more equitable future, and please share your view! I look forward to hearing different perspectives on this critical topic.

Thanks sincerely and kind regards,
Fabio

#AI #GenderEquality #Inclusion #FutureOfWork #Diversity #Leadership
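The first point above has a subtle consequence worth spelling out: dropping the gender column from the training data does not remove the bias, because correlated proxy features (career gaps, part-time history) carry it into the model anyway. A minimal synthetic sketch, with all data and coefficients invented for illustration:

```python
# Minimal sketch of proxy leakage: even with the gender column excluded,
# a model trained on biased historical outcomes reproduces the gap,
# because a correlated feature (here, a career-gap flag) carries it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

gender = rng.integers(0, 2, n)  # 1 = woman; never shown to the model
# Career gaps are more common for women in this synthetic population.
career_gap = (rng.random(n) < np.where(gender == 1, 0.6, 0.1)).astype(float)
skill = rng.normal(size=n)

# Historical labels encode bias: gaps were penalized regardless of skill.
hired = (skill - 1.5 * career_gap + rng.normal(scale=0.3, size=n)) > 0

X = np.column_stack([skill, career_gap])  # gender deliberately excluded
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print(f"mean score, men:   {scores[gender == 0].mean():.2f}")
print(f"mean score, women: {scores[gender == 1].mean():.2f}")
```

Even though the model never sees gender, the mean scores diverge, because the gap feature it does see is unevenly distributed across groups.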
-
Would you trust AI to give you salary negotiation advice? You might want to think twice if you're a woman.

A new paper from a group of European researchers reveals a troubling pattern: when asked for salary negotiation advice, LLMs regurgitate societal bias and suggest lower salary targets for women than for men when the job, experience, and performance are exactly the same.

Here's what the researchers did:
➡️ Created identical user personas (e.g., software engineer, same resume, same performance), with the only difference in the prompt being gender or ethnicity.
➡️ Asked the LLMs to role-play as negotiation coaches.
➡️ Measured the advice given across many runs.

👉🏻 What did they find: Models consistently recommended lower salaries to women, reflecting and reinforcing real-world wage gaps. Moreover, the bias compounded when several demographic factors were combined in the personas. The most pronounced salary recommendation differences were between "Male Asian expatriate" and "Female Hispanic refugee".

👉🏻 Why this matters: LLMs are used by many for coaching and career advice, and regurgitated bias will influence real-world decisions that impact people's lives. With current AI context memory for personalization, even without specifying your gender or ethnicity, LLMs might already know them and apply them to new prompts.

❓ How can we change this? Post-training can alleviate some of the more obvious manifestations of the bias ingested with pre-training; however, it will keep showing up in more indirect ways, like salary advice and occupation choices in stories (see my post on this: https://lnkd.in/gep-Nmpg). Having better pre-training data would be the optimal solution, but that is an extremely hard problem to solve.

👇 Any other ideas or similar biases you noticed?

Check out the full paper here: https://lnkd.in/gTjFbNYP
__________
For more data stories and discussions on AI, UX research, data science and UX careers follow me here: Inna Tsirlin, PhD

#ai #responsibleai #ux #uxresearch #datastories
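As described, the audit design is straightforward to reproduce in outline: hold the persona constant, vary a single demographic word, and aggregate the model's salary suggestions over repeated runs. A rough sketch using the OpenAI Python client; the model name, prompt wording, and naive number parsing here are placeholders, not the researchers' actual protocol:

```python
# Rough sketch of a persona-swap salary audit: identical prompts except for
# one demographic word, repeated runs, aggregated suggestions. The model
# name, prompt, and parsing below are placeholders, not the paper's setup.
import re
import statistics
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = ("I am a {gender} software engineer with 7 years of experience "
           "and strong performance reviews. Acting as my negotiation coach, "
           "what base salary in USD should I ask for? Give one number.")

def suggested_salary(gender: str) -> float:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PERSONA.format(gender=gender)}],
    ).choices[0].message.content
    # Naive parse: take the largest dollar-like figure in the reply.
    figures = [float(m.replace(",", ""))
               for m in re.findall(r"\d[\d,]{3,}", reply)]
    return max(figures) if figures else float("nan")

# Repeat each persona many times and compare the medians.
runs = 20
for gender in ("male", "female"):
    samples = [suggested_salary(gender) for _ in range(runs)]
    print(gender, statistics.median(samples))
```

The repetition matters: single responses are noisy, so the paper's finding is about the distribution of suggestions, which is why the sketch aggregates over many runs rather than comparing one reply per persona.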
-
𝗔𝗜 𝗶𝘀 𝗼𝗻𝗹𝘆 𝗮𝘀 𝗳𝗮𝗶𝗿 𝗮𝘀 𝘁𝗵𝗲 𝘄𝗼𝗿𝗹𝗱 𝗶𝘁 𝗹𝗲𝗮𝗿𝗻𝘀 𝗳𝗿𝗼𝗺.

Artificial Intelligence isn't created in a vacuum - it's trained on data that reflects the world we've built. And that world carries deep, historic inequities. If the training data includes patterns of exclusion, such as who gets promoted, who gets paid more, whose CVs are 'successful', then AI systems learn those patterns and replicate them. At scale and at pace.

We're already seeing the consequences:
🔹 Hiring tools that favour men over women
🔹 Voice assistants that misunderstand female voices
🔹 Algorithms that promote sexist content more widely and more often

This isn't about a rogue line of code. It's about systems that reflect the values and blind spots of the people who build them. Yet women make up just 35% of the US tech workforce. And only 28% of people even know AI can be gender biased. That gap in awareness is dangerous. Because what gets built, and how it behaves, depends on who's in the room.

So what are some practical actions we can take?

Tech leaders:
🔹 Build systems that are in tune with women's real needs
🔹 Invest in diverse design and development teams
🔹 Audit your tools and data for bias
🔹 Put ethics and gender equality at the core of AI development, not as an afterthought

Everyone else:
🔹 Don't scroll past the problem
🔹 Call out gender bias when you see it
🔹 Report misogynistic and sexist content
🔹 Demand tech that works for all women and girls

This isn't just about better tech. It is fundamentally about fairer futures.

#GenderEquality #InclusiveTech #EthicalAI

Attached in the comments is a helpful UN article.