Data Quality for AI

Explore top LinkedIn content from expert professionals.

  • View profile for Vilas Dhar

    President, Patrick J. McGovern Foundation ($1.5B) | Global Authority on AI, Governance & Social Impact | Board Director | Shaping Leadership in the Digital Age

    55,529 followers

    AI systems built without women's voices miss half the world and actively distort reality for everyone. On International Women's Day - and every day - this truth demands our attention.

    After more than two decades working at the intersection of technological innovation and human rights, I've observed a consistent pattern: systems designed without inclusive input inevitably encode the inequalities of the world we have today, incorporating biases in data, algorithms, and even policy. Building technology that works requires our shared participation as the foundation of effective innovation.

    The data is sobering: women represent only 30% of the AI workforce and a mere 12% of AI research and development positions, according to UNESCO's Gender and AI Outlook. This absence shapes the technology itself. And a UNESCO study on Large Language Models (LLMs) found persistent gender biases - female names were disproportionately linked to domestic roles, while male names were associated with leadership and executive careers.

    UNESCO's @women4EthicalAI initiative, led by the visionary and inspiring Gabriela Ramos and Dr. Alessandra Sala, is fighting this pattern by developing frameworks for non-discriminatory AI and pushing for gender equity in technology leadership. Their work extends the UNESCO Recommendation on the Ethics of AI, a powerful global standard centering human rights in AI governance.

    Today's decision is whether AI will transform our world into one that replicates today's inequities or helps us build something better. Examine your AI teams and processes today. Where are the gaps in representation affecting your outcomes? Document these blind spots, set measurable inclusion targets, and build accountability systems that outlast good intentions. The technology we create reflects who creates it - and gives us a path to a better world.

    #InternationalWomensDay #AI #GenderBias #EthicalAI #WomenInAI #UNESCO #ArtificialIntelligence

    The Patrick J. McGovern Foundation Mariagrazia Squicciarini Miriam Vogel Vivian Schiller Karen Gill Mary Rodriguez, MBA Erika Quada Mathilde Barge Gwen Hotaling Yolanda Botti-Lodovico

  • View profile for Barr Moses

    Co-Founder & CEO at Monte Carlo

    61,072 followers

    You can't democratize what you can't trust.

    For months, the primary conceit of enterprise AI has been that it would create access. Data scientists could create pipelines like data engineers. Stakeholders could query the data like scientists. Everyone from the CEO to the intern could spin up dashboards and programs and customer comms in seconds.

    But is that actually a good thing? What if your greatest new superpower was actually your Achilles' heel in disguise?

    Data + AI trust is THE prerequisite for a safe and successful AI agent. If you can't trust the underlying data, system, code, and model responses that comprise the system, you can't trust the agent it's powering.

    For the last 12 months, executives have been pressuring their teams to adopt more comprehensive AI strategies. But before any organization can give free access to data and AI resources, it needs rigorous tooling and processes in place to protect data integrity end-to-end. That means leveraging automated and AI-enabled solutions to scale monitoring and resolution, and to measure adherence to standards and SLAs over time.

    AI-readiness is the first step to AI-adoption. You can't put the cart before the AI horse.
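A minimal sketch of the kind of automated monitoring the post calls for: freshness and volume checks evaluated against SLAs. The thresholds and the single `evaluate_table` entry point are illustrative assumptions, not any specific vendor's API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLA thresholds for one table; a real deployment would load
# these from config or learn them from historical metadata.
FRESHNESS_SLA = timedelta(hours=6)   # new data must land at least every 6 hours
MIN_EXPECTED_ROWS = 10_000           # assumed typical batch size

def evaluate_table(last_loaded_at: datetime, row_count: int) -> list[str]:
    """Return the list of SLA violations for alerting or escalation."""
    violations = []
    if datetime.now(timezone.utc) - last_loaded_at > FRESHNESS_SLA:
        violations.append("freshness SLA breached")
    if row_count < MIN_EXPECTED_ROWS:
        violations.append("volume below expected minimum")
    return violations

# Example: a batch that landed 8 hours ago with too few rows fails both checks.
stale = datetime.now(timezone.utc) - timedelta(hours=8)
print(evaluate_table(stale, row_count=4_200))
# -> ['freshness SLA breached', 'volume below expected minimum']
```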

  • View profile for Nico Orie

    VP People & Culture

    16,204 followers

    Removing gender data can worsen AI bias.

    In 2019, Apple Card was accused of discrimination against women. The company declined a woman's application for a credit line increase, even though her credit records were better than her husband's. Meanwhile, it granted her husband a credit line 20 times higher than hers. The NY State Department found no violations of fair lending, since Apple had not used gender data in the development of its algorithms. If Apple had fully adhered to anti-discrimination laws, what led to the paradoxical outcome?

    A recent research paper explains this paradox. The researchers found that anti-discrimination measures and laws, specifically those restricting the collection and use of sensitive data for ML models, can have the opposite of their intended effect.

    The researchers examined a data set from a global financial company. They found that, all things being equal, women are better borrowers than men, and individuals with more work experience are better borrowers than those with less. Thus, a woman with three years of work experience could be as creditworthy as a man with five years of experience. The data set also showed that women tend to have less work experience than men on average. In addition, the data set used to train the AI algorithms, comprising information on past borrowers, was about 80 percent men and 20 percent women on average globally.

    In the absence of gender data, the model treated individuals with the same number of years of experience equally. Since women represent a minority of past borrowers, it is unsurprising that the algorithm would predict the average person to behave like a man rather than a woman. Applicants with five years of experience would be granted credit, while those with three years or less would be denied, regardless of gender. This not only increased discrimination but also hurt profitability: women with three years of work experience would have been creditworthy enough and should have been issued loans, had the algorithm used gender data to differentiate between women and men.

    The researchers compared outcomes in jurisdictions like Singapore, where gender data can be included, and the EU, where the collection of gender data is allowed but not its use in the final model. The researchers also looked at a methodology that creates a secondary model to predict the gender of an applicant. This approach increased accuracy to 91% and reduced gender discrimination by almost 70 percent (as well as increasing profitability by 0.15 percent).

    This research shows again how important it is for companies to understand the deeper workings of ML algorithms and their linkage to the underlying (training) data.

    Source: https://lnkd.in/daZkrC_x
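The paradox the post describes can be reproduced in a few lines: train one model with the gender column dropped and one with it included, on data where the majority of past borrowers are men and where, by construction, women repay as well with about two fewer years of experience. All numbers, feature names, and the ground-truth rule below are illustrative assumptions, not figures from the cited paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Illustrative population: 80% men, 20% women, echoing the post's example.
is_woman = rng.random(n) < 0.20
experience = rng.normal(loc=np.where(is_woman, 4.0, 6.0), scale=2.0)

# Assumed ground truth: women repay as well with ~2 fewer years of experience.
repays = (experience + 2.0 * is_woman + rng.normal(0.0, 1.0, n)) > 6.0

# "Blind" model: gender dropped, experience is the only feature.
blind = LogisticRegression().fit(experience.reshape(-1, 1), repays)

# "Aware" model: gender included as a feature.
aware = LogisticRegression().fit(np.column_stack([experience, is_woman]), repays)

# Score a woman with 4.5 years of experience under both models.
print("blind approval prob:", blind.predict_proba([[4.5]])[0, 1])
print("aware approval prob:", aware.predict_proba([[4.5, 1.0]])[0, 1])
# The blind model scores her like the average (mostly male) applicant and
# underestimates her creditworthiness; the aware model does not.
```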

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,218 followers

    "This report developed by UNESCO and in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem. The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, currently, women reportedly make up only 29% of researchers in the field of science and development (R&D),1 while this drops to 12% in specific AI research positions.2 Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces.3 Moreover, only 30% of professionals in the AI sector are women,4 and the gender gap increases further in leadership roles, with only 18% of in C-Suite positions at AI startups being held by women.5 Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities. The study also reveals a concerning lack of genderdisaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles,6 in contrast to a 15% gap in general ICT professions.7 Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly when dealing with female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (such as race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."

  • View profile for Edward Frank Morris

    I build AI frameworks, lead strategy, and teach AI to anyone from Fortune 500s to universities. My face has been on NASDAQ, FT, and Forbes. My jokes have not. Yet.

    33,843 followers

    A few weeks ago, I asked ChatGPT to help me write a professional bio.

    What did it give me?

    "Hello. My name is Edward. I enjoy work. Also, computers."

    Utter nonsense. Not because the AI is broken. But because I asked it like I was ordering a pint in a power cut.

    Yes, AI is clever. But more importantly? It's obedient. Usually. You get rubbish in → you get rubbish out. So if you want useful results, you need to master the underrated art of Prompt Engineering. It's not technical. It's just how to talk to robots so they don't embarrass you.

    Here's your no-fluff starter pack:

    1) Be specific
    ↳ "Write something" = confusion
    ↳ "Write a 3-line cold email to a CFO who hates fluff" = clarity

    2) Give context
    ↳ AI needs background, not guesswork
    ↳ Who's it for? What's the goal? What do you do?

    3) Show examples
    ↳ "Make it like this" is miles better than "Make it good"
    ↳ Even machines need mentors

    4) Set the format
    ↳ Want bullet points? A snappy tweet? A poem?
    ↳ Say it. Don't assume.

    5) Use frameworks (see the sketch after this post)
    ↳ Try: Role + Task + Context + Format + Tone
    ↳ "You're a hiring manager. Write a LinkedIn post for interns. Make it short, warm, and slightly smug."

    6) Define the tone
    ↳ Friendly? Formal? Passive-aggressive manager email?
    ↳ Pick it. Name it.

    7) Ask follow-ups
    ↳ First drafts are just that: first
    ↳ Don't settle. Steer it.

    8) Tweak mercilessly
    ↳ Change one word. Watch it change everything.
    ↳ Prompting is editing in disguise.

    Bottom line: If AI is giving you rubbish, it's not broken. It's just following orders. So give better ones.

    📣 Tag someone still typing "write me content" like it's 1997. And if you're brave enough… Challenge them to a prompt-off. Loser writes their own emails this week.

    Follow Edward Frank Morris for more posts like these.
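A minimal sketch of the Role + Task + Context + Format + Tone framework from point 5, assembled in Python. The helper name and the example values are hypothetical; the point is only that every component gets stated explicitly rather than left for the model to guess.

```python
def build_prompt(role: str, task: str, context: str, fmt: str, tone: str) -> str:
    """Assemble the five framework components into a single prompt."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {fmt}\n"
        f"Tone: {tone}"
    )

# Hypothetical example values, following point 5's hiring-manager scenario.
prompt = build_prompt(
    role="a hiring manager",
    task="write a LinkedIn post welcoming our new interns",
    context="a 50-person fintech startup; interns start Monday",
    fmt="under 80 words, no hashtags",
    tone="short, warm, and slightly smug",
)
print(prompt)
```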

  • View profile for Prukalpa ⚡

    Founder & Co-CEO at Atlan | Forbes30, Fortune40, TED Speaker

    46,646 followers

    A few weeks ago, a VP of Analytics confessed he’d spent half his time just tracking down the right dataset before any real analysis could begin.

    Half. His. Time. 🤯

    And he’s not alone. Across organizations, valuable insights are trapped behind layers of disconnected systems and bottlenecks. Today, “data silos” aren’t a technical buzzword—they’re a very real, very human challenge.

    Here’s what’s really happening:

    1️⃣ Time & efficiency woes: Data requests take days or weeks to fulfill. Different teams unknowingly duplicate the same work, wasting effort and resources.

    2️⃣ Data quality & trust issues: Multiple versions of “the same” dataset exist, and no one knows which is correct. Confidence in metrics plummets, and hesitation leads to decision-making delays.

    3️⃣ Scaling roadblocks: As companies grow, data requests multiply, but core data teams can’t keep up. New technologies get adopted without integration plans, fragmenting the data landscape even further.

    4️⃣ Finding data is a nightmare: Without a single “home” for data, teams don’t know what exists or how to access it. Confusion leads to lost opportunities and repeated work.

    5️⃣ Budgets are bleeding: Silos create hidden drains on budgets — redundant data storage, duplicated tooling, and wasted engineering hours pile up.

    Data silos slow teams down, erode trust, burn budgets, and ultimately limit a company’s ability to make data-driven decisions.

    But there’s a way out. Breaking down silos starts with building the right culture and implementing the right infrastructure — ensuring data is owned, governed, and easily discoverable.

  • View profile for Jana Reske

    Creative Strategist, Writer & Researcher with a focus on AI and Feminism | TOP50 Creatives PAGE Magazin | Content Manager @The People Branding Company 💜

    12,616 followers

    My Book Recommendation about the Gender Data Gap: Invisible Women by Caroline Criado Perez 📚

    If you're curious about how gender bias is embedded in the systems, technologies, and structures with which we interact daily, I highly recommend reading it. This book is essential reading for anyone working with data, design, technology, policy, or research. It illustrates how the "default male" perspective, which is often unintentional yet deeply ingrained, has real-world consequences for women.

    Some insights that stayed with me:

    ➡️ Most crash test dummies are based on average male bodies, which makes women more likely to be injured in car accidents.

    ➡️ Voice recognition systems are less accurate for women, simply because they have been trained on mostly male voices.

    ➡️ Urban planning often overlooks unpaid care work, resulting in public transportation systems that are less functional for the people who use them most, who are often women.

    ➡️ Medical studies overwhelmingly use male participants, which leads to delayed diagnosis and incorrect treatments for women.

    Perez’s argument is clear: the gender data gap isn’t just a gap; it’s a systemic blind spot.

    As someone who is currently researching how AI generates visual representations of beauty, this book has sharpened my thinking about who is represented, how, and why. My own research shows that AI image tools such as MidJourney, DALL·E, and Stable Diffusion often depict "beautiful women" as young, white, and hyperfeminine, while offering more variation in their depictions of "beautiful men." Books like Invisible Women help explain why these patterns persist, even in supposedly "neutral" systems.

    If we want to build more inclusive technologies, we must start by asking better questions and examining the data we use more closely.

    #genderdatabias #AI #feminism

  • View profile for Iain Brown PhD

    AI & Data Science Leader | Adjunct Professor | Author | Fellow

    36,532 followers

    Trust in AI is no longer something organisations can assume; it must be demonstrated, verified, and continually earned.

    In my latest edition of The Data Science Decoder, I explore the rise of Zero-Trust AI and why governance, explainability, and privacy by design are becoming non-negotiable pillars for any organisation deploying intelligent systems.

    From model transparency and fairness checks to privacy-enhancing technologies and regulatory expectations, the article unpacks how businesses can move beyond black-box algorithms to systems that are auditable, interpretable, and trustworthy.

    If AI is to become a true partner in decision-making, it must not only deliver outcomes; it must be able to justify them.

    📖 Read the full article here:
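As one concrete instance of the fairness checks mentioned above, here is a minimal sketch of a demographic parity audit. The metric choice, the toy data, and the 0.10 tolerance are illustrative assumptions, not details from the article.

```python
import numpy as np

def demographic_parity_difference(preds: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-decision rates between two groups.

    preds: binary model decisions (1 = approve); group: binary group labels.
    """
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

# Illustrative audit data: approval decisions for two groups of applicants.
preds = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(preds, group)
print(f"demographic parity gap: {gap:.2f}")  # 0.20 for this toy data
if gap > 0.10:  # assumed tolerance, set by governance policy
    print("fairness check failed: escalate for human review")
```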

  • View profile for Antonio Grasso

    Technologist & Global B2B Influencer | Founder & CEO | LinkedIn Top Voice | Driven by Human-Centricity

    39,787 followers

    When data becomes a liability instead of an asset, it's usually not due to its volume but its unreliability—errors, outdated entries, and inconsistencies silently undermine decisions before anyone notices the warning signs. Data quality is fundamental to any modern digital operation, but it's often compromised by silent issues like duplication, outdated entries, or poor formatting. These flaws lead to skewed analytics, automation failures, and strategic missteps. For instance, if customer data lacks standardization across departments, sales projections can conflict with supply chain expectations. Fixing this problem requires more than tools—it demands a culture of data accountability with clear governance, continuous monitoring, and AI-driven anomaly detection to ensure long-term accuracy and trust in the insights generated. #DataQuality #AI #DataGovernance #DigitalTransformation
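A minimal sketch of the silent issues the post names (duplication, outdated entries, poor formatting) expressed as pandas checks. The table, column names, and rules are illustrative assumptions; production systems would layer richer profiling and anomaly detection on top.

```python
import pandas as pd

# Illustrative customer table; column names and rules are assumptions.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "email": ["a@x.com", "a@x.com", "not-an-email", "c@y.com"],
    "updated_at": pd.to_datetime(["2024-01-05", "2024-01-05",
                                  "2021-06-01", "2024-02-10"]),
})

issues = {}

# 1) Duplication: repeated customer_id rows.
issues["duplicates"] = int(df.duplicated(subset="customer_id").sum())

# 2) Outdated entries: records untouched for more than a year
#    (a fixed "as of" date keeps the example reproducible).
as_of = pd.Timestamp("2024-03-01")
issues["stale_rows"] = int((df["updated_at"] < as_of - pd.DateOffset(years=1)).sum())

# 3) Poor formatting: emails that fail a simple pattern check.
bad_email = ~df["email"].str.match(r"[^@\s]+@[^@\s]+\.[^@\s]+")
issues["bad_emails"] = int(bad_email.sum())

print(issues)  # {'duplicates': 1, 'stale_rows': 1, 'bad_emails': 1}
```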
