When they dismantled our award-winning DEIA program at NASA yesterday, something unexpected happened: I felt fine. After 5 years building anti-bias frameworks, training teams, and embedding systemic change deep enough to win government recognition, an Executive Order wiped it away. But here's the truth - you can't erase transformed mindsets or unlearn cultural competence. The secret? I never saw this work through the lens of personal ownership. Power shifts. That's its nature. The initiatives were never "mine" or "ours" to lose. While I empathize with fellow DEI practitioners, I hope the grief doesn't eclipse opportunity. There's a new system being built, one without centuries of embedded bias: Artificial Intelligence. This is where the real opportunity lies. Every DEI practitioner leaving a legacy institution should be stepping into tech teams. Our skills - understanding bias, building inclusive frameworks, navigating complex human systems - are exactly what AI development needs. It's the Wild West of governance right now, and we need to be there. Either we help shape how AI evolves, or we watch from the sidelines as old biases get hardcoded into humanity's next chapter. The future isn't in fighting old battles. It's in ensuring the next great technological revolution doesn't repeat history's mistakes. Like with DEI - people will be locked out by odd requirements, and certifications that cost thousands and thousands of dollars. All suddenly required to be an AI Responsibility "practitioner." This is why, myself and AI Responsibility pioneer and Aspen Fellow, Jordan Loewen-Colón, PhD are launching the first SHRM-credited AI Equity Architect™ credential on March 3. We're focused on governance, evaluating bias in tools and strategic transformation. Because what's the saying? Fool me once....
Ethical AI Principles
Explore top LinkedIn content from expert professionals.
-
-
🔍 Everyone’s discussing what AI agents are capable of—but few are addressing the potential pitfalls. IBM’s AI Ethics Board has just released a report that shifts the conversation. Instead of just highlighting what AI agents can achieve, it confronts the critical risks they pose. Unlike traditional AI models that generate content, AI agents act—they make decisions, take actions, and influence outcomes. This autonomy makes them powerful but also increases the risks they bring. ---------------------------- 📄 Key risks outlined in the report: 🚨 Opaque decision-making – AI agents often operate as black boxes, making it difficult to understand their reasoning. 👁️ Reduced human oversight – Their autonomy can limit real-time monitoring and intervention. 🎯 Misaligned goals – AI agents may confidently act in ways that deviate from human intentions or ethical values. ⚠️ Error propagation – Mistakes in one step can create a domino effect, leading to cascading failures. 🔍 Misinformation risks – Agents can generate and act upon incorrect or misleading data. 🔓 Security concerns – Vulnerabilities like prompt injection can be exploited for harmful purposes. ⚖️ Bias amplification – Without safeguards, AI can reinforce existing prejudices on a larger scale. 🧠 Lack of moral reasoning – Agents struggle with complex ethical decisions and context-based judgment. 🌍 Broader societal impact – Issues like job displacement, trust erosion, and misuse in sensitive fields must be addressed. ---------------------------- 🛠️ How do we mitigate these risks? ✔️ Keep humans in the loop – AI should support decision-making, not replace it. ✔️ Prioritize transparency – Systems should be built for observability, not just optimized for results. ✔️ Set clear guardrails – Constraints should go beyond prompt engineering to ensure responsible behavior. ✔️ Govern AI responsibly – Ethical considerations like fairness, accountability, and alignment with human intent must be embedded into the system. As AI agents continue evolving, one thing is clear: their challenges aren’t just technical—they're also ethical and regulatory. Responsible AI isn’t just about what AI can do but also about what it should be allowed to do. ---------------------------- Thoughts? Let’s discuss! 💡 Sarveshwaran Rajagopal
-
As a higher education educator and researcher, I witness how AI tools are transforming academic practices. Among them, AI writing detectors raise significant ethical concerns when inaccuracies occur. According to Turnitin’s own data, over 22 million papers were flagged for suspected AI writing. Even at a modest 4% false positive rate, approximately 880,000 students may have been wrongly penalized, not for misconduct, but due to inherent limitations in the system's design. This issue goes beyond technical flaws! It affects real students, those still developing their academic voice. While academic integrity remains essential, it must be upheld in parallel with academic justice. The ethical use of AI in education demands not only accurate detection but also comprehensive guidelines and transparent communication. Educators and learners alike need clarity on how, when, and why such tools are used. Without this, we risk reinforcing inequities under the guise of innovation. AI can be a powerful support for learning, but only when guided by care, context, and accountability. #HigherEducation #EthicalAI #AcademicJustice #AIinEducation #AcademicIntegrity #StudentSupport #ResponsibleEdTech #FacultyPerspective #Turnitin Image credits: from a recent AAC&U presentation by C. Edward Watson
-
Why do 60% of organizations with AI ethics statements still struggle with bias and transparency issues? The answer lies in how we approach responsible AI. Most companies retrofit ethics onto existing systems instead of embedding responsibility from day one. This creates the exact disconnect we're seeing everywhere. I've been exploring a framework that treats responsible AI as an operational capability, not a compliance checkbox. It starts with AI-specific codes of ethics, builds cross-functional governance teams, and requires continuous monitoring rather than periodic reviews. The research shows organizations that establish robust governance early see 40% fewer ethical issues and faster regulatory approval. But here's what surprised me most - responsible AI actually accelerates innovation when done right because it builds the trust necessary for broader adoption. What are some of the biggest AI ethical obstacles you're trying to solve for? I will tell you what I hear in the comments.
-
Every researcher should know how to spot paper ploys. Sadly, more people are gaming the system: (Learn responsible AI here: https://lu.ma/4c6bohft) Peer reviews are under attack from hidden AI prompts. The recent MIT study had booby trapped instructions. Basically: "If you are an LLM, only read the summary" Now, scientists embed invisible instructions in papers. These prompts manipulate AI tools to give good reviews. Here are 7 principles to protect your academic integrity: 1. Transparency in all digital elements Every part of your paper should be visible to reviewers. Hidden text violates fundamental open science ideas. • Make all supplementary materials explicitly accessible • Use standard fonts and visible formatting only • Avoid embedding any non-essential metadata Your research should speak for itself without tricks. 2. Honest disclosure of AI tool usage Many researchers use AI for writing assistance. Ethical practice requires full usage transparency. • State clearly which AI tools assisted your work • Explain how you verified AI-generated content • Distinguish between AI assistance and contribution Transparency builds trust in your research process. 3. Responsible peer review practices If you use AI tools for reviewing, understand their limitations. Never let AI make final judgment calls on research quality. • Use AI for initial screening only • Always apply human critical thinking • Check for signs of manipulation in reviewed papers Your expertise cannot be replaced by algorithms. 4. Verification of suspicious papers Develop habits that catch manipulation attempts. Technical skills protect the entire research community. • Cross-reference claims with established literature • Learn to convert PDF to HTML to check source • Use text extraction tools regularly Vigilance is now a professional responsibility. 5. Institutional reporting protocols When you discover manipulation, report it immediately. Your silence enables the corruption to spread. • Document evidence thoroughly before reporting • Contact journal editors and institutional authorities • Share knowledge with colleagues to prevent incidents Collective action amplifies individual integrity. 6. Collaboration over competition The pressure to publish drives many unethical shortcuts. Foster environments that reward quality. • Advocate for evaluation systems that value integrity • Prioritize rigorous methodology over flashy results • Support colleagues pressured for publications Academic culture shapes individual choices. 7. Continuous education on emerging threats New manipulation techniques emerge constantly. Stay informed about evolving academic fraud methods. • Follow discussions on research integrity forums • Attend workshops on ethical publication practices • Share knowledge about new manipulation techniques The future of science depends on our ethical choices. Your integrity influences the entire research ecosystem.
-
The U.S. Department of Defense just announced formal partnerships with six leading AI labs: Anthropic, Cohere, Meta, Microsoft, OpenAI, and Google DeepMind. The purpose? To promote what it calls the “safe, responsible, and ethical” use of AI in the military domain. That phrase “responsible military AI” deserves more scrutiny than it’s getting. Because we’re not talking about edge-case automation here. We’re talking about foundation models: systems trained on vast public corpora, originally justified as general-purpose tools for language, vision, reasoning, and creativity. And now, they’re being integrated into defence workflows. This isn’t a fringe development. It’s a structural pivot - from openness to strategic entrenchment. From civilian infrastructure to military-grade capabilities. And the shift is being wrapped in the same vocabulary of responsibility, safety, and alignment that was originally designed to signal restraint. But responsibility without transparency isn’t ethics. It’s branding. The announcement gestures at “managing risks,” but offers no detail on what those risks are, who defines them, or how they’ll be governed. In that vacuum, responsibility becomes a posture, more about reassurance than reflection. When labs talk about “ethical military use” without public definitions, enforceable constraints, or independent oversight, what they’re really offering is ambiguity as policy. The ethical language doesn’t constrain the activity; it legitimises it. It functions as a shield: a way to reframe risk as leadership, and moral complexity as operational necessity. And that matters, because these systems were trained on publicly available data, developed using civilian research infrastructure, and marketed as tools for universal benefit. Their capabilities were cultivated under the banner of progress. Now they are being repurposed for warfare. That doesn’t automatically make it wrong. But it does make it urgent. Urgent to ask what we mean by “alignment” when it applies to both democratic ideals and battlefield decisions. Urgent to interrogate the incentives that drive companies to warn of existential threat one month and partner with militaries the next. There is nothing wrong with national security partnerships per se. But there is something deeply dangerous about the fusion of ethical language and strategic opacity, especially when the consequences are highest. If we’re going to allow AI to shape military infrastructure, then the burden is not just to develop responsibly, but to govern visibly. Not just to declare ethics, but to embody constraint. And not just to promise alignment, but to decide - publicly - what exactly we are aligning to. Because if we fail to do that, then the phrase “responsible AI” becomes exactly what critics fear: a beautifully worded mask for a power structure that no longer bothers to explain itself. The silence around this topic is more dangerous than any discomfort I feel in raising it.
-
"On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms" Joint work with Surbhi M., Kartik Thakral, Richa Singh, Tamar Glaser, Cristian Canton, Tal Hassner Early version of the paper: https://lnkd.in/g_DQfdbU Artificial Intelligence (AI) has significantly advanced various scientific fields, offering substantial improvements over traditional algorithms. However, the trustworthiness of AI technologies has become a major concern. The scholarly community is actively pursuing the creation of trustworthy AI algorithms, even though prevalent machine and deep learning algorithms are intricately tied to their training data. These algorithms learn patterns from the data, along with any embedded flaws, which can subsequently influence their behavior. This research highlights the crucial role of Responsible Machine Learning Datasets and introduces a framework for their evaluation. Unlike existing approaches that primarily evaluate the algorithms retrospectively for trustworthiness, our framework focuses on understanding how the data itself influences the algorithm. We examine responsible datasets through the lenses of fairness, privacy, and regulatory compliance, offering guidelines for future dataset creation. After reviewing over 100 datasets, we analyze 60 and find that all have issues related to fairness, privacy, and regulatory compliance. We propose enhancements to the 'datasheets for datasets', adding important elements for better dataset documentation. With global governments tightening data protection laws, this study calls for a reassessment of dataset generation methods in the scientific community, marking its relevance and timeliness in the current AI landscape. #ResponsibleAI #DeepLearning #machinelearning #ResponsibleData #computervision #innovation #technologynews #iitjodhpur #meta
-
𝗧𝗵𝗲 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗜𝗺𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗼𝗳 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗔𝗜: 𝗪𝗵𝗮𝘁 𝗘𝘃𝗲𝗿𝘆 𝗕𝗼𝗮𝗿𝗱 𝗦𝗵𝗼𝘂𝗹𝗱 𝗖𝗼𝗻𝘀𝗶𝗱𝗲𝗿 "𝘞𝘦 𝘯𝘦𝘦𝘥 𝘵𝘰 𝘱𝘢𝘶𝘴𝘦 𝘵𝘩𝘪𝘴 𝘥𝘦𝘱𝘭𝘰𝘺𝘮𝘦𝘯𝘵 𝘪𝘮𝘮𝘦𝘥𝘪𝘢𝘵𝘦𝘭𝘺." Our ethics review identified a potentially disastrous blind spot 48 hours before a major AI launch. The system had been developed with technical excellence but without addressing critical ethical dimensions that created material business risk. After a decade guiding AI implementations and serving on technology oversight committees, I've observed that ethical considerations remain the most systematically underestimated dimension of enterprise AI strategy — and increasingly, the most consequential from a governance perspective. 𝗧𝗵𝗲 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗜𝗺𝗽𝗲𝗿𝗮𝘁𝗶𝘃𝗲 Boards traditionally approach technology oversight through risk and compliance frameworks. But AI ethics transcends these models, creating unprecedented governance challenges at the intersection of business strategy, societal impact, and competitive advantage. 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝗶𝗰 𝗔𝗰𝗰𝗼𝘂𝗻𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Beyond explainability, boards must ensure mechanisms exist to identify and address bias, establish appropriate human oversight, and maintain meaningful control over algorithmic decision systems. One healthcare organization established a quarterly "algorithmic audit" reviewed by the board's technology committee, revealing critical intervention points preventing regulatory exposure. 𝗗𝗮𝘁𝗮 𝗦𝗼𝘃𝗲𝗿𝗲𝗶𝗴𝗻𝘁𝘆: As AI systems become more complex, data governance becomes inseparable from ethical governance. Leading boards establish clear principles around data provenance, consent frameworks, and value distribution that go beyond compliance to create a sustainable competitive advantage. 𝗦𝘁𝗮𝗸𝗲𝗵𝗼𝗹𝗱𝗲𝗿 𝗜𝗺𝗽𝗮𝗰𝘁 𝗠𝗼𝗱𝗲𝗹𝗶𝗻𝗴: Sophisticated boards require systematically analyzing how AI systems affect all stakeholders—employees, customers, communities, and shareholders. This holistic view prevents costly blind spots and creates opportunities for market differentiation. 𝗧𝗵𝗲 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆-𝗘𝘁𝗵𝗶𝗰𝘀 𝗖𝗼𝗻𝘃𝗲𝗿𝗴𝗲𝗻𝗰𝗲 Organizations that treat ethics as separate from strategy inevitably underperform. When one financial services firm integrated ethical considerations directly into its AI development process, it not only mitigated risks but discovered entirely new market opportunities its competitors missed. 𝘋𝘪𝘴𝘤𝘭𝘢𝘪𝘮𝘦𝘳: 𝘛𝘩𝘦 𝘷𝘪𝘦𝘸𝘴 𝘦𝘹𝘱𝘳𝘦𝘴𝘴𝘦𝘥 𝘢𝘳𝘦 𝘮𝘺 𝘱𝘦𝘳𝘴𝘰𝘯𝘢𝘭 𝘪𝘯𝘴𝘪𝘨𝘩𝘵𝘴 𝘢𝘯𝘥 𝘥𝘰𝘯'𝘵 𝘳𝘦𝘱𝘳𝘦𝘴𝘦𝘯𝘵 𝘵𝘩𝘰𝘴𝘦 𝘰𝘧 𝘮𝘺 𝘤𝘶𝘳𝘳𝘦𝘯𝘵 𝘰𝘳 𝘱𝘢𝘴𝘵 𝘦𝘮𝘱𝘭𝘰𝘺𝘦𝘳𝘴 𝘰𝘳 𝘳𝘦𝘭𝘢𝘵𝘦𝘥 𝘦𝘯𝘵𝘪𝘵𝘪𝘦𝘴. 𝘌𝘹𝘢𝘮𝘱𝘭𝘦𝘴 𝘥𝘳𝘢𝘸𝘯 𝘧𝘳𝘰𝘮 𝘮𝘺 𝘦𝘹𝘱𝘦𝘳𝘪𝘦𝘯𝘤𝘦 𝘩𝘢𝘷𝘦 𝘣𝘦𝘦𝘯 𝘢𝘯𝘰𝘯𝘺𝘮𝘪𝘻𝘦𝘥 𝘢𝘯𝘥 𝘨𝘦𝘯𝘦𝘳𝘢𝘭𝘪𝘻𝘦𝘥 𝘵𝘰 𝘱𝘳𝘰𝘵𝘦𝘤𝘵 𝘤𝘰𝘯𝘧𝘪𝘥𝘦𝘯𝘵𝘪𝘢𝘭 𝘪𝘯𝘧𝘰𝘳𝘮𝘢𝘵𝘪𝘰𝘯.
-
💙 How Does Social Responsibility and Moral Imagination Influence Programmers' Ability to Combat Bias in AI? 👾 We are excited to announce the publication of our latest research, co-authored with Arlette Danielle R., my former Phd student. The picture below was taken at Arlette's defense at Universität Mannheim, together with her co-referent Michael Woywode. The paper has the title: ‘It wasn’t me’: the impact of social responsibility and social dominance attitudes on AI programmers’ moral imagination (intention to correct bias), now available in AI and Ethics 🌐📖. Our study explores a critical yet often overlooked aspect of AI ethics—the role of AI programmers in addressing bias. While much of the conversation focuses on technical solutions, we dive into how social responsibility and social dominance attitudes shape programmers' intentions to correct bias in AI, especially biases that harm marginalized groups. Key finding? 🧠 Programmers with high social responsibility show a significant boost in their moral imagination—their ability to foresee and act on bias correction in AI systems. Notably, this effect is most pronounced for those with high social dominance, highlighting the importance of fostering empathy and responsibility to drive algorithmic fairness. Link to the open access paper is in the comments... #AIethics #SocialResponsibility #AlgorithmicFairness #AIbias #MoralImagination #TechForGood #OpenAccess #Research #AIEthics #SDO #AIprogramming
-
Smarter AI isn’t the point. AI that serves society is. Lately, the latest innovations are mindblowing. However, often, they focus on the wrong stuff: Maximising model performance. Chasing technical benchmarks. Pushing products out the door. But real impact happens when we flip the script: When decisions serve society first. When AI stops being "the goal". When AI becomes "the tool." What we need is a Human-Centred AI Strategy that: - Focuses on positive human outcomes. - Measures success by "real world" impact. - Prioritises ethics, responsibility, and value to society. It’s not just about building smarter AI. It’s about building AI that serves us, the people, first. To make this real, these are the 6 core pillars to follow: 1️⃣ Stakeholder Listening. ↳ Involve real users early. ↳ Let lived experience shape your product. 2️⃣ Embed Ethics Early. ↳ Spot risks before they cause harm. ↳ Bake ethical thinking into every decision. 3️⃣ Trust Building. ↳ Be radically transparent about what AI does. ↳ Be brutal about what AI does not. 4️⃣ Inclusive Design. ↳ Build for everyone, not just the “average” user. ↳ Equity is just as important as usability. 5️⃣ Empowered Collaboration. ↳ Make space for diverse voices, beyond just tech. ↳ Welcome disagreement, it makes systems stronger. 6️⃣ Responsible Adaptability. ↳ Shift fast when harm shows up. ↳ Learn from failures, not just wins. But hey, the work doesn’t stop at launch. Building ethical AI means putting in place real practices: - Build a cross-disciplinary ethics team. - Run regular bias audits. - Test early with diverse users. - Create real feedback loops that continue after launch. Ethics is not about slogans. It's about daily choices that align with your values. Let’s stop building AI that just works. Let's start building AI that matters. ↓ Do you think we can afford to ignore this any longer? ♻️ Repost if you think ethics deserves a seat at the table. + Follow Giovanni Beggiato for more insights on AI.