When a bridge collapses, no one blames the lawyers who drafted the building codes. We hold engineers accountable - because they're the ones responsible for translating rules into working systems. So why, in AI, do we reverse that logic?

Today, when AI systems or data practices fail - when trust is broken, when rights are violated - we look to legal teams for better policies or compliance language. But the truth is: we've handed a deeply technical problem to engineers without giving them the infrastructure to solve it. The real failure isn't technical incompetence. It's that we've made engineers responsible for interpreting legal ambiguity - at scale.

AI governance isn't theoretical anymore. Enterprises are operationalizing it now. That means engineers are on the front lines, being asked to take vague, jurisdiction-specific privacy laws and somehow translate them into software systems that protect user rights, control data flows, and stay out of legal trouble - all while keeping the business moving at AI speed. It's an impossible task.

Here's what we're asking of them:
- Decode overlapping, often contradictory global privacy regulations
- Map data flows across sprawling, distributed architectures
- Predict downstream consequences of data use in dynamic AI workflows
- Enforce consent and usage rights in real time
- Maintain all of this without breaking performance or functionality

This is the core problem Ethyca exists to solve. The present reality for most organizations isn't just difficult - it's untenable. We've normalized a situation where engineers are expected to build legally compliant systems using spreadsheets, policy PDFs, and tribal knowledge. That's not governance. That's wishful thinking. Just because it's normal doesn't mean it's acceptable.

Engineers aren't failing at AI governance. Our approach to AI governance is failing engineers. And no, the solution isn't to turn lawyers into engineers, or engineers into lawyers. That false choice has paralyzed progress for years. What's missing is the infrastructure layer - a system that translates legal requirements into executable, deterministic logic.

That's what we're building with Fides. Not another checkbox compliance tool, but a foundational layer that makes policy enforceable by design - across data mapping, consent, access, and data usage controls. The principles are the same ones we've always believed in: privacy automation, data rights, transparency, control. But the use case has evolved. Now they're the building blocks of trust in an AI-powered enterprise.

Because in a world where data drives everything, trust in your AI begins with trust in your data. And trust in your data starts with systems that engineers can actually use. If your governance system doesn't make policy executable, you're not building AI safely. You're building risk - and placing the blame in the wrong place when it fails.
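To make "executable, deterministic logic" concrete, here is a minimal policy-as-code sketch. It is illustrative only and does not reflect Fides's actual schema or API; the taxonomy keys, purposes, and `evaluate` function are hypothetical stand-ins for how a governance layer could deny a data use that policy or consent does not cover.

```python
# Illustrative only: a generic policy-as-code check, not Fides's actual schema or API.
# A rule declares which purposes a data category may be used for and whether consent
# is required; the engine evaluates a proposed data use deterministically, deny-by-default.
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyRule:
    data_category: str            # e.g. "user.contact.email" (hypothetical taxonomy key)
    allowed_purposes: frozenset   # e.g. frozenset({"service_delivery"})
    requires_consent: bool = True

@dataclass
class DataUseRequest:
    data_category: str
    purpose: str
    user_consented: bool

def evaluate(rules: list[PolicyRule], request: DataUseRequest) -> tuple[bool, str]:
    """Return (allowed, reason); deny by default when no rule covers the category."""
    for rule in rules:
        if rule.data_category == request.data_category:
            if request.purpose not in rule.allowed_purposes:
                return False, f"purpose '{request.purpose}' not permitted for {rule.data_category}"
            if rule.requires_consent and not request.user_consented:
                return False, f"consent required for {rule.data_category}"
            return True, "allowed by policy"
    return False, f"no policy rule covers {request.data_category}"

rules = [PolicyRule("user.contact.email", frozenset({"service_delivery"}), requires_consent=True)]
print(evaluate(rules, DataUseRequest("user.contact.email", "model_training", user_consented=True)))
# -> (False, "purpose 'model_training' not permitted for user.contact.email")
```

The point of the sketch is the design choice, not the code itself: when policy lives as data that a deterministic engine evaluates before a pipeline runs, engineers stop interpreting legal ambiguity ad hoc and start enforcing decisions someone else has already made explicit.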
Engineering Compliance Challenges In Emerging Technologies
Explore top LinkedIn content from expert professionals.
Summary
Engineering compliance challenges in emerging technologies refer to the significant difficulties engineers face when ensuring that new and complex systems, like artificial intelligence (AI), adhere to legal, ethical, and safety standards. These challenges stem from the rapid evolution of technologies, legal ambiguities, and the integration of systems in diverse environments.
- Understand legal requirements: Take time to study and interpret global regulations and standards that impact your technology, as compliance expectations often vary across industries and regions.
- Implement proactive governance: Build systems that continuously monitor and enforce compliance policies, addressing issues such as data privacy, bias, and user consent in real time.
- Prioritize transparency and accountability: Employ explainable tools and processes, ensure stakeholders understand decision-making methodologies, and take steps to continuously assess and mitigate risks as technology evolves.
-
"Our analysis of eleven case studies from AI-adjacent industries reveals three distinct categories of failure: institutional, procedural, and performance... By studying failures across sectors, we uncover critical lessons about risk assessment, safety protocols, and oversight mechanisms that can guide AI innovators in this era of rapid development. One of the most prominent risks is the tendency to prioritize rapid innovation and market dominance over safety. The case studies demonstrated a crucial need for transparency, robust third-party verification and evaluation, and comprehensive data governance practices, among other safety measures. Additionally, by investigating ongoing litigation against companies that deploy AI systems, we highlight the importance of proactively implementing measures that ensure safe, secure, and responsible AI development... Though today’s AI regulatory landscape remains fragmented, we identified five main sources of AI governance—laws and regulations, guidance, norms, standards, and organizational policies—to provide AI builders and users with a clear direction for the safe, secure, and responsible development of AI. In the absence of comprehensive, AI-focused federal legislation in the United States, we define compliance failure in the AI ecosystem as the failure to align with existing laws, government-issued guidance, globally accepted norms, standards, voluntary commitments, and organizational policies–whether publicly announced or confidential–that focus on responsible AI governance. The report concludes by addressing AI’s unique compliance issues stemming from its ongoing evolution and complexity. Ambiguous AI safety definitions and the rapid pace of development challenge efforts to govern it and potentially even its adoption across regulated industries, while problems with interpretability hinder the development of compliance mechanisms, and AI agents blur the lines of liability in the automated world. As organizations face risks ranging from minor infractions to catastrophic failures that could ripple across sectors, the stakes for effective oversight grow higher. Without proper safeguards, we risk eroding public trust in AI and creating industry practices that favor speed over safety—ultimately affecting innovation and society far beyond the AI sector itself. As history teaches us, highly complex systems are prone to a wide array of failures. We must look to the past to learn from these failures and to avoid similar mistakes as we build the ever more powerful AI systems of the future." Great work from Mariami Tkeshelashvili and Tiffany Saade at the Institute for Security and Technology (IST). Glad I could support alongside Chloe Autio, Alyssa Lefaivre Škopac, Matthew da Mota, Ph.D., Hadassah Drukarch, Avijit Ghosh, PhD, Alexander Reese, Akash Wasil and others!
-
A lot of companies think they're "safe" from AI compliance risks simply because they haven't formally adopted AI. But that's a dangerous assumption—and it's already backfiring for some organizations.

Here's what's really happening: employees are quietly using ChatGPT, Claude, Gemini, and other tools to summarize customer data, rewrite client emails, or draft policy documents. In some cases, they're even uploading sensitive files or legal content to get a "better" response. The organization may not have visibility into any of it. This is what's called Shadow AI—unauthorized or unsanctioned use of AI tools by employees.

Now, here's what a #GRC professional needs to do about it (a small discovery sketch follows this list):

1. Start with Discovery: Use internal surveys, browser activity logs (if available), or device-level monitoring to identify which teams are already using AI tools and for what purposes. No blame—just visibility.
2. Risk Categorization: Document the type of data being processed and match it to its sensitivity. Are they uploading PII? Legal content? Proprietary product info? If so, flag it.
3. Policy Design or Update: Draft an internal AI Use Policy. It doesn't need to ban tools outright—but it should define:
• What tools are approved
• What types of data are prohibited
• What employees need to do to request new tools
4. Communicate and Train: Employees need to understand not just what they can't do, but why. Use plain examples to show how uploading files to a public AI model could violate privacy law, leak IP, or introduce bias into decisions.
5. Monitor and Adjust: Once you've rolled out your first version of the policy, revisit it every 60–90 days. This field is moving fast—and so should your governance.

This can happen anywhere: in education, real estate, logistics, fintech, or nonprofits. You don't need a team of AI engineers to start building good governance. You just need visibility, structure, and accountability. Let's stop thinking of AI risk as something "only tech companies" deal with. Shadow AI is already in your workplace—you just haven't looked yet.
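As a rough illustration of the discovery and risk-categorization steps above, here is a minimal Python sketch that scans an exported proxy or browser log for traffic to well-known AI tool domains and tallies usage by team. The log format, column names, and domain list are assumptions; adapt them to whatever telemetry your organization actually has.

```python
# Minimal sketch: surface Shadow AI usage from a CSV export of proxy/browser logs.
# Assumed columns: timestamp, user, team, domain. Adjust to your real schema.
import csv
from collections import Counter

AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count hits to known AI tool domains, grouped by (team, tool)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["domain"].strip().lower())
            if tool:
                hits[(row["team"], tool)] += 1
    return hits

if __name__ == "__main__":
    for (team, tool), count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{team}: {count} requests to {tool} (review data sensitivity with this team)")
```

The output is only a starting point: it tells you which teams to talk to, after which the sensitivity of what they are uploading still has to be assessed by a human.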
-
🪄 The Illusion of Ethical AI Compliance: Governance Needs a Reality Check 🪄

Meeting legal requirements like the EU AI Act addresses immediate risks but leaves deeper ethical issues unresolved. Your leadership demands more. It requires addressing how AI systems evolve, interact with their environments, and impact the people and societies they serve.

➡️ Why Compliance Falls Short
Compliance frameworks focus on measurable benchmarks, such as accuracy thresholds or fairness metrics. These tools provide a foundation but often overlook how AI systems behave beyond their initial design and deployment. For example, a hiring system might meet diversity standards during development. Over time, its performance drifts, subtly disadvantaging certain groups. Compliance measures may confirm the system is operationally sound, but they don't reveal how decisions shape broader organizational outcomes. The comfort compliance provides often masks emerging risks.

➡️ Risks Hidden in Plain Sight
AI systems are not static tools; they interact with people, organizations, and data in ways that compliance measures rarely account for. These interactions introduce unexpected challenges:
🔹 A pricing algorithm may prioritize revenue but inadvertently restrict access for lower-income users.
🔹 Content recommendation engines can reinforce polarization by adapting to user behavior.
🔹 Credit scoring systems may pass initial fairness audits but gradually shift due to biased input data over time.
🔸 Governance frameworks must go beyond surface-level checks to address these evolving risks. 🔸

➡️ Rethinking AI Governance Through Systems Thinking
AI systems operate within complex environments where decisions influence outcomes in ways that are difficult to predict. Governance must address not only technical performance but also the broader context in which these systems function. Decision-making processes in AI don't end with the algorithm's output. They create ripple effects, affecting how users interact with the system and what data feeds back into it. Governance must consider how these interactions accumulate over time and how seemingly minor flaws can lead to systemic failures. Systems thinking reframes governance as a process that anticipates unintended effects, rather than one that merely reacts to immediate concerns.

➡️ What Your Ethical Leadership Requires
Organizations committed to ethical governance don't stop at compliance. They:
🔹 Track the downstream impacts of their AI systems and adjust governance processes as systems evolve.
🔹 Make governance processes transparent, ensuring stakeholders understand the choices behind decisions.
🔹 Take accountability for addressing harms when they occur, rather than deflecting responsibility.

Ethical governance is a practice, not a milestone. It requires vigilance and the willingness to address the ethical complexities that compliance alone cannot resolve.

A-LIGN #TheBusinessofCompliance #ComplianceAlignedtoYou
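To ground the point about post-deployment drift, here is a hedged sketch of one way to track a single fairness signal, the selection-rate ratio between two groups, across review periods and flag when it degrades. The 0.8 threshold, group labels, and data shape are illustrative assumptions, not a complete fairness audit.

```python
# Illustrative drift check: selection-rate ratio between two groups per review period.
# decisions: iterable of (period, group, selected) tuples; the 0.8 threshold is an assumption.
from collections import defaultdict

def selection_rate_ratio(decisions, group_a: str, group_b: str, threshold: float = 0.8):
    """Yield (period, ratio, flagged), where ratio = rate(group_a) / rate(group_b)."""
    by_period = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # period -> group -> [selected, total]
    for period, group, selected in decisions:
        stats = by_period[period][group]
        stats[0] += int(selected)
        stats[1] += 1
    for period in sorted(by_period):
        groups = by_period[period]
        if groups[group_a][1] == 0 or groups[group_b][1] == 0 or groups[group_b][0] == 0:
            continue  # not enough data to compare this period
        rate_a = groups[group_a][0] / groups[group_a][1]
        rate_b = groups[group_b][0] / groups[group_b][1]
        ratio = rate_a / rate_b
        yield period, ratio, ratio < threshold  # flag when group_a's rate falls below 80% of group_b's

decisions = [("2024-01", "A", True), ("2024-01", "B", True), ("2024-02", "A", False), ("2024-02", "B", True)]
for period, ratio, flagged in selection_rate_ratio(decisions, "A", "B"):
    print(period, round(ratio, 2), "FLAG" if flagged else "ok")
```

A check like this is only one input to the broader governance loop the post describes; the flag tells you to investigate, not what the remedy is.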
-
Unlocking AI's Potential in Risk and Compliance at Sphere

After evaluating numerous AI-powered systems to build a legal, risk, and compliance program at Sphere, I'm more convinced than ever of AI's transformative impact. The advancements in agentic AI are game-changing, but they also raise complex challenges and risks. AI's ability to automate and optimize workflows is revolutionary, yet it's clear that immense potential remains unexplored. With this innovation come significant risks, particularly in high-stakes areas like compliance and risk management.

Key Risks and Regulatory Challenges
• Lack of transparency: AI's "black-box" models can obscure how decisions are made, complicating accountability.
• Data privacy and security: Protecting sensitive data remains a critical concern.
• Bias in decision-making: Undetected biases in AI models can lead to unfair or unethical outcomes.
• Regulatory obligations: Governing bodies increasingly expect robust model validation and lifecycle management.

Recommended Practices
1. Governance: Establish strong governance frameworks for continuous monitoring, validation, and accountability for AI systems.
2. Explainability: Use interpretable models to meet regulatory transparency requirements and build trust with stakeholders.
3. Bias Mitigation: Proactively detect and mitigate biases. Balance metrics like precision (the share of positive predictions that are correct) and recall (the share of actual positives identified).

AI has the power to drive operational efficiency, reduce manual errors, and enhance compliance effectiveness. Success, however, hinges on adopting transparent, well-governed practices that align with evolving regulatory landscapes. At Sphere, we are committed to leveraging AI responsibly while navigating these opportunities and challenges.
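Since the post calls out balancing precision and recall as part of bias mitigation, here is a small sketch that computes both metrics per demographic group from labeled outcomes, so a gap between groups is visible rather than hidden in an aggregate score. The record layout and grouping key are assumptions.

```python
# Sketch: precision and recall per group, to surface disparities an overall metric can hide.
# records: iterable of (group, y_true, y_pred) with binary labels; the layout is assumed.
from collections import defaultdict

def per_group_precision_recall(records):
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_pred and y_true:
            c["tp"] += 1          # predicted positive, actually positive
        elif y_pred and not y_true:
            c["fp"] += 1          # predicted positive, actually negative
        elif not y_pred and y_true:
            c["fn"] += 1          # missed an actual positive
    results = {}
    for group, c in counts.items():
        precision = c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else None
        recall = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None
        results[group] = {"precision": precision, "recall": recall}
    return results

records = [("A", 1, 1), ("A", 0, 1), ("B", 1, 0), ("B", 1, 1)]
print(per_group_precision_recall(records))
# {'A': {'precision': 0.5, 'recall': 1.0}, 'B': {'precision': 1.0, 'recall': 0.5}}
```

Reporting the metrics side by side per group is the simplest way to make the precision/recall trade-off a governance conversation rather than a purely technical one.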