Might be a good time to dust off the NIST Systems Security Engineering publications. Single points of failure can have severe or catastrophic effects on organizational operations, organizational assets, individuals, and the Nation.

Defense-In-Depth and Diversity are key design principles for building trustworthy secure systems. See SP 800-160, Vol. 1, Sections E.9 and E.11: https://lnkd.in/esF9-uA3

Diversity is also a key cyber resiliency technique. See SP 800-160, Vol. 2, Section D.3: https://lnkd.in/dt2RbkMz

“Use heterogeneity to minimize common mode failures, particularly threat events exploiting common vulnerabilities. Limit the possibility of the loss of critical functions due to the failure of replicated common critical components. In the case of an adversarial threat event, maximize the probability that some of the defending organization’s systems will survive the adversary’s attack.”

#SystemsEngineering #SecurityEngineering #SystemsThinking #SinglePointsOfFailure #CriticalSystems #APT #NationStateThreats
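Purely as an illustration of the heterogeneity principle quoted above (not from SP 800-160 itself), here is a minimal Python sketch: a critical check is routed through two independently implemented validators and accepted only when they agree, so a vulnerability exploited in one implementation cannot by itself defeat the function. The validator names and the agreement policy are assumptions made for the example.

```python
# Hypothetical sketch: heterogeneous (diverse) redundancy for a critical check.
# Validator names are illustrative stand-ins for independently developed implementations.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Validator:
    name: str                          # e.g., implementations from different vendors/codebases
    check: Callable[[bytes], bool]     # the critical check this implementation performs


def validate_with_diversity(payload: bytes, validators: List[Validator]) -> bool:
    """Accept the payload only if every independent validator agrees.

    Disagreement is treated as failure: a flaw exploited in one
    implementation should not, by itself, defeat the critical function.
    """
    verdicts = {v.name: v.check(payload) for v in validators}
    if len(set(verdicts.values())) > 1:
        # Implementations disagree -- fail safe and alert rather than trust either one.
        raise RuntimeError(f"Validator disagreement: {verdicts}")
    return all(verdicts.values())


# Illustrative usage with two stand-in implementations.
validators = [
    Validator("vendor_a_parser", lambda p: p.startswith(b"OK")),
    Validator("vendor_b_parser", lambda p: p[:2] == b"OK"),
]
print(validate_with_diversity(b"OK payload", validators))  # True
```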
Ensuring diversity in trust-weighted systems
Summary
Ensuring diversity in trust-weighted systems means designing digital platforms and artificial intelligence in ways that reflect the perspectives, values, and needs of different communities, so their trustworthiness isn't skewed by narrow or biased inputs. This approach helps prevent vulnerabilities, increases fairness, and builds confidence by incorporating a wide range of experiences and backgrounds into system design.
- Prioritize cultural inclusion: Invite input from people of various backgrounds to shape system development and uncover potential blind spots or biases that might otherwise be missed.
- Implement ongoing testing: Regularly challenge models and platforms with real-world scenarios from diverse communities to catch and address any issues that could undermine reliability or fairness (see the sketch after this list).
- Build multidisciplinary teams: Assemble groups with expertise in ethics, culture, security, and technology to bring multiple viewpoints into every stage of system planning and maintenance.
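As a rough illustration of the "ongoing testing" point above, the sketch below runs scenario prompts grouped by locale against the system under test and flags any response containing a disallowed term for review. The `query_model` function and the scenario data are placeholders, not a real API.

```python
# Hypothetical sketch of an ongoing, diversity-aware test loop.
# `query_model` stands in for whatever model endpoint is under test.

from typing import Dict, List


def query_model(prompt: str) -> str:
    """Placeholder for a call to the system under test."""
    return "stub response"


# Scenarios grouped by locale/community; contents are illustrative only.
scenarios: Dict[str, List[dict]] = {
    "en-SG": [{"prompt": "scenario text for this community", "must_not_contain": ["stereotype term"]}],
    "ms-MY": [{"prompt": "scenario text in the local language", "must_not_contain": ["stereotype term"]}],
}


def run_suite(scenarios: Dict[str, List[dict]]) -> List[dict]:
    """Flag every scenario whose response contains a disallowed term."""
    failures = []
    for locale, cases in scenarios.items():
        for case in cases:
            response = query_model(case["prompt"])
            if any(term.lower() in response.lower() for term in case["must_not_contain"]):
                failures.append({"locale": locale, "prompt": case["prompt"], "response": response})
    return failures


# Re-run on every model or prompt change and review failures by locale.
print(run_suite(scenarios))  # [] until a scenario trips its check
```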
As AI becomes more ubiquitous and robust, ensuring it is aligned with the goals of diverse communities is crucial. AI systems are the product of many different decisions made by those who develop and deploy them, so working with diverse communities is necessary to create responsible AI that benefits everyone and warrants people’s trust. By engaging with diverse communities, we can learn from their perspectives, experiences, and challenges and co-create AI solutions that are fair, inclusive, and beneficial for all. We can also foster trust, collaboration, and innovation among different stakeholders and empower communities to participate in the AI ecosystem.

I spent the last week at the United Nations diving into this topic. The teams at UN Women & Unstereotype Alliance allowed me to share how teams use Microsoft's Inclusive Design Toolkit to partner with diverse communities to understand their goals, guiding AI product development towards more equitable outcomes by keeping people and their goals at the center of systems design decisions. The toolkit and more can be found at https://lnkd.in/eTdpKhGY
-
Today let me talk about one of the key stakeholders in the responsible and safe development of Artificial Intelligence systems: the red teamers. 🌐 Source: https://shorturl.at/EXs1w

In February 2025, Singapore’s Infocomm Media Development Authority (IMDA), in collaboration with the nonprofit HumaneIntelligence, released the evaluation report of the world’s first multicultural and multilingual red teaming challenge. This initiative is not only a global first but also a significant milestone in the responsible and ethical development of AI systems in the Asia-Pacific region. The red teaming exercise involved over 9 countries and addressed a critical gap: testing AI models for biases across diverse cultural and linguistic backgrounds.

The methodology sets out four phases: risk definition, challenge design, annotation, and results analysis. What makes this process essential is its comprehensive scope: rather than limiting testing to Western-centric datasets or languages, the red teamers challenged the models with context-specific prompts in regional dialects and cultural scenarios. The report also sets out a consistent methodology for testing across diverse languages and cultures, since no one party can accomplish that alone. Key components included:
- A cultural bias taxonomy, which categorized potential biases into domains such as gender, religion, and ethnicity.
- Quantitative and qualitative findings, showing that LLMs exhibited bias even in everyday usage scenarios, not just under adversarial prompts.
- Limitations and recommendations, such as the need for more robust annotation frameworks and better prompt engineering to avoid introducing bias in the testing itself.

There are two messages I want to share:
1. Red teaming should not be a one-time audit. Embed it into:
- Model development: inform fine-tuning and training data choices.
- Pre-deployment testing: validate safety in target markets.
- Post-deployment monitoring: continuously audit and update models based on live feedback.
2. Trust in AI starts with diversity in testing, making red teamers not just evaluators but essential architects of digital trust.

The why and how-to: for companies, especially those deploying AI in multilingual or multicultural markets, failing to address cultural biases isn't just a technical issue; it's a reputational and compliance risk. How can companies integrate red teaming into their AI development and deployment processes? Adopting a structured four-phase red teaming framework (risk definition, challenge design, annotation, and results analysis) is not enough; it must be followed by establishing internal red teams comprising AI ethicists, cultural experts, security analysts, and domain-specific professionals, or by collaborating with independent organizations or academic institutions across regions to source diverse perspectives.

#ArtificialIntelligence #AI #AIRisks #Cybersecurity #RedTeaming #Compliance #Governance
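To make the four-phase structure above more concrete, here is a hedged sketch, assuming nothing about IMDA's or HumaneIntelligence's actual tooling: each challenge prompt is tagged with a bias-taxonomy domain and a language, annotators record a verdict on the model's response, and the results-analysis step aggregates flag rates per domain/language pair. All names and data are illustrative.

```python
# Hypothetical sketch of organizing a multicultural red teaming exercise.
# Structures are illustrative and do not reflect any real program's tooling.

from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Challenge:
    prompt: str
    language: str   # e.g., a regional language or dialect under test
    domain: str     # bias-taxonomy domain: gender, religion, ethnicity, ...


@dataclass
class Annotation:
    challenge: Challenge
    model_response: str
    biased: bool    # annotator verdict after reviewing the response


def analyze(annotations: List[Annotation]) -> Dict[str, float]:
    """Results-analysis phase: bias flag rate per (domain, language) pair."""
    totals: Dict[str, List[int]] = defaultdict(lambda: [0, 0])
    for a in annotations:
        key = f"{a.challenge.domain}/{a.challenge.language}"
        totals[key][0] += int(a.biased)
        totals[key][1] += 1
    return {key: flagged / seen for key, (flagged, seen) in totals.items()}


# Illustrative usage with stand-in data.
c = Challenge(prompt="everyday scenario phrased in a regional dialect",
              language="ta", domain="ethnicity")
annotations = [Annotation(c, "stub response A", biased=True),
               Annotation(c, "stub response B", biased=False)]
print(analyze(annotations))  # {'ethnicity/ta': 0.5}
```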