🗞️ A must-read report by CyberArk on a burning issue: identity security, a decisive factor in our ability to restore digital trust.

🔹 "Identity is now the primary attack surface." Defenders must secure every identity, human and machine,
🔹 with dynamic privilege controls, automation, and AI-enhanced monitoring,
🔹 and prepare now for LLM abuse and quantum disruption.

Machine identities are the fastest-growing attack surface
🔹 Their growth outpaces human identities 45:1.
🔹 Nearly half of machine identities access sensitive data, yet two-thirds of organizations don't treat them as privileged.

Quantum readiness is urgent
🔹 Quantum computing will break today's cryptography (RSA, TLS, identity tokens).
🔹 Transition planning to quantum-safe algorithms must start now, even before standards are finalized.

Large Language Models bring risks of prompt injection, data leakage, and misuse of AI agents, so organizations must treat them as a new class of machine identity requiring monitoring, access controls, and secrets management.

🧰 What can we do?

⚒️ 1/ Implement Zero Standing Privileges (ZSP)
• Remove always-on entitlements; grant access dynamically and just-in-time (a minimal sketch follows below).
• Minimize lateral movement by revoking privileges once tasks are complete.

👥 2/ Secure the full spectrum of identities
• Differentiate controls for workforce, IT, developers, and machines.
• Prioritize machine identities: vault credentials, rotate secrets, and eliminate hard-coded keys.

🛡️ 3/ Embed intelligent privilege controls
• Apply session protection, isolation, and monitoring to high-risk access.
• Enforce least privilege on endpoints; block or sandbox unknown apps.
• Deploy Identity Threat Detection & Response (ITDR) for continuous monitoring.

♻️ 4/ Automate identity lifecycle management
• Use orchestration to onboard, provision, rotate, and deprovision identities at scale.
• Relieve staff from manual tasks, counter skill shortages, and improve compliance readiness.

5/ Align security with business and regulatory drivers
• Build an "identity fabric" across IAM, PAM, cloud, SaaS, and compliance.
• Tie metrics (KPIs, ROI, cyber insurance conditions) to board-level priorities.

6/ Prepare for next-generation threats
• Establish AI/LLM security policies: control access, monitor usage, audit logs.
• Begin phased adoption of post-quantum cryptography to protect long-lived sensitive data.

Enjoy the read.
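To make the Zero Standing Privileges idea in step 1 concrete, here is a minimal sketch in Python. The `IAMClient`, `grant_role`, and `revoke_role` names are placeholders invented for the example, standing in for whatever IAM/PAM API an environment actually exposes; this is an illustration of the pattern, not any vendor's implementation.

```python
# Sketch: just-in-time (JIT) privilege grant with guaranteed revocation.
# IAMClient and its methods are hypothetical placeholders for a real IAM/PAM API.
import time
from contextlib import contextmanager

class IAMClient:
    """Placeholder client; prints instead of calling a real API."""
    def grant_role(self, identity: str, role: str) -> None:
        print(f"GRANT  {role} -> {identity}")

    def revoke_role(self, identity: str, role: str) -> None:
        print(f"REVOKE {role} -> {identity}")

@contextmanager
def just_in_time(iam: IAMClient, identity: str, role: str, ttl_seconds: int = 900):
    """Grant a role only for the duration of a task, then always revoke it."""
    iam.grant_role(identity, role)
    deadline = time.monotonic() + ttl_seconds  # a real broker would also enforce expiry server-side
    try:
        yield deadline  # the privileged task runs inside the with-block
    finally:
        iam.revoke_role(identity, role)  # revoked even if the task raises

if __name__ == "__main__":
    iam = IAMClient()
    with just_in_time(iam, "svc-deploy@corp", "db-admin", ttl_seconds=600):
        print("... privileged task runs here, then access is revoked ...")
```

The point of the context-manager shape is that revocation is not a separate ticket or cleanup job: it is structurally tied to task completion, which is what "zero standing privileges" means in practice.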
Latest data on identity trust challenges
Explore top LinkedIn content from expert professionals.
Summary
The latest data on identity trust challenges highlights how identity—the way organizations confirm who users or machines really are—has become a primary target for cyber risks, fraud, and technical failures. As technology evolves, threats like sophisticated deepfakes, insider risks, and system breakdowns are making it harder for businesses to build and maintain digital trust.
- Strengthen identity controls: Regularly review and update how you grant access to sensitive information, focusing on both staff and automated systems to close gaps that attackers exploit.
- Adopt proactive monitoring: Shift from only detecting problems after they happen to continuously watching for signs of misuse or unusual behavior, especially as new AI-driven threats emerge.
- Prepare for future risks: Start planning how to protect your data against future disruptions like quantum computing and evolving regulations, ensuring your security measures keep pace with technology.
-
The rise of AI-powered fraud reached a critical inflection point in 2024, and the numbers are staggering. Studies from this year paint a sobering picture of our digital landscape.

According to Mitek's 2024 Identity Intelligence Index, 76% of financial institutions report that fraud cases have become more sophisticated, with deepfakes emerging as a primary attack vector. Research from Sift this year reveals that 52% of businesses now face deepfake attacks daily or weekly, creating unprecedented risks to critical communications. A September 2024 Medius study found that 87% of finance professionals admit they would make a payment if "called" by their CEO/CFO, yet 53% have already experienced attempted deepfake scams. Most concerning: iProov's August 2024 research shows that while 70% of industry leaders believe AI-generated attacks will significantly impact their organizations, 62% worry their organizations aren't taking the threat seriously enough.

At Reality Defender, our mission is clear: secure critical communication channels by detecting deepfake impersonations in real time. We're working tirelessly with enterprises to build resilience against this rapidly evolving threat landscape. The trust gap in our AI-powered world is widening. Yet through proactive defense and cutting-edge detection capabilities, we can help organizations interact with confidence in an era of synthetic media.
-
The Coinbase incident is a compelling case study in both the strengths and persistent gaps of modern identity security. Their 8-K filing highlights sophisticated detection capabilities, but the core question remains: how do we prevent authorized users from becoming insider threats?

This is just the latest example of what the latest threat reports from CrowdStrike, Expel, Verizon, and Cisco have all highlighted: identity is the new battleground. Both nation-state and financially motivated attackers now use the same playbook: targeting credentials, exploiting trusted access, and moving laterally at unprecedented speed.

My key takeaways:

▪️ Detection isn't enough. Despite identifying unauthorized access and terminating the compromised employees, the damage was already done. Modern identity security must "shift left", moving from reactive detection to proactive prevention.

▪️ The human element is our biggest challenge. No technical control can fully stop staff from being recruited by threat actors. That's why we need:
➖ Continuous behavioral monitoring, not just point-in-time checks
➖ Dynamic access adjustments based on real-time risk signals
➖ Zero standing privileges for high-risk functions

▪️ Mapping access to sensitive data is paramount. It's not enough to identify excessive permissions or access to internal resources; organizations must be able to map every user and non-human identity to the specific sensitive data they can reach (a minimal sketch of such a mapping follows below). As the Coinbase breach shows, data like government ID images, masked SSNs, and financial records should be so tightly controlled that, in theory, no one has standing access unless absolutely necessary.

▪️ The financial impact is real. With an estimated $180M–$400M at stake, identity security clearly deserves executive-level focus. Prevention costs far less than breach response.

And perhaps most importantly: transparency in security isn't just about public statements; it's about having the controls and visibility to know exactly who has access, when, and why. The future of identity security will require balancing trust with continuous verification, protecting both assets and people.

References:
- https://lnkd.in/ekiH4fbu
- https://lnkd.in/eMu5UfPn
- https://lnkd.in/eCkU7JRj

#identitysecurity #cybersecurity #zerotrust #infosec
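To illustrate that mapping point, here is a hedged sketch in plain Python. The record layouts, resource names, and classification labels are invented for the example; in practice they would come from an IAM entitlement export joined with a data-classification inventory.

```python
# Sketch: map each identity (human or non-human) to the sensitive data classes
# it can reach, and flag standing access for review. All records are invented.
from collections import defaultdict

entitlements = [
    {"identity": "alice@corp",      "resource": "kyc-images",  "standing": True},
    {"identity": "svc-reporting",   "resource": "payments-db", "standing": True},
    {"identity": "bob@corp",        "resource": "wiki",        "standing": True},
    {"identity": "svc-support-bot", "resource": "kyc-images",  "standing": False},
]

classification = {
    "kyc-images":  "sensitive",   # e.g. government ID images
    "payments-db": "sensitive",   # e.g. financial records, masked SSNs
    "wiki":        "internal",
}

def sensitive_access_map(entitlements, classification):
    """Return {identity: [resources]} limited to standing access to sensitive data."""
    out = defaultdict(list)
    for e in entitlements:
        if e["standing"] and classification.get(e["resource"]) == "sensitive":
            out[e["identity"]].append(e["resource"])
    return dict(out)

if __name__ == "__main__":
    for identity, resources in sensitive_access_map(entitlements, classification).items():
        print(f"REVIEW: {identity} has standing access to {resources}")
```

Even this toy join makes the review question precise: every line it prints is a candidate for just-in-time access rather than a permanent entitlement.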
-
Digital ID: Security Promises, Privacy Risks. As the UK moves toward implementing a national Digital ID system, the debate extends far beyond questions of immigration or employment verification. The real challenge lies in how citizen data will be stored, shared, and potentially monetized.

The Electoral Commission allowed 40 million citizens' data to be exfiltrated between August 2021 and October 2022 via insecure servers; a Digital ID built on similar foundations would continue to enable such data exfiltration.

Recent independent research into Multiverse (owned by Euan Blair, son of former UK Prime Minister Tony Blair), a company that already holds lucrative government contracts and is connected to large-scale data initiatives, shows widespread and persistent security weaknesses, from weak key-exchange configurations to insecure DNS. Such basic security and technical lapses highlight how unprepared many digital service providers remain when entrusted with sensitive information, and the recklessness and negligence of the Government in entrusting it to them. If Digital ID platforms are built atop insecure infrastructure, they will become massive repositories of exploitable data. Such cyber incidents are easily foreseeable and preventable.

Without strong oversight, citizens' personal information could become a valuable commodity, traded or leveraged for profit under the guise of "innovation." Digital identity systems must be built on verifiable security, clear limits on data use, and public accountability, not commercial incentives, VIP lanes, or unchecked and insecure data collection and storage.
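As a concrete illustration of the kind of basic lapse described above, here is a minimal sketch using only the Python standard library that reports which TLS protocol version and cipher suite a public endpoint actually negotiates; the hostname is a placeholder, and a real assessment would go much further (full cipher enumeration, certificate checks, DNSSEC).

```python
# Sketch: report the TLS protocol version and cipher suite a server negotiates.
# Hostname is a placeholder; run only against endpoints you are authorized to test.
import socket
import ssl

def tls_report(hostname: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cipher_name, _cipher_proto, secret_bits = tls.cipher()
            return {
                "host": hostname,
                "protocol": tls.version(),   # e.g. 'TLSv1.3'
                "cipher": cipher_name,       # e.g. 'TLS_AES_256_GCM_SHA384'
                "secret_bits": secret_bits,
            }

if __name__ == "__main__":
    report = tls_report("example.com")
    print(report)
    if report["protocol"] in ("TLSv1", "TLSv1.1"):
        print("WARNING: legacy protocol negotiated; key exchange is likely weak")
```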
-
National Institute of Standards and Technology (NIST) has released its first major update to digital identity guidelines since 2017 – and the timing couldn’t be more important. The threat landscape has changed dramatically in those eight years. Fraudsters are no longer tinkering in basements; they are running organised operations, using generative AI to produce convincing fake IDs at scale. Old frameworks are not enough. The updated NIST guidance is a reminder that identity proofing must be continuous, risk-led, and resilient against new attack vectors. Metrics, behavioural signals, and layered verification have to be baked into the process – not added on later. For many businesses outside government contracts, compliance with NIST may not be mandatory. But in sectors like finance, healthcare, and beyond, these guidelines are fast becoming the benchmark for trust. At GBG Plc, we see the same truth every day: effective identity assurance means moving beyond box-ticking checks to dynamic, risk-based strategies that keep pace with reality. https://lnkd.in/e_BRVtYn
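To give a feel for what "risk-led, layered verification" can look like in code, here is a hedged sketch in Python. The signal names, weights, and thresholds are invented purely for illustration and are not values prescribed by NIST or used by GBG; the point is only that independent layers (document, liveness, device, behavior) feed one risk decision with a step-up path.

```python
# Sketch: combine layered identity-proofing signals into a risk-based decision.
# Signal names, weights, and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class ProofingSignals:
    document_check_passed: bool   # government ID verified
    liveness_score: float         # 0.0 (likely replay/deepfake) .. 1.0 (live)
    device_reputation: float      # 0.0 (suspicious) .. 1.0 (trusted)
    behavioral_anomaly: float     # 0.0 (normal) .. 1.0 (highly anomalous)

def risk_score(s: ProofingSignals) -> float:
    """Higher means riskier; each verification layer contributes independently."""
    score = 0.0
    score += 0.0 if s.document_check_passed else 0.4
    score += (1.0 - s.liveness_score) * 0.3
    score += (1.0 - s.device_reputation) * 0.1
    score += s.behavioral_anomaly * 0.2
    return round(score, 3)

def decision(s: ProofingSignals) -> str:
    r = risk_score(s)
    if r < 0.2:
        return "allow"
    if r < 0.5:
        return "step-up"   # require an additional verification layer
    return "deny / manual review"

if __name__ == "__main__":
    applicant = ProofingSignals(document_check_passed=True, liveness_score=0.55,
                                device_reputation=0.8, behavioral_anomaly=0.3)
    print(risk_score(applicant), decision(applicant))
```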
-
Identity-based attacks are not just still relevant — they're getting more sophisticated. A recent example? Void Blizzard, which leverages a mix of advanced techniques such as AiTM phishing with cookie theft and device code flow abuse. Attackers are shifting from password-based to token-based attacks, bypassing traditional defenses like MFA. This is a wake-up call for every security team to test whether their environment is still optimally protected with the latest features and hardened against the latest attacks.

𝐖𝐡𝐚𝐭 𝐢𝐬 𝐀𝐢𝐓𝐌 𝐩𝐡𝐢𝐬𝐡𝐢𝐧𝐠? During AiTM attacks, tools like Evilginx, Muraena, and Modlishka act as proxies that sit in the middle between a phishing page and the legitimate login page. An AiTM phishing attack involves tricking a user into visiting a legitimate-looking copy of a website so the proxy can collect their credentials and session tokens.

𝐈𝐧 𝐭𝐡𝐢𝐬 𝐛𝐥𝐨𝐠:
- What is AiTM phishing
- New protections and products from Microsoft
- Defender TI in combination with AiTM
- Protections/mitigations against AiTM phishing (Conditional Access, trusted locations, token protection, and many more are tested)
- Token protection (is it protecting?)
- Demos with passwordless phone sign-in, PIM, and more
- Automatic attack disruption with Microsoft 365 Defender
- Defender SmartScreen
- Defender for Endpoint web content filtering
- Defender for Office
- Entra Global Secure Access
- Audit data and alerts/automation in Defender

𝐃𝐢𝐯𝐞 𝐢𝐧𝐭𝐨 𝐭𝐡𝐞 𝐟𝐮𝐥𝐥 𝐛𝐫𝐞𝐚𝐤𝐝𝐨𝐰𝐧 𝐡𝐞𝐫𝐞:
Blog AiTM: https://lnkd.in/eFB7Mtug
Blog device code flow: https://lnkd.in/eR_cKgX9
Blog automatic attack disruption configuration: https://lnkd.in/eAC_9j9f

Identity threats are no longer just about stolen passwords; they're about stolen sessions, stolen tokens, and stolen trust. It is important to make sure your protections are up to date and recently reviewed.

#MicrosoftSecurity #DefenderXDR
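As a rough illustration of the detection side, the sketch below flags sessions where a token issued to one IP address is later used from another, a common tell of AiTM cookie/token theft. It is generic processing of exported sign-in records with invented field names, not Microsoft's API or the blog author's tooling; adapt it to whatever log schema your IdP actually emits.

```python
# Sketch: flag possible AiTM token replay by comparing where a session token was
# issued with where it is later used. Event/field names are invented for the example.
signin_events = [
    {"session_id": "s1", "event": "token_issued", "ip": "203.0.113.10", "user": "alice"},
    {"session_id": "s1", "event": "token_used",   "ip": "203.0.113.10", "user": "alice"},
    {"session_id": "s2", "event": "token_issued", "ip": "198.51.100.7", "user": "bob"},
    {"session_id": "s2", "event": "token_used",   "ip": "192.0.2.99",   "user": "bob"},  # replayed elsewhere
]

def flag_token_replay(events):
    """Return session_ids where a token is used from a different IP than it was issued to."""
    issued_ip = {}
    suspicious = set()
    for e in events:
        if e["event"] == "token_issued":
            issued_ip[e["session_id"]] = e["ip"]
        elif e["event"] == "token_used":
            origin = issued_ip.get(e["session_id"])
            if origin is not None and origin != e["ip"]:
                suspicious.add(e["session_id"])
    return suspicious

if __name__ == "__main__":
    for sid in sorted(flag_token_replay(signin_events)):
        print(f"ALERT: possible AiTM token replay in session {sid}")
```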
-
🚨 Salesforce "ForcedLeak" wasn't just a glitch; it was a warning shot.

A $5 expired domain. A poisoned prompt. And suddenly, sensitive CRM data was exfiltrated from one of the world's largest SaaS platforms. Yes, Salesforce patched it. But the real lesson? Identity and control-plane failures, not just sloppy config, are the disease.

On the No Trust Podcast, three experts broke down the challenge of AI:

🔑 Richard Bird → Identity isn't just for people anymore. Agents, IoT, and service accounts need crisp, enforceable identities or everything built on top inherits risk.

⚖️ George Finney → AI collapses boundaries. When data becomes commands, you've lost the separation between control and data planes. Without enforcement, you're asking for leaks.

🛑 Joshua Woodruff → Identity alone isn't enough. AI agents need segmentation, monitoring, and kill switches from day one. Otherwise, "trusted" agents can still go rogue.

👉 The triad every org adopting AI must hardwire in (a minimal sketch follows below):
- Agent Identity (unique, auditable, revocable)
- Control/Data Plane Separation (policy immune to prompt corruption)
- Governance & Kill Switches (constraints, oversight, real-time monitoring)

💡 ForcedLeak isn't an anomaly. It's a preview of what happens when we half-bake identity and trust agents without limits.

🎧 Hear Richard Bird, George Finney, and Josh Woodruff unpack it on the No Trust Podcast. Search for it in your podcatcher, give it a like, and hit subscribe.
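To make the triad a little more concrete, here is a minimal, hedged sketch in plain Python. Every class, method, and policy name is invented for illustration: each agent gets a unique, revocable identity; authorization is evaluated on a control plane the prompt never touches; and a kill switch makes later checks fail closed.

```python
# Sketch of the triad: unique/revocable agent identity, control-plane policy
# checks outside the prompt/data path, and a kill switch. All names are invented.
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    allowed_actions: frozenset = frozenset()
    revoked: bool = False

class AgentRegistry:
    def __init__(self):
        self._agents: dict[str, AgentIdentity] = {}

    def register(self, name: str, allowed_actions: set[str]) -> AgentIdentity:
        agent = AgentIdentity(name=name, allowed_actions=frozenset(allowed_actions))
        self._agents[agent.agent_id] = agent
        return agent

    def kill(self, agent_id: str) -> None:
        """Kill switch: revoke the identity; every later check fails closed."""
        self._agents[agent_id].revoked = True

    def authorize(self, agent_id: str, action: str) -> bool:
        """Control-plane check: prompt content never gets to edit this policy."""
        agent = self._agents.get(agent_id)
        return bool(agent) and not agent.revoked and action in agent.allowed_actions

if __name__ == "__main__":
    registry = AgentRegistry()
    crm_agent = registry.register("crm-summarizer", {"read_contact"})
    print(registry.authorize(crm_agent.agent_id, "read_contact"))    # True
    print(registry.authorize(crm_agent.agent_id, "export_contacts")) # False: not in policy
    registry.kill(crm_agent.agent_id)
    print(registry.authorize(crm_agent.agent_id, "read_contact"))    # False: revoked
```

The design choice worth noting is that the policy lives in the registry, not in the prompt or the agent's own context, so a poisoned prompt cannot grant itself new actions; it can only ask, and the control plane answers.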