Risks of Using Malicious Software Packages


Summary

Malicious software packages can compromise sensitive data, introduce security vulnerabilities, and jeopardize the integrity of systems or applications. These risks often stem from unverified code, typosquatting (malicious lookalikes of legitimate software), or AI-generated "hallucinated" dependencies.

  • Verify package authenticity: Always confirm the source of any software package, especially those recommended by AI tools, by checking official repositories or cross-referencing dependencies with trusted sources.
  • Implement security reviews: Establish thorough dependency-review processes to detect malicious code or unknown software in your projects before integration.
  • Educate your team: Train developers to recognize risks like slopsquatting and equip them to scrutinize AI-generated code suggestions carefully.
Summarized by AI based on LinkedIn member posts
  • View profile for Lenin Alevski

    Security Engineer at Google | #RSAC #DEFCON #BSIDES Speaker | Blogger

    2,912 followers

    Did you know two malicious packages on PyPI managed to collect hundreds of downloads before being removed? 🐍💻 Security researchers at Fortinet FortiGuard Labs identified the malware-laden Python packages *zebo* and *cometlogger*, downloaded 118 and 164 times respectively, which were designed to exfiltrate sensitive data from compromised systems. Most downloads originated from the United States, China, Russia, and India.

    *zebo*, a straightforward example of malware, used techniques like hex-encoded strings to hide the command-and-control (C2) server it communicated with over HTTP. It captured keystrokes using the `pynput` library and took hourly screenshots through ImageGrab, storing them locally before uploading them to ImgBB with an API key fetched from the C2 server. To ensure persistence, the malware created batch scripts that added it to the Windows startup folder for automatic execution on reboot.

    *cometlogger* was even more sophisticated, targeting a wide range of information, including cookies, passwords, and tokens from apps such as Discord, Instagram, TikTok, and Steam. It also accessed system metadata, network details, and clipboard content, and terminated browser processes to gain better file access. It included anti-virtual-machine checks to avoid detection during analysis, and its asynchronous task execution allowed it to steal large amounts of data quickly.

    These incidents highlight the risks of using unverified code. While some of these features might appear legitimate in other contexts, their malicious implementation underscores why reviewing third-party packages is critical for developers. Always verify the source before downloading and executing such scripts.

    #Infosec #Cybersecurity #Software #Technology #News #CTF #Cybersecuritycareer #hacking #redteam #blueteam #purpleteam #tips #opensource #cloudsecurity

    ✨ 🔐 P.S. Found this helpful? Tap Follow for more cybersecurity tips and insights! I share weekly content for professionals and people who want to get into cyber. Happy hacking 💻🏴☠️
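    The techniques described above (hex-encoded strings, `pynput` keylogging, startup-folder persistence) leave recognizable traces in source code. As a rough illustration of what a pre-integration review can automate, here is a minimal, stdlib-only sketch that greps a downloaded package tree for those red flags; the pattern list is illustrative and heuristic, not a real malware detector:

```python
import re
from pathlib import Path

# Heuristic red flags drawn from the zebo/cometlogger write-up: hex-encoded
# strings, keylogging/screenshot libraries, dynamic code execution, and
# Windows startup-folder persistence. Illustrative only, not exhaustive.
RED_FLAGS = {
    "long hex blob": re.compile(r"(\\x[0-9a-fA-F]{2}){8,}"),
    "dynamic exec": re.compile(r"\b(exec|eval)\s*\("),
    "keylogger import": re.compile(r"\bimport\s+pynput\b"),
    "screenshot import": re.compile(r"\bImageGrab\b"),
    "startup persistence": re.compile(r"Start Menu.Programs.Startup", re.IGNORECASE),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Return (file, flag) pairs for suspicious patterns in .py files."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for label, pattern in RED_FLAGS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits
```

    A hit is a reason to read the file, not proof of malice; legitimate code uses `eval` and screenshots too, which is exactly the point the post makes about context.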

  • View profile for Farid Abdelkader

    Global Head of Technology Audit and Associate General Auditor // ISACA NY Metropolitan Chapter Immediate Past President

    5,271 followers

    ⚠️ Slopsquatting: AI Hallucinations Fuel a New Supply Chain Threat ⚠️

    Slopsquatting is a new software supply chain attack that exploits AI hallucinations. AI coding tools like ChatGPT or GitHub Copilot sometimes fabricate package or library names that don't actually exist. Attackers are now squatting on these AI-invented names by registering them as real packages, but packing them with malware. It's like typosquatting, except the "typo" comes from the AI, not a human.

    When a developer blindly uses code from an AI that references one of these fictitious libraries, they can inadvertently download the attacker's malicious package, leading to a compromise.

    💡 Pervasiveness & Scale: Researchers found roughly 20% of the dependencies recommended by popular AI code assistants were non-existent hallucinations, yielding over 205,000 fake package names in their study. Worryingly, these hallucinations were often persistent (reappearing across multiple runs) and believable (their names often resembled real packages).

    🔺 Why it Matters: A malicious dependency can poison your software supply chain, compromising your entire application or infecting downstream systems. The risk is amplified by developers' over-trust in AI-generated code. One researcher proved it: they published a package under a name an AI had invented, and it was downloaded over 32,000 times. If that package had been malicious, imagine the fallout.

    🛡️ Mitigation: Staying Safe
    🛡️ Don't trust, verify: Treat AI suggestions as helpful hints, not gospel. Double-check any package or link an AI suggests; verify on the official repositories that it actually exists and is trustworthy.
    🛡️ Implement code review & controls: Require manual review and approval for any new dependency introduced via AI-generated code. Don't add unknown libraries without due diligence.
    🛡️ Use security tooling: Run dependency-scanning tools to catch malicious packages.
    🛡️ Educate and adapt: 👨💻 Train devs: slopsquatting and LLM hallucinations are real threats. 🧠 Mindset: trust AI, but always verify. 🤖 Pro tip: ask the AI whether a package is real; it will often admit it made the name up.

    Bottom Line: AI coding assistants boost productivity, but integrity is key. Slopsquatting shows that a single hallucinated dependency can open the door to malware. 🔒💡

    For more info: csoonline.com, securityweek.com

    Illustration attached: 🚨 Attack flow: a bad actor uploads a fake, AI-hallucinated package (Pkg X); a developer blindly trusts the LLM and installs it; 💥 malicious code gets in. ISACA New York Metropolitan Chapter Tim Wei Teena Eugene Christina Alyssa
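    As a concrete starting point for the "don't trust, verify" advice above, here is a minimal sketch (assuming the public PyPI JSON API at `https://pypi.org/pypi/<name>/json`, where an HTTP 404 means the project does not exist) that checks whether the packages in a requirements list are real. The `exists` callable is injectable so the check can be pointed at an internal mirror or stubbed in tests:

```python
import re
import urllib.error
import urllib.request

def pypi_exists(name: str) -> bool:
    """True if `name` resolves to a real project on PyPI (404 means it doesn't)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def find_hallucinated(requirements: list[str], exists=pypi_exists) -> list[str]:
    """Return requirement names that don't resolve to a known package."""
    missing = []
    for line in requirements:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Crude name extraction: cut at the first extras/version/marker char.
        name = re.split(r"[\s\[<>=!~;]", line, maxsplit=1)[0]
        if name and not exists(name):
            missing.append(name)
    return missing
```

    Note that existence is the weakest possible check: a slopsquatted package passes it by design, so this belongs in front of the human review step, not in place of it.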

  • A bonus post this week 🥳 Here's another great example of how AI is reshaping and expanding the role of CISOs, especially within the supply chain and critical infrastructure sectors. LLMs like ChatGPT, CodeWhisperer, and others are hallucinating non-existent packages when generating code, and attackers are now registering those fake packages (aka "slopsquatting," what a fun name, eh?) to deliver malware into real development pipelines.

    It's a mistake to think of slopsquatting as purely a DevSecOps issue. Developers may be the ones pulling packages, but CISOs are ultimately responsible for identifying the enterprise exposure, recommending controls to reduce the risk, and answering for why more wasn't done if something happens. [Ahh... the life of the modern CISO...] According to an article in SecurityWeek (link in the comments), researchers found over 205,000 hallucinated packages across 16 models, with some open-source LLMs showing hallucination rates above 20%. That's not fringe. That's mainstream.

    So what can a CISO do about it? Some quick recommendations:

    - Mandate an internal mirror for package repos: Enforce the use of internal mirrors or package proxies. These let your security team allowlist vetted dependencies and block packages that haven't been explicitly reviewed, even if hallucinated ones are published upstream.
    - Implement rigorous dependency validation: Establish protocols to verify the authenticity of all third-party packages, particularly those suggested by AI tools. It's not enough to "set it and forget it" with AI; it may be a fast team member, but that doesn't mean it's always the most reliable or competent. Where possible, use tools that cross-reference packages against trusted repositories to detect anomalies.
    - Improve (or start) and tailor your developer training: Educate development teams about the risks of AI-generated code and the importance of scrutinizing suggested dependencies. Encourage a culture of skepticism and verification.
    - Integrate LLM-aware SCA and SBOM enforcement: Update your SCA tools and SBOM policies to flag new, low-trust, or previously unseen packages. This helps catch LLM-influenced packages with low install counts or no public audit trail before they become production vulnerabilities.
    - Issue secure-coding guidelines for LLM-generated code: Publish and stringently enforce internal guidance on using LLMs for code generation, including requirements for validating any dependencies suggested by AI tools. Make this part of your SDLC and annual developer training, and periodically audit for compliance. There is no "annual review" luxury in the age of AI-powered threats.

    As always, I welcome any additional insights or suggestions on how CISOs can be more proactive and empowered in reducing supply chain vulnerabilities. Thoughts? Comments?
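    The internal-mirror recommendation above can be enforced on the client side with a few lines of pip configuration. A minimal sketch, where the mirror host name is a placeholder for your own proxy (Artifactory, Nexus, devpi, or similar):

```ini
# /etc/pip.conf on Linux, %APPDATA%\pip\pip.ini on Windows.
# Route every install through a vetted internal mirror instead of public
# PyPI, so only reviewed packages can enter the build. The host below is
# a placeholder for your own infrastructure.
[global]
index-url = https://mirror.internal.example.com/pypi/simple
timeout = 60
```

    With no fallback index configured, a hallucinated name that exists only upstream simply fails to resolve, which is the desired outcome.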

  • View profile for Matt Kowalczyk

    CEO at EXIT83 Consulting, Inc • 12x Founder • ex-VC • ex-MSFT

    4,986 followers

    🚨 AI Agent Security: Not Quite Ready for Primetime 🤖🔓

    I recently gave a talk to CEOs about AI agents. One question that came up was how secure these agents really are. My response: it is currently the wild west, and I wouldn't give agents access to PII... yet. While AI agents are revolutionizing business operations, automating tasks from customer support to supply chain management, their rapid adoption often outpaces the implementation of robust security measures. A concerning development in this space is the emergence of a supply chain exploit known as slopsquatting (or package hallucination). It was discovered in 2024 by researcher Bar Lanyado, but reared its ugly head again last week.

    🔍 What is Slopsquatting? Slopsquatting is a novel cyberattack that exploits the tendency of large language models (LLMs) to "hallucinate", that is, to generate plausible but non-existent software package names in their code suggestions. Attackers monitor these hallucinations and register malicious packages under the invented names. When developers, trusting the AI-generated code, install these packages, they inadvertently introduce malware into their systems. A comprehensive study analyzing 576,000 code samples from various AI models revealed that approximately 19.7% of the suggested packages didn't exist. Notably, open-source models had a higher hallucination rate (21.7%) compared to commercial ones (5.2%).

    🛡️ How to Harden Your AI Agents Against Exploitation. At EXIT83 Consulting we mitigate the risks associated with slopsquatting and other threats by following these best practices:
    • Dependency verification: Rigorously validate and continuously monitor third-party dependencies. Use tools that can detect and alert you to suspicious or unverified packages.
    • Secure frameworks: Use security-focused AI frameworks and sandboxes that isolate AI agents from critical system components, reducing the potential impact of a compromised package.
    • Least-privilege access: Limit AI agents' system permissions to only essential operations, minimizing potential damage if an agent is compromised.
    • Educate development teams: Ensure developers are aware of the risks of AI-generated code and encourage them to verify the existence and integrity of suggested packages before use.

    Securing AI agents isn't optional; it's essential. Agentic development is in its infancy, but it will become more secure as adoption grows. 🔗 Do you want to explore how AI agents can be securely added to your enterprise workflows? Set up a meeting with us today to learn more. 👉 https://e83.us/ai 👈 #AI #cybersecurity #AIAgents
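    One lightweight way to operationalize the dependency-verification and least-privilege points above is to gate agent-proposed dependencies behind an approved allowlist before anything gets installed. A minimal sketch; the allowlist contents and the lowercase-name convention are assumptions you would adapt to your own process:

```python
def vet_suggestions(suggested: list[str],
                    allowlist: set[str]) -> dict[str, list[str]]:
    """Split agent-proposed dependencies into approved and needs-review.

    `allowlist` holds lowercase names your security team has already vetted;
    anything else is parked for human review instead of being installed.
    """
    approved, needs_review = [], []
    for name in suggested:
        (approved if name.lower() in allowlist else needs_review).append(name)
    return {"approved": approved, "needs_review": needs_review}
```

    The agent only ever installs from the `approved` list; the `needs_review` list becomes a ticket for a human, which keeps a hallucinated name from turning into an install.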

  • View profile for Gareth Young

    Founder & Chief Architect at Levacloud | Delivering Premium Microsoft Security Solutions | Entrepreneur & Technologist

    7,940 followers

    In a recent alert from Microsoft's Security Team, a concerning trend has emerged involving financially motivated threat actors exploiting the App Installer in Windows to distribute malware. Since mid-November 2023, groups such as Storm-0569, Storm-1113, Sangria Tempest, and Storm-1674 have been identified misusing the ms-appinstaller URI scheme to push malicious software, including ransomware. These cybercriminals have been deploying signed malicious MSIX packages via websites linked through malicious ads for popular software, alongside phishing efforts through Microsoft Teams, exploiting the ms-appinstaller protocol's ability to bypass security measures like Microsoft Defender SmartScreen.

    Here are a few things you can do to proactively protect yourself against this threat:

    Strengthen your authentication
    • Deploy phishing-resistant authentication: Implement multi-factor authentication (MFA) that's resistant to phishing, such as hardware security keys or biometrics, for an added layer of security.
    • Use Conditional Access: Apply Conditional Access authentication strength to require phishing-resistant authentication for both employees and external users, especially for access to critical applications.

    Enhance Teams security
    • Educate on external communication: Train Microsoft Teams users to recognize and verify 'External' tags in communications and to exercise caution in sharing information. Ensure they know not to share account details or authorize sign-in requests via chat.
    • Follow best practices for Teams: Apply Microsoft's security best practices for Teams to protect your users within this collaborative platform.

    User education and vigilance
    • Review sign-in activity: Encourage users to regularly review their sign-in activity and to report any suspicious attempts as unrecognized.
    • Promote safe browsing: Advocate the use of Microsoft Edge and other browsers that support Microsoft Defender SmartScreen to help identify and block malicious sites and downloads.
    • Validate software publishers: Educate users on the importance of verifying the legitimacy of software publishers before installing any software.

    Utilize Microsoft Defender capabilities
    • Configure Microsoft Defender for Office 365: Enable Safe Links to ensure URLs are scanned on click, providing additional protection against malicious links in emails, Teams, SharePoint Online, and other Microsoft Office applications.
    • Enable PUA protection: Activate Potentially Unwanted Application (PUA) protection in block mode to prevent unwanted software downloads.
    • Implement attack surface reduction rules: Turn on rules that reduce the attack surface, such as blocking executable files that don't meet certain criteria, and implement advanced protections against ransomware.

    By adopting these comprehensive measures, organizations can significantly enhance their security posture. Learn more in the comments! #CybersecurityAwareness #DigitalDefense #MicrosoftSecurity
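    Beyond the user-facing controls above, the ms-appinstaller handler can also be switched off machine-wide by policy. A sketch of the corresponding registry setting; the path and value are intended to mirror the "Enable App Installer ms-appinstaller protocol" Group Policy setting, so verify against current Microsoft documentation before deploying at scale:

```reg
Windows Registry Editor Version 5.00

; Disable the ms-appinstaller protocol handler machine-wide so browser links
; cannot hand installs straight to App Installer. Deploy via GPO where
; possible; this .reg form is for illustration.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\AppInstaller]
"EnableMSAppInstallerProtocol"=dword:00000000
```

    With the protocol disabled, ms-appinstaller links fall back to a normal download, which keeps SmartScreen and other download-time checks in the path.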

  • View profile for Cory Wolff

    Director | Proactive Services at risk3sixty. We help organizations proactively secure their people, processes, and technology.

    4,321 followers

    Cybersecurity Exec Brief: China Nexus Threat Actors Hammer at the Doors of Top-Tier Targets, and Supply-Chain Attack Hits npm Packages with 960k Weekly Downloads

    ➡️ China nexus threat actors hammer at the doors of top-tier targets. A fresh report from SentinelLabs reveals a sharp escalation in cyber-espionage operations attributed to Chinese state-linked actors. Two threat groups, including the known BackdoorDiplomacy and an as-yet unnamed cluster wielding a novel backdoor dubbed TAMECAT, have launched targeted attacks against top-tier organizations in the government, telecom, and defense sectors. The campaigns employed sophisticated custom malware, DNS-tunneled command-and-control channels, and exploited vulnerabilities in widely used enterprise technologies like Fortinet and Ivanti. Analysts say the tactics reflect a growing trend in China's cyber doctrine: favoring modular implants, evasive payloads, and an aggressive pursuit of long-term access to critical infrastructure. 🔗 More reading: https://lnkd.in/esBn42c6

    ➡️ Supply-chain attack hits Gluestack npm packages with 960k weekly downloads. A coordinated supply chain attack has compromised several npm packages published by Gluestack, a popular open-source software provider whose modules collectively rack up nearly a million downloads per week. According to BleepingComputer, attackers hijacked access to inject malicious code that silently siphons environment variables, data often used for application secrets and cloud credentials. The breach is believed to stem from compromised maintainer accounts and bears the hallmarks of LofyGang, a group previously linked to similar open-source package attacks. Security experts warn that the ripple effect of this breach could be significant, as thousands of developers unknowingly incorporated the malicious code into their projects. 🔗 More reading: https://lnkd.in/eQFqS-nk
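    Because payloads like the one described above typically run at install time and read environment variables, one blunt but effective project-level mitigation is an `.npmrc` that disables lifecycle scripts and pins exact versions. A sketch; note that `ignore-scripts` also blocks legitimate postinstall builds, so test your dependency tree with it enabled:

```ini
# Project-level .npmrc: shrink the blast radius of a hijacked dependency.

# Block install-time lifecycle scripts (preinstall/postinstall), a common
# payload delivery point in npm supply-chain attacks.
ignore-scripts=true

# Record exact versions instead of ^ranges, so installs from a reviewed
# lockfile (npm ci) stay reproducible.
save-exact=true
```

    Combined with `npm ci` against a committed lockfile, this prevents a newly hijacked version from being pulled in silently on the next install.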

  • View profile for Sammy Basu

    CISO & Founder, Careful Security | Author of CISO Wisdom

    5,777 followers

    Security Alert: Malicious PyPI Package Steals AWS Keys. A fake Python package, "fabrice", on PyPI has stolen AWS credentials from developers for years, racking up over 37,000 downloads by typosquatting the popular "fabric" package. The package runs hidden scripts on Linux and downloads malicious executables on Windows, using the AWS SDK to steal credentials and masking its activity through a VPN. To reduce risk, developers should verify packages and consider AWS IAM for secure access management. #Cybersecurity #AWS #Python #DataSecurity #DevSecOps https://lnkd.in/gwy2CXwP
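    The fabric/fabrice pair differs by a single character, which is exactly what an edit-distance check catches. A small sketch that flags candidate installs sitting suspiciously close to a list of popular package names; the popularity list and the distance threshold of 2 are assumptions you would tune:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def likely_typosquats(candidate: str, popular: list[str],
                      max_dist: int = 2) -> list[str]:
    """Popular package names that `candidate` sits suspiciously close to."""
    return [p for p in popular
            if p != candidate
            and edit_distance(candidate.lower(), p.lower()) <= max_dist]
```

    A non-empty result doesn't prove malice, but it's a cheap pre-install signal that a name deserves a second look before it lands in a requirements file.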

  • View profile for Idan Gour

    Co-Founder & CTO, Astrix Security

    7,652 followers

    A recently discovered threat is targeting AWS credentials, right from a widely used package on PyPI 🕵️♂️ Researchers recently spotted a malicious Python package, fabrice, designed to mimic the legitimate SSH package, fabric. Since 2021, fabrice has been downloaded over 37,000 times - evading detection while stealing AWS credentials and running backdoor commands. This is another reminder that attackers are getting more creative in exploiting non-human identities like API keys and tokens of SaaS, IaaS and on-prem technologies, often staying under the radar for years. These kinds of incidents make it clear that security solutions need to go beyond management workflows to actively monitor and detect anomalies in real time. This is why, at Astrix Security, we focus on securing NHIs by building behavioral baselines for keys, tokens, and applications and monitoring for unusual patterns that indicate unauthorized access attempts. Curious to hear what others think: with so many packages out there, how can security teams realistically keep up with this level of monitoring? #nonhuman #identity #behavior

  • View profile for Varun Badhwar

    Founder & CEO @ Endor Labs | Creator, SVP, GM Prisma Cloud by PANW

    21,961 followers

    Open source powers most software. And for good reason. But there are risks: that's why we assembled the top 10 security and operational risks into a consolidated list. The OSS Top 10 represent the biggest challenges security and engineering teams face when leveraging reusable code:

    1/ Known Vulnerabilities: A component version may contain vulnerable code, accidentally introduced by its developers. Vulnerability details are publicly disclosed, e.g., through a CVE. Exploits and patches may or may not be available.
    2/ Compromise of Legitimate Package: Attackers may compromise part of an existing legitimate project, or its distribution infrastructure, in order to inject malicious code into a component, e.g., by hijacking the accounts of legitimate project maintainers or exploiting vulnerabilities in package repositories.
    3/ Name Confusion Attacks: Attackers may create components whose names resemble those of legitimate open-source or system components (typosquatting), suggest trustworthy authors (brand-jacking), or play with common naming patterns in different languages or ecosystems (combo-squatting).
    4/ Unmaintained Software: A component or component version may no longer be actively developed, so patches for functional and non-functional bugs may not be provided in a timely fashion (or at all) by the original open source project.
    5/ Outdated Software: A project may use an old, outdated version of a component even though newer versions exist.
    6/ Untracked Dependencies: Project developers may not be aware of a dependency on a component at all, e.g., because it is not part of an upstream component's SBOM, because SCA tools are not run or do not detect it, or because the dependency is not established through a package manager.
    7/ License Risk: A component or project may have no license at all, or one that is incompatible with the intended use or whose requirements cannot be met.
    8/ Immature Software: An open source project may not apply development best practices, e.g., it may not use a standard versioning scheme or may lack a regression test suite, review guidelines, or documentation. As a result, a component may not work reliably or securely.
    9/ Unapproved Change: A component may change without developers being able to notice, review, or approve the change, e.g., because the download link points to an unversioned resource, because a versioned resource has been modified or tampered with, or because of an insecure data transfer.
    10/ Under/Over-sized Dependency: A component may provide very little functionality (e.g., npm micro packages) or a lot of functionality (of which only a fraction may be used).

    Bottom line: There are good reasons why open source powers the digital economy, but that isn't to say we can rely on it without scrutiny.
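    Risk 6 (untracked dependencies) can be approximated for Python projects by diffing the modules a codebase actually imports against what it declares. A stdlib-only sketch; note that import names and distribution names don't always match (e.g. `PIL` vs `Pillow`), so treat the output as a review queue rather than a verdict:

```python
import ast
from pathlib import Path

def top_level_imports(root: str) -> set[str]:
    """Collect top-level module names imported by any .py file under `root`."""
    found = set()
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(errors="ignore"))
        except SyntaxError:
            continue  # skip files that don't parse (templates, py2 relics)
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                found.add(node.module.split(".")[0])
    return found

def untracked(imports: set[str], declared: set[str], stdlib: set[str]) -> set[str]:
    """Imports that are neither declared dependencies nor standard library."""
    return imports - declared - stdlib
```

    Anything left in the result is code your project runs but no manifest accounts for, which is precisely the blind spot risk 6 describes.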

  • View profile for Mark Thomasson

    Evangelist/Sr Consultant/ Trusted Advisor/CTI Analyst

    11,531 followers

    Tracking emerging trends in nation-state activity is crucial for defenders and CTI roles. Recently, Tzachi Zornstain, Yehuda Gelb, and the team at Checkmarx uncovered a campaign by a new DPRK group utilizing the public npm registry to distribute malicious packages. Please share this with those in AppSec and developer roles in your organization.

    Key takeaways from this discovery:
    - Moonstone Sleet, a newly identified North Korean threat actor, has emerged, focusing on the open-source software supply chain using tactics similar to other known North Korean groups. Moonstone Sleet's strategy involves spreading malware through malicious npm packages on the public npm registry, which poses a significant risk to developers.
    - The activities of Moonstone Sleet, Jade Sleet, and other North Korean state-sponsored actors emphasize the persistent threat to the open-source ecosystem.

    For more details on this alarming development, check out the article here: https://lnkd.in/gJTDYFSe
