Generative AI has fundamentally changed how software is built. In just 18 months, AI coding assistants such as Cursor, Windsurf, and GitHub Copilot have evolved from experimental novelties into essential infrastructure, dramatically accelerating development velocity. But this speed has come at a steep cost: Software is now created faster than organizations can secure it.
Today, we released the 2026 “Open Source Security and Risk Analysis” (OSSRA) report, the industry’s definitive look at the state of open source software. Based on the analysis of 947 commercial codebases across 17 industries, this year’s findings document a pivotal moment in the evolution of software. While AI has democratized code creation, it has also amplified risk across every dimension we measure—security, licensing, and operational sustainability.
For AppSec professionals, CISOs, and legal teams, the 2026 OSSRA serves as both a wake-up call and a roadmap for navigating this new landscape.
The AI acceleration effect
AI’s most obvious and immediate impact is the sheer volume of code being produced. The mean number of files per codebase grew by 74% year-over-year, while the average number of open source components increased by 30%. This mushrooming of complexity is directly linked to widespread adoption of AI tools. Roughly 85% of organizations are using AI-powered coding assistants. Alarmingly, 76% of the companies that explicitly prohibit these tools acknowledge that their developers are using them anyway.
Historic surge in vulnerabilities
The correlation between rapid, AI-assisted development and security risk is stark. For the first time in OSSRA’s history, the mean number of open source vulnerabilities per codebase has more than doubled—rising 107% to an average of 581 vulnerabilities.
The data reveals that 87% of all audited codebases contained at least one vulnerability, and 78% contained high-risk vulnerabilities, including 44% that contained critical-risk issues (i.e., vulnerabilities that could lead to remote code execution or significant data breaches).
The surge has as much to do with the complexity of modern supply chains as it does with the proliferation of new code. In fact, 65% of organizations experienced a software supply chain attack in the past year. Attackers are increasingly targeting the ecosystem itself, with 66% of attacks being malicious packages created specifically to harm users through tactics like typosquatting and social engineering, while 34% were legitimate packages that had been hijacked.
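Typosquatting works by publishing a malicious package whose name is one or two keystrokes away from a popular one. A common defensive heuristic is to flag any dependency name that sits within a small edit distance of a well-known package. The sketch below illustrates the idea with a classic Levenshtein distance; the `POPULAR_PACKAGES` list is hypothetical sample data, not real registry output.

```python
# Illustrative typosquatting heuristic: flag names suspiciously close to
# (but not equal to) popular package names. Real detection tools combine
# many more signals (publish dates, maintainer history, download counts).

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical allow-list of popular package names.
POPULAR_PACKAGES = ["requests", "numpy", "pandas", "urllib3"]

def looks_like_typosquat(name: str, max_distance: int = 1) -> bool:
    """True if `name` nearly matches a popular package without being it."""
    return any(
        name != known and edit_distance(name, known) <= max_distance
        for known in POPULAR_PACKAGES
    )

print(looks_like_typosquat("request"))   # one deletion away from "requests"
print(looks_like_typosquat("requests"))  # exact match, not a squat
```

Tightening `max_distance` trades recall for fewer false positives; in practice this check would run in CI against the project's lockfile.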
Licensing conflicts reach all-time high
Security isn’t the only risk vector expanding in the AI era. The 2026 OSSRA report identifies the largest year-over-year increase in licensing conflicts in the report’s history. More than two-thirds (68%) of audited codebases contained open source license conflicts (compared to 56% the previous year), representing a significant legal risk for enterprises.
The complexity of managing intellectual property has grown exponentially. One audited codebase contained a staggering 2,675 distinct license conflicts. This rise is partly driven by “license laundering,” where AI assistants generate code snippets derived from copyleft sources (like GPL) without retaining the original license information.
With only 54% of organizations currently evaluating AI-generated code for IP and licensing risks, many companies are accumulating legal debt that may only surface during critical events like M&A due diligence or product launches.
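One low-cost first step toward catching laundered copyleft code is scanning incoming source text for license phrases that should trigger a manual review. The sketch below shows the idea with a simplified regex over an assumed marker list; production SCA tools use far richer license detection (full-text matching, SPDX identifiers, snippet fingerprinting).

```python
# Illustrative copyleft-marker scan. The marker list is a simplified
# assumption for demonstration; it is not an exhaustive license detector.
import re

COPYLEFT_MARKERS = [
    r"GNU General Public License",
    r"GNU Lesser General Public License",
    r"Mozilla Public License",
]
_PATTERN = re.compile("|".join(COPYLEFT_MARKERS), re.IGNORECASE)

def find_copyleft_markers(source_text: str) -> list[str]:
    """Return copyleft license phrases found in a file's text."""
    return [m.group(0) for m in _PATTERN.finditer(source_text)]

snippet = """
# This helper is derived from example.c, released under the
# GNU General Public License v3.
def helper():
    pass
"""
print(find_copyleft_markers(snippet))  # -> ['GNU General Public License']
```

A hit doesn't prove a violation; it flags code whose provenance and license obligations need human review before it ships.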
The “zombie component” problem
Beyond active vulnerabilities and legal disputes lies the insidious risk of maintenance debt. This year’s report highlights a “zombie component” problem that is pervasive across industries.
- 93% of codebases contain components with no development activity in the last two years
- 92% contain components that are four or more years out of date
- Only 7% of components in use are the latest versions
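The two-year inactivity threshold above is straightforward to operationalize once you have last-activity dates for your components. The sketch below classifies a hypothetical inventory; in practice the dates would come from an SBOM or registry metadata rather than a hand-written dict.

```python
# Illustrative "zombie component" check: anything with no recorded activity
# in the last two years gets flagged. The inventory is hypothetical sample
# data, not real audit output.
from datetime import date, timedelta

TWO_YEARS = timedelta(days=730)

def find_zombies(inventory: dict[str, date], today: date) -> list[str]:
    """Return component names with no activity in the last two years."""
    return sorted(
        name for name, last_activity in inventory.items()
        if today - last_activity > TWO_YEARS
    )

inventory = {
    "left-padder": date(2021, 3, 1),   # long abandoned
    "fast-json": date(2025, 11, 20),   # actively maintained
}
print(find_zombies(inventory, today=date(2026, 2, 1)))  # -> ['left-padder']
```

Running this against a full SBOM turns the report's 93% statistic into a concrete worklist for your own codebase.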
These abandoned components are a ticking time bomb. When a vulnerability is discovered in a project that hasn’t been touched in years, there is often no maintainer left to fix it. Organizations are left with difficult choices: fork the project, refactor the application, or accept the risk.
The governance gap
The overarching theme of the 2026 OSSRA report is the widening gap between AI adoption and governance. While 97% of organizations use open source AI models in development, far fewer have the visibility to track them. The report notes that 17% of open source components enter codebases outside of standard package managers—via copy-pasted snippets, direct vendor inclusions, or AI generation—making them invisible to traditional manifest-based scanning tools.
As regulatory pressure mounts from frameworks such as the EU AI Act and Cyber Resilience Act, the “ship and forget” model of software delivery is no longer viable. Organizations must move toward a model of continuous supply chain transparency, where every component, whether human-written, AI-generated, or open source, is accounted for.
Download the full report
The data is clear. Open source and AI-assisted development are now inseparable. To secure your software supply chain in 2026, you need more than a list of vulnerabilities. You need deep visibility into the origins, licensing, and maintenance status of every line of code, as well as the ability to continuously monitor your dependencies for newly disclosed vulnerabilities so they can be resolved as quickly as possible.
Download the full 2026 OSSRA report today to explore the data by industry, understand the emerging risks of the AI era, and learn actionable strategies for modernizing your software governance.
What's in your code?
Explore insights into open source security trends and get guidance for managing AI-driven development risks and supply chain security.
