For more than a decade, software security has evolved gradually—new tooling here, a policy tweak there, incremental cultural shifts toward DevSecOps. But with the rise of Generative AI and large language models (LLMs), that era is over. Application security (AppSec) isn’t evolving anymore. It is being fundamentally rewritten.

The BSIMM16 report provides the clearest industrywide snapshot yet of how AI is reshaping software security—across development, testing, compliance, governance, and even organizational culture. The data-driven Building Security in Maturity Model (BSIMM) shows how leading organizations actually build and run their software security programs. Instead of prescribing best practices, it documents 128 real-world software security activities observed across more than 100 firms, giving teams a clear, evidence‑based way to benchmark their maturity and prioritize improvements—especially as AI, supply chain risk, and automation reshape AppSec.

And the message is unmistakable: AI is driving the most significant shift in AppSec since the move to cloud-native architectures.

Organizations that embrace this shift will accelerate innovation and reduce risk. Those that don’t will find themselves facing vulnerabilities they can’t see, threats they don’t understand, and regulatory obligations they can’t meet.


AI is now a first‑class attack surface

For years, developers relied on intuition, experience, and pattern recognition to make secure coding decisions. AI changes this dynamic entirely.

BSIMM16 makes it clear that LLM‑generated code is not secure by default—even if it looks clean, idiomatic, and professional. It often omits crucial security controls or introduces subtle logic vulnerabilities that automated scanners weren’t designed to detect. This creates a paradox: AI accelerates development dramatically, but it also accelerates the introduction of hard‑to‑spot vulnerabilities. As a result, organizations are forced to expand their threat models to include

  • Prompt injection and model manipulation attacks
  • AI‑assisted malicious payload generation
  • Abuse of LLM integrations and data flows
  • New vulnerabilities introduced by both developers and AI

The firms leading the way are already investing in AI‑specific attack intelligence and developing technology‑specific attack patterns that account for this new paradigm.

Governance and compliance are being rebuilt for the AI era

AI isn’t just a technical disruption—it’s a governance disruption.

Regulators around the world are raising expectations for software security, and AI‑driven development is accelerating that pressure. BSIMM16 shows significant growth in security activities that help organizations prove the trustworthiness of their development environments, including

  • Protecting development endpoints
  • Securing build and deployment toolchains
  • Documenting software compliance
  • Defining standards for adopting new technologies—especially AI

The EU Cyber Resilience Act, U.S. government self‑attestation requirements, and similar initiatives worldwide are sending the same message: If AI touches your software, you must be able to prove you built it securely.

Organizations that treat AI as an “experiment” rather than a regulated software component risk falling behind—and falling out of compliance.

Automation is no longer optional—it’s the backbone of AppSec

One of the strongest signals from BSIMM16 is the explosive growth in automation across the software supply chain.

  • SBOM generation surged almost 30%
  • Automated infrastructure security verification rose over 50%
  • Custom security rules for AI‑generated code increased notably
  • Organizations scaled “governance‑as‑code” into CI/CD pipelines

Why? Because manual review simply cannot keep pace with AI‑accelerated development velocity.

AI writes code at machine speed. Security teams cannot defend it at human speed. The future of AppSec belongs to organizations that move from manual enforcement to continuous, automated, verifiable controls.

Security training is becoming real‑time and embedded

BSIMM16 identifies a dramatic cultural shift in training: Traditional classroom education is giving way to short‑form, context‑specific, just‑in‑time learning—a shift driven largely by AI adoption.

The activity “Provide expertise via open collaboration channels” grew 29%, reflecting a move toward

  • Instant access to SMEs
  • Microlearning embedded in tools
  • Training triggered by development behavior

This mirrors how developers use AI: not through long lectures, but through ambient, on‑demand guidance that blends seamlessly into their workflow.

Security knowledge must now move at the same speed as AI‑assisted coding.

The most successful organizations are redesigning their AppSec programs around AI

Perhaps the most compelling insight from BSIMM16 is how leading organizations are restructuring their software security initiatives.

  • They are merging governance and engineering into unified DevSecOps ecosystems. Traditional siloed models can’t handle AI’s velocity.
  • They are empowering security champions to scale expertise. Ninety-six percent of the top BSIMM performers have active champions programs.
  • They are re‑evaluating their entire software inventory—including AI agents, prompts, and training data. AI components are now in scope as first‑class artifacts.
  • They are implementing feedback loops and telemetry‑driven governance. Security becomes an analytics discipline, not just a policy function.
  • They are building secure‑by‑design AI patterns and integrating them early. This includes approved design templates for AI/ML and LLM integrations.

These organizations are not simply “adopting AI.” They are transforming their security programs to enable AI safely and at scale.

The strategic imperative: AI‑ready security programs

AI adoption is not slowing down. Code generation is only the beginning. Soon AI will

  • Generate architectures
  • Orchestrate pipelines
  • Detect and fix real‑time vulnerabilities
  • Manage policy enforcement
  • Participate in incident response

The organizations that thrive will be those that build AI‑ready software security programs today that

  • Anticipate new attack classes
  • Automate aggressively
  • Provide real‑time developer enablement
  • Unify engineering and security
  • Embed governance directly into CI/CD
  • Treat AI as a regulated, auditable component

The BSIMM16 data is unambiguous: AI-driven development requires AI-driven security models. Those that fail to adapt will be left defending systems built faster—and broken faster—than they can secure.
 

Download the full report

Continue Reading

Explore Topics