The landscape of software development is being reshaped by AI, creating both excitement and apprehension. At Black Duck, we’re committed to helping organizations navigate this new frontier. Our latest research dives into the top concerns and surprising benefits of AI-enabled development, and offers insights into where your peers are investing their resources and how you can optimize your strategy for future success.
Video Companion: Get the full picture on AI-enabled development and pipeline strategies by watching the accompanying video.
This year’s top concerns about AI-enabled development and pipelines highlight the scale of the challenge.
These concerns are long-standing in DevSecOps: vulnerabilities, compliance, and IP protection. It stands to reason that AI-enabled development and generative AI coding assistants face the same issues, because these risk categories apply to modern software in general; AI is simply the mechanism that produces the potentially affected code.
Interestingly, a significant portion of survey respondents reported no significant security concerns about AI coding assistants at all. Our data suggests this stance correlates more strongly with a belief that AI introduces no new risks than with genuine confidence in security preparedness. That is a critical oversight organizations cannot afford.
While risks are prominent, the benefits of AI in development are equally compelling.
These benefits often directly empower developers, streamlining their daily work. While security teams benefit downstream from reduced backlogs, it's crucial for DevSecOps leaders to determine their primary focus: Are you building AI-enabled development pipelines or AI-enabled security workflows? The good news is that, with effective collaboration and deliberate integration, you can often optimize both.
Given the complexities of AI-enabled development and DevSecOps, where are organizations investing their time, money, and people over the next 12 months? The plurality of respondents are focusing on better developer workflow integrations for their AppSec tools. This suggests a growing recognition that, as AI fosters semiautonomous development, AppSec needs robust detection and mitigation controls embedded directly within the pipeline to prevent the downstream propagation of risk.
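Embedding detection and mitigation controls directly in the pipeline can be as simple as a CI step that fails the build when a scan reports findings above a severity threshold. The sketch below illustrates the idea; the finding format and the `gate` helper are hypothetical, stand-ins for whatever JSON report your actual AST tool emits.

```python
# Minimal sketch of an in-pipeline security gate (hypothetical report
# format). A real CI job would feed this the JSON output of your
# AppSec scanner and use the exit code to block or allow the merge.
import sys

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, threshold="high"):
    """Return True if the build may proceed, False if it should fail."""
    limit = SEVERITY_ORDER[threshold]
    blocking = [f for f in findings
                if SEVERITY_ORDER.get(f.get("severity", "low"), 0) >= limit]
    for f in blocking:
        # Surface blocking findings in the CI log for the developer.
        print(f"BLOCKED: {f['id']} ({f['severity']}) in {f['file']}")
    return not blocking

if __name__ == "__main__":
    demo = [{"id": "CVE-2024-0001", "severity": "critical",
             "file": "app/auth.py"}]
    sys.exit(0 if gate(demo) else 1)
```

Running the gate early, on every commit, is what keeps semiautonomous AI-generated changes from propagating risk downstream instead of catching it at release time.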
However, a surprising disconnect emerged: Despite a prevalence of noisy application security testing (AST) results, we don't see a similar investment in reducing false positives or increasing testing tool accuracy. This could imply a focus on speed over precision, or a reliance on later-stage triage.
Key question: Does your investment strategy align with your actual pain points? If pipeline slowdowns are an issue, better integration and test/fix speed are logical investments. But if noisy results are eroding efficiency, are you truly addressing the root cause?
Our data revealed several counterintuitive trends that demand discussion.
Key action: As you advocate for change, consider not just how to increase test coverage, but how to optimize tests for your pipeline as well. Ensure that your AST program supports development and release deadlines to reduce the likelihood of compromises or developers circumventing controls, which ultimately leads to decreased coverage.
While security risks are heavily emphasized, remember that AI coding assistants also introduce highly scalable risks to your valuable intellectual property. An autonomous agent introducing reciprocally licensed open source snippets is a very real concern. Yet our data clearly shows that license compliance isn't a major concern for many security, development, and DevOps teams.
Given that developers are using AI trained on open source, it’s imperative to get IP protection on your radar. And the ability to ensure and maintain license compliance must keep pace with AI coding assistants and semiautonomous development and release pipelines.
Key action: Start the dialogue within your organization. Emphasize that your software's value—and your business value—depends on keeping detrimental licenses out of your code, whether from dependencies or AI-generated snippets of licensed third-party assets. Explore how your existing AppSec tools can identify licensed components and open source code, extending your investment to protect both security and IP without extra effort.
At Black Duck, we provide the comprehensive visibility and control needed to manage both security vulnerabilities and IP risks in AI-enabled development and build pipelines. Take proactive steps to prevent software security and license risks in ways that align to priorities across security, development, and legal teams.