Watch the accompanying video below for a deeper dive into the nuances of confidence, risk, and investment in AI-enabled pipelines.

It’s encouraging to see that both security and development teams express confidence in their ability to secure the output of AI coding assistants (even as developers show a slightly more tempered optimism). But I urge teams to look beyond this initial confidence and truly ground their AI security strategy in verifiable mechanisms and clear priorities.

Confidence in ability to secure AI-generated code

The Source of Confidence: Is it Real?

What determines confidence? How do you measure success? These are critical questions for your teams. If one team expresses high confidence while another remains skeptical, how do you justify action or change? This dialogue is crucial, especially when considering the potential biases we've discussed in other blogs—biases that can arise from excessive manual effort or diminished visibility due to over-automation.

Confidence in securing AI-generated code

Our data shows that organizations with fully automated and fully manual AppSec testing (AST) pipelines express similar levels of confidence. But what matters most is less the overall percentage and more how confidence changes with the level of automation or manual effort. Those who prioritize automation seem to expect their automated AST tools to inherently deliver security with each incremental addition of automated mechanisms. This can create a false sense of security in scenarios where explicit controls are still pending.

Key action: Evaluate whether your security gates and control mechanisms are consistent across both your automated and manual pipeline segments.
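One way to keep a security gate consistent across automated and manual pipeline segments is to centralize the pass/fail criterion in a single check that both paths invoke. Below is a minimal sketch, assuming a hypothetical JSON findings report; the schema, severity names, and `security_gate` function are illustrative assumptions, not the output format of any specific AST tool:

```python
import json

# Severity ladder used by this sketch; real AST tools define their own scales.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def security_gate(report_path: str, fail_on: str = "high") -> bool:
    """Return True if the gate passes, i.e. the report contains no
    findings at or above the fail_on severity. The JSON schema here is
    an illustrative assumption, not a specific tool's output format."""
    threshold = SEVERITY_ORDER.index(fail_on)
    with open(report_path) as f:
        findings = json.load(f).get("findings", [])
    blocking = [item for item in findings
                if SEVERITY_ORDER.index(item["severity"]) >= threshold]
    return len(blocking) == 0
```

The point of the sketch is that both the automated pipeline job and any manual review script call the same check, so the pass/fail criterion cannot silently diverge between the two paths.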

The "Chicken and Egg" of AI Adoption

Here's a provocative question for your team: Are you using AI in development more frequently because you are genuinely confident in your ability to secure its output? Or, are you justifying frequent usage by simply assessing your security controls as "adequate" after the fact? The latter, of course, is a backward approach.

Confidence in securing AI code versus usage frequency of AI coding assistants in development

The data reflects this dynamic.

  • Teams using AI constantly or frequently exhibit greater confidence in their security measures.
  • Those rarely using AI express a distinct lack of security confidence.

Organizations that don’t prioritize security mechanisms for AI-enabled pipelines face significant risk exposure. Even if they aren’t using AI “constantly,” a large share of their teams are using it “frequently,” often without clear guidelines. This suggests that AI adoption may be outpacing security preparedness.

Risk Recognition vs. Confidence: A Disconnect

Our research reveals an interesting disconnect: those who are more confident in their ability to address the security risks in AI-enabled pipelines also show greater recognition that AI coding assistants introduce or complicate security risks. These could be independent findings, but the pairing is intriguing.

Risk recognition vs. confidence in ability to address security risks

Furthermore, among those who lack confidence in their ability to secure against the risks AI coding assistants create, we found a greater representation of respondents who disagree that AI introduces or complicates risks. This is counterintuitive. Is the lack of confidence in security preparedness being alleviated by refusing to accept that AI introduces risk? Is this an emotional response? Is it security teams trying to buy time while they formulate a plan?

Key action: Ask your colleagues what mechanisms are in place to pressure-test your security controls and verify your ability to handle what AI throws at you.

AI Policing Itself: A Dangerous Assumption

Given the strong expectation that AI coding assistants will help write more secure code and support AppSec more effectively, a crucial question emerges: Are we seeing a greater dependence on AI to police itself, protecting the business from the very issues AI coding assistants may introduce?

Confidence in AI coding assistants helping to write secure code

Key action: Discuss with your colleagues how to keep application security the responsibility of the security team, preserving control over your organization’s risk exposure and compliance with security-preparedness requirements. Developers’ primary job is to write code; security should be a natural, integrated consequence of their daily tasks. AI coding assistants are powerful development tools, but they are not security tools. Establishing a clear plan for how security teams can leverage AI’s benefits without offloading core responsibilities onto development teams is paramount.

At Black Duck, we empower security teams to integrate effectively into the AI-enabled development landscape, ensuring that confidence is backed by robust, verifiable security measures.
 

See the research
