Black Duck CEO Jason Schmitt sat with Dark Reading News Desk's Terry Sweeney for an interview at RSA 2026. Watch the video below to see Jason's insights regarding the future of AI code security. Transcript under the video.
Terry Sweeney: Welcome back to the Dark Reading News Desk. I'm Terry Sweeney, contributing editor with Dark Reading. And joining me now is Jason Schmitt of Black Duck. Jason, thanks so much for joining us on News Desk today.
Jason Schmitt: Thanks for having me.
TS: We are talking about the ways that AI is radically transforming how software is built. Start us off with some thoughts around the security challenges that are posed by this transformation.
JS: Sure. You know, the most important change is that 10 to 20 times more software is being produced than even a year ago. So if you consider what that would mean for securing a system or having confidence in the security of the software, humans can't possibly look at that much code.
TS: Sure. We have the volume piece, but then also just all the new complexities that seem to be discovered on a daily basis.
JS: That's right, because AI is also a very attractive attack vector for manipulating and creating exploits. So it complicates the picture because of the power of the technology.
TS: I can't think of any other analog in the history of conflict where attackers and defenders basically are armed with the same tools. And really, the warfare is around who can be the most creative or the most innovative.
JS: Yeah. And there's actually a cliché in security—since we're at the RSA conference—which is that the attacker only has to be right once.
TS: Sure.
JS: But the defenders have to be right all the time. And that's very different than warfare also. It's very asymmetric.
TS: No, that's a great point. Can you talk about how application security needs to evolve to meet the demands of this AI-driven world we're in?
JS: I like to describe it as a new evolution, and this is really the third wave of application security, because when the AppSec industry was born, it really was about automating human-based code reviews and doing that at scale. And then the advent and adoption of the cloud rapidly meant this needed to be highly integrated into DevOps workflows so that it's more and more transparent. And now you take the volume and speed of development teams, and that requires a new thinking about how these technologies are applied to secure the code that's being generated.
TS: Well, can AI be part of the solution to the problems that it creates? This seems to be the million-dollar question for the industry and for basically every user out there.
JS: Yeah. I think it's a billion-dollar question. In fact, you know, AI coding assistants have gone from a market that didn't exist to, some say, $10 billion in just two to three years.
TS: Wow.
JS: So, AI is absolutely a factor in that explosion in the amount of code. But the technology is also extremely promising for reasoning through large codebases, identifying business logic flaws that existing tools don't find. So there's a lot of promise in applying the technology as well.
TS: But as we were just saying, the attackers are also using these same tools. It's a constant spy versus spy sort of scenario, isn't it?
JS: It can be, but the reason the technology is so promising is, you know, the ability to have autonomous agentic workflows that find and fix issues literally while you sleep—that's kind of a colloquialism I use for the potential of this technology. It very well can stay ahead of the attackers if it is built into the development workflow.
TS: Can AI replace the AppSec technology that's in use today?
JS: I think it is in the process of replacing some limited use cases, but it's more additive in the sense that existing technologies are really built for doing simple things at really large scale, for a low price, very accurately. None of those describe what LLMs do.
TS: Sure.
JS: And so what that means is some use cases are augmented by AI and made better, and AI is then an additive for doing things like penetration testing, red-teaming, and business logic assessments that existing tools aren't really effective at.
TS: So AppSec Plus, really.
JS: It is AppSec Plus. Absolutely. And then you add that autonomous workflow potential of it, so that it can be continuous and automated.
TS: What’s the biggest misconception about AI in the AppSec world today?
JS: Biggest misconception, I think, is that it's bad for AppSec, honestly. It's perceived that these technologies are so extremely powerful that they will disrupt a multi-billion-dollar industry of expertise around solving these problems rather than being additive. As we look at how the threat landscape is getting more complex and the attack surface is much larger, we need these technologies to scale AppSec. So it's very complementary and additive in that fashion, and not a threat to effective software security as it exists today.
TS: Developers chafe against security historically, because it slows things down. It delays competitiveness, innovation—those sorts of things. Is there a thought that having to transform AppSec to be more AI-friendly is going to add more drag in the development process?
JS: I think it's the opposite. It's an unlock for having this be truly automated.
TS: Okay.
JS: Because existing AppSec solutions usually require what I refer to as "stop and scan." Developers don't like that. They don't like a lot of issues that don't matter. AI adds intelligence that makes it viable for it to be completely autonomous.
TS: Great. Jason, some great insights into the transformation of AppSec. Thanks so much for joining us on the Dark Reading News Desk.
JS: Thank you very much.
TS: We've been talking with Jason Schmitt of Black Duck. This has been Terry Sweeney for the Dark Reading News Desk. Thanks for joining us for this segment. We'll see you next time.