AI is a game-changer, certainly in the software world, and so it’s no surprise that it’s raising new questions for tech acquirers. Could this company be disrupted out of existence? What are the implications of AI writing code? Are there any legal issues with using an LLM to power an app? AI chatbots have become standard features in enterprise software; they are a new, overlooked part of a target’s attack surface. Through a technique called prompt injection, a bad actor can use plain English to trick a chatbot into handing over data it was never supposed to share. No coding required. No hacking tools. Just words.
In addition to inputs from the user, chatbots are guided by instructions called system prompts. These are the rules that tell the bot what to do and what to avoid, like “don’t share customer data” or “don’t discuss competitors.” But the AI models behind the bots may not reliably distinguish between internal rules and whatever a user types. To the model, it’s all input to react to.
Attackers exploit this. A basic attack is simply telling the bot to ignore its rules (“Ignore all previous instructions”). If that doesn’t work, the attacker may be able to coax out the system prompt, map the guardrails, and craft follow-ups that slip past them. In one security test, a chatbot explained how it protected customer data, and that explanation was enough to reverse-engineer prompts that pulled records it should never have exposed.
The creative attacks are where it gets interesting. There’s a well-known story in the industry of an LLM disclosing instructions on how to build a bomb to someone whose grandmother used to tell him bomb stories to put him to sleep at night. Yeah, sure! A chatbot can go from safe to compromised with nothing more than well-chosen words.
Traditional software attacks like SQL injection or XSS require deep technical knowledge. Prompt injection on a chatbot requires only a sentence. That collapses the barrier to entry. A curious employee, a disgruntled contractor, or a competitor with access to the chat window can all take a shot. OWASP has ranked prompt injection as the number one vulnerability in AI applications for two years running, and that ranking reflects both how common the problem is and how badly companies underestimate it.
Over the years, the Black Duck Audit team has seen its share of hacks.
Just like a customer service rep, chatbots have access to useful company data. Humans can be socially engineered into giving up secrets, but chatbots are more vulnerable.
When you’re evaluating a company that has added AI to its product, the due diligence questions go beyond the ones you’re used to asking about traditional software. The risks aren’t always just in the code. They’re in how the AI is deployed, what it can reach, and whether anyone thought to harden it against attack.
A good starting point is asking for the target’s AI product policy. Many of the startups we evaluate don’t have one. When they do, it’s often a paragraph in an employee handbook rather than a real governance document. A strong AI policy names the stakeholders, spells out what data the AI can touch, defines input and output filtering expectations, and has executive sponsorship behind it. If no one can tell you who owns the AI roadmap or who’s accountable when it goes wrong, that’s a red flag.
Even with a solid policy, though, testing is important. Perhaps the chatbot was implemented before the policy, or the developers missed the memo. Expert pen testers have the right mindset and can translate their hacking experience into prompt injection testing.
The companies that rushed to deliver AI features are the ones that carry the most risk. In our experience, that describes most of what we see on the diligence side right now. The technology is moving faster than the governance around it. Acquirers need to be mindful of the exposure.
AI chatbots are now a standard part of enterprise software, and prompt injection is the most common way they get exploited. It does not require a sophisticated attacker. It does not require technical skill. It requires someone willing to spend a few minutes asking the right questions in the wrong way.
For investors and acquirers, the takeaway is straightforward: Add this to your due diligence checklist. Ask whether anyone has tried to break the target’s AI. Ask what data it can access. Ask whether there is a policy behind it. If the answers are vague or the question has never been asked, that is a flag worth taking seriously.
Black Duck advises on hundreds of M&A engagements every year. If you want to understand the AI risk in a deal you are looking at, we can help.
Apr 14, 2026 | 8 min read
Mar 31, 2026 | 4 min read
Feb 05, 2026 | 6 min read
Jan 22, 2026 | 3 min read
Dec 16, 2025 | 4 min read
Oct 08, 2025 | 6 min read