In the rapidly evolving landscape of software development, artificial intelligence (AI) has become a transformative force. Our latest report, “The State of Embedded Software Quality and Safety 2025,” which is based on a survey of over 750 application developers, reveals that 89% of development teams are now leveraging AI coding assistants.
However, the research also shows that “shadow AI”—the use of AI tools even though company policy forbids it—is a very real issue. Among the 25% of surveyed organizations that prohibit AI coding assistants, 75% admit that developers are using them anyway.
“AI coding assistants are being used in these businesses against policy, and that certainly brings up a lot of potential for increased risk,” said Corey Hamilton, the principal researcher on the report.
AI is a powerful tool that can significantly enhance the efficiency and effectiveness of embedded software development. However, it’s crucial to approach its use with a balanced and thoughtful strategy. By understanding the potential risks and implementing the right policies, organizations can ensure that AI-generated code is both reliable and compliant, aligning with the high standards set by management and the practical needs of developers.
To safely use AI coding assistants in embedded software development, Hamilton recommends that organizations:

- Develop a clear strategy for what AI will mean to the business, how it will be used, and what benefits it should deliver
- Create clear, articulated policies, share them across the business, and make sure they are enforceable
- Treat AI coding assistants like talented interns, testing and vetting their output before it reaches customers
- Track existing and emerging AI regulations and keep compliance processes in place to adhere to them
By following these guidelines, organizations can harness the power of AI while maintaining the quality and compliance of their embedded software.
Andrew Burton: Hello and welcome to another episode of AppSec Decoded. I'm Andrew Burton, security advocate at Black Duck, and I'm here today with my colleague Corey Hamilton. Corey is the principal researcher on our latest embedded software quality report. Can you give us a quick overview of some of the key findings in the report?
Corey Hamilton: We went out and talked to professionals working with embedded software, especially on the development side. So, a couple of the main findings. First off was the very widespread use of AI across all the industries, which is no surprise, of course.
I mean, we know AI's everywhere today, but embedded software is generally considered to be more risk-averse. So the fact that there was that level of adoption, even among these participants, was a little bit surprising. However, we also found that while usage is widespread, a fair number of businesses, about 21%, weren't totally confident in their ability to mitigate the risks that come with that AI usage. So that's a really important finding for our audience.
Andrew Burton: What are some of the main concerns regarding the use of AI in embedded software development?
Corey Hamilton: There are two things that really popped up. One of them is something that we've been calling shadow AI. AI is widely adopted, and a lot of great benefits come from it, right? Organizations are getting a lot of benefits in terms of productivity and development velocity.
We found that about a quarter of the organizations we surveyed actually prohibit the use of AI. However, among those 25%, about 70% of them acknowledge that AI is still being used. AI coding assistants are being used in these businesses against policy, and that certainly brings up a lot of potential for increased risk.
I would say the other main area of concern we see is the use of tools like AI coding assistants themselves. These tools generally get their code and the data they use from publicly available sources.
A lot of that publicly available code comes from sites like Stack Overflow. A coding assistant will pull down a code snippet from there, and studies have shown that these code samples often have major vulnerabilities or major defects in them. About 30% to 35% of the time, the coding assistant will just give you that code with the vulnerability still intact.
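To make that concrete, here is the kind of snippet an assistant might reproduce from a public forum. The helper and its name are hypothetical, but the pattern, an unbounded string copy into a fixed-size buffer, is one of the classic defects these studies flag:

```c
#include <string.h>

/* Hypothetical AI-suggested helper: copies a device name into a
 * fixed-size buffer. It looks plausible, but strcpy() performs no
 * bounds check, so any name longer than 15 characters overflows
 * dev_name and corrupts adjacent memory (a classic stack buffer
 * overflow, CWE-121). */
void set_device_name(const char *name) {
    char dev_name[16];
    strcpy(dev_name, name);  /* unbounded copy: the vulnerability */
    /* ... register dev_name with the rest of the system ... */
}
```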
What you need to understand is that there's a potential risk there. You need to make sure you're testing for it, and not take this code or these samples as something that has already been vetted. You have to make sure you do the testing on your own behalf.
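In practice, that vetting can be as simple as refusing to accept the suggestion until the copy is bounded. A minimal sketch of a hardened rewrite of the hypothetical helper above:

```c
#include <stddef.h>
#include <string.h>

/* Bounded rewrite of the hypothetical helper: the caller passes the
 * destination buffer and its size, the copy is truncated to fit, and
 * the result is always NUL-terminated. */
void set_device_name(char *dst, size_t dst_len, const char *name) {
    if (dst == NULL || dst_len == 0) {
        return;  /* nothing safe to write into */
    }
    strncpy(dst, name, dst_len - 1);  /* copy at most dst_len - 1 bytes */
    dst[dst_len - 1] = '\0';          /* strncpy may omit the terminator */
}
```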
Andrew Burton: That leads right into my next question. What are some of the steps organizations can take to ensure that these assistants don't introduce major vulnerabilities or defects?
Corey Hamilton: First and foremost, I think all organizations need to take a moment, do some real thinking, and get a strategy around what AI is going to mean to their business. How are they going to use it? What are the benefits they're looking to see?
And then around that, they need to create clear, articulated policies that they can share with the rest of the business, decide how they're going to enforce those policies, and make sure that they are enforceable. That's certainly step one.
Once you've done that, assuming you're going to use things like AI coding assistants or integrate with LLMs or AI models, which we see a ton of businesses doing, you need to think about how you're treating the information coming from them, right? And what we tell people is to treat those tools like really talented interns: somebody who's really effective but maybe not super reliable. So you have to double-check what they do. You have to make sure the testing and the practices are all in place to really vet their output and make sure you're not letting new risks into your software and impacting your customers.
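The intern analogy translates directly into process: an AI suggestion isn't merged until it passes checks the team wrote itself. A minimal sketch of that kind of double-check, reusing the bounded helper from the sketch above:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Bounded helper from the earlier sketch, repeated here so the test
 * compiles on its own. */
static void set_device_name(char *dst, size_t dst_len, const char *name) {
    if (dst == NULL || dst_len == 0) return;
    strncpy(dst, name, dst_len - 1);
    dst[dst_len - 1] = '\0';
}

int main(void) {
    char buf[16];

    /* Oversized input must be truncated, never overflow the buffer. */
    set_device_name(buf, sizeof buf, "a-name-far-longer-than-sixteen-bytes");
    assert(strlen(buf) == sizeof buf - 1);

    /* Typical input must survive the round trip unchanged. */
    set_device_name(buf, sizeof buf, "sensor-7");
    assert(strcmp(buf, "sensor-7") == 0);

    puts("AI-suggested helper passed local checks");
    return 0;
}
```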
And I would say the final thing companies should really be doing is making sure they understand what regulations exist or are coming. We're starting to see AI-based regulations popping up in different industries. Automakers, for one, are certainly an area where we probably want there to be regulations. It's an evolving space, so you should make sure you understand which regulations are coming and what's currently there, and make sure you have your compliance and all your processes in place to be able to adhere to them.
Andrew Burton: That's great advice, Corey. Thank you so much for joining me today.