As developers leverage AI to write code and accelerate development cycles, a critical need has emerged: DevSecOps practices must evolve to ensure the quality and security of AI-generated code. With over 90% of organizations already using AI in software development, many security initiatives are struggling to keep up. This has led to a concerning statistic: 26% of organizations lack confidence in their ability to secure AI-generated code.
To effectively harness the power of generative AI while mitigating its risks, organizations need a strategic approach built on a "security first" mindset and best-in-breed application security tools. The investment pays off: 80% of C-suite executives who have a strategy for adopting and implementing generative AI say they've been very successful at achieving their goals.
What does such a strategy entail? What tools can be used to implement it? Black Duck’s guide to evolving DevSecOps at the speed of AI recommends a focus on four key steps.
An important first step is to help developers find and fix security problems early on, ideally within their integrated development environment (IDE). IDE-based security scanning is a vital first line of defense against AI-introduced vulnerabilities, stopping insecure code before it enters the codebase. Security IDE plug-ins such as Black Duck's Code Sight™ IDE Plug-in act as a "security spellchecker" for developers, providing real-time feedback, highlighting potential flaws directly in the code editor, and offering fix suggestions and explanations.
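To make this concrete, here is a minimal, hypothetical example of the kind of flaw an IDE security plug-in would typically flag in AI-generated code, along with the suggested fix. The function names and the scenario are illustrative, not taken from any particular tool's output.

```python
import sqlite3

# Hypothetical AI-generated code an IDE scanner would flag:
# untrusted input interpolated into SQL (SQL injection, CWE-89).
def find_user_insecure(conn, username):
    query = f"SELECT id, name FROM users WHERE name = '{username}'"  # flagged
    return conn.execute(query).fetchall()

# The kind of fix a plug-in would suggest: a parameterized query,
# which keeps user input out of the SQL text entirely.
def find_user_secure(conn, username):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload is treated as plain data by the fixed
# version and simply matches no rows.
payload = "alice' OR '1'='1"
print(find_user_secure(conn, payload))   # []
print(find_user_secure(conn, "alice"))   # [(1, 'alice')]
```

Catching this class of issue at edit time, rather than in a later pipeline scan, is precisely what makes IDE feedback valuable as a first line of defense.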
With AI accelerating development and increasing the volume of code produced, adding automated security checks to the CI/CD pipeline is crucial. A good approach involves integrating a variety of security tools, such as software composition analysis (SCA) and static application security testing (SAST). Black Duck solutions integrate seamlessly into CI/CD pipelines to automate SCA and SAST tools, providing comprehensive visibility into open source risks and insecure proprietary code with each edit, commit, and build. Developers can utilize Black Duck's out-of-the-box plug-ins and automation templates for popular platforms like GitHub, GitLab, Azure DevOps, Jenkins, and Bitbucket to tailor testing activities to their projects and workflows while adhering to security policies.
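The core of any pipeline integration is a gate that turns scanner findings into a pass/fail build result. The sketch below shows the general pattern, assuming a hypothetical JSON report format; real SCA/SAST tools, including Black Duck's, each define their own output schemas and pipeline plug-ins.

```python
import json

# Severities that should break the build; in practice this threshold
# comes from a centrally managed security policy.
FAIL_ON = {"critical", "high"}

def gate(report_json: str) -> int:
    """Return a process exit code: 1 if blocking findings exist, else 0."""
    findings = json.loads(report_json)
    blocking = [f for f in findings if f["severity"] in FAIL_ON]
    for f in blocking:
        print(f"BLOCKING: [{f['severity']}] {f['rule']} in {f['file']}")
    # A non-zero exit status fails the CI/CD pipeline stage.
    return 1 if blocking else 0

sample = json.dumps([
    {"severity": "high", "rule": "sql-injection", "file": "app/db.py"},
    {"severity": "low",  "rule": "debug-logging", "file": "app/main.py"},
])
print("exit code:", gate(sample))
```

In a real pipeline, a plug-in or template wires this decision into the platform (GitHub Actions, Jenkins, etc.) so every commit and build is checked the same way.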
Cultivating a strong security mindset among developers is vital in an AI-enabled development culture. This involves eliminating implicit trust in AI-generated code; an AI model's training doesn't guarantee secure output. Organizations must also provide effective, tailored security training that addresses proactive risk awareness and provides clear remediation guidance for detected issues. AI-powered security resources can assist by prioritizing vulnerabilities and providing context-aware guidance. Investing in developer security training can significantly reduce vulnerabilities introduced into the codebase, limiting risk and reducing the burden on security teams. Black Duck developer security training, powered by Secure Code Warrior, enhances developers' secure coding skills and provides actionable remediation guidance for detected risks.
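Prioritization is what keeps developers from drowning in findings. The sketch below is an illustrative-only heuristic: rank by severity, boosted when the affected component is exposed to the internet. The weights and field names are assumptions, not any vendor's actual scoring model.

```python
# Illustrative risk-ranking heuristic (not a real product's model):
# base weight by severity, plus a boost for internet-facing exposure.
SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def priority(finding: dict) -> int:
    score = SEVERITY_WEIGHT[finding["severity"]]
    if finding.get("internet_facing"):
        score += 2  # reachable issues get fixed first
    return score

findings = [
    {"id": "F1", "severity": "medium", "internet_facing": True},
    {"id": "F2", "severity": "high",   "internet_facing": False},
    {"id": "F3", "severity": "low",    "internet_facing": False},
]
ranked = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ranked])  # highest priority first
```

Note how context changes the ordering: the medium-severity but internet-facing F1 outranks the high-severity internal F2, which is the kind of context-aware guidance a raw severity list can't give.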
Ensure a baseline of security by establishing centralized security policies that can be consistently applied across teams and projects. Define clear security goals based on business risk, so you can focus security efforts on the most critical areas. Use centralized visibility and control over security testing results to manage and prioritize issues at scale. Adopt a policy-driven automation approach to ensure consistent application of security standards and support compliance requirements.
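A minimal sketch of what policy-driven enforcement looks like in practice, assuming a hypothetical central policy that caps open issues per severity, keyed by how business-critical a project is. The tier names, fields, and limits are all illustrative.

```python
# Hypothetical centralized policy: maximum allowed open issues per
# severity, set according to business risk. Defined once, applied
# uniformly to every project in the tier.
POLICY = {
    "critical-app":  {"critical": 0, "high": 0, "medium": 5},
    "internal-tool": {"critical": 0, "high": 3, "medium": 20},
}

def compliant(project_tier: str, open_issues: dict) -> bool:
    """Check a project's open-issue counts against its tier's limits."""
    limits = POLICY[project_tier]
    return all(open_issues.get(sev, 0) <= cap for sev, cap in limits.items())

# The same finding profile passes in one tier and fails in another,
# because the policy encodes business risk, not just raw counts.
print(compliant("critical-app",  {"critical": 0, "high": 1}))  # False
print(compliant("internal-tool", {"critical": 0, "high": 1}))  # True
```

Because the policy lives in one place, changing a threshold immediately applies across all teams and pipelines, which is what makes consistent enforcement and compliance reporting tractable at scale.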
Tools are available today to manage security across all the ways software is built and delivered. For example, Black Duck enables flexible, policy-driven security testing through detailed policy configurations, a variety of deployment options, seamless integration into developer workflows, customizable testing configurations, and scalable architecture. The Black Duck Polaris™ Platform offers flexible, centralized issue, testing, and component policies for a suite of AST engines, with both on-premises and as-a-service deployment options, making it ideal for complex enterprise environments as AI tools encourage rapid scale and pipeline evolution.