Artificial intelligence (AI) is transforming software testing, delivering speed, accuracy, and coverage that traditional manual methods cannot match. However, all good things come at a cost. As organizations integrate AI-powered testing tools into their DevOps pipelines, they must also handle new obstacles, such as AI's difficulty with complex test scenarios, ongoing skepticism toward AI reliability in the enterprise, and novel AI-specific cyber threats.

The continued evolution of AI will further solidify and grow its role in software quality and security. However, AI success depends on striking the right balance between leveraging its capabilities and acknowledging its limitations.

In this Q&A, we explore commonly asked customer questions about how AI is reshaping software testing, the hurdles organizations face when adopting AI-powered tools, the importance of governance and security in ensuring AI delivers on its promise, and the potential of future AI applications in cybersecurity, including zero-day vulnerability detection and securing open-source components.

What advantages does AI offer over conventional testing methods regarding speed, accuracy, and coverage? 

In today’s dynamic, AI-driven environments, manual testing can no longer keep pace.

AI offers significant advantages over manual testing for DevOps teams, particularly in speed, accuracy, and coverage. With AI-augmented tools, teams can automate test case generation, analyze test results, and even perform risk analysis on code changes. Generative AI copilot assistants accelerate this transformation further by streamlining test creation, self-healing broken tests, and offering actionable guidance throughout the testing lifecycle. This saves time and allows quality assurance (QA) and development teams to focus on higher-value tasks that boost business efficiency, enhance the strategic output of developers, and drive more secure code.
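To make "self-healing" concrete, here is a minimal, illustrative sketch of the idea, not any vendor's actual implementation: the test tries its original locator first, and only when that fails does it fall back to AI-suggested alternatives. The suggest_alternative_locators helper is a hypothetical placeholder for whatever AI-assisted tooling a team actually uses.

```python
# Illustrative sketch only: a simplified "self-healing" locator wrapper.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By


def suggest_alternative_locators(page_source: str, broken_selector: str) -> list[str]:
    """Hypothetical hook: ask an AI model for likely replacement selectors,
    given the current page source and the selector that just failed."""
    raise NotImplementedError("Wire this up to your AI-assisted tooling")


def find_with_healing(driver, selector: str):
    """Try the original CSS selector; if it no longer matches, attempt
    AI-suggested candidates and report which one worked."""
    try:
        return driver.find_element(By.CSS_SELECTOR, selector)
    except NoSuchElementException:
        for candidate in suggest_alternative_locators(driver.page_source, selector):
            try:
                element = driver.find_element(By.CSS_SELECTOR, candidate)
                print(f"Healed locator: '{selector}' -> '{candidate}'")
                return element
            except NoSuchElementException:
                continue
        raise  # no candidate matched; surface the original failure
```

In practice, a real tool would also log the healed locator so an engineer can review and commit the change, keeping a human in control of the test suite.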

Intelligent, adaptive testing solutions are now essential. AI-powered testing simplifies software quality with an intelligent, codeless, and scalable approach that works regardless of what is being tested (and who is writing the code).

What limitations do current AI systems face in understanding complex test scenarios?

While the benefits of AI are enormous, its limits in complex test scenarios are harder for DevOps teams to predict. Generative AI systems sometimes lack the specialized knowledge human testers have about specific applications and can struggle to interpret vague requirements in test cases, such as assessing "user-friendliness" or "aesthetic appeal," both of which call for human judgment. More complex scenarios depend not only on understanding broader context, from historical data to user intent, but also on adapting in real time to intricate workflows. And when AI applications themselves go untested, flawed test data can produce biased systems and misguided analytics.

This is why it is important for engineers and testers to view AI as a collaborator rather than the decision-maker. The advantages AI provides can only be fully realized if engineers maintain oversight and validate test outputs, ensuring potential issues are caught before they escalate.
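As a simple illustration of that collaboration, the sketch below (all names are hypothetical, not a specific product's API) treats AI-generated test cases as proposals that only enter the regression suite after an engineer has reviewed and approved them.

```python
# Illustrative sketch only: AI-generated test cases as proposals that
# require explicit human approval before joining the executable suite.
from dataclasses import dataclass


@dataclass
class ProposedTestCase:
    """A hypothetical record for a test case drafted by an AI assistant."""
    name: str
    steps: list[str]
    expected_result: str
    approved_by: str | None = None  # set by a human reviewer after validation


def approve(test_case: ProposedTestCase, reviewer: str) -> None:
    """An engineer signs off after checking the AI-generated steps and oracle."""
    test_case.approved_by = reviewer


def promote_to_suite(proposals: list[ProposedTestCase]) -> list[ProposedTestCase]:
    """Only reviewed, fully specified test cases are promoted for execution."""
    return [
        tc for tc in proposals
        if tc.approved_by and tc.steps and tc.expected_result
    ]
```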

What are organizations’ most prominent challenges when integrating AI into their testing pipelines? 

The general challenges that generative AI poses, from hallucinations to biased systems, carry over when the technology is integrated into testing pipelines. But beyond the complexity of the technology itself, organizations face challenges at the developer level. Simply put, there remains a major shortage of AI expertise within DevOps. Because human testers must stay involved throughout the pipeline, manually reviewing AI-generated outputs, finding and retaining the right talent remains a challenge for leaders integrating AI into their testing pipelines.

Creating a sound governance and risk mitigation framework is another challenge (and, simultaneously, an opportunity) for organizations looking to incorporate AI into their operations quickly. These frameworks demonstrate a commitment to responsible and secure AI use, which is paramount for building trust with customers, regulators, and other stakeholders. They also ensure DevOps teams can drive rapid innovation while prioritizing ethical, regulatory, security, and operational considerations as they integrate AI into testing pipelines.

Can AI autonomously identify zero-day vulnerabilities? What progress has been made in this area?

Not yet, but this capability is on the horizon. Understanding how a system can be exploited, or how a vulnerability can be weaponized, is squarely on the roadmap for AI development. It will be a double-edged sword: defenders will take advantage of AI's ability to quickly detect, and perhaps fix, novel vulnerabilities in code, while adversaries may use the same capabilities to find those vulnerabilities and quickly weaponize and operationalize them for malicious use. This cat-and-mouse game isn't new, but the speed and impact of these developments may force product security teams and defenders to rethink their approaches to vulnerability management.

Many testing frameworks rely on open-source components. How can AI improve the security of these tools?

As AI becomes more effective at examining code for novel vulnerabilities, we will effectively have a bigger and more powerful "microscope" to point at open-source software. In theory, this will help us move faster to secure existing open-source software and give us greater confidence in future open-source software. But the capability cuts both ways: malicious actors can use the same AI to exploit open-source software more efficiently. While AI's role in helping secure open-source software is clear, where and when to employ it is the harder question.

About the Author

Jason Kichen is the Chief Information Security Officer at Tricentis. In his role, Jason leads Tricentis’ digital and physical security programs, including security operations, governance and compliance, product security, IT, and cloud engineering. Prior to joining Tricentis, Jason held multiple security leadership roles at various startups and technology firms after nearly 15 years in the US intelligence community. He serves the security community through his roles as Adjunct Faculty at University of California, Berkeley and as a Board Member at the Institute for Security and Technology (IST).

Jason can be reached online on LinkedIn and on the Tricentis company website.
