GenAI has been the star of the show lately. Tools like ChatGPT impressed everyone with how well they can summarize, write, and respond. But something new is gaining ground: agentic AI. These systems don’t just answer questions. They make decisions, take action, and in some cases, even work together to get things done. Naturally, CISOs are starting to ask the big question: can these systems be trusted to operate securely?
Agentic AI has the potential to change everything for data (Source: ISG)
The latest State of the Agentic AI Market Report from ISG offers a look at where agentic AI stands today. It shows that while the technology is still early in its adoption cycle, it’s already reshaping IT and cybersecurity operations. And that has big implications for security teams.
Cybersecurity is a key use case
Agentic AI isn’t just a research experiment. In enterprise IT, it’s being deployed for real, measurable outcomes, especially in cybersecurity. According to ISG, cybersecurity agents already account for 15% of agentic AI use cases in IT. These agents monitor environments, detect anomalies, and even carry out remediation with limited or no human intervention.
One reason agentic AI fits so well in cybersecurity is that it can process large amounts of telemetry and threat intelligence in real time. The report highlights agents that build real-time inventories of assets, apply graph-based risk scoring, and trigger automated incident responses. These agents don’t just sound alarms. They can isolate affected systems, deploy patches, and kick off cleanup workflows, all without waiting for human approval.
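To make that loop concrete, here is a minimal sketch of what a single monitor-score-remediate pass might look like. Everything in it (the Asset shape, the graph_risk scoring, the remediate call) is a hypothetical illustration, not the ISG report’s architecture or any vendor’s API.

```python
from dataclasses import dataclass, field

# Illustrative only: the data model, scoring, and remediation call are
# assumptions for this sketch, not any real product's interface.

@dataclass
class Asset:
    host: str
    base_risk: float                                  # from vulnerability data
    depends_on: list["Asset"] = field(default_factory=list)

def graph_risk(asset: Asset, decay: float = 0.5) -> float:
    """Toy graph-based score: an asset inherits a decayed share of the
    risk of every upstream asset it depends on."""
    inherited = sum(graph_risk(dep, decay) for dep in asset.depends_on)
    return asset.base_risk + decay * inherited

def remediate(asset: Asset) -> None:
    # In a real deployment this would call into EDR and patch tooling.
    print(f"[agent] isolating {asset.host} and opening a patch workflow")

ISOLATION_THRESHOLD = 0.8

def agent_tick(inventory: list[Asset]) -> None:
    """One pass of the monitor -> score -> act loop."""
    for asset in inventory:
        if graph_risk(asset) >= ISOLATION_THRESHOLD:
            remediate(asset)

# A database that depends on a compromised jump host: the jump host
# trips the threshold on its own; the database inherits elevated risk.
jump = Asset("jump-01", base_risk=0.9)
db = Asset("db-07", base_risk=0.2, depends_on=[jump])
agent_tick([jump, db])   # isolates jump-01 only
```

The point of the graph-based scoring is that risk flows along dependencies, so the agent can prioritize an asset that looks quiet on its own but sits downstream of something compromised.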
Multi-agent systems take this a step further. Different agents specialize in detection, response, and recovery, working together as part of a coordinated defense. As threats get faster and more complex, this kind of orchestrated automation could offer a critical speed advantage.
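A rough sketch of that hand-off pattern follows, again with made-up agent roles and event fields rather than any real multi-agent framework. Each specialist enriches a shared event record and passes it along.

```python
from typing import Protocol

# Assumed roles and event shape; no real framework or protocol implied.

class Agent(Protocol):
    def handle(self, event: dict) -> dict: ...

class DetectionAgent:
    def handle(self, event: dict) -> dict:
        # Confirm or dismiss the raw alert.
        event["confirmed"] = event.get("severity", 0) >= 7
        return event

class ResponseAgent:
    def handle(self, event: dict) -> dict:
        # Contain only incidents that detection confirmed.
        if event.get("confirmed"):
            event["action"] = f"isolate {event['host']}"
        return event

class RecoveryAgent:
    def handle(self, event: dict) -> dict:
        # Queue cleanup once containment has happened.
        if "action" in event:
            event["followup"] = "reimage and restore from backup"
        return event

def run_pipeline(event: dict, agents: list[Agent]) -> dict:
    """Coordinator: each specialist enriches the event and hands it on."""
    for agent in agents:
        event = agent.handle(event)
    return event

print(run_pipeline({"host": "web-03", "severity": 9},
                   [DetectionAgent(), ResponseAgent(), RecoveryAgent()]))
```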
Automation isn’t always the answer
Still, not every security task needs an agent. ISG notes that simple and model-based agents (those designed to handle predictable, repeatable tasks) make up 43% of current agentic AI deployments. In some cases, robotic process automation (RPA) is just as effective and cheaper to implement.
This matters for CISOs watching their AI budgets. When evaluating agentic tools, it’s important to match the type of agent to the task. Overengineering with highly autonomous agents where simpler automation would do can increase cost and risk without adding value.
Multi-agent complexity is coming
Right now, most agentic AI deployments are narrow, function-specific, and focused on quick wins. But ISG sees a shift coming. Providers are investing in multi-agent solutions that can coordinate across departments. In security, that might mean agents that monitor infrastructure, run threat modeling, and manage compliance checks, all as part of a single system.
This kind of orchestration introduces new challenges. For example, how should memory be shared across agents? What happens if one agent’s actions conflict with another’s priorities? CISOs will need to work closely with IT and software teams to define architecture and controls for agent networks.
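One plausible control for the conflict problem, offered purely as an assumption about how arbitration could work: require agents to claim a resource in shared memory before acting, with explicit priorities deciding who wins rather than whichever agent happens to act first.

```python
import threading

# Hypothetical sketch of a shared claim store. Nothing here comes from
# the ISG report; the priority scheme is invented for illustration.

class SharedMemory:
    def __init__(self):
        self._claims = {}            # host -> (agent, priority)
        self._lock = threading.Lock()

    def claim(self, host: str, agent: str, priority: int) -> bool:
        """Return True if this agent now owns actions on the host."""
        with self._lock:
            current = self._claims.get(host)
            if current is None or priority > current[1]:
                self._claims[host] = (agent, priority)
                return True
            return False  # a higher-priority agent already holds the claim

memory = SharedMemory()
# A forensics agent wants the host left running for evidence capture;
# a response agent wants it isolated. The explicit priority decides.
print(memory.claim("db-07", "forensics-agent", priority=5))   # True
print(memory.claim("db-07", "response-agent", priority=3))    # False
```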
Data is a double-edged sword
ISG emphasizes that agentic AI’s effectiveness depends on real-time data. In cybersecurity, that includes threat feeds, asset inventories, vulnerability data, and logs. But most enterprises aren’t there yet. More than half of the organizations ISG surveyed still struggle with legacy data issues.
This is a risk. An agent acting on bad or incomplete data can do more harm than good. ISG’s findings suggest that enterprises are turning to service providers to help modernize data infrastructure and improve readiness.
For CISOs, that means aligning AI efforts with broader data governance initiatives. Security teams should push for better access to telemetry, consistent data tagging, and unified data lakes that agents can use without friction.
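As a simple illustration of what data readiness buys, a guardrail like the one below could keep an agent from acting autonomously on stale or under-tagged telemetry. The field names and thresholds are invented for the sketch.

```python
from datetime import datetime, timedelta, timezone

# Invented field names and thresholds, purely for illustration.

REQUIRED_TAGS = {"asset_id", "owner", "environment"}
MAX_STALENESS = timedelta(minutes=15)

def safe_to_act(event: dict) -> bool:
    """Refuse autonomous action on stale or under-tagged telemetry."""
    age = datetime.now(timezone.utc) - event["observed_at"]
    missing = REQUIRED_TAGS - set(event.get("tags", ()))
    return age <= MAX_STALENESS and not missing

event = {
    "observed_at": datetime.now(timezone.utc) - timedelta(minutes=3),
    "tags": {"asset_id", "owner", "environment"},
}
print(safe_to_act(event))   # True: fresh and fully tagged
```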
Agentic AI may be the future of intelligent security operations. But for now, it’s a tool best used with a sharp eye and steady hand.