Symantec’s threat hunters have demonstrated how AI agents like OpenAI’s recently launched Operator could be abused for cyberattacks. While AI agents are designed to boost productivity by automating routine tasks, Symantec’s research shows they could also execute complex attack sequences with minimal human input.

This marks a significant shift from older AI models, which could provide only limited assistance in creating harmful content. Symantec’s research came just a day after Tenable Research revealed that the AI chatbot DeepSeek R1 can be misused to generate code for keyloggers and ransomware.

In Symantec’s experiment, the researchers tested Operator’s capabilities by requesting it to:

  • Find a specific employee within their organization
  • Obtain their email address
  • Create a malicious PowerShell script
  • Send a phishing email containing the script

According to Symantec’s blog post, Operator initially refused these tasks, citing privacy concerns, but the researchers found that simply claiming to have authorization was enough to bypass its ethical safeguards. The AI agent then successfully:

  • Located the target’s information through online searches
  • Determined the email address through pattern analysis (illustrated in the sketch after this list)
  • Created a PowerShell script after researching online resources
  • Composed and sent a convincing phishing email
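
Symantec’s post does not publish the agent’s actual output, but the “pattern analysis” step is easy to illustrate. The following is a minimal, hypothetical sketch, assuming the attacker has observed one known address at the target’s domain; the names, domain, and helper functions are all invented for demonstration:

```python
# Hypothetical illustration of email-address pattern analysis:
# given one known address at an organization, infer the naming
# scheme and apply it to a new target. All data here is invented.

KNOWN = {("Jane", "Doe"): "jdoe@example.com"}  # one observed address

PATTERNS = {
    "first.last": lambda f, l: f"{f}.{l}",
    "firstlast":  lambda f, l: f"{f}{l}",
    "f.last":     lambda f, l: f"{f[0]}.{l}",
    "flast":      lambda f, l: f"{f[0]}{l}",
}

def infer_pattern(first, last, address):
    """Return the pattern name whose output matches the known address."""
    local = address.split("@")[0].lower()
    for name, build in PATTERNS.items():
        if build(first.lower(), last.lower()) == local:
            return name
    return None

def guess_address(first, last, domain, pattern):
    """Apply an inferred pattern to a new name."""
    return f"{PATTERNS[pattern](first.lower(), last.lower())}@{domain}"

(first, last), known_addr = next(iter(KNOWN.items()))
pattern = infer_pattern(first, last, known_addr)            # -> "flast"
print(guess_address("John", "Smith", "example.com", pattern))  # jsmith@example.com
```

The point of the sketch is how little input such inference needs: one leaked or public address is often enough to derive a whole organization’s naming convention, which is also why defenders treat address formats as effectively public information.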

J Stephen Kowski, Field CTO at SlashNext Email Security+, notes that this development requires organizations to strengthen their security measures: “Organizations need to implement robust security controls that assume AI will be used against them, including enhanced email filtering that detects AI-generated content, zero-trust access policies, and continuous security awareness training.”
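
Kowski’s recommendations can be made concrete. The following is a minimal, hypothetical scoring heuristic, not a production filter: the signals, weights, and threshold are assumptions chosen for illustration, and a real deployment would rely on trained classifiers and far more telemetry:

```python
# Hypothetical scoring heuristic for inbound mail, illustrating the
# kind of layered filtering Kowski describes. Signals and weights are
# invented for demonstration purposes.

URGENCY_TERMS = ("urgent", "immediately", "action required", "verify your")
RISKY_EXTENSIONS = (".ps1", ".exe", ".js", ".vbs", ".scr")

def score_message(subject: str, body: str, sender: str,
                  reply_to: str, attachments: list[str]) -> int:
    score = 0
    text = f"{subject} {body}".lower()
    if any(term in text for term in URGENCY_TERMS):
        score += 2  # pressure language is a classic phishing signal
    if any(a.lower().endswith(RISKY_EXTENSIONS) for a in attachments):
        score += 4  # script/executable attachments, e.g. PowerShell
    if reply_to and reply_to != sender:
        score += 2  # replies diverted away from the apparent sender
    return score

msg_score = score_message(
    subject="Action required: verify your account",
    body="Please run the attached script immediately.",
    sender="it-support@example.com",
    reply_to="attacker@mailbox.example.net",
    attachments=["update.ps1"],
)
print("quarantine" if msg_score >= 5 else "deliver")  # -> quarantine
```

Even a crude score like this shows the layered logic: no single signal is decisive, but a script attachment combined with urgency language and a mismatched reply-to pushes a message over the quarantine threshold.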

While the capabilities of current AI agents may seem basic compared to those of skilled human attackers, their rapid evolution suggests more sophisticated attack scenarios could soon become reality, including automated network breaches, infrastructure setup, and prolonged system compromises, all with minimal human intervention.

The research underscores the need for companies to update their security strategies: the same AI tools designed to boost productivity can be turned to harmful ends.
