Microsoft has unveiled a suite of security-focused product enhancements collected under the Security Copilot banner. The company hopes to position these tools as solutions to the threats posed by artificial intelligence (AI)-assisted automation, using that same AI assistance both to build the tools and to help with end-to-end security challenges.

To its credit, Microsoft has acknowledged that it needs to do more on security. A series of high-profile failures in recent years highlighted that, for such a large and well-resourced company, Microsoft still treats security as something of an afterthought. Microsoft Security Copilot is part of its response, combining the customer need for better security with the cause célèbre of AI.

Microsoft believes that AI can help automate the processing of security incidents. In a recent Tech Field Day Showcase, Microsoft chose email phishing reports as an example use case.

After training staff to recognize phishing emails and report them to security, customers then need to deal with all the reports. But what if many of them are false positives?
Security analysts’ time can be chewed up sifting through volumes of reports that turn out to be merely annoying marketing messages rather than dangerous live phishing attempts.

Microsoft claims that its Security Copilot AI tools can help analysts distinguish false positives from true positives more quickly, and deal with the problem more efficiently.

Microsoft Security Copilot’s information enrichment features are really impressive. A demonstration of incident management in handling an attempted chatbot jailbreak showed how the tool can provide detailed context of what happened, where, when, and by whom.

This contextual information helps responders to quickly determine what went on and deliver an appropriate response. Integration with existing policy tools would help to automate the process still further.

The demonstration was, however, limited to processing existing, fairly routine incidents that are already well-understood. There is far greater value to be found in providing this enriched context for incident responders dealing with new and novel events.

Finding patterns in disparate, complex information from a variety of sources is challenging for human operators, a task to which machine learning (ML) is far better suited. Retrieval-augmented generation (RAG) can help human operators make sense of complex data, guiding them towards an understanding that machine models alone are not capable of.
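To make the idea concrete: the retrieval step at the heart of RAG simply pulls the most relevant pieces of context into the model's prompt before asking a question. The sketch below is a toy illustration only; the keyword-overlap scorer and the sample incident notes are hypothetical stand-ins for a real embedding-based search over an organization's security data.

```python
# Minimal RAG retrieval sketch (illustrative only): rank incident notes
# by keyword overlap with an analyst's question, then assemble a prompt.
# A production system would use embeddings, not word overlap.

def score(query_terms: set, doc: str) -> int:
    """Count how many query terms appear in a document."""
    return len(query_terms & set(doc.lower().split()))

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Return the k documents sharing the most terms with the query."""
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: score(terms, d), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Combine retrieved context with the analyst's question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical incident notes standing in for real telemetry.
incident_notes = [
    "phishing email reported by finance team with spoofed invoice link",
    "routine marketing newsletter flagged as suspicious by new hire",
    "vpn login from unusual location for service account",
]

print(build_prompt("is the reported phishing email a false positive",
                   incident_notes))
```

The point of the sketch is that the human analyst still asks the question and judges the answer; the machinery only gathers and condenses the relevant context.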

This is where the real opportunity lies: helping human operators to respond to a constantly evolving threat landscape, where the novelty of unknown dangers requires a blend of machine-assisted human intelligence.

Reducing the actions of human operators to merely confirming the recommended action of a machine-learning model is not so much “human in the loop” as “human in a cage”. Microsoft would do better to tap the true potential of what it has built instead of fixating on mere automation.

It is frustrating that the hype of large language models (LLMs) continues to draw focus away from the more prosaic, and yet far more valuable, forms of AI and machine learning. Microsoft has an opportunity here to move past its own marketing hype and reimagine its products in ways that help customers with the truly challenging problems they face. It would be a great shame if it limited itself to mere automation and efficiency.

On April 9, 2025, Microsoft will be talking about its AI security options in more depth at Microsoft Secure, a one-day online event. Join the event to better understand where Microsoft’s security efforts are focused, and send in your feedback on what you think would help most. The schedule is below:

April 9, 2025, 11:00 am – 12:00 pm Eastern (Americas)

April 10, 2025, 10:00 – 11:00 am CET (Europe, Middle East, Africa)

April 10, 2025, 12:00 – 1:00 pm SGT (Asia)
