Phishing scams used to be filled with awkward wording and obvious grammar mistakes. Not anymore. AI is now making it harder to distinguish what is real.
According to Cofense, email-based scams surged 70% year over year, driven by AI’s ability to automate lures, spoof internal conversations, and bypass spam filters with subtle text variations.
Criminals use AI algorithms to analyze large amounts of data and build a picture of their targets’ interests, behavior, and preferences. For this, they use specialized tools such as FraudGPT, sold on underground channels. Unlike the popular ChatGPT, it has none of the guardrails that prevent it from answering questions about illegal activity.
Using AI to improve threat detection
Manual detection depends on human analysts, which makes it slower and less efficient. It can’t keep pace with large volumes of data and often spots threats too late.
This is where AI excels, identifying patterns and unusual activity quickly, even in massive datasets. By learning from real-world examples and adapting to new threats, it helps detect phishing attempts early.
Besides being fast and handling lots of data, AI cuts down on alerts that don’t matter, letting security teams focus on real problems. AI also spots small changes in how users behave that could mean an attack is underway.
“Treating AI like a human is a perspective shift that will fundamentally change how cybersecurity leaders operate. This shift encourages security teams to think of AI as a collaborative partner with human failings,” explained Doug Kersten, CISO of Appfire.
AI techniques to detect AI-driven phishing
Machine learning helps recognize when something’s off, such as logins from unusual locations or access to systems the user typically doesn’t interact with. These models learn what’s normal over time and call out anything that seems out of place.
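As a concrete illustration, the sketch below trains scikit-learn’s IsolationForest on a handful of made-up login features (hour of day, distance from the usual location, count of unfamiliar systems touched) and scores a new event against that baseline. The feature set and values are hypothetical, not taken from any specific product.

```python
# A minimal sketch of behavioral anomaly detection with scikit-learn's
# IsolationForest. Feature names and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, km_from_usual_location, unfamiliar_systems_accessed]
# Historical "normal" activity for one user (hypothetical values).
normal_activity = np.array([
    [9, 3, 0], [10, 5, 0], [14, 2, 1], [11, 4, 0], [16, 6, 0],
    [9, 1, 0], [13, 3, 1], [10, 2, 0], [15, 5, 0], [11, 3, 0],
])

# Learn what "normal" looks like for this user.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(normal_activity)

# A new event: 3 a.m. login, 8,000 km away, touching 4 unfamiliar systems.
new_event = np.array([[3, 8000, 4]])
score = model.decision_function(new_event)[0]   # lower = more anomalous
flagged = model.predict(new_event)[0] == -1     # -1 means outlier

print(f"anomaly score: {score:.3f}, flagged: {flagged}")
```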
Natural language processing (NLP) pays attention to how emails are written. It can catch signs that something’s not right, such as weird requests, forced urgency, or language that tries too hard to sound official.
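A toy version of that idea is sketched below: TF-IDF features feeding a linear classifier trained on a few invented email snippets. Real deployments rely on much larger labeled corpora and typically transformer-based language models; the samples and labels here are purely illustrative.

```python
# A toy sketch of text-based phishing detection: TF-IDF features plus a
# linear classifier. The example emails and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account will be suspended in 24 hours, verify your password now",
    "Urgent wire transfer needed before end of day, keep this confidential",
    "Attached is the agenda for Thursday's project sync",
    "Lunch menu for next week is posted on the intranet",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

suspect = ["Immediate action required: confirm your credentials to avoid lockout"]
print(clf.predict_proba(suspect)[0][1])  # estimated probability of phishing
```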
Deep learning is used to catch fake audio, images, and video. It detects things like deepfakes or cloned voices in impersonation scams.
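The fragment below shows only the model side of such a detector, assuming a small convolutional network that classifies a 224x224 face crop as real or synthetic. The architecture, layer sizes, and random input are placeholders; production deepfake detectors are trained on large labeled datasets and often analyze video and audio together.

```python
# A bare-bones sketch of an image deepfake classifier in PyTorch.
# Untrained and illustrative only; sizes and layers are assumptions.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),  # assumes 224x224 input face crops
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = DeepfakeDetector()
frame = torch.randn(1, 3, 224, 224)         # stand-in for one face crop
probs = torch.softmax(model(frame), dim=1)  # [p_real, p_fake] from untrained weights
print(probs)
```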
Challenges in AI defense deployment
All new technologies have flaws, and AI is no exception. The drawbacks we need to consider are:
False positives: It wouldn’t be the first time a piece of content written entirely by a human gets flagged by an AI tool as AI-generated. The same can happen with legitimate emails that a detection model mislabels as dangerous. Too many of these false alarms can overwhelm cybersecurity teams and make it harder for them to focus on real threats.
Privacy concerns: AI systems that detect phishing often analyze emails, messages, attachments, and user behavior. This raises questions about how data is stored, used, and protected.
Model tuning: Just like mobile apps, these models need to be constantly updated to keep up with new phishing tactics. Otherwise, over time, their accuracy could degrade. This could result in missing new threats or generating even more false positives.
Skill gaps: Because AI tech is developing so fast, there aren’t many experts who really know how to manage these systems yet. Without proper training, teams might struggle to adjust the models and track how they’re doing.
Future outlook
AI will continue to advance, both for cybercriminals and security teams. It’s hard to say who will gain the upper hand in this race. But we shouldn’t fool ourselves into thinking AI can solve cybercrime on its own, and it should never be given full control to make decisions without human oversight.
“AI is a powerful tool, but it can’t replace humans. It’s about helping us do our jobs better. The best cybersecurity people will be those who can effectively work with AI, using it to boost their own skills and knowledge,” said Vineet Chaku, President of Reaktr.ai.