IRONSCALES has extended the machine learning algorithms it uses to identify email anomalies to cover the video and audio files used to create deepfakes.

Grant Ho, chief marketing officer for IRONSCALES, said the capability will make it easier for cybersecurity teams to investigate next-generation phishing attacks that employ artificial intelligence (AI) to craft more sophisticated social engineering campaigns aimed at specific individuals.

As these so-called Phishing 3.0 attacks become less expensive to create and launch in the age of AI, it’s clear cybersecurity teams will need to extend their existing defense capabilities, he added.

Like it or not, cybersecurity teams of all sizes are now locked in an AI arms race. They need to leverage AI to identify new classes of threats, such as the deepfakes cybercriminals are creating with the same technologies. The challenge is that many teams cannot afford to acquire, deploy and support additional tools and platforms. Instead, they need the tools and platforms they rely on today to combat existing threats to be extended to these new classes of threats, noted Ho.

It’s not clear to what degree cybercriminals are already launching deepfake attacks, but there is enough evidence to suggest the technologies needed to create them are widely available. As such, it’s now more a question of when, rather than if, deepfakes become commonplace.

In addition to acquiring capabilities to detect and thwart these types of attacks, organizations also need to build a stronger culture of cyber resilience, said Ho. For example, any message that demands an urgent response should immediately be viewed with suspicion, he noted. Ultimately, even in the age of AI, there is no substitute for ongoing training, he added.

The one thing that remains to be seen, however, is the degree to which organizations will proactively respond to deepfakes. If history is any guide, it will take a series of high-profile breaches before business leaders fully appreciate the severity of the threat. Unfortunately, there may still be a tendency to blame employees for failing to recognize a deepfake, even though senior-level employees have been shown to be just as easily fooled as anyone else. In fact, given their access to sensitive information, senior executives are the more likely targets of a deepfake attack.

Hopefully, there will come a day when interactions occurring on the internet are assumed to be fake unless proven otherwise. In the meantime, cybersecurity teams should at the very least be preparing for a new era of social engineering attacks that, for example, combine all the information readily available online to create deepfakes that most individuals will initially find highly credible. The only way to detect those attacks will be to rely more on AI algorithms capable of spotting anomalies that no human is able to see, much less recognize.
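The article does not describe how any vendor’s detection works internally, but a minimal sketch of the general idea of machine-driven anomaly detection over media might look like the following: summarize audio clips as fixed-length acoustic feature vectors (here, MFCC statistics via librosa), fit an unsupervised outlier detector (scikit-learn’s IsolationForest) on recordings known to be genuine, and flag clips that fall outside that distribution. The file paths and the contamination parameter are placeholders, and production deepfake detectors are far more sophisticated.

```python
# Illustrative sketch only -- not any vendor's actual method. It shows the
# general shape of AI-based anomaly detection: featurize known-genuine
# recordings, fit an unsupervised detector, then flag statistical outliers.
# Requires: pip install numpy librosa scikit-learn

import numpy as np
import librosa
from sklearn.ensemble import IsolationForest

def featurize(path: str, sr: int = 16000) -> np.ndarray:
    """Summarize a clip as a fixed-length vector of MFCC means and std devs."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape: (20, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # shape: (40,)

# Hypothetical corpus of recordings known to be genuine (placeholder paths).
genuine_clips = ["voicemail_001.wav", "voicemail_002.wav", "voicemail_003.wav"]
X_train = np.stack([featurize(p) for p in genuine_clips])

# Fit an unsupervised outlier detector on the genuine-only feature vectors.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X_train)

# Score an incoming clip: predict() returns -1 for outliers, 1 for inliers.
incoming = featurize("suspicious_clip.wav")  # placeholder path
if detector.predict(incoming.reshape(1, -1))[0] == -1:
    print("Clip is anomalous relative to known-genuine audio; escalate for review.")
else:
    print("Clip falls within the distribution of known-genuine audio.")
```

The point of the sketch is only that the anomaly is defined statistically rather than by human inspection; real systems combine many such signals, such as visual artifacts, lip-sync consistency and metadata, and train on far larger corpora.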
