The Latest Phishing Scam Uses Deepfake AI Voices

October 8, 2019 (updated November 4, 2019) | Blog, Phishing, Security

What some observers are calling the first reported AI heist occurred this past September, when a British subsidiary was tricked into sending over $200,000 to a fraudulent account. The scammers pulled it off by using an artificially generated version of the voice of the CEO of the victim’s German parent company, convincing the subsidiary’s chief executive to wire the money to a supplier in Hungary. The British CEO only became suspicious when the hackers attempted a second theft and the “deepfake voice” contacted him at the same time his actual boss did.

Spoofing, Spear Phishing, Whaling – Several Avenues of Attack

The subsidiary was represented by the insurance firm Euler Hermes Group SA, which first brought the story to light. A Euler spokesperson later told the Washington Post how their client was fooled: “[t]he software was able to imitate the [German CEO’s] voice, and not only the voice: the tonality, the punctuation, the German accent.” Despite suspicions about the nature of the request, the phone call and a follow-up email were enough to convince the subsidiary’s CEO to comply.

The fact that the call was accompanied by a spoofed email only reinforces the case that the hackers did extensive research into both the British company’s chief executive and his boss. This is characteristic of whaling, an even more targeted form of spear phishing that relies on far more sophisticated methods of attack. Cybercriminals who employ whaling put considerably more effort into catching a victim off-guard, all in the hopes of a bigger reward than any random target would yield.
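To make the email-spoofing piece concrete: one simple (though far from sufficient) red flag is a mismatch between the visible From address and the Return-Path bounce address in a message’s headers. The Python sketch below, using entirely hypothetical addresses and domains, illustrates that check; production mail filtering relies on stronger signals such as SPF, DKIM and DMARC results.

```python
# Minimal sketch: flag a possibly spoofed email by comparing the domain in
# the visible From header against the domain in the Return-Path header.
# This is an illustration only; real filters check SPF/DKIM/DMARC instead.
from email import message_from_string
from email.utils import parseaddr


def from_return_path_mismatch(raw_message: str) -> bool:
    """Return True if the From and Return-Path domains differ."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_addr = parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rpartition("@")[2].lower()
    return_domain = return_addr.rpartition("@")[2].lower()
    return bool(from_domain and return_domain and from_domain != return_domain)


# Hypothetical example: the display name claims to be the parent company's
# CEO, but the bounce address points at a completely different domain.
sample = (
    "From: CEO <ceo@parent-company.example>\n"
    "Return-Path: <bounce@attacker.example>\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please send the payment to our Hungarian supplier today.\n"
)
print(from_return_path_mismatch(sample))  # True
```

Of course, a well-resourced whaling crew can often pass such a check by registering a look-alike domain, which is why the human vigilance discussed later in this article matters so much.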

Deepfake Audio & Video Development

This type of AI-generated media is widely known as a “deepfake,” a name that reflects the worrying connotations of combining machine “deep learning” with fraud. While supporters of AI-generated voice technology often point to its potential benefits, deepfake software has increasingly been used to depict female celebrities in pornographic videos without their consent, as well as to create targeted hoaxes and fake news campaigns.

Public trust and privacy concerns have not stopped the technology behind deepfakes from proliferating, though. One such example is Google’s Duplex AI, which allows Google Assistant to make calls for reservations, car rentals and more using a convincingly human-like voice. Various face-swapping applications also fit into this category, such as the now-infamous FaceApp, which critics warn has worrying long-term implications for privacy breaches and personal data being siphoned off by bad actors.

AI Being Used by Hackers?

As the FaceApp controversy indicated, it can be hard for most people to predict where their data will end up. Numerous regulations have been and continue to be introduced to combat this reality, but the speed (and anonymity) of the Internet makes it nearly impossible to track every piece of information. If this deepfake phishing scam signifies anything, it is that the time, resources and effort an attacker invests make the difference in capturing data and acting on it.

Research has revealed startling implications for the types of cyber attacks AI could enable, but until this particular attack, there was no evidence that it was actually a tenable strategy in the real world. While machine learning far outpaces humans at automating data aggregation, it is not yet as cost-effective for the execution side of what hackers usually want. The technology that powered this deepfake phishing scam was nothing new; the possibility of such an attack had been raised at least a year before it was proven.

Deepfakes have generated a great deal of fear and paranoia, but the real danger lies in something hackers have always exploited: human complacency and ignorance. Many will want to rely on technology to protect against this new threat, but the best defense against any manipulation attempt is knowledge and good judgment. AI may have put video and audio alteration into everyone’s hands, yet these cases remain uncommon because humans can still spot fake media, as long as they are aware and prepared.

Contact SWK Technologies to Learn How to Beat Spoofed Media

Deepfakes are just the latest in a long line of altered media, and phishing at its core will always be about making you lower your guard through some kind of emotional trigger. SWK can help you prepare everyone in your organization for whatever new avenue of attack hackers are leveraging by reinforcing your best and last line of defense – the human element.

Contact SWK Technologies to learn more about our cybersecurity solutions and how we can help you strengthen your network.
