The modern-day cybercriminal may possess technical knowledge that greatly surpasses that of the average digital consumer, but that doesn’t mean crafting custom malware is their modus operandi when it’s time to compromise a target. They prefer the path of least resistance, and often that means exploiting the human psyche.
The evolution of consumer behavior in the digital age has played right into this. The speed at which we consume content and engage via our devices has only accelerated over the last decade. When we narrow the scope to email, response times are said to be just under two minutes.
In today’s prototypical work setting, employees fire off emails at the speed of light, responding even more quickly to colleagues they frequently communicate with, without second-guessing who is on the other end. This has given rise to phishing attacks and, more recently and increasingly, business email compromise (BEC) campaigns.
Also known as CEO fraud, BEC attacks are on the rise as cybercriminals continue to rake in billions—that’s right, billions—of dollars on an annual basis.
According to the FBI Internet Crime Complaint Center’s (IC3) 2018 Internet Crime Report, the 20,373 complaints filed with the IC3 tied to BEC attacks accounted for an estimated loss of over USD $1.2 billion in 2018 alone. That’s quite the jump from the 15,690 complaints filed in 2017, which resulted in USD $675 million in losses. The takeaway is that BEC attacks continue to pay off for attackers, so chances are they aren’t going to let up any time soon.
If you’ve frequented our site before, chances are you’ve read some of our extensive coverage on the topic, including an example of a CEO fraud email threat. From payroll to wire transfer scams, these attacks require little sophistication and no exploitation of technical vulnerabilities. They’re all about exploiting humans.
But what happens down the line, when employees begin to wise up and scrutinize the emails they receive more closely? At that point attackers may need to take their tactics to the next level: enter deepfake technology.
What Is Deepfake Technology?
In its simplest terms, producing a deepfake means using AI-based technology to alter video content. Deepfakes first gained notoriety in 2017 when a Redditor named “deepfakes” leveraged deep learning technology to stitch the faces of popular celebrities, such as Scarlett Johansson, onto people featured in pornographic films. From there, deepfake algorithms quickly proliferated, making it easier to target political figures and TV personalities as well. Why? Producing a deepfake at the time required a large number of photos of the target, and given how many photographs of public figures are readily available, they were, and still are, prime targets.
You may be asking, “What does this have to do with BEC attacks and email?” Well, until deepfake technology evolves to the point where it can be used in live video chats, attackers will have to resort to silicone masks (which has been done already). But given how widespread voicemail-to-email features are in business phone systems, security leaders should be wary of deepfake audio.
There has already been one case of artificial intelligence (AI)-based software being used to mimic an executive’s voice and convince the CEO of a UK-based energy firm to transfer more than USD $200,000. Although this incident in Europe was the first known voice-spoofing attack, it could be a tactic that cyber swindlers leverage more down the road.
Is It Time to Worry?
BEC attacks in their current form remain highly effective, so we may not see attackers rush to weave AI-driven deepfake tactics into their widespread campaigns just yet. And while there’s only been one reported case of cybercriminals leveraging deepfake audio to dupe a firm into transferring money, that’s not to say a targeted attack won’t take advantage of the technology.
The example of the fooled CEO is an indication of what we might see in the near future, says Ziv Mador, vice president of security research at Trustwave.
“One thing we know about cybercriminals is that they’re very quick adopters of new and existing technologies,” Mador says. “Take online banking as an example. They quickly realized the potential to exploit it by creating banking Trojans and stealing credentials when victims log into their bank accounts.”
While traditional social engineering attacks via email have been tremendously successful, Phil Hay, senior research manager of email security and malware analysis at Trustwave SpiderLabs, agrees that as consumers and employees wise up and put better email protections in place, attacks will have to evolve into other formats, such as deepfake audio.
“If the target’s juicy enough for attackers they’ll definitely make it happen,” Hay says. “It’s conceivable and not at all farfetched.”
Apart from creating custom payloads and malware, threat actors have always co-opted new technology that was never intended for malicious use. As AI research evolves and institutions continue to make strides in the deepfake video and audio arena, cyber swindlers will take notice.
Take Google Duplex, for example. This AI-driven service is intended to help users make appointments over the phone without any interaction required by the user. At Google’s developer conference in May 2018, CEO Sundar Pichai demoed the AI voice, which was able to understand and respond to the person on the other end of the call. Given how attackers have iterated on existing technology to serve their malicious needs, one can see how this could be a ripe opportunity for cyber miscreants to exploit.
What You Should Do About It
Although it isn’t the most significant threat the Trustwave SpiderLabs team has seen, nor the most prevalent, it is concerning because deepfake technology is a legitimate tool attackers can use, Mador says.
“Most organizations only have a limited amount of people that can perform the actions that these attackers are after,” Mador says, referring to wire transfers. “These are employees that usually work in the finance department.”
Until credible deepfake-detecting software is developed, security leaders should ensure that employees, especially those typically targeted, apply a heightened sense of scrutiny to the emails they receive, no matter who they’re from or what they contain, according to Mador.
“Every organization has to identify the employees who are at greater risk and make sure they train them on the threats they’re facing,” he says.
Two specific areas that organizations can focus on to bolster protection from these email-based threats include:
Secure Email Gateway: Deploy email security within your environment that blocks undesired non-business content, such as phishing and BEC attacks, from entering your network. The ideal solution should offer policy control and reporting, in addition to a multi-layered approach that reduces false positives.
Cybersecurity Awareness Training: A heightened sense of suspicion doesn’t come naturally to employees. Successful security awareness training programs empower all employees within the business (not just the major targets) to practice secure computing and become more infosec-conscious. This should also include a phishing simulation service that sends targeted emails to test user responses to the social engineering attacks they may face.
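As one concrete layer of the email protections described above, many organizations publish SPF and DMARC records in DNS so that receiving gateways can quarantine or reject mail that spoofs their domain, a common ingredient of CEO fraud. A minimal sketch, using the placeholder domain example.com and a hypothetical reporting mailbox (your actual sending hosts and policy will differ):

```
; SPF: authorize only your mail servers to send as example.com
; "-all" tells receivers to fail anything else
example.com.         IN  TXT  "v=spf1 mx -all"

; DMARC: quarantine mail that fails SPF/DKIM alignment and
; send aggregate reports to a monitoring mailbox
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Note that records like these only address exact-domain spoofing; they won’t stop lookalike domains or compromised legitimate accounts, which is why the gateway filtering and awareness training above remain essential.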
Marcos Colón is the content marketing manager at Trustwave and a former IT security reporter and editor.