Are Deepfake Attacks an Immediate Threat or Future Concern for Organizations?

Marcos Colón

April 23, 2024


The world is up in arms over deepfakes. Specifically, over who has access to this technology and how we can tell what's real from what isn't. But these fraud attempts are more than just fake videos of celebrities hawking illegitimate products or politicians saying unsavory things. Deepfakes are increasingly being used in attacks on business interests around the world.

In a recent discussion, Bogdan Botezatu, director of threat research and reporting at Bitdefender, offered guidance on cutting through the hype to determine exactly how deepfakes and other generative artificial intelligence (GenAI) tools threaten organizations' cybersecurity posture. Most importantly, he weighs in on whether these attacks are something organizations should be worrying about today.

Why are deepfakes so dangerous or disruptive? 

Bogdan Botezatu: Deepfakes are the latest technology in a long line of tools that have been used to conduct scams. For millennia, con artists have perpetrated fraud by tricking people into thinking they are something or someone they are not. Long-lost relatives show up when an inheritance is being doled out. A doctor with questionable expertise offers a miracle cure. A Nigerian prince needs your help repatriating millions of dollars. The list goes on and on.

But what makes deepfakes so dangerous is that they allow threat actors to target many individuals on an extremely personal level in increasingly sophisticated ways. These fake identity scams used to be performed in person, one victim at a time, but deepfakes delivered over electronic communication channels like email, text, chatbots, in-app messaging, and video conferencing can be personalized and scaled pretty easily. The most successful attacks make victims question reality itself, which can be a very effective strategy and an evil thing to do.

How are threat actors currently using deepfakes in their attacks? 

BB: Deepfakes have taken phishing scams to the next level. Threat actors can compile an enormous amount of highly specific and personal information about people and organizations and feed that into GenAI tools to mimic an authority figure. This goes beyond just voice and appearance to include specific behavior, syntax, preferences, and tendencies. Coupled with social engineering campaigns that identify what financial software the organization uses, the banks where it has accounts, and internal processes and policies, deepfakes can be incredibly accurate, personal, and difficult to detect.

Voice cloning tools allow threat actors to verbally ask subordinates to initiate a fraudulent transaction over the phone. Publicly available chatbots powered by GenAI allow them to carry on a human-like conversation over text or in-app messaging apps. And GenAI video creation services allow them to conduct meetings over video conferencing apps to deliver fraudulent marching orders to a group of people at once.  

It’s true that people have gotten savvier at recognizing deepfake photos (just look at the six-fingered figures in some AI-generated images), but detection rates are still relatively low. Out of curiosity, I conducted a non-scientific study in which I asked professional photographers to identify deepfake photographs mixed among legitimate images. Even these experts, people who take photographs for a living and use Photoshop to touch them up, had a recognition rate below 50%. The recognition rate for deepfake video is even lower, since the images are moving and subtle inaccuracies are harder to spot.

Are there any recent examples you can point to? 

BB: The first example I can remember was in 2016 when a group of hackers successfully scammed €40 million from a German manufacturing company by convincing the chief financial officer (CFO) of its factory in Romania that an executive from headquarters requested the transaction. While the initial ask was delivered via email, a follow-up phone call using voice cloning technology sealed the deal and convinced the victim that the request was legitimate. 

More recently, we’ve heard about an attack on a major automotive company where the CFO was tricked into wiring a large amount of money to a fraudulent account. We’ve also gotten reports of an executive at another Fortune 500 company who sat in on a fake Zoom call with senior-level executives that was chilling in its ability to sustain the scam for so long. As deepfake technology evolves to the point where anyone can produce Hollywood-quality content, we’re going to hear a lot more about successful deepfake attacks that stretch our understanding of what is real and what isn’t.

What can organizations do to protect themselves from deepfakes? 

BB: Unfortunately, there’s no tool out there that can identify deepfake attacks with absolute certainty. Watermarks and other tagging technologies aren’t likely to stop the attacks either. Organizations are going to have to develop and enforce policies that prevent fraudulent transactions from going through unchallenged, educate their employees to better spot abnormal behavior, and maintain good cyber hygiene throughout their IT environments to prevent the initial breach.

Protecting the organization from deepfakes depends on establishing and maintaining trust at all times. Safeguards need to be created and enforced for transactions and other risky actions: for example, requiring multiple sign-offs on large transactions, capping the amount of funds that can be moved, allowing transactions to be sent only to verified accounts, or building in a waiting period or middleman to give you time to authenticate the request.
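As a rough illustration (not Bitdefender guidance), the kinds of transaction safeguards described above can be encoded as a simple policy check; every name and threshold here is hypothetical:

```python
# Hypothetical sketch of anti-fraud transaction safeguards: amount limits,
# multiple sign-offs on large transfers, and a verified-payee allowlist.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000   # assumed: amounts above this need two sign-offs
HARD_LIMIT = 250_000          # assumed: no single transfer may exceed this

VERIFIED_PAYEES = {"acme-supplier", "payroll-provider"}  # hypothetical allowlist

@dataclass
class TransferRequest:
    amount: float
    payee: str
    approvers: set = field(default_factory=set)

def evaluate(req: TransferRequest) -> list[str]:
    """Return a list of policy violations; an empty list means the
    request may proceed to the next stage (e.g. a waiting period)."""
    violations = []
    if req.amount > HARD_LIMIT:
        violations.append("exceeds hard transfer limit")
    if req.amount > APPROVAL_THRESHOLD and len(req.approvers) < 2:
        violations.append("large transfer requires at least two approvers")
    if req.payee not in VERIFIED_PAYEES:
        violations.append("payee is not on the verified-account list")
    return violations
```

The point of a check like this is that a convincing voice or video clone of an executive cannot, by itself, bypass the policy: a second approver and a verified payee are still required regardless of who appears to be asking.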

Mandating deepfake and fraud detection training is also a good idea. You can’t rely on users to recognize every fraud attempt, but skewing the identification rate in your favor a few points is never a bad idea. Continue to hammer home the ideas that there are no free lunches, that if something seems too good to be true it probably is, and that everything must pass the smell test. Just giving people the power to challenge abnormal behavior without stigma or repercussions can go a long way toward enabling a strong anti-fraud culture.

Finally, maintaining good cyber hygiene can proactively stop these attacks from occurring in the first place. The most sophisticated attacks need a legitimate communication channel to deliver the deepfake, which requires compromising an email account, end device, or application. Maintaining a robust security posture that protects these endpoints and stops lateral spread to other business systems is a good way to proactively prevent the initial breach and mitigate the impact of a successful attack.

What can we expect in the future as deepfake technology continues to evolve? 

BB: There’s no doubt that deepfakes are growing more sophisticated and more common. I wouldn’t be surprised if we see a major deepfake attack in 2024, especially with the U.S. presidential election coming up in the fall. But I’m a big believer in fighting fire with fire. Cybersecurity tools powered by AI and machine learning (ML) are going to be key to protecting organizations around the world from these types of attacks. Most critically, they are going to learn how to distinguish legitimate behavior from fraudulent behavior by constantly observing the actions of users and executives in real time. Is someone trying to access data they shouldn’t? Is an account being accessed by an unknown entity? This kind of behavioral analysis, done at the IT and authentication level, will help organizations identify these increasingly sophisticated fraud attempts.
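The behavioral analysis described above can be sketched, in heavily simplified form, as comparing each access event against a per-user baseline of previously observed behavior. Real products use statistical and ML models; this toy version, with entirely hypothetical names, only shows the shape of the idea:

```python
# Minimal sketch of behavioral anomaly flagging: build a per-user baseline
# of resources and devices seen during normal operation, then flag events
# that fall outside it. All names are hypothetical.
from collections import defaultdict

class AccessBaseline:
    def __init__(self):
        self.resources = defaultdict(set)  # user -> resources normally accessed
        self.devices = defaultdict(set)    # user -> devices normally used

    def observe(self, user: str, resource: str, device: str) -> None:
        """Record a known-good access event to build the baseline."""
        self.resources[user].add(resource)
        self.devices[user].add(device)

    def flags(self, user: str, resource: str, device: str) -> list[str]:
        """Return anomaly flags for an event; empty means it fits the baseline."""
        out = []
        if resource not in self.resources[user]:
            out.append("access to unfamiliar resource")
        if device not in self.devices[user]:
            out.append("login from unknown device")
        return out
```

An event that trips either flag would feed a risk score or trigger step-up authentication rather than block the user outright, since unfamiliar behavior is sometimes legitimate.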


Deepfakes and other GenAI technologies are going to change the world, for good and for bad. Organizations are going to have to be on high alert for highly sophisticated attacks that can make people question reality itself. Putting policies and safeguards in place to challenge abnormal behavior, training users to detect fraud attempts, and maintaining good cyber hygiene across the organization are the best ways to stop deepfakes from penetrating your organization today and in the future.

Learn how Bitdefender cybersecurity experts can help protect organizations through their services.



Marcos Colón

Leveraging his background as a journalist and editor, Marcos Colón has specialized in cybersecurity content creation for over a decade. Known for communicating complex topics effectively, he bridges the gap between technical detail and audience understanding. His interviewing skills and commitment to engaging narratives have made him a distinctive voice in the cybersecurity sphere.
