Georgia Turnham, Security Advisor, Trustwave SpiderLabs
On Nov. 5, Georgia Turnham, Security Advisor at Trustwave SpiderLabs, will conduct a webinar on the emerging threat known as 'deepfakes'. During this session, Turnham will discuss the scale of the issue, its unchecked growth, and the continual improvements that make deepfakes increasingly believable.
Deepfakes, what are they?
Deepfakes are a relatively new phenomenon, only picking up the name 'deepfake' as recently as 2017. Put simply, a deepfake is a synthetic or artificially created piece of media (image, video, or audio) that makes it appear someone said or did something that they, in fact, did not, much like special effects in a movie. The creators of these media leverage artificial intelligence and machine learning techniques to produce output that looks and sounds as authentic as if the person themselves had said or done those things.
The danger posed by these threats is potentially severe. They can undermine and destabilize democracies, spread disinformation, and attack an individual's credibility. The scary thing is that, because deepfakes are still so novel, we haven't seen the full extent of the danger they pose. The fact that these threats exploit the 'seeing is believing' idiom is what makes them so hard to counter.
What is a typical deepfake attack?
There is no 'typical' deepfake attack; ultimately, the attack is tailored to the objectives of the attacker. For example, an attacker may take a snippet of audio, find a photo or a video of their victim, and superimpose it onto another piece of media. The result is media showing the victim taking part in an activity or conversation in which they were never involved. The goal is typically to spread false messages with a veneer of legitimacy or to blackmail the victim.
What are the attackers' general goals? Do they differ much from a threat actor who uses other methods to gain initial entry?
An attacker's goals vary, but attackers commonly use deepfakes for extortion and blackmail scams, as well as misinformation and disinformation campaigns. The main difference between these threat actors and those who use, for example, phishing or brute-force techniques as an entry vector is that here the attacker performs extensive reconnaissance, and the attack is often conducted without the victim's knowledge.
Can you give an example of a successful deepfake attack?
One of the most notable deepfake attacks occurred in 2019, when threat actors targeted a chief executive at a U.K. energy firm. The executive received a phone call from someone claiming to be the company's Germany-based CEO, with an urgent request to transfer €220,000 to one of the company's Hungarian suppliers within the hour. It wasn't until the attacker called back several days later, asking for more money to be transferred, that the U.K. executive became suspicious. Unfortunately, the attackers were never caught.
Another case involved former U.S. President Donald Trump and Speaker of the House Nancy Pelosi. Attackers created and posted online a video that made Pelosi appear inebriated at an event, in an effort to tarnish her reputation. And sadly, it worked: the video went viral, prompting calls and speculation from the general public.
What is the best defense against a deepfake attack?
Defense is twofold. User awareness is the primary defensive weapon; the aim is to educate users on the hallmarks of deepfake content:
- Faces with distorted features or movements and a lack of blinking.
- Jerky or unsynchronized movement.
- Differences or shifts in lighting.
- Unclear or, at times, robotic audio.
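Some of these hallmarks can be quantified. As a purely illustrative sketch (not a Trustwave tool), the lack-of-blinking cue is often measured with the eye aspect ratio (EAR): the ratio of an eye's vertical to horizontal landmark distances, which drops toward zero when the eye closes. The code below assumes six per-eye (x, y) landmarks are already available from some external face-landmark detector, and the 0.2 blink threshold is a common but arbitrary choice:

```python
import math


def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    eye[0] and eye[3] are the horizontal corners; (eye[1], eye[5]) and
    (eye[2], eye[4]) are the top/bottom vertical pairs. EAR falls toward
    0 as the eye closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)


def blink_count(ear_series, threshold=0.2):
    """Count closed-to-open transitions in a per-frame EAR series.

    A long video whose EAR never dips below the threshold (i.e. near-zero
    blinks) would be one weak signal of synthetic footage.
    """
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks
```

In practice a real detector would combine many such signals (lighting shifts, audio artifacts, motion jitter), since any single heuristic is easy for newer generators to defeat.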
How about a sneak peek at what the webinar will discuss?
The first is that deepfake software is far cheaper and more accessible than one might expect, which is what makes the threat so pervasive.
Second, the talk will cover the emergence of anti-deepfake solutions, legislation, and research projects.