A National Security Threat: It’s Time to Get Proactive Against Deepfakes

In October, the Senate passed the Deepfake Report Act of 2019. If passed by the House of Representatives, where it currently sits, the bill would require the secretary of Homeland Security to file annual reports on deepfakes and their impact on national security. Meanwhile, China has made deepfakes a criminal offense. Both moves speak to the seriousness of the threat. But the deepfake threat will continue to grow in 2020, and the implications are tremendous, especially with elections right around the corner.

What is a deepfake?

Deepfakes, which VentureBeat recently dubbed one of the trends that defined 2019, are an extremely sophisticated form of digital impersonation. Via machine learning and facial mapping software, a person’s face or voice can be inserted into a video without their permission. Videos and recordings can thus make that person say something they would never say and can either be shared online or used as a form of blackmail. Fake videos of Nancy Pelosi and Mark Zuckerberg have already made the rounds, celebrity faces have been seamlessly dubbed over pornography, and a digital artist even rendered videos of opposing UK politicians endorsing each other.

Deepfakes can also be used in mobile communications. Attackers can use rich-content text messages ("smishing") or voice mail messages ("vishing") as forms of social engineering to get targets to do what they want. For example, fraudsters used AI to impersonate a CEO's voice and demand a $243,000 transfer. But as the Senate's passage of the Deepfake Report Act of 2019 shows, deepfakes have grown beyond the business world and now threaten our nation's security as well.

The term "deepfake" was coined in 2017, when deepfakes were still produced by small groups in an artisanal manner. Now the practice has blossomed into an industry, and deepfakes-as-a-service will come to the forefront this year for both fun and malicious purposes. The technology behind deepfakes has improved, too. Early deepfakes focused heavily on voice, but they can now convincingly fake a subject's face as well. Samsung researchers have even built a capability to derive a realistic video from just one still image of a subject. And while the doctoring used to be done offline, it can now be done in near real time. This evolution makes deepfakes even more convincing and concerning.

The government deepfake threat

Undoubtedly, the government must be prepared for the fact that as deepfakes evolve, they will continue to be used in ways that undermine national security, from faking credentials and possibly clearance checks to undermining the security of our elections. Deepfakes can also be used to discredit political candidates and push inaccurate messages to voters. Suppressing a "fake" or rumor is nearly impossible once it's out there, and the damage could sway election outcomes. The people working on election campaigns are at risk as well. They have access to strategies and timelines that may be of interest to bad actors, and those actors may see deepfakes as the best way to extort that information from them if they cannot simply break in.

The question, of course, is how the government can fight malicious deepfakes in order to protect national security. Employee awareness, particularly to prevent successful ransomware or data leakage attacks, is a good first step and raises the bar scammers must clear to succeed. As with any phishing scheme, bad actors try to create a sense of urgency that prevents people from slowing down and recognizing the scam for what it is.

It is important to train federal employees to stay vigilant and to recognize the difference between a deepfake and the real thing. Often, this simply means taking a few moments to ask whether a request seems plausible. Is it realistic that a federal CIO would message an employee asking them to hand over sensitive information or login credentials? If something seems amiss, it probably is.

Detecting deepfakes

Still, it's not realistic to expect every employee to recognize a deepfake, especially as the fakes grow more realistic. From a technology perspective, web and email security solutions can prevent interactions with deepfakes at the initial lure, while extra checks at the business process level, particularly for things like money or file transfers, can help agencies identify unusual activity. User behavior monitoring can also flag when an employee has fallen for a deepfake: their actions will deviate from their normal baseline or trigger specific analytics, prompting a lockdown of their account.
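The baseline-deviation idea behind user behavior monitoring can be illustrated with a minimal sketch. This is a toy example, not any vendor's actual analytics: it assumes a single hypothetical metric (say, daily file transfers) and flags values that fall far outside a user's historical norm.

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag activity that deviates sharply from a user's baseline.

    history:   past values of one behavioral metric (e.g. daily
               file transfers) -- a hypothetical stand-in for the
               richer baselines real monitoring tools build.
    observed:  the latest value of that metric.
    threshold: z-score cutoff; larger means fewer alerts.
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # A perfectly flat baseline: any change at all is unusual.
        return observed != mu
    z = abs(observed - mu) / sigma
    return z > threshold

# A user who normally moves 4-6 files a day suddenly transfers 60,
# e.g. after acting on a convincing deepfake request.
baseline = [5, 4, 6, 5, 4, 6, 5]
print(is_anomalous(baseline, 60))  # True: the spike is flagged
print(is_anomalous(baseline, 5))   # False: normal activity
```

Real deployments combine many such signals (logins, destinations, times of day) and use far more sophisticated models, but the principle is the same: the deepfake may fool the human, while the resulting behavior still betrays the compromise.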

Last but not least, while still in their early days, tools specifically designed to detect deepfakes should also be on the radar of government agencies. When an image is edited in Photoshop to erase or change the background, the edit alters low-level characteristics of the image, such as its noise patterns and compression artifacts. Similarly, deepfakes can be detected by tools that examine the underlying data of what's being displayed, spotting changes that aren't visible to the naked eye.
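To make the idea concrete, here is a deliberately simplified sketch of statistical forgery detection. Production deepfake detectors use trained deep-learning models, not this toy heuristic; the sketch only illustrates the principle that a tampered region can carry a different pixel-level "fingerprint" than the rest of the image. The synthetic image and the block-variance metric are both assumptions for illustration.

```python
from statistics import pvariance, mean, stdev

def block_noise_map(pixels, block=8):
    """Split a grayscale image (2D list of ints) into blocks and
    measure each block's pixel variance -- a crude stand-in for
    the noise fingerprint that forgery detectors examine."""
    h, w = len(pixels), len(pixels[0])
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            vals = [pixels[y + dy][x + dx]
                    for dy in range(block) for dx in range(block)]
            scores.append(pvariance(vals))
    return scores

def suspicious_blocks(scores, threshold=3.0):
    """Indices of blocks whose noise deviates from the image-wide norm."""
    mu, sigma = mean(scores), stdev(scores)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(scores) if abs(s - mu) / sigma > threshold]

# Synthetic 32x32 "image": mild checkerboard noise everywhere,
# except the top-left block, which carries a much stronger pattern
# (standing in for a spliced or regenerated region).
img = [[100 + (x + y) % 2 for x in range(32)] for y in range(32)]
for y in range(8):
    for x in range(8):
        img[y][x] = 100 + 20 * ((x + y) % 2)

scores = block_noise_map(img)
print(suspicious_blocks(scores))  # [0] -- the tampered block stands out
```

The human eye compares semantics; tools like this compare statistics, which is why they can catch manipulations no viewer would ever notice.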

The deepfake industry is rapidly maturing into a significant cybersecurity threat against our nation. These attacks will undoubtedly become more prevalent as we approach the 2020 elections. Agencies must resolve to do whatever they can to identify, defend against, and proactively remediate deepfakes before they can do long-lasting damage.


Nicolas (Nico) Fischbach is global CTO at Forcepoint, where he led cloud-first transformation as the CTO for the company's cloud security business, overseeing technical direction and innovation. Before joining Forcepoint, he spent 17 years at Colt, a global B2B service provider, and was responsible for company-wide strategy, architecture and innovation.
