
Digital Identity and AI: How to Keep Deepfakes from Spreading Disinformation

Imagine knowing nothing about NASA’s 1969 Apollo 11 mission, then watching a video of former President Richard Nixon announcing that it failed and that every astronaut aboard died. The mission was, of course, a success, but a recent deepfake video produced by MIT and Mozilla shows how easy it is becoming to bend the reality of a historical event. The video demonstrates how far AI-generated deepfake technology has come in recent years. In malicious hands, it gives state-sponsored cyber attackers a way to spread disinformation on a global scale through social media, swaying individuals’ opinions on topics such as an election or international policy.

Last month, Twitter removed nearly 33,000 fake and bot accounts linked to the People’s Republic of China, Russia, and Turkey that were spreading political disinformation. This shows social media platforms are monitoring for disinformation spread by state-sponsored groups, possibly including deepfake content, but the damage is often done long before the disinformation is taken down. For example, Facebook users in the United States shared the top 100 false political stories over 2.3 million times in the first 10 months of 2019, according to Avaaz.

A cornerstone of democracy is allowing citizens to vote for what they believe is right and ensuring those votes count. As such, organizations that serve as primary news outlets for millions of people, such as social media platforms, must ensure the information they share equips voters with facts. To combat AI-generated deepfakes leading up to events like the 2020 U.S. presidential election, these organizations can fight fire with fire, using ethical AI algorithms to identify and remove deepfake content.

How would ethical AI help?

Ethical AI can be used to combat deepfake videos that spread disinformation by spotting clues that content has been tampered with: how a person holds their head when speaking, the rhythm of their voice, or discrepancies in their facial expressions and hand gestures. Using baseline data from authentic footage, AI can surface these behavioral inconsistencies and give media platforms a leg up. However, a more powerful use of ethical AI is secure authentication, which allows organizations to confirm someone is who they say they are when they sign in.
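To make the baseline idea concrete, here is a minimal Python sketch that scores each video frame by how far a behavioral feature vector drifts from a profile built on a speaker’s authentic footage. The extract_features function is a hypothetical stand-in for a real facial-landmark or head-pose model, and the features and data are purely illustrative.

```python
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real model (e.g., facial-landmark
    or head-pose estimation). Here it just averages the frame rows."""
    return frame.mean(axis=0)

def anomaly_scores(frames: list[np.ndarray], baseline: np.ndarray) -> list[float]:
    """Score each frame by cosine distance from the speaker's baseline
    profile. Higher scores suggest behavior inconsistent with footage
    known to be authentic."""
    scores = []
    for frame in frames:
        feat = extract_features(frame)
        cos = np.dot(feat, baseline) / (np.linalg.norm(feat) * np.linalg.norm(baseline))
        scores.append(1.0 - float(cos))
    return scores

# Toy usage: a "baseline" built from authentic footage, then new frames scored.
rng = np.random.default_rng(0)
baseline = rng.random(64)
frames = [rng.random((10, 64)) for _ in range(5)]
for i, s in enumerate(anomaly_scores(frames, baseline)):
    print(f"frame {i}: anomaly score {s:.3f}")
```

In practice the feature extractor, not the distance metric, does the heavy lifting; the sketch only shows where a baseline profile fits into the pipeline.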

Secure authentication to reduce fraudulent content

Media platforms can use identity and access management (IAM) capabilities to help secure digital experiences and even combat deepfakes spreading disinformation. IAM can help validate that users are who they say they are, including users who upload legitimate, validated videos to social media platforms. This can be accomplished by leveraging a variety of strong authentication solutions.
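As one illustration of strong authentication, below is a small, self-contained Python sketch of a time-based one-time password (TOTP) check in the style of RFC 6238, the mechanism behind most authenticator apps. The secret and the drift window are illustrative; a production IAM system would also handle enrollment, secret storage, and rate limiting.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, at: int | None = None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(secret_b32: str, submitted: str) -> bool:
    """Accept the current code or the previous window to tolerate clock drift."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret_b32, now - d), submitted)
               for d in (0, 30))

secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, for illustration only
print(verify_totp(secret, totp(secret)))  # True
```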

If organizations use secure authentication techniques, ethical AI algorithms can build a library of trusted digital identity data because the source of content is verifiable. Social media platforms that implement identity management approaches can adjust login journeys to identify both legitimate and suspicious users and uncover threats. Enterprise-grade identity solutions can also give social media platform users control over the data collected about them and their connected devices.
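To sketch what an adjustable login journey might look like, the hypothetical Python below combines a few contextual signals into a risk score and chooses the next step. The signal names, weights, and thresholds are illustrative assumptions, not any particular vendor’s scoring model.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool
    ip_matches_history: bool
    impossible_travel: bool   # geo-velocity check failed
    failed_attempts: int

def risk_score(ctx: LoginContext) -> int:
    """Combine contextual signals into a 0-100 risk score (weights illustrative)."""
    score = 0
    if not ctx.known_device:
        score += 30
    if not ctx.ip_matches_history:
        score += 20
    if ctx.impossible_travel:
        score += 40
    score += min(ctx.failed_attempts, 5) * 5
    return min(score, 100)

def next_step(ctx: LoginContext) -> str:
    """Adjust the login journey based on risk: allow, step up, or block."""
    score = risk_score(ctx)
    if score < 30:
        return "allow"
    if score < 70:
        return "require_mfa"
    return "block_and_review"

print(next_step(LoginContext(known_device=False, ip_matches_history=True,
                             impossible_travel=False, failed_attempts=1)))  # require_mfa
```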

Consider whether the content in question is ‘digitally signed’

Finally, ethical AI can also help by determining whether the content in question is digitally signed in a tamper-proof way. Digitally signed content means the person portrayed has confirmed the content genuinely depicts them. This creates a definitive chain of custody, allowing social media platforms to know where a video came from and where it has been (as one example). This goes hand in hand with digital identity, too. If there’s a video of a politician speaking about climate change at an event in France, but that politician states they were never there and did not make the video, ethical AI can flag the discrepancy to the social media platform for further review.
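As a rough sketch of how tamper-proof signing could work, the Python below (using the third-party cryptography package) signs a hash of the video with an Ed25519 key tied to a verified identity; any later modification makes verification fail. The key handling and content format here are assumptions for illustration, not a description of any platform’s actual scheme.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Assumed setup: the uploader's device holds the private key; the platform
# holds the public key registered to that verified identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw video file contents..."
signature = private_key.sign(hashlib.sha256(video_bytes).digest())

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Verify that the content hash was signed by the registered identity."""
    try:
        public_key.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature))                # True
print(is_authentic(video_bytes + b"tampered", signature))  # False: content altered
```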

Implementing this technology so people can easily spot the fakes is possible now. Just imagine a future where videos and other forms of content are assigned a “certainty score” to help people judge their validity before viewing. With the aid of AI, massive amounts of content can be scored, much like a restaurant review on Yelp, giving organizations the context they need to either remove content from their platform or surface a score that helps people make a call about its worth.
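A certainty score could simply blend the signals discussed above: provenance (a valid signature), behavioral consistency, and whether the source’s identity is verified. The Python sketch below shows one hypothetical weighting; the inputs and weights are assumptions, not a published scoring formula.

```python
def certainty_score(signature_valid: bool,
                    anomaly_score: float,   # 0.0 (consistent) .. 1.0 (anomalous)
                    source_verified: bool) -> int:
    """Blend provenance and detection signals into a 0-100 certainty score
    (weights are illustrative, not a production formula)."""
    score = 0.0
    score += 40.0 if signature_valid else 0.0
    score += 30.0 * (1.0 - min(max(anomaly_score, 0.0), 1.0))
    score += 30.0 if source_verified else 0.0
    return round(score)

# A signed, verified upload with consistent behavior scores high;
# unsigned content from an unknown source with anomalies scores low.
print(certainty_score(True, 0.1, True))    # 97
print(certainty_score(False, 0.8, False))  # 6
```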

With the 2020 U.S. presidential election approaching, it is imperative that the private and public sectors work together to stop deepfake videos from spreading disinformation. Citizens must be able to make informed decisions with information they can trust.

Ben Goodman
Ben Goodman is a certified information systems security professional (CISSP). He currently serves as the senior vice president of global business and corporate development at ForgeRock. In his current role, Goodman is responsible for corporate development, global strategic partnerships, and technology ecosystem efforts across the enterprise. Additionally, he leads ForgeRock’s ecosystem development team in an effort to support and extend the company’s industry-leading technology ecosystem.
