It’s Time to Talk Seriously About Deepfakes and Misinformation


Like many of the technologies we discuss on this blog—think phishing scams or chatbots—deepfakes aren’t necessarily new. They’re just getting a whole lot better. And that has scary implications for private citizens and businesses alike.

The term “deepfakes,” coined by a Reddit user in 2017, was initially most often associated with pornography. A once highly trafficked and now banned subreddit was largely responsible for developing deepfakes into easily created and highly believable adult videos.

“This is no longer rocket science,” an AI researcher told Vice’s Motherboard in an early story on the problem of AI-assisted deepfakes being used to splice celebrities into pornographic videos.

The increasing ease with which deepfakes can be created also troubles Kelvin Murray, a senior threat researcher at Webroot.

“The advancements in getting machines to recognize and mimic faces, voices, accents, speech patterns and even music are accelerating at an alarming rate,” he says. “Deepfakes started out as a subreddit, but now there are tools that allow you to manipulate faces available right there on your smartphone.”

While creating deepfakes used to require good hardware and a sophisticated skillset, app stores are now overflowing with options for creating them. In terms of technology, they’re simply a specific application of machine learning, says Murray.

“The basics of any AI system is that if you throw enough information at it, it can pick it up. It can mimic it. So, if you give it enough video, it can mimic a person’s face. If you give it enough recordings of a person, it can mimic that person’s voice.”
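The principle Murray describes can be illustrated with a toy sketch. Real deepfake systems are deep neural networks trained on hours of video or audio; the minimal example below (all names and data are hypothetical, not any actual deepfake tool) just shows the core idea in miniature: given enough examples of a pattern, a simple model learns to reproduce it.

```python
import numpy as np

# Toy illustration of "throw enough data at it and it can mimic it".
# A tiny linear autoencoder learns to reconstruct fake "face" vectors.
# This is a conceptual sketch only, nothing like a production deepfake model.

rng = np.random.default_rng(0)

# Fake "face" data: 200 samples of a 16-dim vector driven by just
# 3 hidden factors (loosely analogous to pose, lighting, expression).
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 16))
faces = latent @ mixing

# Encoder/decoder weights, trained by plain gradient descent to
# reconstruct each input through a 3-dim bottleneck.
W_enc = rng.normal(scale=0.1, size=(16, 3))
W_dec = rng.normal(scale=0.1, size=(3, 16))

def reconstruction_error(X):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial_error = reconstruction_error(faces)

lr = 0.01
for _ in range(500):
    code = faces @ W_enc                                # compress
    err = code @ W_dec - faces                          # reconstruction gap
    grad_dec = code.T @ err / len(faces)
    grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_error = reconstruction_error(faces)

# After training, the model reproduces the patterns it was fed:
print(f"error before: {initial_error:.3f}, after: {final_error:.3f}")
```

The same dynamic, scaled up to deep networks and real footage, is what makes modern face- and voice-mimicry possible.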

There are several ways deepfakes threaten to redefine the way we live and conduct business online.

Deepfakes as a threat to privacy

A stolen credit card can be cancelled. A stolen identity, especially when it’s a mimicked personal attribute, is much more difficult to recover. The hack of a firm dedicated to developing facial recognition technology, for instance, could be a devastating source of deepfakes.

“So many apps, sites and platforms host so many videos and recordings today. What happens when they get hacked? Will the breach of a social media platform allow a hacker to impersonate you?” asks Murray.

Businesses must be especially careful about the data they collect from customers or users, asking both whether it’s necessary to collect and whether it can be stored safely afterwards. If personal data must be collected, security must be a top priority, and not only for ethical reasons. Governments are starting to enact strict regulations and dole out stiff fines for data breaches.

Ultimately, Murray thinks those governments may need to weigh in more heavily on the threat of deepfakes as they become even more indistinguishable from reality.

“We’re not going to stop this technology. It’s here. But people need to have the discussion about where we’re heading. In the same way GDPR was created to protect people’s data, we’re going to need to have a similar conversation about deepfakes leading to a different kind of identity theft.”

Deepfakes as a cybersecurity threat to businesses

It’s important to note the ways in which deepfakes can be used to target businesses, not just to spoof individuals.

“These business-related instances aren’t too common yet,” says Murray. “But we’re at the beginning of

