Twitter CEO Jack Dorsey was up on stage Monday evening at the Recode Decode Conference talking crypto, Deepfakes, and, most importantly, the fake videos that pop up on Twitter making it look like someone said something they never did.

"None of this is new," Dorsey told the crowd. "Reinvention is something that is just a way to clean it up. We're still not done. There's a lot more to do."

"For our community, to have a stable platform, it means we're all in."

Meanwhile, Deepfakes may finally start disappearing. Deepfake tools take frames from video streams, capturing a person's facial expressions or gestures, and use them to create manipulated videos that then appear on Twitter. Now, according to a report from Business Insider, Twitter is implementing an algorithm that flags the videos most likely to be Deepfakes.
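Twitter hasn't published how the algorithm works. Conceptually, though, a system like this ranks videos by a model-produced "likely Deepfake" score and flags the top candidates for review. A minimal sketch of that ranking step, with made-up scores (a real system would get them from a trained detection model):

```python
# Toy sketch: rank videos by a hypothetical deepfake-likelihood score
# and flag those at or above a review threshold. Scores are invented;
# they stand in for the output of a detection model.

def flag_likely_deepfakes(scores, threshold=0.8):
    """Return video IDs whose score meets the threshold, highest first."""
    flagged = [vid for vid, s in scores.items() if s >= threshold]
    return sorted(flagged, key=lambda vid: scores[vid], reverse=True)

scores = {"vid_a": 0.95, "vid_b": 0.40, "vid_c": 0.83}
print(flag_likely_deepfakes(scores))  # → ['vid_a', 'vid_c']
```

The threshold and score names here are illustrative only; nothing about Twitter's actual pipeline is public.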

How Did Deepfakes Get Out?

Deepfakes came into the spotlight about four months ago, when a woman was recorded giving a deposition in Russia and appeared to scream a racist slur at the Western media. The clip of the hearing was posted on social media and was soon reposted around the world.

It was only a matter of time before they found their way onto Twitter.

What Are Deepfakes?

The technology stems from deep learning: neural networks are trained on footage of a person's face and then used to generate fake video of that person, much the way AI art tools generate images from text and image prompts. These fake videos appeared on Twitter without their subjects' consent, but some lawyers aren't convinced that is enough to take action against them.
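Real Deepfake pipelines train neural networks (typically autoencoders) on many frames of each face and synthesize new pixels. The toy sketch below shows only the underlying idea, replacing a region in every frame of a clip, using plain arrays and a raw copy instead of a learned model. All names and shapes are illustrative:

```python
# Toy stand-in for a deepfake pipeline: copy a "face" region from a
# source frame into every frame of a target clip. A real system would
# synthesize the region with neural networks, not do a raw pixel copy.

def paste_region(target_frames, source_frame, top, left, size):
    """Return new frames with a size x size region replaced."""
    faked = []
    for frame in target_frames:
        new_frame = [row[:] for row in frame]  # copy rows so the input is untouched
        for r in range(size):
            for c in range(size):
                new_frame[top + r][left + c] = source_frame[top + r][left + c]
        faked.append(new_frame)
    return faked

# Four 4x4 "frames" of zeros; the source frame is all ones.
clip = [[[0] * 4 for _ in range(4)] for _ in range(4)]
source = [[1] * 4 for _ in range(4)]
faked = paste_region(clip, source, top=1, left=1, size=2)
print(faked[0][1][1], faked[0][0][0])  # → 1 0
```

The point of the sketch is only that manipulation happens frame by frame over a whole clip; the hard part a Deepfake model solves is making the pasted region look photorealistic from every angle.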

According to The Guardian, representatives from the United States, UK, Russia, China, Germany, and France recently came together to create a task force to collaborate on the digital manipulation problem.

Do They Even Exist?

We don't know the full picture yet; the technology is new and still developing. But every day we're learning more about its nefarious side.

Deepfakes can generate videos that seem real but are in fact synthesized from video frames or footage of another person.

In other words, somebody else supplied the words; the video just makes it look like you said them on camera.

So, What Can Be Done?

Here are some ways Twitter has started to take steps to deal with it:

Deepfakes are being removed from the platform.

Twitter will now take a closer look at who is running the accounts posting these fake videos, and may take action against them.

Twitter will also try to stop individuals from posting Deepfakes in the first place. That is harder than it sounds: as with other online attacks, the people behind them aren't tied to specific geographies, and they hide behind fake identities.

Deepfakes will not show up in "promoted tweets."

Twitter will start adding an alert warning people that a Deepfake might be hiding in their tweets.
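Taken together, the steps above amount to a simple tiered enforcement policy: remove clear cases, label borderline ones, and leave the rest alone. A minimal sketch of that tiering, with made-up thresholds (Twitter's actual criteria are not public):

```python
# Toy moderation policy over a detector score in [0, 1].
# The thresholds are illustrative, not Twitter's real values.

def moderate(score, remove_at=0.9, label_at=0.6):
    if score >= remove_at:
        return "remove"  # high-confidence deepfake: take it down
    if score >= label_at:
        return "label"   # uncertain: attach a warning instead
    return "allow"       # low score: leave the tweet alone

for s in (0.95, 0.7, 0.2):
    print(s, moderate(s))  # → remove, label, allow respectively
```

The design choice here is the middle tier: labeling lets a platform act on uncertain cases without the false-positive cost of removal.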


Twitter, like Facebook, has a long history of fighting "fake news" while trying to tackle the growing cybercrime scene from all angles.

Deepfakes are just the latest front. They have already been used for breast cancer awareness in Russia and for photo manipulation in a shared quiz.

And while Twitter has certainly earned a reputation for fake news recently, it may have the capability to stop bad actors from spreading it.