From marring celebrities' reputations by placing them in pornographic videos to making business leaders spout nonsense, Deepfake tech has been used to create some of the wildest content imaginable.
It poses a major threat to the quality of information we get, and now, an industry expert has claimed that videos fabricated this way will soon become too realistic to be flagged.
First, you should know: what is a Deepfake?
First popularized by an anonymous Reddit user in 2017, Deepfake is the technique of using artificial intelligence engines for creating eerily realistic fabricated videos.
The clips superimpose one person's face onto another person's video and are so realistic that you can't tell the person on screen isn't who you think they are.
The tech has been evolving progressively
Over the last two years, Deepfake tech has evolved so much that fabricated videos have started to look very realistic.
The voice and lip movements of the superimposed face remain perfectly in sync with the rest of the video, which has allowed threat actors to use the tech to show famous American actresses in pornographic videos and to depict business and political leaders saying or doing inappropriate things.
Soon, you won't be able to detect Deepfakes
Currently, you can still spot flaws in Deepfakes, like unnatural eye-blinking frequency, that reveal a video isn't real.
However, Hao Li, a computer science professor at the University of Southern California and a renowned pioneer in Deepfakes, believes that these flaws will be ironed out, leaving you unable to tell whether an AI-generated video is real or fake.
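To see why such flaws are detectable today, consider the blink-frequency cue mentioned above. Here is a minimal, purely illustrative sketch (not any researcher's actual method) that flags a clip whose blink rate falls outside a typical human range, assuming per-frame eye-openness scores have already been extracted by some upstream face-tracking step; the threshold and the 8–30 blinks-per-minute range are assumed figures for illustration:

```python
def count_blinks(eye_openness, threshold=0.2):
    """Count blinks: a blink is a run of frames where the eye-openness
    score drops below the threshold and then recovers."""
    blinks, closed = 0, False
    for score in eye_openness:
        if score < threshold and not closed:
            closed = True                 # eye just closed
        elif score >= threshold and closed:
            blinks += 1                   # eye reopened: one full blink
            closed = False
    return blinks

def looks_suspicious(eye_openness, fps, min_bpm=8, max_bpm=30):
    """Flag a clip whose blinks-per-minute rate is outside an
    assumed typical human range (roughly 8-30)."""
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / minutes
    return rate < min_bpm or rate > max_bpm
```

For example, a one-minute clip whose subject never blinks would be flagged, while a clip with about ten evenly spaced blinks per minute would pass. Li's point is precisely that next-generation Deepfakes will reproduce such statistics well enough that heuristics like this stop working.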
Perfectly real Deepfakes in six to twelve months
Speaking with students at MIT a few days ago, Li claimed that the tech to create impossible-to-detect Deepfakes would be within the public's reach in two to three years.
However, a couple of days later, he revised that timeline, telling CNBC that everyday users will be able to create perfectly real Deepfakes in just six to twelve months.
Timeline revised owing to growing interest in Deepfake tech
When questioned about the revised timeline, Li said he shortened his prediction owing to the growing interest in Deepfake technology and the popularity of Zao, the app that lets users superimpose their faces onto celebrity videos.
However, smart detection tech is also on the way
If perfectly real Deepfakes are on the way, so is the technology to flag them.
Li is working with Hany Farid, a computer science professor at the University of California, Berkeley, to detect realistic Deepfakes.
"We are working together on an approach that assumes that deepfakes will be perfect," Li told MIT attendees, but didn't provide specific details of the tech.