Israeli tech firm Canny AI may have perfected the use of deepfakes to dub videos into any language. Unlike the technology's most notorious applications, its service puts deepfakes to constructive use, which has earned the company widespread acclaim from its clients.
Deepfakes, a portmanteau of “deep learning” and “fake,” refer to AI-generated videos that simulate the appearance of a person. They are created by feeding AI hours of footage of a person’s face.
The AI identifies focal points on the face, then learns their positions and relationships to one another. It then reconstructs that face onto a subject's face, effectively placing a digital mask over it.
The result makes a person's face look like someone else's, and can make someone appear to do something they never actually did.
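One step in putting a "digital mask" over a subject's face is aligning the source face's focal points (landmarks) onto the subject's. Here is a minimal sketch of that alignment step, fitting an affine transform between two sets of corresponding landmarks by least squares. This is an illustrative toy, not Canny AI's actual pipeline; the function names and the five-point landmark set are assumptions for the example.

```python
import numpy as np

def fit_affine(src, dst):
    """Estimate a 2x3 affine transform mapping src landmarks to dst.

    src, dst: (N, 2) arrays of corresponding (x, y) landmark positions.
    """
    n = src.shape[0]
    # Design matrix [x, y, 1] so that dst ~ X @ M
    X = np.hstack([src, np.ones((n, 1))])
    # Solve the least-squares problem for both output coordinates at once
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M.T  # shape (2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to (N, 2) points."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ M.T

# Toy example: five source landmarks, and their positions after a known
# rotation plus translation; the fit should recover that mapping.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([2.0, -1.0])

M = fit_affine(src, dst)
mapped = apply_affine(M, src)
print(np.allclose(mapped, dst))  # exact affine correspondence is recovered
```

In a real face-swapping system this geometric alignment is only a preprocessing step; the reconstructed facial texture comes from a neural network trained on the hours of footage described above.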
Harmful misuse of deepfakes
Deepfakes, as with any technology, have the potential to cause harm to others.
John Villasenor of the Brookings Institution warns that the technology could be used to make politicians appear to say or do things that never happened. Deepfakes therefore lend themselves as a powerful tool for misinformation and deception.
Meanwhile, an MIT Technology Review report states that deepfakes could be used to spread fake news, which could “influence everything from stock prices to elections.”
Deepfakes have also been put to more unsavory uses, such as swapping the faces of people in pornography with the faces of others.
The growing accessibility of the tools required to create deepfakes caused their number to double within the past nine months, reports cyber-security company Deeptrace. The firm also found that 96% of all deepfakes on the internet are pornographic.
Combating deepfakes on social media
Recognizing the dangers deepfakes pose to people's security, Facebook and Microsoft launched the DeepFake Detection Challenge. Through this program, Facebook has partnered with leading US universities to train AI to detect even the most convincing deepfake videos.
Other tech companies, including Google, Amazon, and Twitter, have also taken up the fight against misinformation and deception.
To help train AI to detect deepfakes more effectively, Google released a large dataset of deepfake samples. Amazon is contributing $1 million in Amazon Web Services credits to the DeepFake Detection Challenge over the next two years. Twitter, meanwhile, is seeking feedback from users and experts on policies that could combat deepfakes and reduce the harm they do to its community.
Using deepfakes for good
Not all deepfakes are made to harm others; Canny AI's use of the technology is far less sinister. The Israeli company's motto, "Storytelling without Barriers," fits the service it offers.
Canny AI accepts footage from its clients, ranging from TV shows and advertisements to PSAs. Clients then choose the language they want the footage converted into.
Finally, Canny AI uses its deepfake technology to dub the video into the chosen language, with convincing lip-syncing to match the new audio.
Canny AI's service allows studios to reach a wider audience at lower cost. By shooting footage only once, clients also save the time and effort of producing multiple versions of the same material in different languages.
The company's responsible use of the technology sets an example for others of how deepfakes can be used for the better.
Meanwhile, people will have to learn to spot deepfakes themselves. As the information age fills with misinformation, we will need to move on to the knowledge age.
"In order to move from the information age to the knowledge age, we must do better in distinguishing the real from the fake," says Professor Hany Farid of UC Berkeley.