Even as the debate over fake news and shady WhatsApp forwards dominates drawing room conversations, a far more perverse and problematic threat manages to remain largely undetected.
Called deepfake videos, this tool of misinformation has evolved to become more sophisticated than any other method of spreading fake news we've seen before. Over the past year, fake videos generated using artificial intelligence (AI)-based code have swelled in number and now populate the internet without many of us even being aware of their existence.
The technology behind deepfake videos
The product of machine learning and AI-powered image manipulation, deepfake videos use the same CGI tricks that big-budget movie studios use to bring dead actors to life on screen. However, the technology behind deepfakes does so at a fraction of the cost.
The process involves deep learning tools made freely available by Google and the raw processing power of a GPU installed on a user's computer. The software is essentially an intelligent face-swapping tool: the user feeds it a large dataset of images, videos and audio of the two people involved in the deepfake video.
However, the software does more than simply swap faces. The dataset is used by a facial mapping algorithm that trains itself and then recreates a person's face (down to skin tone, facial expression, lighting and other details), which is then superimposed onto another person's body.
The software can also be fed audio recordings of the person to create a fake audio layer that makes the morphed deepfake more convincing.
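For readers curious about the mechanics, the sketch below illustrates the shared-encoder, two-decoder autoencoder idea behind early open-source face-swap tools: one encoder learns a common representation of faces, while each person gets their own decoder, so a face from person A can be "redrawn" in person B's likeness. This is a minimal, hypothetical PyTorch illustration; the layer sizes, learning rate, class names and training details are assumptions for the purpose of explanation, not the code of any particular app.

```python
# Hypothetical, simplified sketch of the shared-encoder / two-decoder
# autoencoder idea behind early face-swap tools. All sizes and names are
# illustrative, not taken from any specific project.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a shared latent code."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face crop from the shared latent code; one decoder per identity."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),          # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # learns to draw person A's face
decoder_b = Decoder()  # learns to draw person B's face
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=5e-5,
)

def train_step(faces_a, faces_b):
    """One training step: each decoder learns to rebuild its own person's faces
    from the *shared* latent space. At swap time, feeding person A's face
    through decoder_b renders A's expression with B's appearance."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call with random stand-in batches of aligned 64x64 face crops:
# loss = train_step(torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64))
```

The key design trick is that the encoder never learns which person it is looking at, only the pose and expression; swapping decoders is what swaps the face.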
The many faces of deepfake
Highly adept at blurring the line between what is real and what is not, deepfake videos first gained attention after a Reddit user decided to flood Pornhub with realistic-looking fake 'celebrity' porn. Soon enough, software emerged that allowed individuals to leverage the power of their desktop computers to create more than just fake celebrity porn.
That's when things started to get ugly real quick.
While creeps and jilted lovers used the sophisticated technology to cross all lines of consent and create fake revenge porn, social media trolls leveraged the power of their desktop computers (and their failing integrity) to create deepfake videos that tarnish the reputation of anything and anyone they were opposed to.
Mission Lok Sabha 2019
Even though nobody seems willing to acknowledge it, political parties (and their paid trolls) appear to have become exceedingly fond of deepfake videos.
Many time zones away in the United States, the Republicans and Democrats frequently use deepfake videos to attack each other, while across the Atlantic, the Europeans too seem to be fans of the technology – over the past year, Germany's Chancellor Angela Merkel and British Prime Minister Theresa May have both been targets of deepfake videos.
And ahead of the 2019 Lok Sabha polls – results of which, fears abound, could likely be swayed by fake news – these doctored videos look likely to make their presence felt in India too.
Suffice it to say, this technology, which lets anyone make videos of real people appearing to say things they've never said, is a real threat to democracy. For a country that's already finding it difficult to battle the menace of fake news, deepfake videos not only threaten to sway the upcoming election results but also to expose our society to hate and communal strife.
The impact of the internet being flooded with fake news and deepfake videos, however, would not end there. Too much fake news and video also opens us to the possibility of shrugging off real news as fake. Imagine a dystopian world where the story of an 8-year-old Kashmiri girl in Kathua is dismissed for weeks as a hoax because it seems too brutal to be true.
What can be done?
Very little, really.
Currently, deepfake technology faces some bottlenecks, but with silicon as well as the core technology growing at a rapid pace, experts expect deepfake to overcome these soon.
For now, deepfake software can only move the source video's lips and face, not the entire head. Deepfake videos have also suffered from a fatal flaw: the artificially created subjects in them do not blink their eyes. However, new reports suggest developers have created a new artificial intelligence-powered system that can create photorealistic videos in which the artificially created subjects "can sway, turn their heads, blink their eyes, and emote."
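That blinking flaw is exactly what some detection researchers have leaned on: a simple eye-aspect-ratio check can flag clips whose subjects never seem to blink. The snippet below is a minimal Python sketch of that heuristic. It assumes per-frame eye landmarks have already been extracted with a facial-landmark library (for instance dlib's 68-point predictor); the threshold values and function names are illustrative, not those of any particular detector.

```python
# Minimal sketch of the eye-aspect-ratio (EAR) blink check sometimes used
# against early deepfakes. Assumes eye landmarks per frame are already
# available; thresholds below are illustrative.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) holding the six landmark points around one eye.
    The ratio drops sharply when the eyelid closes."""
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks in a sequence of per-frame EAR values: a blink is a run of
    at least `min_closed_frames` consecutive frames below the threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks

# A genuine clip of a talking head usually shows several blinks per minute;
# a suspiciously low count over a long video is one (fallible) red flag.
```

It is a crude signal, and as the newer systems described above learn to blink, checks like this will stop being reliable on their own.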
Another problem with deepfake videos is the time and resources they take to create. Currently, it takes anywhere from a day to half a week to produce a realistic deepfake video. However, the new system promises to fix that too: as per the report, it will need only a few minutes with the source libraries. Both of these advances could make the technology even more accessible to the public.
Thus, for the short term at least, the only course of action appears to be to adapt to the new reality, one where we will just have to trust our own better judgement and research extensively before believing anything we read, see or hear.