Deepfakes were not the heavy topic of discussion they are today. This AI-enabled method of fabricating video and audio caused major upheaval in modern technology, and it gained popularity through a modest Reddit thread. In 2017, a Redditor going by "deepfakes" began posting fake, sexually explicit videos in which the faces of famous celebrities were swapped onto other bodies using a machine-learning algorithm. Shortly after, the creator gained around 15,000 subscribers and a community formed around the technique. "Deepfake" is now the common noun for this kind of neural-network-generated fake video. Producing such a video originally demanded real technical dexterity; FakeApp changed that, becoming the first application that let ordinary individuals create these videos. It is a community-developed desktop application that runs the deepfakes algorithm without requiring users to install Python or TensorFlow; beyond that, all a user needs is a decent graphics processing unit (GPU) of the kind used for video games. Running the entire process, from data extraction to the frame-by-frame transfer of one face onto another, takes around 12 hours. What began as a technological experiment gained a massive following within months; the community soon numbered 90,000 members.
In a more recent example, Chris Ume, a Belgian visual effects artist, used free, open-source software called DeepFaceLab as the basis of his deepfakes. He created three videos of Tom Cruise on TikTok that gained close to 20 million views, running the AI system on powerful graphics hardware. He trained it on a database of 13,000 images of Tom Cruise capturing the actor from every conceivable angle.
So how exactly is a deepfake created?
Simply put, creating a deepfake relies on a technique called a generative adversarial network, or GAN. A GAN pits two neural networks (loosely inspired by the workings of the brain) against each other so that they improve in tandem. The first network, the generator, learns from existing data (e.g. a dataset of images of someone's face) and produces synthetic images; the second, the discriminator, is trained to tell real images of that person apart from the generator's fakes. Each round of this contest forces the generator to produce ever more convincing forgeries.
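The adversarial loop described above can be sketched in miniature. The toy example below is an illustrative assumption, not any specific deepfake tool: instead of faces, the "generator" is a one-line affine model that learns to imitate a simple number distribution, and the "discriminator" is a logistic-regression classifier. All names and parameters are invented for the sketch; real deepfake systems use deep convolutional networks and far larger datasets, but the tug-of-war between the two players is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real data": the distribution the generator must learn to imitate
# (stands in for a dataset of genuine face images).
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

# Generator: maps noise z to a*z + b (a toy stand-in for a deep generator).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), outputs P(x is real).
w, c = 0.1, 0.0

lr, batch = 0.01, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradient ascent on log D(real) + log(1 - D(fake))
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- Generator update: push D(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient ascent on log D(fake), with the gradient flowing through D
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's samples should sit near the real mean of 4.0.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean is about {samples.mean():.2f}; real mean is 4.0")
```

The generator never sees the real data directly; it improves only through the discriminator's feedback, which is the essence of the adversarial setup the article describes.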
As complicated as it sounds, the community is working towards making the entire process a one-button affair, so that anyone can use the technology from a phone.
Is creating fake videos the full potential of the deepfake technology? Definitely not.
Broadly speaking, businesses use this technology in two primary areas. The first is content creation: producing deepfakes for customers, ethically or otherwise, in pursuit of more advanced content. The second sits at the other end of the ecosystem: individuals and companies working on security, privacy and fake detection.
Organisations that generate such content categorise themselves as synthetic data generators, which is regarded as the ethical path in the space. A few larger companies, as well as startups, are taking a serious interest in this market. The technology is being used across a wide array of sectors, from transforming entertainment experiences to deep generative models that open new possibilities in healthcare. For example, Nvidia, the American tech company, is using deepfake technology to design games. Reuters, the news agency, has generated synthetic presenter-led news reports with the help of deepfakes. In the cultural sphere, Dalí Lives, an exhibition in St. Petersburg, Florida, displayed a life-sized deepfake of the surrealist artist Salvador Dalí, created via 1,000 hours of machine learning on the artist's old interviews. Video production and media houses are also using the technology to replace some of the more expensive older methods. It is difficult, however, to compile an exhaustive list of players offering products and services beyond popular consumer applications like Reface.
The use of this technology by bad actors and for other malicious ends has opened up another corner of the deepfake ecosystem. Developers are now building solutions to protect individuals and clients from attacks that target their privacy or aim at defamation. There have been multiple instances of fake videos of politicians pushing political propaganda, as well as non-consensual pornographic content, being made publicly accessible and causing uproar. This detection segment is gaining traction particularly in the BFSI (banking, financial services and insurance) sector, where deepfake experts scan databases for manipulated media to prevent further damage. These services are gradually commanding handsome returns as security has emerged as a major concern in modern technology. Group Cyber ID, Sentinel and Sensity are some of the startups actively delivering on this side of the ecosystem.
What does the market look like? Are investors excited?
Geographically, the technology remains largely untapped. Companies are emerging in the European and North American markets, but only a handful have met with any real success. The Indian market, not surprisingly, is still at a nascent stage: with a concrete presence of only 3-4 years, these companies are still trying to sell both the idea and the underlying technology to customers. For Indian companies, deepfake technology is a good opportunity to offer cost-effective yet innovative solutions to clients. The growing interest in synthetically generated data from Indian cinema and the video content industry is an encouraging sign for newcomers looking to explore this space. On the detection and security side, the market potential is huge: a recent study of the Indian antivirus software market estimated that the cybersecurity space will reach Rs. 14,782 crore by 2024.
In India and internationally, deepfake has caught the attention of cheque writers. Initially, companies in this space were backed mainly by institutional investors with expert personnel and sufficient risk capital; in the last couple of years, however, angels and family offices have also wanted a seat at the table. There is, without doubt, growing awareness of the ecosystem and of what it means for the future of generated data and its uses. Sensity AI, a Dutch visual threat intelligence company founded in 2018, raised €1.2 million through grants and seed rounds across 2019 and 2020. A domestic example is Kroop AI, part of 100X.VC's portfolio, which works on detection and localisation of synthetic data in India, one of the markets most vulnerable to fake content. For the deepfake market, if it can be termed one given its nascency, the potential is difficult to quantify. But it does look like a promising journey for entrepreneurs as well as investors.
Published on: September 17, 2021