In April 2018, Indian journalist Rana Ayyub became the victim of a deepfake porn plot. A fabricated video appeared online showing her engaging in a sexual act she was never part of. Within 48 hours, the video had reportedly reached almost half the phones in India, with Ayyub’s personal information shared alongside it.

What followed was a barrage of humiliating phone calls, messages and social media attacks, which left her traumatised and had a “silencing effect”, forcing her to self-censor and be less of herself online. No strict action was taken, and eventually the United Nations had to intervene, asking the Government of India to protect Ayyub.

More recently, South Korea has seen a dramatic rise in the number of deepfakes of K-pop stars. An anonymous petition demanding strict punishment for the creators and consumers of K-pop deepfake porn garnered more than 33,000 signatures.

Deepfakes are a form of synthetic media created using deep learning that can fabricate almost anything: superimposing faces onto events to make satirical videos, bringing the dead back to life, or producing pornographic films of unsuspecting women out of thin air. The possibilities are endless and horrifying all the same. Photographic manipulation has been used for propaganda for decades, but what makes deepfakes different is that a malicious actor can easily make any unsuspecting victim appear to say or do whatever they want. The wide use of deepfake technology has sparked debates about the threat it poses to democracy by targeting public figures and manipulating politics. However, the more pressing issue is non-consensual, fabricated pornography.

Deepfake porn first made a public appearance in 2017, when a Redditor with the username ‘deepfakes’ began posting fake celebrity porn. Before Reddit banned the account for involuntary pornography, it had amassed around 90,000 subscribers. According to a 2019 report by Sensity AI, an AI research firm, about 96% of the roughly 15,000 deepfakes it discovered were pornographic, and those targeted were almost exclusively women.

Deepfake technology allows the perpetrator to superimpose private or publicly available images or videos onto source footage to create synthetic porn. This is facilitated by the media readily available on platforms such as Instagram, Facebook and Snapchat. Moreover, deepfakes can be created with consumer-level apps that are widely available for free.

In 2019, a controversial app called DeepNude appeared on the Internet, allowing users to create naked pictures from clothed ones with just a few clicks. Advertised as “Your X-ray Vision App”, it was specifically designed to target women, as it only worked on female bodies. In June 2019, after much furore, DeepNude’s creator claimed to have taken the app down; however, it resurfaced in 2020, along with many copies and variants, waiting to be exploited by perpetrators. More concerning still is the number of forums discussing how deepfakes can be created, making it dangerously easy to make one or have one made.

Even though deepfake porn imagery is not real, its consequences are very much so. The wide array of morphing options now available gives the perpetrator greater control over the victim’s face and body. By creating a sexual identity not of the individual’s own making and exhibiting it to others without their consent, deepfake porn strips the victim of their integrity, dignity, privacy and sexual expression, and subjects them to mental trauma. Moreover, the imagery is often accompanied by two pernicious phenomena: downstream distribution, or the re-posting of images on the Internet by third parties, and doxxing, or the publishing of personal information along with intimate details.

How are deepfakes reined in legally?

The relatively new deepfake technology has started to spread like wildfire on the Internet, prompting many social media platforms to preemptively ban synthetic media. Tech giants such as Facebook, Google and Twitter have actively responded by introducing measures to curb the spread of harmful synthetic media on their platforms. For example, Twitter introduced its manipulated media policy, which involves labelling tweets containing fabricated media, warning users, and, in rare cases, removing the tweets altogether.

Nonetheless, digital platforms cannot be solely burdened with the responsibility of regulating deepfakes. The law needs to address the growing threat of synthetic media, especially deepfake pornography, and lay down consequences that might act as a deterrent for the future.

Recently, the Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, under which Rule 3(2)(b) obligates intermediaries to take down, within 24 hours of receiving a complaint, non-consensual material that exposes the private area of any person, shows such person in full or partial nudity, or shows or depicts such person in any sexual act or conduct. The Rule also applies to “artificially morphed images”. Additionally, intermediaries are required to appoint a Grievance Officer, who must acknowledge a complaint within 24 hours and resolve it within 15 days. However, the Rule places the responsibility solely on internet intermediaries and provides ineffective recourse to the victim.

Existing Indian law does not target the menace of non-consensual deepfake pornography. Under the Information Technology Act, 2000 (the IT Act), Section 67 prescribes punishment for publishing or transmitting obscene material in electronic form, and Section 67A for transmitting sexually explicit material in electronic form. These may be coupled with Section 66E of the IT Act, which deals with violation of privacy, or with Section 509 of the Indian Penal Code, 1860, which prescribes punishment for any word, gesture, or act intended to insult the modesty of a woman. However, neither the IT Act nor the IPC addresses offences involving ‘morphed’ or synthetic media.

In the landmark judgment in Puttaswamy v. Union of India, the Apex Court of India upheld the right to privacy as a fundamental right under the Constitution of India, and significantly emphasised the concept of informational privacy. The right to informational privacy recognises the control of an individual over the collection and dissemination of material that is personal to them.

The Personal Data Protection Bill, 2019 likewise recognises an individual’s right to privacy as a fundamental right and makes consent the centrepiece of its framework. If enacted, it would regulate the use of deepfake technology. The PDP Bill obligates the data fiduciary to process data only for lawful purposes and to obtain consent from the data principal before processing their personal data, which could include the data principal’s images and videos. The use of images and videos shared, consensually or otherwise, to create and publish deepfakes could be interpreted as the processing of personally identifiable data and would therefore require the consent of the data principal (the target). If enacted, the Bill would thus implicitly ban the creation and sharing of non-consensual deepfakes.

As deepfake technology becomes more accessible, with the growing number of deepfake communities and the increased commodification of apps and services, the internet is very likely to become an even riskier place for women. It therefore becomes imperative for the law to keep pace with this rapidly developing form of sexual abuse. Existing laws provide inconsistent and ineffective protection against deepfake porn, leaving victims with very limited recourse. The lack of appropriate legal provisions to deal with synthetic media reveals the urgent need for a framework regulating deepfake technology. A clarion call is also needed to raise public awareness and bring the issue to the forefront, initiating discussion among stakeholders towards better laws, technical solutions, and assistance from social media intermediaries.