IPR Newsletter – Recognising the Performer’s Right: A Deep Look at Deepfakes and Performers’ Rights – December 2024
Deepfake
The creation and manipulation of digital content has been transformed by the arrival of deepfake technology. Using artificial intelligence and machine-learning algorithms, deepfakes can produce remarkably lifelike audio, video, and images of real people. Although the technology has many legitimate applications in media and entertainment, it raises serious concerns about performers’ rights, privacy, and the potential for abuse. In the age of deepfakes, performers are especially vulnerable because their voices and likenesses are central to their livelihood.
Deepfakes synthesize or modify media content using Generative Adversarial Networks (GANs) and other AI techniques. A blend of “deep learning” and “fake,” the term “deepfake” underscores that the content is generated by artificial intelligence. Although deepfakes are occasionally employed for creative purposes such as satire or filmmaking, their widespread use has led to cases of impersonation, misinformation, and reputational harm.
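To make the adversarial idea behind GANs concrete, the following is a minimal, purely illustrative Python sketch (using the PyTorch library, with toy dimensions and random stand-in data chosen for illustration only, not any real deepfake pipeline). It shows the core mechanism described above: a generator learns to produce samples that a discriminator cannot distinguish from real ones.

```python
# Minimal illustrative GAN training loop (toy example, assumes PyTorch is installed).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # toy sizes for illustration only

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to be real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)       # stand-in for real media samples
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Train the discriminator to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

As training progresses, the generator’s output becomes harder for the discriminator to reject, which is precisely why GAN-produced audio and video can become difficult for ordinary viewers to distinguish from genuine recordings.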
Performers’ Rights
Performers’ intellectual property and personality rights are directly threatened when their image, voice, or likeness is used without permission. Performers’ rights traditionally protect the interests of actors, musicians, and other artists who lend their skills to creative works. These rights fall broadly into two categories: economic rights, which give performers the power to decide how their performances are used and to monetize them, and moral rights, which safeguard the performer’s reputation and the integrity of the performance.
Performers’ rights are protected under copyright law in several countries, including India. For example, Section 38 of the Indian Copyright Act, 1957 grants performers exclusive rights in their performances. These safeguards, however, frequently fall short of addressing the difficulties posed by deepfake technology.
Adverse Consequences of Deepfake Technology
Deepfake technology makes it possible to mimic a performer’s voice or face without permission. Such unauthorised use may violate the performer’s moral and economic rights, resulting in fake performances, manipulated advertisements, sexually explicit material, or defamatory videos. Because deepfakes blur the line between genuine and synthetic performances, performers also lose control over their identity and image. Economically, deepfakes can displace the hiring of performers, particularly for voice-overs and commercials, narrowing their opportunities for work and income.
Furthermore, as it becomes harder to tell genuine from manipulated media, the proliferation of deepfake videos on social media, streaming services, and viral content can damage a performer’s reputation beyond repair. The problem becomes far worse when deepfakes are deployed maliciously. Non-consensual explicit content, fake endorsements, and fabricated interviews involving performers have already become major global issues. Explicit deepfake content, for example, circulates swiftly on online platforms and has targeted many actors and celebrities.
Effect on Performers’ Rights
Such abuse not only violates performers’ rights but also causes emotional hardship, mental anguish, and reputational damage. The ease with which deepfakes can be produced and distributed leaves performers disproportionately exposed, and the legal structures in place offer little defence against their misuse. Copyright law typically protects performers’ rights, but synthetic media frequently fall outside its scope, and Indian law does not fully recognise personality rights, which would give individuals control over the commercial use of their name and likeness.
Privacy provisions such as Section 66E of the Information Technology Act, 2000, which penalises capturing or transmitting images of a person’s private areas without consent, are vague when applied to deepfake content. Similarly, defamation law can offer a remedy where a deepfake damages a performer’s reputation, but it does not cover every issue raised by synthetic media. Moreover, the burden of proof frequently falls on the performer, who must demonstrate both the falsity of the content and the harm it caused, a difficult task in the digital sphere.
International Law on Deepfakes and Performers’ Rights
Numerous international conventions and accords recognize and defend the rights of performers, who are vital contributors to the creative sector. The fundamental rights and safeguards that performers have regarding their artistic creations are outlined in these agreements.
The Rome Convention, 1961 grants performers several key rights over their creative work. These include controlling the use of their performances, authorising or prohibiting activities such as broadcasting, fixation, or rebroadcasting, and preventing others from making recordings without permission. Performers are also entitled to fair compensation for their performances, including royalties for reproductions and distributions. Later instruments, notably the WIPO Performances and Phonograms Treaty, 1996, add moral rights that protect a performer’s reputation and honour by allowing them to object to distortion or mutilation of their performances.
Countries around the world have started to address the deepfake problem. In the United States, some states, including California, have outlawed unauthorised deepfake content, particularly in commercial and pornographic contexts. The European Union’s General Data Protection Regulation (GDPR) gives individuals control over their personal data, which may extend to deepfakes. China requires synthetic media to be labelled and prohibits its unauthorised creation. Global regulation nevertheless remains inconsistent, making enforcement difficult in cross-border cases of deepfake misuse. These examples underscore the need for targeted legal measures to combat such misuse.
Furthermore, the UK’s Copyright, Designs and Patents Act (CDPA) expressly covers any performance falling within the scope of performers’ rights, whether it took place before or after the Act came into force, a position also adopted by Ireland and the New Zealand Ministry of Economic Development. In the United States, the Music Modernization Act of 2018 requires royalties to be paid for sound recordings made before 1972, which previously fell outside federal copyright protection.
Legal Framework in India
India urgently requires a comprehensive regulatory framework to address deepfakes and protect performers’ rights. First, personality rights must be clearly recognised and protected in law; these rights would allow performers to challenge, and claim compensation for, any unauthorised use of their voice, image, or likeness. Second, protection against synthetic media should be added to performers’ rights under the Copyright Act, ensuring that intellectual property rules apply to deepfakes that infringe those rights. Third, deepfake legislation should require the disclosure and labelling of synthetic material, alongside penalties for producing or distributing it without authorisation. Lastly, technology-based solutions, such as AI tools for deepfake detection (see the illustrative sketch below), need to be encouraged so that deceptive content can be identified and reported, and platforms hosting user-generated content must be held responsible for promptly removing manipulated media.
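As a rough illustration of the kind of AI-assisted screening tool envisaged above, the following Python sketch trains a simple classifier on hypothetical numeric features extracted from media clips and flags high-risk clips for human review. All feature names, data, and thresholds here are invented for illustration; real deepfake detectors are far more sophisticated.

```python
# Minimal sketch of a flag-and-review deepfake screening workflow
# (hypothetical features and data; assumes scikit-learn and NumPy are installed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: each row describes a clip with illustrative
# features (e.g. blink-rate score, lip-sync error, compression-artefact score);
# label 1 means "known manipulated", 0 means "authentic".
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 1] + 0.5 * X_train[:, 2] > 0).astype(int)

detector = LogisticRegression().fit(X_train, y_train)

def screen_clip(features, threshold=0.8):
    """Flag a clip for human review if the estimated manipulation probability is high."""
    prob_fake = detector.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return {"probability_manipulated": round(float(prob_fake), 3),
            "flag_for_review": bool(prob_fake >= threshold)}

print(screen_clip([0.2, 1.4, 0.9]))
```

In a platform setting, a flagged clip would be routed to human moderators rather than removed automatically, which matches the accountability model suggested above for platforms hosting user-generated content.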
Beyond legal measures, deepfakes raise important ethical questions of identity, consent, and artistic freedom. Performers should be able to give consent before their voice or image is used, particularly in digital media. To ensure that innovation does not come at the expense of individual rights, sectors such as advertising and film must set ethical standards for the responsible use of deepfake technology. Raising public awareness of the risks of deepfakes is also essential to curbing their misuse: promoting digital literacy and teaching viewers how to spot manipulated content will lessen the harm deepfakes cause to performers and society as a whole.
Addressing the ethical dilemmas raised by deepfakes also requires proactive action from the entertainment industry. Although the technology can be used to de-age actors or recreate deceased stars on screen, such applications must be strictly governed by licensing agreements and adequate consent. The estates of deceased performers, for instance, should have authority over the posthumous use of their likeness. This would allow innovative uses of deepfake technology in entertainment while preventing exploitation.
In India, performers’ rights are housed in the Copyright Act, 1957, with dedicated protections introduced by the 1994 and 2012 amendments. Section 2(qq) of the Act defines a “performer” broadly as any person who makes a performance, whether musical, dramatic, or otherwise.
These provisions give performers vital rights over their creative work. A performer has exclusive control over the fixation of a performance, with the power to authorise or prohibit its recording, including the ability to stop unauthorised audio or video recordings. Performers are likewise entitled to regulate the use of their performances, including the power to allow or prohibit their broadcasting, recording, or rebroadcasting.
The right to fair and equitable remuneration, including royalties for the reproduction and distribution of their performances, is another significant component of these rights. The Act additionally establishes criminal sanctions for the unauthorised use of a performer’s work, ensuring that violations carry legal consequences, including fines or imprisonment.
Under intellectual property law, performers have several avenues of redress when their rights are infringed. One of the most common remedies is injunctive relief, a court order that stops the infringing party from continuing to use the performer’s work. Such injunctions, which shield the performer from further harm, may be temporary or permanent, depending on the seriousness of the infringement.
Damages are a further remedy, compensating the performer for actual losses caused by the infringement. In certain circumstances statutory damages may also be awarded; these are determined by the severity of the infringement and do not require the performer to prove actual loss. Performers may also pursue alternative remedies such as restitution, which seeks to recover any unjust enrichment obtained by the infringer, or an account of profits, which requires the infringing party to disclose and surrender any gains made from the unlawful use.
Conclusion
Performers’ rights are increasingly at risk from deepfake technology, which calls for legislative and regulatory change. Although current law offers some protection through the frameworks of copyright, privacy, and defamation, a more comprehensive and focused strategy is needed to handle the issues raised by synthetic media. Recognising personality rights, strengthening performer protections, and promoting ethical norms will be crucial to protecting performers in the digital era. India must find a way to embrace new technologies while defending the rights of those whose identities are at risk of exploitation. Building a digital environment in which innovation and individual rights coexist will require cooperation between governments, technology companies, content platforms, and performers. Left unchecked, deepfakes risk eroding public confidence in digital content and undermining the creative industries that depend on performers’ identities and skills.