Context: Deepfakes have emerged as a potential tool to jeopardise individuals' privacy and to rupture the social fabric of a nation.
Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. Deepfakes “leverage powerful techniques from machine learning (ML) and artificial intelligence (AI) to manipulate or generate visual and audio content with a high potential to deceive”.
Issues with deepfakes
- Misinformation and propaganda: Deepfake technology has the potential to spread false information and propaganda as it is difficult to differentiate between real and fake content.
E.g., Deepfakes can be used to influence elections.
- Damage to personal and professional reputations: Deepfake technology can be used to create fake videos/images of individuals that can damage their personal and professional reputations and further lead to harassment and extortion. E.g., Circulation of pornographic material using celebrity faces.
- Threat to national security: Deepfakes can also be used to carry out espionage activities. Doctored videos can be used to blackmail government and defence officials into divulging state secrets. E.g.,
- India’s hostile neighbours and non-state actors can create propaganda videos for the radicalisation and recruitment of terrorists or for inciting violence.
- Ukrainian President Volodymyr Zelensky revealed that a video posted on social media in which he appeared to be instructing Ukrainian soldiers to surrender to Russian forces was actually a deepfake.
- Cybersecurity risk: Deepfake technology can be used to create fake videos and images that can be used in phishing scams, or to spread malware or viruses.
- Financial frauds: Deepfakes have been used for financial fraud, e.g., cloned voice samples used to defeat voice-based account verification.
- Ethical issues: As awareness and prevalence of deepfake technology increase, some individuals may exploit it by denying the authenticity of genuine content, particularly content that shows them engaging in inappropriate or criminal behaviour (the so-called "liar's dividend").
Legal framework in India
- In India, the legal framework related to AI is insufficient to address the various issues that have arisen due to AI algorithms.
- Currently, only a few provisions under the Indian Penal Code (IPC) and the Information Technology Act, 2000 can potentially be invoked to deal with the malicious use of deepfakes.
- Section 500 of the IPC provides punishment for defamation.
- Sections 67 and 67A of the Information Technology Act, 2000 punish the publication or transmission of obscene and sexually explicit material, respectively, in electronic form.
- The Representation of the People Act, 1951, includes provisions prohibiting the creation or distribution of false or misleading information about candidates or political parties during an election period.
- However, these provisions do not specifically address the dangers posed by deepfake content. China is one of the few countries that has introduced regulations governing the use of deepfakes.
Way forward
- Regulations and legislation: The union government should introduce separate legislation regulating the nefarious use of deepfakes and the broader subject of AI. E.g., the proposed Digital India Bill could address this issue.
- Development of technology to detect deepfakes: Invest in technologies that can accurately detect deepfake videos and images, to protect individuals and organisations from misinformation and propaganda.
- Awareness: Public awareness campaigns to educate people about the potential dangers of deepfake technology.
- Collaboration between industry and academia: Industry and academia need to work together to find solutions to the issues surrounding deepfake technology.
The legislation must include provisions to address the malicious use of deepfake technology in criminal acts, but it should remain accommodative so as not to hamper innovation in AI.