Context: Grok, an AI chatbot developed by xAI, has sparked controversy over its unfiltered responses on X (formerly Twitter). Some responses included misogynist slurs, misinformation, and politically charged statements, highlighting the growing need for AI regulation to curb misinformation and ensure accountability.
Relevance of the topic:
Prelims: Safe Harbour Principle.
Mains: Need to regulate AI.
Concerns associated with Generative AI and its Regulation
- Amplify Misinformation: Generative AI systems, if trained on biased data or developed with inherent biases, will generate biased outputs and prejudicial content. Since Grok publishes directly onto a social media platform (X), such content can spread unchecked, risking faster dissemination of misinformation.
- Lack of Transparency: AI algorithms often operate as a black box, i.e., one cannot fully explain how the inputs led to a given prediction. E.g., Grok's AI-generated replies to users do not always carry citations or links to sources/web pages, limiting verifiability. This can further amplify misinformation.
- Risk of Censorship: Over-broad regulation may lead companies to self-censor out of fear of regulatory action by governments. This creates a chilling effect on freedom of expression and can inhibit innovation.
- Accountability for AI-generated output: Article 19(1)(a) of the Indian Constitution guarantees freedom of speech, subject to reasonable restrictions. However, this right applies only to humans, not to AI systems. AI responses are machine-generated and lack personal intent, making it difficult to fix legal accountability and determine liability for responses generated by AI.
- Extension of Safe Harbour to AI: Social media intermediaries enjoy safe harbour under Section 79 of the IT Act, 2000. Grok, however, is not a human user but a computer program producing answers from massive volumes of internet data. This raises the question of whether safe harbour can be extended to AI-generated content.
Safe Harbour Principle:
- Safe harbour is a legal provision that protects an entity from liability or penalty. Under Section 79 of the IT Act, 2000, intermediaries such as X and Meta are protected from legal liability for content posted on their platforms by users.
- Since the content posted on social media platforms belongs to the users and not to the companies, the provision shields such companies from prosecution for that content.
Way Forward
- Moderation of AI Bots by Developers: Developers should be more transparent about the datasets used for training to ensure diversity, and should conduct thorough red-teaming and stress testing of AI bots to mitigate potential harms.
- Extend Liability to Deployers: Liability should attach to the deployer in cases of wilful neglect or where no adequate measures are taken to moderate outputs, though it may need to be determined case by case. E.g., in 2024, a Canadian civil tribunal directed Air Canada to honour a false refund policy invented by the AI chatbot on its website.
- Strengthen International Collaboration: Build common international regulations for the responsible development and deployment of generative AI models and chatbots.
Legislators must continue to strike a balance between ethical duty, the protection of digital rights and free expression, and technological innovation.
