Context: Amid rising concerns over deepfakes and synthetic media, the Union Government has amended the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The changes mandate clear labelling of AI-generated content and impose sharply reduced timelines for takedown of unlawful material, signalling India’s shift towards stricter AI governance.

What Has Been Notified?
The amendments require photorealistic or synthetic AI-generated content to carry prominent disclosures so that users are not misled into treating it as real. Intermediaries must remove court- or government-flagged unlawful content within 3 hours, and non-consensual deepfake content within 2 hours, a significant tightening from earlier 24–36 hour windows.
Platforms are also required to seek a user self-declaration on whether content is AI-generated; failure to declare triggers platform-level labelling or removal. Importantly, routine edits and quality-enhancing AI tools—such as camera touch-ups—are excluded through a narrowed definition of synthetic content.
Why Was This Needed?
AI-driven misinformation and deepfakes spread faster than conventional review processes can respond: studies suggest that over 60% of harmful online content reaches peak circulation within six hours, often before corrective action is possible. India has also witnessed a surge in non-consensual intimate imagery (NCII), with NCRB data showing cybercrime cases rising by over 31% between 2022 and 2023.
Given India’s scale—over 850 million internet users—the government expects intermediaries to exercise higher due diligence proportional to their technological capacity. The amendments also align India with OECD AI Principles and G20 AI Safety Guidelines, embedding ethical responsibility into AI deployment.
Key Concerns
Despite their intent, the rules raise operational and rights-based challenges. A 2–3 hour takedown window may be impractical where illegality is context-dependent or notices lack detailed reasoning.
Fear of penalties and loss of safe harbour protection could encourage precautionary takedowns, chilling satire, journalism, and legitimate speech.
Smaller platforms and start-ups may struggle with compliance due to limited access to real-time AI detection tools and moderation staff, creating uneven regulatory burdens.
The Way Forward
To balance safety and free expression, India needs clearer illegality tests with predefined indicators for NCII, impersonation, and election-related misinformation. Risk-based, graded timelines—immediate for NCII but longer for context-sensitive speech—would reduce over-censorship.
An independent digital content ombudsman could provide time-bound review of wrongful takedowns. Finally, shared public infrastructure—such as national deepfake detection facilities and hash databases—can help smaller platforms comply without stifling innovation.
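The shared hash-database idea mentioned above works by matching new uploads against digests of previously reported content, so a file taken down on one platform can be blocked on others without re-sharing the image itself. A minimal sketch in Python follows; it uses exact SHA-256 matching for simplicity, whereas real deployments (e.g. StopNCII or industry PhotoDNA/PDQ databases) use perceptual hashes that survive resizing and re-encoding. All class and method names here are illustrative, not part of any notified rule.

```python
import hashlib


class HashDatabase:
    """Illustrative shared registry of reported-content hashes.

    Exact-match only: SHA-256 catches byte-identical copies.
    Production systems use perceptual hashing so that cropped or
    re-encoded copies of the same image still match.
    """

    def __init__(self):
        self._known = set()

    def register(self, content: bytes) -> str:
        """Record the digest of a reported file; return it for auditing."""
        digest = hashlib.sha256(content).hexdigest()
        self._known.add(digest)
        return digest

    def is_flagged(self, content: bytes) -> bool:
        """Check an upload against previously reported digests."""
        return hashlib.sha256(content).hexdigest() in self._known


db = HashDatabase()
db.register(b"reported-image-bytes")
print(db.is_flagged(b"reported-image-bytes"))   # True: same bytes match
print(db.is_flagged(b"different-image-bytes"))  # False: unseen content
```

Because only digests are stored and exchanged, smaller platforms could query such a facility without ever handling the underlying harmful material, which is what makes it viable as shared public infrastructure.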
Conclusion
India’s AI content rules mark a decisive move from passive platform immunity to active algorithmic accountability. Their success will depend on careful implementation that protects dignity and privacy without undermining democratic speech.








