SC Directions on Online Content Regulation

The Supreme Court has issued significant directions to the Union Government to establish a robust framework for regulating abusive, obscene, and harmful online content. The Court observed that the surge in user-generated content—often unverified, defamatory, or targeting vulnerable groups—requires stronger state oversight without undermining constitutional freedoms.


Key Observations and Directives of the Supreme Court

1. Need for an Independent Regulator

The Court held that existing self-regulatory models followed by digital platforms are ineffective, as they lack neutrality and enforceability. It called for a statutory, autonomous regulator to ensure accountability across social media, OTT platforms, and other online intermediaries.

2. Preventive Rather Than Reactive Mechanisms

At present, harmful content is typically removed only after it has gone viral, by which point the reputational and psychological damage is often irreversible. The bench stressed the need for real-time moderation capabilities, early-warning tools, and content-flagging systems to curb the initial spread of harmful material.

3. Free Speech and Reasonable Restrictions

While reaffirming the protection under Article 19(1)(a), the Court emphasised that restrictions under Article 19(2)—relating to decency, morality, and public order—must be precise and narrowly tailored. Vague phrases like “anti-national attitudes” or “hurting sentiments” are prone to misuse unless backed by judicially tested standards.

4. Clear Definitions for Content Categories

Ambiguity in defining harmful or prohibited online content can lead to over-censorship. The Court urged the government to adopt narrow and well-defined categories aligned with global best practices and constitutional jurisprudence.

5. Strong Age-Verification Models

Simple disclaimers (“18+ only”) are inadequate. The bench suggested exploring Aadhaar-based or comparable high-assurance age-verification systems to prevent children from accessing pornography, violent content, or self-harm-inducing media.

6. Protection for Persons with Disabilities (PwDs)

Noting the rise in online ridicule targeting PwDs, the Court recommended enacting a specific penal law, akin to the SC/ST (Prevention of Atrocities) Act, 1989, to safeguard dignity and prevent harassment.

Existing Regulatory Framework

  • Ministry of Electronics & IT (MeitY) and Ministry of Information & Broadcasting (MIB) oversee online content.
  • IT Act, 2000:
    • Section 79 – Safe harbour for intermediaries subject to due diligence.
    • Section 69A – Government power to block content in the interests of sovereignty and integrity of India, security of the State, public order, and allied grounds.
    • Section 67 – Penalises publication or transmission of obscene materials.
  • IT Rules, 2021: Introduced due-diligence norms, content classification, traceability requirements, and grievance redressal; increased obligations on significant social media intermediaries.
  • Digital Personal Data Protection (DPDP) Act, 2023: Regulates consent-based processing of personal data.
  • Other Statutes:
    • Indecent Representation of Women (Prohibition) Act, 1986 (IRWA)
    • POCSO Act, 2012
  • Shreya Singhal v. Union of India (2015):
    • Struck down Section 66A as vague and unconstitutional.
    • Read down Section 79, requiring intermediaries to take down content only upon receiving actual knowledge through a court order or a government notification.
    • Upheld Section 69A as constitutionally valid.

The Court’s latest directive signals a shift toward a more coherent and preventive digital-governance framework, balancing free expression with safety, dignity, and constitutional morality.
