Context: The proposed ‘Digital India Bill’ holds out the promise of not only upgrading the current legal regime but also redefining the contours of how technology is regulated.
Proposed Digital India Bill
- The Ministry of Electronics and IT (MeitY) is organising consultations on the proposed “Digital India Bill” to replace the 23-year-old Information Technology (IT) Act.
- Objective: To upgrade the current legal regime to tackle emerging challenges such as user harm, competition and misinformation in the digital space.
- Proposed Changes: These include categorising digital intermediaries into distinct classes, such as e-commerce players, social media companies, and search engines, so that different responsibilities and liabilities can be placed on each kind.
- The current IT Act defines an “intermediary” to include any entity between a user and the Internet, and the IT Rules sub-classify intermediaries into three main categories:
- Social Media Intermediaries (SMIs): SMIs are platforms that facilitate communication and sharing of information between users.
- Significant Social Media Intermediaries (SSMIs): SMIs that have a very large user base (above a specified threshold) are designated as SSMIs.
- Online Gaming Intermediaries: Intermediaries that enable users to access one or more online games.
- Broad Definition: The definition of SMIs is so broad that it can encompass a variety of services such as video communications, matrimonial websites, email and even online comment sections on websites.
- Stringent Obligations: The IT Rules also lay down stringent obligations for most intermediaries, such as a 72-hour timeline for responding to law-enforcement requests and resolving ‘content takedown’ requests.
- Similar Treatment for Different Platforms: Licensed intermediaries that have a closed user base and present a lower risk of harm from information going viral are treated on a par with conventional social media platforms. This adds to their cost of doing business and exposes them to greater liability, without meaningfully reducing the risks posed by the Internet.
- The European Union’s Digital Services Act is one of the most developed frameworks. It introduces some exemptions and creates three tiers of intermediaries — hosting services, online platforms and “very large online platforms”, with increasing legal obligations.
- Australia has created an eight-fold classification system, with separate industry-drafted codes governing categories such as social media platforms and search engines. Intermediaries are required to conduct risk assessments, based on the potential for exposure to harmful content such as child sexual abuse material (CSAM) or terrorism.
- Moving Beyond Product-Specific Classification: While a granular, product-specific classification could improve accountability and safety online, it may not be future-proof: as technology evolves, the categories defined today may no longer fit tomorrow’s services.
- Fewer Categories: There is a need for a classification framework that creates a few defined categories, requires intermediaries to undertake risk assessments, and uses that information to place each intermediary in the relevant category.
- Minimising Obligations on Smaller Intermediaries: As far as possible, the goal should also be to minimise obligations on intermediaries and ensure that regulatory demands are proportionate to ability and size.
- Micro and small enterprises, and caching and conduit services (the ‘pipes’ of the Internet), can be exempted from any major obligations.
- Further, there is a need to clearly distinguish communication services (where end users interact with each other) from other forms of intermediaries (such as search engines and online marketplaces).
- Given the lower risks, the obligations placed on intermediaries that are not communication services should be lighter. However, they could still be required to appoint a grievance officer, cooperate with law enforcement, identify advertising, and take down problematic content within reasonable timelines.
- Special Obligation on Intermediaries Offering Communication Services:
- They could be asked to undertake risk assessments based on the number of their active users, risk of harm and potential for virality of harmful content.
- Further, the largest communication services (platforms such as Twitter) could then be required to adhere to special obligations such as appointing India-based officers and setting up in-house grievance appellate mechanisms with independent external stakeholders to increase confidence in the grievance process.
- Alternative approaches to curbing virality, such as circuit breakers to slow down content, could also be considered.
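The risk-based bucketing proposed above can be illustrated with a short sketch. This is purely hypothetical: the category names, the user-count threshold, and the risk-score fields are assumptions for illustration, not drawn from the Bill or any existing rules.

```python
# Illustrative sketch of risk-based classification of intermediaries.
# All thresholds, field names and categories below are hypothetical.

from dataclasses import dataclass


@dataclass
class Intermediary:
    name: str
    is_communication_service: bool  # do end users interact with each other?
    active_users: int               # assumed metric: monthly active users
    harm_risk: float                # 0..1 score from a risk assessment
    virality_risk: float            # 0..1 potential for harmful content to spread


def classify(i: Intermediary, large_user_threshold: int = 5_000_000) -> str:
    """Place an intermediary into one of a few defined categories."""
    if not i.is_communication_service:
        # Search engines, marketplaces, etc.: lighter baseline obligations.
        return "other-intermediary"
    # For communication services, take the worst of the assessed risks.
    risk = max(i.harm_risk, i.virality_risk)
    if i.active_users >= large_user_threshold or risk >= 0.7:
        # Special obligations: India-based officers, appellate mechanisms, etc.
        return "large-communication-service"
    return "communication-service"
```

For example, a marketplace with ten million users would land in the lighter "other-intermediary" bucket, while a small messaging app with a high assessed virality risk would still be classified as a large communication service. Periodic review of the thresholds, as the article suggests, would simply mean revising `large_user_threshold` and the risk cut-off in consultation with industry.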
For the proposed approach to be effective, metrics for risk assessment and appropriate thresholds would have to be defined and reviewed periodically in consultation with industry. Overall, such a framework could help establish accountability and online safety while reducing legal obligations for a large number of intermediaries. In doing so, it could create a regulatory environment that achieves the government’s policy goal of a safer Internet ecosystem while also allowing businesses to thrive.