Artificial Intelligence

Taming the Algorithm: India’s New Rules for Regulating AI-Generated Content

Context: Amid rising concerns over deepfakes and synthetic media, the Union Government has amended the IT (Intermediary Guidelines & Digital Media Ethics Code) Rules, 2021. The changes mandate clear labelling of AI-generated content and impose sharply reduced timelines for takedown of unlawful material, signalling India’s shift towards stricter AI governance.


What Has Been Notified?

The amendments require photorealistic or synthetic AI-generated content to carry prominent disclosures so that users are not misled into treating it as real. Intermediaries must remove court- or government-flagged unlawful content within 3 hours, and non-consensual deepfake content within 2 hours, a significant tightening from earlier 24–36 hour windows.

Platforms are also required to seek user self-declaration on whether content is AI-generated; failure triggers platform-level labelling or removal. Importantly, routine edits and quality-enhancing AI tools—such as camera touch-ups—are excluded through a narrowed definition of synthetic content.

Why Was This Needed?

AI-driven misinformation and deepfakes spread rapidly. Studies suggest that over 60% of harmful online content reaches peak circulation within six hours, often before corrective action is possible. India has also witnessed a surge in non-consensual intimate imagery (NCII), with NCRB data showing cybercrime cases rising by over 31% between 2022 and 2023.

Given India’s scale—over 850 million internet users—the government expects intermediaries to exercise higher due diligence proportional to their technological capacity. The amendments also align India with OECD AI Principles and G20 AI Safety Guidelines, embedding ethical responsibility into AI deployment.

Key Concerns

Despite their intent, the rules raise operational and rights-based challenges. A 2–3 hour takedown window may be impractical where illegality is context-dependent or notices lack detailed reasoning.

Fear of penalties and loss of safe harbour protection could encourage precautionary takedowns, chilling satire, journalism, and legitimate speech.

Smaller platforms and start-ups may struggle with compliance due to limited access to real-time AI detection tools and moderation staff, creating uneven regulatory burdens.

The Way Forward

To balance safety and free expression, India needs clearer illegality tests with predefined indicators for NCII, impersonation, and election-related misinformation. Risk-based, graded timelines—immediate for NCII but longer for context-sensitive speech—would reduce over-censorship.

An independent digital content ombudsman could provide time-bound review of wrongful takedowns. Finally, shared public infrastructure—such as national deepfake detection facilities and hash databases—can help smaller platforms comply without stifling innovation.

Conclusion

India’s AI content rules mark a decisive move from passive platform immunity to active algorithmic accountability. Their success will depend on careful implementation that protects dignity and privacy without undermining democratic speech.

AI for Public Good: India’s Shift Towards Inclusive Digital Welfare

Context: India is hosting the fourth AI Impact Summit with a renewed focus on “sarvajana hitaya, sarvajana sukhaya”—using Artificial Intelligence (AI) to promote welfare, inclusion, and public well-being. The emphasis is shifting from global debates on AI safety to harnessing AI as a tool for socio-economic transformation.


AI as a Tool for Welfare Transformation

AI-driven innovations are increasingly shaping India’s public service delivery:

  • Food Security: Smallholders contribute nearly 70% of global food production, yet face productivity challenges. AI-enabled advisories improve yields and climate resilience. For instance, Kisan e-Mitra answers around 20,000 farmer queries daily in multiple languages.
  • Income Enhancement: Precision agriculture tools optimise fertiliser and pesticide use. Telangana’s Saagu Baagu programme has reportedly doubled chilli farmers’ incomes while reducing chemical inputs.
  • Healthcare Access: Telemedicine platforms help address doctor shortages. The eSanjeevani digital health service has completed about 389 million consultations by mid-2025.
  • Skill Development: Digital learning and skilling initiatives such as DIKSHA have reached over 275 million users, with a large share from rural areas.

Why Welfare-Oriented AI Is Critical for India

  • Agricultural Productivity: AI-based advisories can enhance efficiency, reduce costs, and strengthen climate adaptation for farmers.
  • Universal Healthcare: India’s doctor–patient ratio of nearly 1:11,000 makes AI-enabled diagnostics and telemedicine essential.
  • Skill Gap: Only about 5% of India’s workforce has formal training; AI-driven platforms enable personalised and scalable skilling.
  • Inclusive Growth: With rural internet access around 24% compared to 66% in urban areas, AI-driven welfare can bridge regional and gender disparities.

Key Challenges

  • Digital Divide: Limited rural connectivity and digital gender gaps restrict access to AI services.
  • Talent Shortage: A shortage of skilled AI professionals slows innovation and adoption.
  • Technology Dependence: Over 90% import reliance for semiconductors exposes India’s AI ecosystem to geopolitical risks.

Way Forward

  • Outcome-Based AI: Measure success through welfare indicators—higher farm productivity, early disease detection, and learning outcomes.
  • Digital Public Infrastructure (DPI): Integrate AI with platforms like digital health, education, and payments for scale.
  • Infrastructure Alignment: Strengthen broadband, energy, and domestic semiconductor manufacturing.
  • Regulatory Balance: Promote “good-enough” and accessible AI solutions while ensuring ethical and secure deployment.

By aligning AI with inclusive development, India can create a model where technological innovation directly improves livelihoods, strengthens human capital, and accelerates the vision of Viksit Bharat 2047.

Brain–Computer Interface (BCI): Bridging the Human Brain and Machines

Context: As reported in The Hindu, Brain–Computer Interfaces (BCIs) are moving beyond experimental laboratories into real-world applications, accelerating the global neurotechnology revolution. Neurotechnology refers to mechanical or digital tools used to record, analyse, or influence the human nervous system, particularly the brain.


What is a Brain–Computer Interface?

A Brain–Computer Interface (BCI) is a system that enables direct communication between the brain’s electrical signals and an external device, bypassing the neuromuscular pathways.

Its primary objective is to restore, enhance, or substitute cognitive and sensory-motor functions, especially for individuals suffering from paralysis, stroke, or neurodegenerative diseases.

Key Components of a BCI System

  1. Signal Acquisition: Electrodes capture neural electrical activity from the brain.
  2. Signal Processing: Raw signals are filtered to remove noise and extract meaningful patterns.
  3. Translation: Artificial Intelligence and Machine Learning algorithms convert neural patterns into digital commands.
  4. Device Output & Feedback: Commands control external devices (e.g., robotic limbs, cursors), while feedback helps users improve accuracy.
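
The four stages above can be pictured as a simple software pipeline. The sketch below is a minimal, illustrative Python mock-up of that flow, assuming a synthetic EEG signal, a toy moving-average filter, and an arbitrary decoding threshold; it is not any real BCI system's code.

```python
# Minimal sketch of the four BCI stages listed above (signal acquisition,
# processing, translation, output). All signals, thresholds, and the toy
# "decoder" below are illustrative assumptions, not a real BCI implementation.
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def acquire_window(n_seconds=1.0):
    """Stage 1 - Signal acquisition: here, a synthetic 1-second EEG window."""
    t = np.arange(int(FS * n_seconds)) / FS
    alpha = 0.8 * np.sin(2 * np.pi * 10 * t)   # 10 Hz "alpha" component
    noise = 0.5 * np.random.randn(t.size)      # measurement noise
    return alpha + noise

def preprocess(signal):
    """Stage 2 - Signal processing: remove DC offset and smooth out noise."""
    centred = signal - signal.mean()
    kernel = np.ones(5) / 5                    # simple moving-average filter
    return np.convolve(centred, kernel, mode="same")

def extract_feature(signal):
    """Band power around 10 Hz, a common EEG feature."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean()

def translate(feature, threshold=50.0):
    """Stage 3 - Translation: a stand-in for the trained ML decoder."""
    return "MOVE_CURSOR_LEFT" if feature > threshold else "REST"

if __name__ == "__main__":
    raw = acquire_window()
    clean = preprocess(raw)
    command = translate(extract_feature(clean))
    print("Decoded command:", command)   # Stage 4 - device output / feedback
```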

Types of BCIs

  • Non-Invasive BCIs: Sensors placed on the scalp (EEG, fMRI); low risk but lower signal resolution.
  • Partially Invasive BCIs: Electrodes placed beneath the skull but outside brain tissue (ECoG); better signal quality with moderate risk.
  • Invasive BCIs: Electrodes implanted directly into brain tissue; high precision but higher infection risk (e.g., Neuralink, Blackrock Neurotech).

Key Applications

  • Medical: Mobility assistance for paralysis, speech recovery in stroke patients, Parkinson’s and epilepsy treatment, and vision-restoration research.
  • Cognitive Enhancement: Neurofeedback-based training for attention, memory, and performance improvement.
  • Security & Defence: Secure authentication and hands-free control of advanced systems.
  • Human–Machine Interaction: Thought-controlled gaming, VR/AR navigation, and smart-home systems.

Why India Needs BCI Adoption

India’s neurological disease burden doubled between 1990 and 2019, with stroke contributing 37.9% of DALYs (Lancet Global Health). An ageing population, coupled with rising dementia cases, makes assistive neurotechnology essential. With a projected USD 6 billion global BCI market by 2030, indigenous innovation can boost startups, patents, and India’s status as a neurotechnology hub.

India’s Current Standing

India holds about 2.5% of the global BCI market (2024). Notable developments include IIT Kanpur’s BCI-controlled robotic hand, C-DAC’s Vivan-BCI for children with special needs, and startups like BrainSight AI working on neurological mapping and screening tools. India’s BCI ecosystem is currently dominated by non-invasive EEG-based systems.

Global Landscape

The United States leads with companies like Neuralink and Synchron. Europe focuses on collaborative neurorehabilitation research.

China’s Brain Project (2016–2030) integrates cognition research and brain-inspired AI, while Japan and South Korea emphasise rehabilitation, robotics, and gaming-oriented BCIs.

Australia’s AI Copyright Policy: Balancing Innovation and Creator Rights

Context: Australia’s Attorney-General has rejected a policy proposal from a think tank that sought to grant technology companies unrestricted access to copyrighted material for training Artificial Intelligence (AI) systems. The government instead reaffirmed that technological innovation must not come at the cost of creators’ rights.


This move places Australia among a small group of nations emphasizing ethical and consent-based AI development, diverging from the U.S. “fair use” approach and China’s “data-first” model.

Australia’s AI Copyright Policy

1. Government’s Stand:
The Australian government maintains that technology should not advance “at the expense of creators.” It argues that unrestricted scraping of copyrighted works by AI models undermines artistic and journalistic integrity, threatening creative industries.

2. Formation of CAIRG:
The Copyright and AI Reference Group (CAIRG) was established to design balanced, rights-based policies. CAIRG comprises representatives from the tech sector, creative industry, academia, and legal bodies. Its mandate is to develop national guidelines for ethical AI training and data use.

3. Proposed Legal Reform:
Australia is considering introducing a mandatory paid licensing framework under the Copyright Act.
This would:

  • Require AI developers to obtain permission before using copyrighted material.
  • Ensure fair compensation and consent for creators.
  • Establish transparency mechanisms for datasets used in AI training.

Comparative Perspective

  • United States: Allows AI developers to use copyrighted material under the “fair use” doctrine, subject to certain limits.
  • European Union: Mandates “opt-out” consent, giving creators the right to restrict their works from AI datasets.
  • China: Promotes open data access for AI under state supervision to accelerate innovation.

Australia's approach, by contrast, emphasizes creator consent as a non-negotiable principle.

Significance of the Policy

  • Upholding Creator Rights: Ensures AI development respects intellectual property, in line with UNESCO’s AI Ethics Framework (2021).
  • Human-Centric Innovation: Demonstrates that technological and cultural goals can coexist, reinforcing public trust in AI.
  • Global Leadership: Positions Australia as a thought leader in rights-respecting AI governance, influencing debates in other democracies.
  • Cultural Integrity: Protects artists, writers, and content producers from data exploitation by large tech firms, ensuring sustainable creative economies.

Conclusion

Australia’s AI Copyright Policy exemplifies a human-centric and ethically grounded approach to digital innovation.

By prioritizing consent, compensation, and creator control, the country seeks to balance AI’s transformative potential with fairness and accountability — setting a precedent for democracies striving to regulate artificial intelligence responsibly.

India’s Technological Future: Towards Deeptech Sovereignty

Context: Union Minister Piyush Goyal recently emphasised that India must transition from digital adoption to technological creation — aiming for deeptech-led sovereignty and reducing reliance on foreign technologies.

What is Technological Sovereignty?

Technological Sovereignty refers to a nation’s ability to develop and deploy its own technologies using indigenous infrastructure, ensuring autonomy in data, innovation, and strategic capabilities — a cornerstone of national sovereignty in the digital age.

India’s Dependence on Foreign Technology

  • Electronics: Over 65% of chips and 80% of high-end components are imported (MeitY, 2024).
  • Defence: About 60% of defence equipment depends on foreign Original Equipment Manufacturers (SIPRI, 2023).
  • Renewables & EVs: 90% of solar wafers and 70% of lithium-ion cells come from China.
  • Pharma Inputs: 68% of Active Pharmaceutical Ingredients (APIs) are still imported despite PLI efforts.

Consequences of Technological Dependence

  • Economic Drain: High import bills widen the current account deficit — electronics imports exceeded $70 billion in 2024.
  • Innovation Deficit: India holds less than 1% of global AI patents, reflecting limited indigenous innovation.
  • Employment Loss: Deeptech manufacturing employs less than 2% of India’s tech workforce (NASSCOM, 2023).
  • Digital Sovereignty Risks: Over 75% of India’s cloud infrastructure is managed by foreign firms (IDC, 2024), raising concerns over data autonomy and national security.

The Way Forward

1. Deeptech Push

Strengthen innovation in AI, quantum computing, space tech, and semiconductors.

  • The ₹1 lakh crore Anusandhan Fund (2025) will accelerate deeptech R&D.

2. R&D Incentives

Raise national R&D expenditure (currently <1% of GDP) and provide tax benefits to private research.

  • Learn from Israel’s Innovation Authority, which co-funds up to 50% of R&D costs.

3. Chip Independence

Expand the India Semiconductor Mission (2021) with $10 billion incentives for chip design, fabrication, and assembly units.

4. Building a Skilled Pipeline

Develop high-end skills in STEM, retain researchers, and strengthen global scientific collaboration.

  • Initiatives like the VAIBHAV Summit and SERB Overseas Fellowships connect diaspora scientists with Indian research institutions.

5. Nurturing Deeptech Startups

Scale up Startup Fund of Funds 2.0 to support early-stage ventures focusing on AI, robotics, and clean tech through risk capital and mentorship.

Conclusion

India’s next leap lies not in importing innovation but in inventing the future. Achieving technological sovereignty will determine India’s strategic independence, global competitiveness, and its role as a deeptech leader of the 21st century.

AI in India’s Healthcare System 

Context: India's doctor-patient ratio of 1:1,457 falls short of the WHO norm of 1:1,000, and the nearly 65% of the population living in rural areas largely lacks access to specialists. In this scenario, Artificial Intelligence (AI) in the healthcare sector can emerge as a game-changer.

Relevance of the Topic: Mains: Use of AI in Healthcare and Associated Challenges.

From early disease detection to optimising health records, AI is rapidly transforming how India delivers healthcare.

Opportunities of AI in Healthcare: 

  • Early Disease Detection and Diagnostics: 
    • AI applications are already being used in rural Odisha to detect TB through cough recordings and to identify breast cancer cases via smartphone-based mammogram apps.
    • Google’s DeepMind has achieved 99% accuracy in detecting breast cancer, surpassing even expert radiologists.
    • AI is also being applied in detecting eye diseases, skin cancers, and neurological disorders like Alzheimer’s, enabling timely intervention and reducing the burden on doctors.
  • Personalised and Precision Medicine: AI models can predict how an individual patient will respond to drugs, reducing side effects and improving treatment outcomes. E.g., 
    • In oncology, AI helps in identifying targeted therapies, thereby improving survival rates for cancer patients.
    • AI-enabled wearables monitor blood sugar in real time, alerting doctors and preventing emergencies.
  • Drug Discovery and Vaccine Development: AI can reduce the decade-long process of drug discovery by predicting effective compounds quickly and at lower cost. Pharmaceutical companies are using AI to fast-track vaccine development. E.g., in 2020, AI identified a new antibiotic against drug-resistant bacteria, a discovery that would have taken years otherwise.
  • Efficiency in Healthcare Delivery: AI chatbots are handling routine patient queries, reducing paperwork and administrative bottlenecks, thereby reducing waiting times and freeing up doctors for complex cases.
  • Public Health and Early Warning Systems: AI is being used to track disease outbreaks, analyse wastewater, and model at-risk populations, enabling governments to deploy resources effectively and prevent crises from escalating. During Covid-19, AI models flagged the outbreak weeks before official alerts, proving their utility in crisis prediction. 

Challenges and Ethical Concerns: 

  • Data Localisation and Suitability: Most AI systems are trained on Western data and often fit Indian contexts poorly. E.g., a skin-cancer model trained mostly on lighter skin tones may misdiagnose patients with darker skin.
  • Accountability and Transparency: AI algorithms often work as “black boxes”, producing results without clear reasoning. Incorrect or biased diagnoses could harm patients if left unchecked. 
  • Equity and Accessibility: While AI can democratise access to healthcare, it risks widening inequalities if available only to wealthy urban populations.
  • Balancing Innovation and Regulation: Excessive regulations, as seen in parts of Europe, may stifle innovation and delay adoption of AI in healthcare.

Way Forward

  • India must train AI models on diverse datasets that reflect India’s genetic, environmental, and socio-economic realities.
  • Regulations must enforce transparency in AI decision-making, independent audits, and strict ethical guidelines to ensure accountability.
  • The government must ensure AI-enabled tools and treatments reach rural populations and marginalised groups. Access should not be skewed by geography, income, or literacy.
  • India must strike a balance by fostering innovation while ensuring ethical safeguards and equitable access.

AI offers India an unprecedented opportunity to transform India’s healthcare system. However, challenges related to data localisation, accountability, ethical use, and equitable access must be addressed.

PARAM-1: India’s Foundational LLM 

Context: In July 2025, the government-backed BharatGen released PARAM-1, a bilingual Large Language Model (LLM) built from scratch to reflect India’s linguistic and cultural realities, focusing on Hindi and English.

Relevance of the Topic: Prelims: Key Features of PARAM-1.

Foundational AI

  • Foundational AI: Large-scale AI models trained on very large datasets and over which numerous specific applications can be built, including generative AI. 
  • Large Language Models (LLMs) are a type of foundational AI model trained on vast datasets, typically with a billion or more parameters. E.g., AI-powered tools like ChatGPT, Gemini, Perplexity, DeepSeek, Grok. 
  • Small Language Models (SLMs) are compact AI systems with far fewer parameters than LLMs (typically from a few million to a few billion). They are cheaper to run and maintain, and ideal for specific use cases. 

In its mission to build open source Large Language Models (LLMs) for Indian researchers and developers, BharatGen, the government-backed AI Initiative, has released a LLM called PARAM-1.

About PARAM-1 

  • PARAM-1 is a 2.9-billion-parameter bilingual foundational AI model developed by the BharatGen team. 
  • It reflects India’s linguistic and cultural realities, with 25% of its training data in Hindi and the rest in carefully curated English. 

Key Features: 

  • Bilingual focus: Trained in Hindi and English, incorporating government documents, literary works, educational and community content.
  • Script-aware Tokeniser: 
    • Tokeniser is the first step in how a language model processes text. It breaks sentences into smaller units, or tokens, which the model can interpret.
    • Standard tokenisers (built for English) perform poorly on Indian scripts, splitting words into too many fragments. 
    • PARAM-1 addresses this with a script-aware tokeniser that recognises Hindi and other Indic scripts, creating fewer and more meaningful tokens. This improves both accuracy and efficiency (a toy illustration follows this feature list).
  • Three-phase training focuses on language fluency, factual consistency, and long-context understanding. This allows the model to gradually develop fluency, retain factual information, and improve performance on tasks that require reading and reasoning over longer texts.
  • India-centric evaluation: Tested on Indian benchmarks like MILU (competitive exam questions) and SANSKRITI (cultural knowledge), besides global ones like MMLU and ARC.
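
To illustrate the tokeniser point above, here is a toy Python comparison. It assumes, purely for illustration, that a tokeniser with no Devanagari coverage falls back to raw UTF-8 bytes while a script-aware one can keep whole Hindi words together; PARAM-1's actual tokeniser is more sophisticated than either.

```python
# Illustrative sketch of why a script-aware tokeniser matters (an assumption-
# laden toy comparison, not PARAM-1's actual tokeniser). A tokeniser with no
# Devanagari coverage often falls back to UTF-8 bytes, exploding the token
# count; a script-aware tokeniser can keep whole Hindi words (or sub-words)
# together.

sentence = "भारत एक विशाल देश है"   # "India is a vast country"

# Byte-level fallback: every Devanagari character becomes 3 UTF-8 bytes.
byte_tokens = list(sentence.encode("utf-8"))

# Script-aware behaviour (very roughly approximated here by whitespace words).
word_tokens = sentence.split()

print(f"Byte-fallback tokens : {len(byte_tokens)}")   # dozens of fragments
print(f"Script-aware tokens  : {len(word_tokens)}")   # a handful of units
```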

Limitations:

  • It currently supports only Hindi and English, excluding India’s wider linguistic diversity. This raises concerns over the model’s inclusivity, especially in a country where linguistic identity often intersects with regional politics and access to services.

AI in Warfare and India’s Preparedness

Context: According to a research report by the Delhi-based Centre for Joint Warfare Studies, an autonomous think tank, Artificial Intelligence (AI) is set to rapidly transform the landscape of warfare, with deeptech being deployed for tasks ranging from autonomous weapons systems to intelligence gathering and cybersecurity. 

Relevance of the Topic: Mains: How AI is transforming the landscape of warfare and India’s Preparedness.

Use cases of AI in warfare include:

  • Development of autonomous weapons systems that can select and engage targets without human intervention.
  • Analysing vast amounts of data to identify potential threats. 
  • Tracking enemy movements, and forecasting future attacks.
  • Creating realistic battlefield simulations to enable field evaluation trials as well as allowing soldiers to train in virtual environments to prepare for real-world combat scenarios.

Countries around the world have started integrating AI in Warfare

China 

  • China is using AI models to improve artillery systems by reducing the time between shots and increasing accuracy.
  • Chinese military drones are equipped with generative AI that allows them to detect and destroy enemy radars automatically.
  • China combines AI across land, air, sea, space, cyberspace, and electromagnetic spectrum. This gives it a strong edge in multi-domain operations.

Pakistan

  • Pakistan’s Air Force set up a Centre of Artificial Intelligence and Computing (CAIC) in 2020.
  • During Operation Sindoor, Pakistan likely received live satellite imagery and data from China. AI may have been used to process this data quickly, helping Pakistan track Indian troop movements in real time.

Ukraine

  • Ukraine has equipped its long-range drones with AI that can autonomously identify terrain and military targets, using them to launch successful attacks against Russian refineries. 

Israel

  • Israel has also used its Lavender AI system in the conflict in Gaza to identify 37,000 Hamas targets. As a result, the current conflict between Israel and Hamas has been dubbed the first “AI war”.

India

  • The Defence Research and Development Organisation (DRDO) established the Centre for Artificial Intelligence and Robotics (CAIR) in 1986, with the aim of developing autonomous technologies for military use.
  • CAIR has worked on a wide range of applications including combat systems, path planning, sensor integration, target identification, underwater mine detection, patrolling, logistics, and localisation.

However, despite this early start, India faces several key challenges in effectively harnessing AI for modern warfare.


Challenges for India in AI Warfare

  • Lack of Energy Infrastructure
    • AI technologies need continuous, high-power electricity for data centres and simulations. India has low nuclear power capacity (around 7.5 GW), much less than countries like South Korea.
    • Overdependence on solar and wind energy without backup storage makes the power grid unstable.
  • Inadequate AI Infrastructure: India lacks large-scale, defence-specific AI data centres. Limited access to high-performance computing for real-time battlefield analysis and decision-making.
  • Fragmented Research & Development: Agencies like DRDO’s CAIR have been working since 1986, but progress has been slow. No large-scale, coordinated national mission focused on AI for defence.
  • Weak Civil-Military Fusion: Unlike China or the U.S., India does not have strong collaboration between private tech firms, startups, academia, and the military. Defence R&D is mostly government-driven, limiting innovation speed.
  • Lag in C4ISR, Space, Cyber, and Electromagnetic Domains: India lags behind China in C4ISR capabilities (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance), particularly in the domains of space, cyberspace, and the electromagnetic spectrum.
  • Lack of National Policy or defence doctrine on AI integration: No clear national policy or defence doctrine on AI integration in military strategy. Regulatory and bureaucratic delays slow down tech adoption in defence forces.
  • Limited Private Sector Participation: Private sector involvement in nuclear energy and AI defence systems is limited. Without private innovation and investment, India cannot scale up AI infrastructure quickly.

AI is transforming modern warfare into an “agentic” battlefield, where autonomous systems, rapid decision-making, and multi-domain dominance decide outcomes.

India’s domestic AI Foundational Model

Context: As part of its ambitious IndiaAI Mission, the government has selected three more start-ups (SoketAI, Gnani AI and Gan AI) to build indigenous AI foundational models.
This is in addition to the selection of Sarvam AI to build India’s indigenous foundational large language model (LLM), an open-source 120-billion-parameter AI model. The start-up has launched two models: the Sarvam-1 model (2 billion parameters) and the Sarvam-M model (24 billion parameters), which has hybrid reasoning capabilities. 

Relevance of the Topic: Prelims: Key Terms related to Artificial Intelligence; India and AI: Government Efforts. 

Key Terms related to Artificial Intelligence

  • Artificial Intelligence:
    • AI is the capability of a machine to imitate intelligent human behaviour and perform complex tasks similar to how humans solve problems. 
    • E.g., Perform cognitive tasks like thinking, perceiving, learning, problem-solving and decision-making. 
  • Machine Learning: 
    • Machine learning techniques, including artificial neural networks (ANNs), are used to achieve the goals of AI.
    • Machine learning is a subfield of AI, and ANNs are a specific type of machine learning algorithm that uses interconnected nodes to learn from data.
  • Artificial Neural Networks (ANNs):
    • Algorithms that are inspired by the structure and workings of human brains, and have the capability to identify and learn patterns in data. 
    • Breakthroughs in ANNs have enabled the development of LLMs and AI-tools like AlphaFold (An AI tool that can predict protein structures).  
  • Artificial General Intelligence (AGI): 
    • Machines capable of ‘thinking’ and ‘acting’ autonomously across a wide range of tasks through a process of self-learning, rather than being limited to narrow, pre-defined functions.
  • Foundational AI: 
    • Large-scale AI models that are trained on very large datasets and over which numerous specific applications can be built, including generative AI. 
  • Generative AI: 
    • AI models that use machine learning algorithms to create new and original content, such as images, text, code, audio, or even video with the help of natural-language prompts. E.g., DALL-E for image generation, ChatGPT for text generation. 
    • Generative AI models can be Large Language Models (LLMs) or Small Language Models (SLMs).
  • Large Language Models (LLMs):
    • LLMs are a type of foundational AI model trained on vast datasets, typically with a billion or more parameters. 
    • LLMs have shown an exceptional proficiency to understand and interact in human languages in a meaningful way.  
    • E.g., AI-powered tools like ChatGPT, Gemini, Perplexity, DeepSeek, Grok. 
  • Small Language Models (SLMs): 
    • SLMs are compact AI systems with far fewer parameters than LLMs (typically from a few million to a few billion).
    • Cheaper to run and maintain, and ideal for specific use cases. 

India’s challenges to build its own Foundational Model

India has been focused on building AI-based applications for specific work, like in healthcare or drug discovery. But it aims to develop its own foundational model. 

  • Building foundational models is an extremely resource-heavy and expensive exercise.
    • It involves massive computational infrastructure, enabled through specially designed state-of-the-art chips called Graphics Processing Units (GPUs).
      • Training advanced deep learning models demands substantial GPU clusters (thousands of GPUs running in hyperscale data centres as large as one million square feet) and high-performance computing (HPC) facilities.
      • GPUs are currently in high demand and short supply; the IndiaAI Mission seeks to procure at least 10,000 GPUs, and operating such clusters requires high skill and expertise.
    • The model has to be trained on very large datasets, which consumes an enormous amount of electricity. E.g., LLMs like GPT-3 consumed nearly 1,300 megawatt-hours (MWh) of power (a rough sizing sketch follows this list). 
  • Building applications on top of other countries’ models can bring in layers of vulnerabilities. E.g.,
    • Models trained on global datasets often lack local nuances and can insert foreign biases, thereby producing unwanted or erroneous results.
    • In applications related to defence or national security, a foreign model always carries potential dangers of sabotage or leaking sensitive data.
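
A rough, back-of-envelope sketch of the scale involved, using the commonly cited approximation that training compute is about 6 x parameters x training tokens. The token count, per-GPU throughput, utilisation, and power draw below are illustrative assumptions, not official IndiaAI Mission figures.

```python
# Back-of-envelope sketch of why foundational-model training is GPU- and
# power-hungry. Uses the widely cited approximation FLOPs ~ 6 x parameters x
# training tokens; the GPU throughput, utilisation, and power figures below
# are illustrative assumptions only.

params = 120e9          # a 120-billion-parameter model (as targeted above)
tokens = 2e12           # assumed training corpus of ~2 trillion tokens
flops_needed = 6 * params * tokens

gpu_flops = 300e12      # assumed ~300 TFLOPS sustained per high-end GPU
utilisation = 0.4       # assumed real-world utilisation
gpus = 10_000           # the procurement figure cited above

seconds = flops_needed / (gpus * gpu_flops * utilisation)
days = seconds / 86_400

gpu_power_kw = 0.7      # assumed ~700 W per GPU
energy_mwh = gpus * gpu_power_kw * (seconds / 3600) / 1000

print(f"Estimated training time : ~{days:,.0f} days on {gpus:,} GPUs")
print(f"Estimated GPU energy    : ~{energy_mwh:,.0f} MWh")
```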

India and AI: Government Efforts: 

  • For India, work on AI is at a relatively nascent stage. In 2024, India launched the Rs 10,000-crore IndiaAI Mission to build capabilities in AI. 
  • India aims to build its own LLM within 10 months (by the end of 2025).
    • The central government had received at least 67 proposals to build the India-specific models.
    • A high-level technical committee will evaluate the proposals. 
    • The intellectual property of the models will remain with the developing entity, with a provision for a perpetual licence allowing the government to use the models for public purposes.
  • The government has also selected 10 companies to supply 18,693 graphics processing units or GPUs (high-end chips needed to develop machine learning tools) crucial for developing a foundational model.
    • The initial aim of the IndiaAI Mission was to procure 10,000 GPUs.
  • The government will launch a common compute facility from where startups and researchers can access the computing power. To ease access to these services, the government will give a 40% subsidy to end users on the total price.

Also Read: Inclusive AI: AI Action Summit 2025 

India needs to create centralised AI infrastructure, allocate substantial funding (including for procuring GPUs), and foster public-private and industry-academia participation. 

Rise of AI Powered Autonomous Satellites

Context: The rise of AI-powered autonomous satellites has the potential to transform space operations, but it also creates new legal, ethical, and geopolitical challenges. 

Autonomous Satellites

  • Autonomous satellites are designed to perform their functions with minimal to no human intervention by utilising a suite of advanced technologies and algorithms.
  • Onboard intelligence in satellites, known as satellite edge computing, allows them to analyse their environment, make decisions, and act autonomously, much like self-driving cars on the ground.
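
A minimal sketch of what such onboard, "edge" decision-making might look like in software, assuming hypothetical risk thresholds and a simple audit log; it is not any real mission's flight software.

```python
# Minimal sketch of an onboard "satellite edge computing" decision loop of the
# kind described above: sense, decide, act with minimal ground intervention.
# The thresholds, data structure, and logging format are illustrative
# assumptions, not any real mission's flight software.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConjunctionAlert:
    object_id: str
    miss_distance_km: float       # predicted closest approach
    collision_probability: float

def decide(alert: ConjunctionAlert) -> str:
    """Autonomous rule: manoeuvre only when risk is clearly high,
    otherwise keep monitoring or defer to ground control."""
    if alert.collision_probability > 1e-4 or alert.miss_distance_km < 1.0:
        return "EXECUTE_AVOIDANCE_MANOEUVRE"
    if alert.collision_probability > 1e-6:
        return "REQUEST_GROUND_REVIEW"
    return "CONTINUE_MONITORING"

def log_decision(alert: ConjunctionAlert, action: str) -> dict:
    """Record every autonomous action so it can be audited later."""
    return {
        "time_utc": datetime.now(timezone.utc).isoformat(),
        "object": alert.object_id,
        "p_collision": alert.collision_probability,
        "action": action,
    }

if __name__ == "__main__":
    alert = ConjunctionAlert("DEBRIS-2042", miss_distance_km=0.6,
                             collision_probability=3e-4)
    action = decide(alert)
    print(log_decision(alert, action))
```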

Applications of Autonomous Satellites

  • Automated space operations: Independent manoeuvring in space to perform tasks like docking, inspections, in-orbit refuelling, and debris removal. 
  • Self-diagnosis and repair: Monitoring their own health, identifying faults, and executing repairs without human intervention.
  • Route planning: Optimising orbital trajectories to avoid hazards and obstacles or to save fuel.
  • Targeted geospatial intelligence: Detecting disasters and other events of interest in real-time from orbit and coordinating with other satellites intelligently to prioritise areas of interest.
  • Combat support: Providing real-time threat identification and potentially enabling autonomous target tracking and engagement, directly from orbit.

Challenges associated with Autonomous Satellites: 

As satellites become more intelligent and autonomous, the stakes rise geometrically: 

  • AI Hallucinations and Misidentification of Threats: A hallucinating AI system could misclassify a harmless commercial satellite as hostile and respond with defensive actions, potentially escalating tensions between nations.
  • Legal Vacuum and Liability Ambiguities: Existing treaties like the Outer Space Treaty (1967) and Liability Convention (1972) are premised on human control. If an autonomous satellite causes damage or collision, it is unclear who bears legal responsibility- the state, private operator, software developer, or the AI itself. This creates a normative gap in international law complicating enforcement and redressal.
  • Geopolitical and Security Risks: AI’s dual-use capabilities (i.e., civilian + military) create misinterpretation risks in geopolitically sensitive contexts. 
  • Ethical Concerns: AI satellites collect enormous volumes of surveillance and environmental data. Without safeguards, this data can be misused for military, commercial, or surveillance purposes.

Outer Space Treaty (1967):

  • Formally the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies, it is the foundational international treaty governing space exploration and use.
  • Opened for signature in 1967, it establishes several key principles, including:
    • prohibition of weapons of mass destruction in space
    • commitment to peaceful uses of space
    • outer space is the province of all mankind. 
  • India ratified the Treaty in 1982.

Key Articles of the Treaty: 

  • Article I: Outer space shall be free for exploration and use by all states; access must be on the basis of equality.
  • Article II: No state can claim sovereignty over outer space or celestial bodies.
  • Article IV: Prohibits the placement of nuclear weapons or any weapons of mass destruction in outer space.
  • Article VI: States are responsible for national space activities, including those by non-governmental entities. Activities must be authorised and continually supervised by the state.
  • Article VII: States are internationally liable for any damage caused by their space objects to other states or their property.

Convention on International Liability for Damage Caused by Space Objects (1972): 

  • It elaborates on liability provisions in Article VII of the Outer Space Treaty.
  • India is a signatory and has ratified the Liability Convention. 

Key Provisions:

  • Absolute Liability: Regardless of fault, launching states are strictly liable for damage caused by their space objects on Earth or to aircraft in flight. 
  • Fault-Based Liability: For damages occurring in outer space, liability is based on proving fault.
  • Joint Liability: If multiple states are involved in launching a space object, they are jointly and severally liable.

Claims Mechanism: Claims must be presented through diplomatic channels, and a claims commission may be established for disputes.

Way Forward

  • AI-driven satellite systems must be tested and certified by neutral international bodies to ensure safety and predictability. Bodies like the United Nations Committee on the Peaceful Uses of Outer Space (UNCOPUOS) or the International Organization for Standardization (ISO) could:
    • Test AI response to critical scenarios like collision risk, sensor malfunctions, or communication failures.
    • Conduct adversarial testing by feeding unexpected or manipulated data to check how AI responds under stress.
    • Mandate decision-logging mechanisms so that every autonomous action, especially manoeuvres, can be audited later for accountability.
  • Adopting pooled insurance and strict liability regimes similar to aviation and maritime sectors can ensure fair, predictable compensation mechanisms without lengthy legal disputes.
  • Formulation of clear international rules on how AI satellites collect, store, and share data, to protect privacy and prevent misuse.

With thousands of autonomous systems projected to operate in low-earth orbit by 2030, the probability of collisions, interference or geopolitical misinterpretation is rising. Autonomous satellites demand a new regulatory architecture that balances innovation with responsibility, and sovereignty with global cooperation. 

Practice MCQ: 

Q. Consider the following statements with reference to the Outer Space Treaty (1967):

1. It prohibits any nation from claiming sovereignty over outer space.

2. It requires that all space activities be authorised and continually supervised by the state.

3. It explicitly regulates the use of artificial intelligence in space missions.

Which of the above statements is/are correct?

(a) 1 and 2 only

(b) 1 and 3 only

(c) 2 and 3 only

(d) 1, 2 and 3

Answer: (a) 1 and 2 only

Mains Practice Question: 

Q. Explain how the increasing autonomy of satellites through AI poses new challenges to space safety and security. What regulatory and technical frameworks are needed to address them? 

SMRs Can Bridge the Rising Energy Demand of AI 

Context: Generative Artificial Intelligence (AI) has eased access to art and reduced the time and effort required to complete certain tasks. But this ease comes at a significant energy cost. Small modular nuclear reactors could be the energy answer to support booming AI and data infrastructure. 

Relevance of the Topic: Mains: Artificial Intelligence: Challenges 

Energy cost of Artificial Intelligence: 

  • Data centres (the backbone of AI operations) consume enormous amounts of electricity and contribute about 1% of global greenhouse gas emissions. 
  • A simple search request made through ChatGPT (an AI-based virtual assistant) consumes roughly 10 times the electricity of a Google Search (see the rough arithmetic after this list). 
  • Training advanced AI models can emit up to 552 tonnes of carbon dioxide equivalent, comparable to the annual emissions of dozens of cars. 
  • Projections indicate that data centres could account for 10% of the world’s total electricity usage by 2030, the majority of it drawn from fossil fuel sources. 
  • India currently has sufficient capacity to generate electricity for its own domestic AI needs. Yet, with increasing adoption and ambitions, proactive planning is imperative.
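
The rough arithmetic behind the "10 times a Google Search" comparison above, assuming commonly cited ballpark per-query energy figures and a hypothetical daily query volume; exact numbers vary widely across studies.

```python
# Rough arithmetic behind the "10x a Google search" comparison above. The
# per-query energy figures are commonly cited ballpark estimates and the daily
# query volume is an assumption for illustration only.

google_wh_per_query = 0.3          # assumed ~0.3 Wh per conventional web search
chatbot_wh_per_query = 3.0         # ~10x, per the comparison cited above
queries_per_day = 100_000_000      # hypothetical daily chatbot query volume

daily_mwh = chatbot_wh_per_query * queries_per_day / 1e6   # Wh -> MWh
extra_vs_search = (chatbot_wh_per_query - google_wh_per_query) * queries_per_day / 1e6

print(f"Chatbot queries alone : ~{daily_mwh:,.0f} MWh per day")
print(f"Extra energy vs search: ~{extra_vs_search:,.0f} MWh per day")
```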

Potential of SMRs in Bridging the Energy Demand

Leveraging nuclear energy, specifically Small Modular Reactors (SMRs), could be a possible alternative to support booming AI and data infrastructure. This is possible because: 

  • SMRs are designed to be compact and scalable. Their flexibility allows them to be deployed closer to high-energy-demand facilities, such as data centres, which require consistent and reliable power to manage vast amounts of computational workloads. In contrast, traditional large-scale nuclear power plants demand extensive land, water, and infrastructure.
  • SMRs can provide 24x7, zero-carbon baseload electricity, making them an ideal alternative to renewable sources such as solar and wind by ensuring a stable energy supply regardless of weather conditions.
  • Their modular construction reduces construction time and costs when compared to conventional nuclear plants, enabling faster deployment to meet the rapidly growing demands of AI and data-driven industries. 
  • SMRs offer enhanced safety features, with passive safety systems that rely on natural phenomena to cool the reactor core and safely shut down, reducing the risk of accidents. This makes them more acceptable and easier to integrate into regions where large-scale nuclear facilities would face opposition. 
  • The ability of SMRs to operate in diverse environments, from urban areas to remote locations, also supports the decentralisation of energy production, reducing transmission losses and enhancing grid resilience.

In India's case, the cost of electricity from SMRs is predicted to fall from ₹10.3 to ₹5 per kWh once the reactors are operational, which would be lower than the average cost of electricity.
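
As a purely illustrative calculation of what that tariff range means for an AI facility, the sketch below assumes a hypothetical 50 MW data centre running near-continuously; the load figures are assumptions, not projections from the article.

```python
# Illustrative arithmetic on the tariff range mentioned above (Rs 10.3/kWh
# early in an SMR's life, falling towards Rs 5/kWh). The data-centre size and
# load factor are hypothetical assumptions chosen only to show the scale of
# potential savings.

capacity_mw = 50            # hypothetical AI data-centre load
load_factor = 0.9           # assumed near-constant (baseload) operation
hours_per_year = 8_760

annual_kwh = capacity_mw * 1_000 * load_factor * hours_per_year

for tariff in (10.3, 5.0):  # Rs per kWh, per the projection cited above
    cost_crore = annual_kwh * tariff / 1e7     # 1 crore = 10^7 rupees
    print(f"At Rs {tariff:>4}/kWh: ~Rs {cost_crore:,.0f} crore per year")
```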

Challenges in adoption of SMRs: 

  • Significant policy shifts are required to create a robust regulatory framework that addresses safety, waste management and public perception. 
  • Adoption requires substantial upfront investment, as the technology is still maturing and may face issues of cost competitiveness when compared to established energy sources. 
  • Coordinating SMR deployment with existing renewable energy initiatives will require careful planning to maximise synergies while minimising redundancy. 

Sustainable AI Adoption

  • AI companies need to be transparent about their energy consumption, i.e., disclose their environmental impact. 
  • Such data would provide further insights on where energy is being used the most, and encourage R&D to create a more sustainable model of AI development.

Also Read: Environmental Impacts of Artificial Intelligence 

The public-private partnership model presents a realistic solution to the challenges of sustainable AI development. By leveraging the strengths of both sectors, this model can facilitate the efficient development of SMRs alongside other forms of renewable energy to support advancements in AI. 

Facial Recognition Technology 

Context: Delhi Police is planning a city-wide rollout of facial recognition technology (FRT) later in 2025. Experts warn that the increasing integration of such technology across platforms may come at a cost.

Relevance of the Topic: Prelims: About Facial recognition technology.

Facial Recognition Technology: 

  • Facial recognition is a cutting-edge biometric technology that identifies or verifies an individual by analysing their facial features. 
  • The algorithm-based technology creates a unique digital map of a person’s face by detecting and analysing facial features such as the distance between the eyes and the shape of the jaw. This faceprint is then compared to a database of stored images for identity verification or identification. 

Automated Facial Recognition System (AFRS)

  • AFRS uses a large database containing millions of facial images including those from CCTV footage, social media, and official records. 
  • When an unidentified image is captured (E.g., from a surveillance camera), AFRS uses artificial intelligence to find a matching pattern in the database and identify the person.

There are two types of Matching:

  • 1:1 Verification: Confirms if the face matches a single image (e.g., unlocking your phone).
  • 1:N Identification: Compares the face to an entire database to identify an unknown individual (E.g., identifying suspects in law enforcement). Delhi Police usually use FRT for 1:N identification.
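
A toy sketch of the two matching modes listed above, assuming tiny hypothetical "faceprint" vectors, cosine similarity as the comparison metric, and the 80% threshold mentioned later in this article; real FRT systems use learned, high-dimensional embeddings.

```python
# Toy sketch of 1:1 verification vs 1:N identification using cosine similarity
# between face "embeddings". The vectors, database, and the 0.8 (80%) threshold
# mirror the description in this article but are purely illustrative.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional "faceprints" (real systems use hundreds of dims).
database = {
    "person_A": np.array([0.9, 0.1, 0.3, 0.5]),
    "person_B": np.array([0.2, 0.8, 0.6, 0.1]),
}
probe = np.array([0.88, 0.15, 0.28, 0.52])   # face captured by a camera

# 1:1 verification - does the probe match one claimed identity?
verified = cosine_similarity(probe, database["person_A"]) >= 0.8
print("1:1 verification against person_A:", verified)

# 1:N identification - search the whole database for the best match.
best_id, best_score = max(
    ((pid, cosine_similarity(probe, emb)) for pid, emb in database.items()),
    key=lambda item: item[1],
)
if best_score >= 0.8:
    print(f"1:N identification: {best_id} (similarity {best_score:.2f})")
else:
    print("1:N identification: no match above threshold; needs corroboration")
```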

Limitations of FRT

  • Accuracy Issue: The system may wrongly identify someone (false positive) or fail to recognise the correct person (false negative). Accuracy drops with poor angles, low light, or occlusions like masks or sunglasses.
  • Limited Datasets: Studies have shown higher error rates for women, children, and people with darker skin tones, especially when systems are trained on datasets lacking diversity. Delhi Police treats matches above 80% similarity as positive results, while matches below 80% are treated as false positives that require additional corroborative evidence. 
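
A back-of-envelope illustration of why thresholds and corroboration matter at scale, assuming a hypothetical per-comparison false-match rate and database size; neither figure is from the article.

```python
# Rough arithmetic behind the accuracy concern above. The false-match rate and
# database size are hypothetical assumptions; they only show how 1:N searches
# against large galleries multiply false positives.

false_match_rate = 0.001     # assumed 0.1% chance of a wrong match per comparison
gallery_size = 1_000_000     # hypothetical face database of 1 million images

expected_false_matches = false_match_rate * gallery_size
print(f"Expected false matches per 1:N search: ~{expected_false_matches:,.0f}")
# Even a seemingly small per-comparison error rate can flag many innocent
# people in every database-wide search, which is why sub-threshold matches
# need corroborative evidence.
```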

Facial Recognition System in Delhi: 

  • Since 2018, the Delhi Police has been using the Facial Recognition System (Israeli software) to monitor high-security events in the Capital. 
  • FRS vans are equipped with cameras, computers, and automatic number plate readers (configured to scan faces instead of licence plates) and are stationed in different districts every day, scanning faces and alerting the police to potential hits.
  • Apart from fixed cameras, Prakhar vans with mobile cameras scan crowds and crime-prone areas. 

Safe City Project

  • Delhi Police plans to expand FRS under the Safe City Project with 10,000 high-resolution CCTV cameras across the capital, whose live feed will be beamed directly to a command centre at the police headquarters. 
  • Implementation: Centre for Development of Advanced Computing, under the Ministry of Electronics and Information Technology. 
  • C-DAC will be responsible for setting up the C4i centre (Integrated Command, Control, Communication & Computer Centre) where integrated video feeds will be beamed. These feeds will be analysed in real time, with AI models capable of identifying over 20 faces in a crowd, even under partial visibility or disguised appearances.

However, its use raises serious concerns about privacy and misuse. Without a clear legal framework, it can have a chilling effect on civil liberties, and there is a risk of misidentifying individuals, profiling them, and violating fundamental rights.