India Introduces Regulations to Combat Deepfakes and AI-Generated Misinformation

New Delhi: In a decisive step to safeguard India’s digital landscape from the escalating dangers of artificial intelligence manipulation, the Union Government has rolled out comprehensive updates to its intermediary guidelines. The Ministry of Electronics and Information Technology (MeitY) formally notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 on February 10, establishing the country’s first dedicated legal framework for synthetically generated information (SGI).

These amendments build directly on the foundational 2021 rules while introducing targeted measures against deepfakes, deceptive impersonations, and other AI-crafted falsehoods that threaten public trust, personal dignity, and national security. The provisions take effect on February 20, 2026, giving platforms a brief preparation period amid ongoing global discussions about responsible AI deployment.

Image caption: From February 20, 2026, every synthetic video, image, or audio clip in India must carry a clear AI label and traceable origin data, and unlawful material must come down within three hours of a valid order.

Pioneering Legal Definition of Synthetic Content

For the first time under Indian law, authorities have codified a clear definition of synthetically generated information. The rules classify SGI as any audio, visual, or combined audio-visual material artificially or algorithmically produced, generated, modified, or altered through computer resources. The defining characteristic is that the output appears convincingly real, authentic, or genuine—portraying people or occurrences in ways indistinguishable from actual human beings or lived events.

This targeted scope primarily addresses harmful applications, such as fabricated videos showing public figures making false statements or manipulated audio clips used for fraud. Crucially, the framework includes deliberate exclusions to protect everyday and constructive uses. Automatic smartphone camera adjustments, accessibility features like screen readers converting text to speech, scholarly research datasets, model training materials, and minor non-substantive edits or corrections fall outside the regulated category.

Government sources indicate the finalized wording reflects refinements from stakeholder consultations following an October 2025 draft, narrowing the scope to focus on genuine risks without unnecessarily hampering technological progress or creative expression.

Compulsory Disclosure and Technical Traceability Mechanisms

Central to the new regime is the requirement for transparent identification of synthetic material. All qualifying SGI must feature prominent, easily noticeable disclosures alerting viewers to its artificial origin—whether through visible overlays on images and videos or audible announcements in audio files.
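
What a "prominent" visual disclosure looks like in practice is left to platforms. Below is a minimal, illustrative sketch in Python, using the Pillow imaging library, of stamping a visible banner onto an image; the label text, banner size, and placement are assumptions of this example, not anything the rules prescribe.

```python
# Minimal sketch, assuming Pillow is installed (pip install Pillow).
# Stamps a visible "AI-generated" banner onto an image; the label text
# and layout are illustrative choices, not mandated by the rules.
from PIL import Image, ImageDraw

def add_visible_sgi_label(src_path: str, dst_path: str,
                          label: str = "AI-GENERATED CONTENT") -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # A filled banner along the bottom edge keeps the text legible
    # regardless of the underlying image.
    banner_h = max(24, img.height // 12)
    draw.rectangle((0, img.height - banner_h, img.width, img.height),
                   fill=(0, 0, 0))
    draw.text((10, img.height - banner_h + 6), label, fill=(255, 255, 255))
    img.save(dst_path)
```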

Platforms must further incorporate persistent metadata or equivalent technical provenance tools into the content wherever feasible. These embedded markers enable tracing the material’s creation process, tools employed, and origin points, aiding investigations into misuse.
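
As an illustration of what embedded provenance might look like, the sketch below writes a simple record into PNG text chunks with Pillow. Production systems would more plausibly adopt an industry standard such as C2PA Content Credentials; the "sgi:" field names here are hypothetical.

```python
# Minimal sketch: embedding a provenance record as PNG text chunks.
# The "sgi:" keys and record fields are hypothetical examples; real
# deployments would likely use a standard like C2PA Content Credentials.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path: str, dst_path: str,
                     generator: str, model: str) -> None:
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("sgi:is_synthetic", "true")
    meta.add_text("sgi:provenance", json.dumps({
        "generator": generator,  # tool that produced or edited the content
        "model": model,          # model identifier, if applicable
    }))
    img.save(dst_path, pnginfo=meta)  # dst_path should end in .png

# Reading the record back later:
#   Image.open(dst_path).text
#   -> {"sgi:is_synthetic": "true", "sgi:provenance": "{...}"}
```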

A firm prohibition prevents intermediaries from allowing the stripping, alteration, or concealment of applied labels and metadata. This permanence aims to maintain long-term accountability even if content spreads across services.

Significant social media intermediaries (those meeting the user threshold criteria, set at 50 lakh registered users in India under the 2021 framework) face additional responsibilities. They must solicit explicit user declarations about whether uploaded material qualifies as SGI. Absent such confirmation for suspected synthetic posts, platforms must apply appropriate markings themselves or restrict access, especially for non-consensual or harmful instances.
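
This declaration-and-fallback logic reduces to a small decision rule. The sketch below expresses it with hypothetical names; the looks_synthetic flag stands in for whatever classifier score or human-review signal a platform actually uses.

```python
# Minimal sketch of the declaration flow described above. All names are
# hypothetical; "looks_synthetic" stands in for a platform's own
# detection signal (classifier score, human review, etc.).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content_id: str
    user_declared_sgi: Optional[bool]  # None = user made no declaration

def handle_upload(upload: Upload, looks_synthetic: bool) -> str:
    if upload.user_declared_sgi:
        return "publish_with_sgi_label"  # user declared it synthetic
    if looks_synthetic:
        # No (or negative) declaration but the content appears synthetic:
        # label it anyway or restrict access pending review.
        return "label_or_restrict"
    return "publish"  # ordinary content
```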

While an earlier proposal mandated covering a fixed percentage of visual surface area with disclosures, the notified version grants platforms reasonable flexibility in achieving “prominent” visibility, responding to industry concerns about aesthetic and functional impacts.

Accelerated Removal Obligations and Strict Prohibitions

The amendments dramatically compress compliance windows for content moderation. Platforms previously enjoyed up to 36 hours to execute takedown directives. Now, upon receipt of valid orders from courts or designated government officials, intermediaries must disable access to unlawful material within three hours.
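
The arithmetic of the new window is simple but operationally significant. A minimal sketch, with hypothetical names, of tracking the deadline from receipt of an order:

```python
# Minimal sketch: computing the compliance deadline under the new
# three-hour window (the earlier regime allowed up to 36 hours).
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)  # previously timedelta(hours=36)

def takedown_deadline(order_received_at: datetime) -> datetime:
    return order_received_at + TAKEDOWN_WINDOW

received = datetime(2026, 2, 21, 10, 0, tzinfo=timezone.utc)
print(takedown_deadline(received))  # 2026-02-21 13:00:00+00:00
```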

Particularly urgent categories—including child sexual abuse material (CSAM), non-consensual intimate or nude imagery (NCII), forged documents used misleadingly, and synthetic impersonations designed to deceive—trigger even swifter action in specified scenarios, though the core government/court directive standard remains three hours.

Intermediaries are explicitly barred from hosting or facilitating prohibited synthetic categories. Violations expose platforms to loss of safe harbour immunity under Section 79 of the Information Technology Act. The rules specify that deliberate permission, encouragement, or inaction regarding rule breaches equates to failure in due diligence, stripping liability protections and opening avenues for direct legal accountability.

Grievance resolution timelines have also tightened: the window for resolving user complaints drops from 15 days to seven.

Driving Forces Behind the Regulatory Push

Officials emphasize multiple pressing imperatives. Proliferation of indistinguishable fake media steadily undermines confidence in online information sources, potentially swaying opinions, disrupting elections, and fracturing social cohesion. The alarming increase in AI-facilitated CSAM and NCII demands immediate, traceable interventions to support law enforcement and victim protection.

Establishing a statutory category for SGI equips authorities with precise tools for mandating disclosures, enforcing rapid removals, and pursuing accountability. These measures complement existing laws like the Digital Personal Data Protection Act, 2023 (addressing data exploitation risks) and relevant sections of the Bharatiya Nyaya Sanhita covering forgery and misinformation.

Ongoing advisories from MeitY and active surveillance by the Indian Cyber Crime Coordination Centre (I4C) further strengthen the ecosystem against AI-enabled threats.

Platform Challenges and User Implications

Major platforms operating in India—including those hosting billions of interactions—must rapidly deploy enhanced detection systems, user declaration flows, labelling automation, and metadata infrastructure. The shortened timelines heighten pressure on moderation teams and algorithmic tools, particularly given India’s linguistic diversity and vast content volume.

Non-compliance carries substantial risks, from litigation exposure to potential operational restrictions. Users creating or sharing synthetic material will need to provide accurate declarations, while everyday consumers gain clearer indicators distinguishing genuine from fabricated posts and faster removal of abusive content.

Some observers express reservations about enforcement practicality, risks of mistaken removals under tight deadlines, and dependence on evolving detection accuracy. Others view the framework as a necessary evolution in digital governance.

A Forward-Looking Approach to AI Accountability

By formalizing obligations around AI-generated content, deepfake labelling, and expedited takedowns, India’s 2026 amendments position the nation as an early mover in balancing innovation with societal safeguards. As implementation begins on February 20—coinciding with important AI policy dialogues—the rules signal a commitment to fostering a more transparent, secure, and trustworthy online environment amid accelerating technological change.

FAQs

1. What exactly are the new rules trying to regulate, and what is “synthetically generated information” (SGI)?

2. What labelling and disclosure requirements apply to AI-generated or synthetic content?

3. How quickly must platforms remove unlawful or harmful synthetic content under the new rules?

4. What happens if social media platforms fail to comply with these new obligations?

5. Why were these rules introduced, and how do they fit into India’s broader approach to AI and online safety?
