New Delhi: In a decisive step to safeguard India’s digital landscape from the escalating dangers of AI-driven manipulation, the Union Government has rolled out comprehensive updates to its intermediary guidelines. The Ministry of Electronics and Information Technology (MeitY) formally notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 on February 10, establishing the country’s first dedicated legal framework for synthetically generated information (SGI).
These amendments build directly on the foundational 2021 rules while introducing targeted measures against deepfakes, deceptive impersonations, and other AI-crafted falsehoods that threaten public trust, personal dignity, and national security. The provisions take effect on February 20, 2026, giving platforms a brief window to prepare amid ongoing global discussions about responsible AI deployment.

Pioneering Legal Definition of Synthetic Content
For the first time under Indian law, authorities have codified a clear definition of synthetically generated information. The rules classify SGI as any audio, visual, or combined audio-visual material artificially or algorithmically produced, generated, modified, or altered through computer resources. The defining characteristic is that the output appears convincingly real, authentic, or genuine—portraying people or occurrences in ways indistinguishable from actual human beings or lived events.
This targeted scope primarily addresses harmful applications, such as fabricated videos showing public figures making false statements or manipulated audio clips used for fraud. Crucially, the framework includes deliberate exclusions to protect everyday and constructive uses. Automatic smartphone camera adjustments, accessibility features like screen readers converting text to speech, scholarly research datasets, model training materials, and minor non-substantive edits or corrections fall outside the regulated category.
Government sources indicate the finalized wording reflects refinements from stakeholder consultations following an October 2025 draft, narrowing the scope to focus on genuine risks without unnecessarily hampering technological progress or creative expression.
Compulsory Disclosure and Technical Traceability Mechanisms
Central to the new regime is the requirement for transparent identification of synthetic material. All qualifying SGI must feature prominent, easily noticeable disclosures alerting viewers to its artificial origin—whether through visible overlays on images and videos or audible announcements in audio files.
Platforms must further incorporate persistent metadata or equivalent technical provenance tools into the content wherever feasible. These embedded markers enable tracing the material’s creation process, tools employed, and origin points, aiding investigations into misuse.
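The rules describe outcomes (a prominent notice plus traceable provenance) rather than any specific format. Purely as an illustration of how a generation tool or upload pipeline might satisfy both requirements for a still image, the Python sketch below uses the Pillow library to stamp a visible notice onto the picture and store a small provenance record in a PNG text chunk; the function name, field names, and schema are hypothetical and not drawn from the rules.

```python
# Illustrative only: the Rules do not mandate any particular format or schema.
import datetime
import hashlib
import json

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_and_tag(src_path: str, out_path: str, generator_name: str) -> None:
    """Stamp a visible synthetic-content notice and embed a provenance record."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)

    # Visible disclosure: a banner across the top of the frame. A real
    # deployment would scale the banner and font so the notice stays
    # "prominent" at any resolution.
    draw.rectangle([0, 0, img.width, 32], fill=(0, 0, 0))
    draw.text((10, 8), "SYNTHETICALLY GENERATED / AI CONTENT", fill=(255, 255, 255))

    # Technical provenance: a small JSON record stored in a PNG text chunk.
    # The chunk key "sgi_provenance" and the fields below are hypothetical.
    provenance = {
        "generator": generator_name,
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source_sha256": hashlib.sha256(open(src_path, "rb").read()).hexdigest(),
    }
    meta = PngInfo()
    meta.add_text("sgi_provenance", json.dumps(provenance))
    img.save(out_path, "PNG", pnginfo=meta)
```

In practice, platforms would more plausibly lean on emerging provenance standards such as C2PA Content Credentials, which are designed to travel with the asset, rather than an ad hoc text chunk like the one above.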
A firm prohibition prevents intermediaries from allowing the stripping, alteration, or concealment of applied labels and metadata. This permanence aims to maintain long-term accountability even if content spreads across services.
Significant social media intermediaries (those meeting prescribed user-threshold criteria) face additional responsibilities. They must obtain explicit user declarations stating whether uploaded material qualifies as SGI. Where no declaration is given for suspected synthetic posts, platforms must apply appropriate markings themselves or restrict access, particularly in non-consensual or harmful cases.
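The rules specify the required outcome (declare, label, or restrict) rather than a workflow. The short sketch below shows one way a platform’s upload pipeline might encode that decision logic; the data fields, the notion of a platform-side detector flag, and the action names are assumptions made for illustration.

```python
# Illustrative decision logic only; not an official or prescribed workflow.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Upload:
    content_id: str
    user_declared_sgi: Optional[bool]  # None means the user made no declaration
    detector_flags_sgi: bool           # hypothetical platform-side classifier output
    appears_harmful: bool              # e.g. suspected NCII or deceptive impersonation


def handle_upload(upload: Upload) -> str:
    """Return the action a platform might take under the declaration rules."""
    if upload.user_declared_sgi:
        return "publish_with_visible_label"
    if upload.user_declared_sgi is None and upload.detector_flags_sgi:
        # No declaration, but the content looks synthetic: label it on the
        # user's behalf, or restrict access if it also looks harmful.
        return "restrict_access" if upload.appears_harmful else "publish_with_visible_label"
    return "publish_as_ordinary_content"


# Example: an undeclared upload the detector flags as synthetic and harmful.
print(handle_upload(Upload("vid-001", None, True, True)))  # restrict_access
```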
While an earlier proposal mandated covering a fixed percentage of visual surface area with disclosures, the notified version grants platforms reasonable flexibility in achieving “prominent” visibility, responding to industry concerns about aesthetic and functional impacts.
Accelerated Removal Obligations and Strict Prohibitions
The amendments dramatically compress compliance windows for content moderation. Platforms previously enjoyed up to 36 hours to execute takedown directives. Now, upon receipt of valid orders from courts or designated government officials, intermediaries must disable access to unlawful material within three hours.
Particularly urgent categories—including child sexual abuse material (CSAM), non-consensual intimate or nude imagery (NCII), forged documents used misleadingly, and synthetic impersonations designed to deceive—trigger even swifter action in specified scenarios, though the core government/court directive standard remains three hours.
Intermediaries are explicitly barred from hosting or facilitating prohibited synthetic categories. Violations expose platforms to loss of safe harbour immunity under Section 79 of the Information Technology Act. The rules specify that knowingly permitting, promoting, or failing to act against rule breaches amounts to a failure of due diligence, stripping liability protections and opening the door to direct legal accountability.
Grievance resolution timelines have also been tightened, with the window for addressing user complaints dropping from 15 days to seven.
Driving Forces Behind the Regulatory Push
Officials emphasize multiple pressing imperatives. Proliferation of indistinguishable fake media steadily undermines confidence in online information sources, potentially swaying opinions, disrupting elections, and fracturing social cohesion. The alarming increase in AI-facilitated CSAM and NCII demands immediate, traceable interventions to support law enforcement and victim protection.
Establishing a statutory category for SGI equips authorities with precise tools for mandating disclosures, enforcing rapid removals, and pursuing accountability. These measures complement existing laws like the Digital Personal Data Protection Act, 2023 (addressing data exploitation risks) and relevant sections of the Bharatiya Nyaya Sanhita covering forgery and misinformation.
Ongoing advisories from MeitY and active surveillance by the Indian Cyber Crime Coordination Centre (I4C) further strengthen the ecosystem against AI-enabled threats.
Platform Challenges and User Implications
Major platforms operating in India—including those hosting billions of interactions—must rapidly deploy enhanced detection systems, user declaration flows, labelling automation, and metadata infrastructure. The shortened timelines heighten pressure on moderation teams and algorithmic tools, particularly given India’s linguistic diversity and vast content volume.
Non-compliance carries substantial risks, from litigation exposure to potential operational restrictions. Users creating or sharing synthetic material will need to provide accurate declarations, while everyday consumers gain clearer indicators distinguishing genuine from fabricated posts and faster removal of abusive content.
Some observers express reservations about enforcement practicality, risks of mistaken removals under tight deadlines, and dependence on evolving detection accuracy. Others view the framework as a necessary evolution in digital governance.
A Forward-Looking Approach to AI Accountability
By formalizing obligations around AI-generated content, deepfake labelling, and expedited takedowns, India’s 2026 amendments position the nation as an early mover in balancing innovation with societal safeguards. As implementation begins on February 20—coinciding with important AI policy dialogues—the rules signal a commitment to fostering a more transparent, secure, and trustworthy online environment amid accelerating technological change.
FAQs
1. What exactly are the new rules trying to regulate, and what is “synthetically generated information” (SGI)?
The amendments introduce India’s first formal legal definition and regulation of synthetically generated information (SGI). This refers to any audio, visual, or audio-visual content that is artificially created, generated, modified, or altered using computer resources or algorithms, in a way that makes it appear real, authentic, or indistinguishable from content featuring actual people or real-world events.
The primary focus is on harmful uses like deepfakes (e.g., fake videos of celebrities or politicians), AI impersonations, synthetic voices for fraud, or deceptive manipulations. Importantly, the definition includes carve-outs to protect innocent or beneficial applications: routine smartphone photo/video edits (like auto-enhancements or filters), accessibility tools (e.g., text-to-speech), academic/research materials, training datasets for AI models, and minor technical corrections that don’t change the core meaning. This narrower scope (refined from the October 2025 draft) aims to target misuse without stifling everyday creativity or innovation.
2. What labelling and disclosure requirements apply to AI-generated or synthetic content?
All qualifying SGI must be prominently labelled to clearly indicate its artificial nature—visible to users before or while viewing (e.g., text overlays on images/videos or voice announcements in audio). Platforms must also embed persistent metadata or unique technical identifiers (where technically possible) to trace the content’s origin, creation tools, and history.
Once applied, these labels and metadata cannot be removed, altered, or suppressed by anyone on the platform. For Significant Social Media Intermediaries (large platforms like Facebook, Instagram, YouTube, X, etc.), users must provide declarations if their uploads contain SGI. If no declaration is given for suspected synthetic content, the platform must either add the label itself or remove/restrict it (especially for non-consensual or harmful cases). The rules give platforms flexibility in how “prominent” the label appears (dropping a rigid 10% coverage requirement from the draft), balancing usability with transparency.
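Because labels and metadata are treated as non-removable once applied, a platform would also need some way to verify on re-upload or re-share that embedded provenance has survived. The fragment below is a rough illustration of such a check, reusing the hypothetical "sgi_provenance" PNG text chunk from the earlier sketch; real systems would rely on sturdier mechanisms such as robust watermarking or standardised content credentials.

```python
# Illustrative check only; a single text chunk is trivially strippable and is
# used here purely to mirror the earlier hypothetical example.
from PIL import Image


def provenance_intact(reuploaded_path: str) -> bool:
    """Return True if the hypothetical provenance record is still embedded."""
    img = Image.open(reuploaded_path)
    text_chunks = getattr(img, "text", {}) or {}
    return "sgi_provenance" in text_chunks
```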
3. How quickly must platforms remove unlawful or harmful synthetic content under the new rules?
The amendments drastically shorten compliance timelines. Previously, intermediaries had 24–36 hours to act on takedown orders. Now:
- Upon receiving a lawful order from a court or “appropriate government” authority, platforms must remove or disable access to unlawful content within 3 hours.
- For highly sensitive categories (e.g., child sexual abuse material (CSAM), non-consensual intimate imagery (NCII), deceptive forged documents, or impersonations meant to mislead), action must occur even faster in urgent cases, though the standard government/court directive remains at 3 hours.
Platforms are prohibited from hosting such prohibited synthetic content at all. Grievance redressal for user complaints has also been accelerated from 15 days to 7 days. These tight deadlines aim to enable rapid response to disinformation, cyber threats, and abuse.
4. What happens if social media platforms fail to comply with these new obligations?
Non-compliance carries serious consequences. Intermediaries risk losing safe harbour protection under Section 79 of the Information Technology Act, 2000. This immunity normally shields platforms from liability for third-party/user content if they follow due diligence.
The rules explicitly state that knowingly permitting, promoting, or failing to act on violations related to synthetic content constitutes a breach of due diligence. Loss of safe harbour could expose platforms to direct lawsuits, fines, or other legal actions for user-posted harmful material. Platforms must also notify users of potential penalties (e.g., content removal, account suspension) for rule breaches.
5. Why were these rules introduced, and how do they fit into India’s broader approach to AI and online safety?
The government cites urgent threats from unchecked AI-generated content: erosion of trust in digital media due to indistinguishable fakes, spread of disinformation affecting elections and society, rise in AI-enabled CSAM and NCII, and challenges for law enforcement in tracing origins.
These amendments complement other laws like the Digital Personal Data Protection Act, 2023 (data misuse in AI training), provisions in the Bharatiya Nyaya Sanhita (forgery, impersonation, misinformation), repeated MeitY advisories (2023–2025), and monitoring by the Indian Cyber Crime Coordination Centre (I4C). Overall, the framework seeks to promote transparency, swift enforcement, and accountability in the AI era while preserving legitimate uses through targeted exemptions—positioning India as one of the early countries to regulate synthetic media at scale.

