Meta, Google and X given 3 hours to remove deepfakes in India — all you need to know

India’s amended IT Rules cut takedown timelines to three hours and mandate AI labelling, raising compliance pressure on Meta, Google and X.
India has tightened its regulatory grip on online content, ordering major social media firms including Meta, Google and X to remove unlawful material within three hours of being notified by authorities.
The amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 will take effect on February 20, sharply reducing the earlier 36-hour takedown window. The move significantly raises compliance demands on global technology platforms operating in one of the world's largest digital markets.
The rules apply directly to services such as Facebook, Instagram and WhatsApp (owned by Meta), YouTube (owned by Google), and X. For the first time, they also extend explicitly to AI-generated and synthetic content, including deepfakes.
While the government has not formally explained the compressed timeline, the amendments come amid rising concern over impersonation scams, election misinformation and non-consensual AI-generated imagery.
Under the revised framework, intermediaries must remove or disable access to unlawful content within three hours of receiving an order from a court or competent authority. In cases involving non-consensual intimate material, the deadline is even shorter.
Failure to comply carries legal consequences. Platforms risk losing “safe harbour” protections under Section 79 of the IT Act, which shields intermediaries from liability for user-generated content if they meet due diligence requirements. Without safe harbour, companies could face civil or criminal exposure in India.
The amendments also introduce mandatory labelling obligations for AI-generated content. Platforms that allow users to create or share synthetic audio, video or visual material must ensure it carries a prominent label and, where possible, permanent provenance markers. These labels cannot be removed or suppressed once applied.
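To give a sense of what a labelling and provenance obligation could involve in practice, the minimal sketch below shows one hypothetical way a platform might attach a visible disclosure and a tamper-evident provenance record to a generated file. The field names, sidecar-file approach and workflow are illustrative assumptions, not the wording of the amended rules or any platform's actual tooling.

```python
# Hypothetical sketch: labelling an AI-generated file and recording provenance.
# All field names and the sidecar-file design are assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def label_synthetic_asset(media_path: str, generator: str) -> dict:
    """Build a provenance record for a generated file and persist it
    next to the asset so the disclosure travels with the content."""
    data = Path(media_path).read_bytes()
    record = {
        "label": "AI-generated content",              # user-facing disclosure
        "sha256": hashlib.sha256(data).hexdigest(),   # ties the record to the exact bytes
        "generator": generator,                       # tool that produced the media
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        "immutable": True,                            # assumption: label must not be stripped downstream
    }
    sidecar = Path(media_path + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return record


if __name__ == "__main__":
    # Example with a dummy file; the path and generator name are illustrative.
    Path("sample_output.png").write_bytes(b"\x89PNG\r\n\x1a\nexample")
    print(label_synthetic_asset("sample_output.png", generator="example-model-v1"))
```

A real deployment would more likely embed such markers inside the media itself (for instance via a content-credentials standard) rather than in a sidecar file, but the principle of binding a non-removable label to the content is the same.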
The rules define AI-generated material as content that is artificially created or altered to appear authentic, such as deepfakes. Routine editing, accessibility tools and legitimate educational or design uses are excluded.
Platforms must also deploy automated technical measures to detect and prevent unlawful synthetic content, including impersonation, child sexual abuse material, false electronic records, explosives-related content and deceptive manipulation.
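In engineering terms, such automated measures usually amount to scoring each upload against detectors for the prohibited categories and blocking or escalating anything above a confidence threshold. The sketch below illustrates that gating logic under assumed thresholds and placeholder categories; it is not a description of any platform's actual moderation systems.

```python
# Hypothetical sketch of an automated pre-publication screening gate.
# Detector scores, categories and thresholds are placeholder assumptions.
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.90   # assumed: high-confidence detections are blocked outright
REVIEW_THRESHOLD = 0.60  # assumed: mid-confidence detections go to human reviewers


@dataclass
class Detection:
    category: str  # e.g. "impersonation", "false_record"
    score: float   # detector confidence in [0, 1]


def screen_upload(detections: list[Detection]) -> str:
    """Return the moderation decision for a single upload based on its worst detection."""
    worst = max(detections, key=lambda d: d.score, default=Detection("none", 0.0))
    if worst.score >= BLOCK_THRESHOLD:
        return f"blocked ({worst.category})"
    if worst.score >= REVIEW_THRESHOLD:
        return f"queued for human review ({worst.category})"
    return "published"


if __name__ == "__main__":
    print(screen_upload([Detection("impersonation", 0.95)]))  # blocked
    print(screen_upload([Detection("false_record", 0.70)]))   # queued for human review
    print(screen_upload([Detection("impersonation", 0.10)]))  # published
```

The tension critics point to is visible even in this toy version: a three-hour deadline leaves little time for the human-review branch, pushing decisions toward the automated block path.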
India has increasingly used its IT rules to expand oversight of online speech. According to transparency reports cited by the BBC, more than 28,000 URLs were blocked in 2024 following government requests, a figure analysts expect could rise under the tighter regime.
The BBC reported that Meta declined to comment on the amendments, while Google and X were approached for responses. The Ministry of Electronics and Information Technology has also been contacted for clarification.
Digital rights groups and technology experts have questioned whether the three-hour mandate is operationally realistic. The Internet Freedom Foundation said the compressed timeline could turn platforms into “rapid fire censors”, leaving little room for meaningful human review.
Anushka Jain, a researcher at the Digital Futures Lab, told the BBC that while AI labelling could improve transparency, the shortened deadline may push companies towards full automation, increasing the risk of over-removal.
Technology analyst Prasanto K Roy described the new regime to the BBC as “perhaps the most extreme takedown regime in any democracy”, warning that compliance would be nearly impossible without extensive automation and minimal oversight.
For Meta, Google and X, India’s new framework signals a sharper regulatory stance as governments worldwide struggle to contain the misuse of generative AI. With more than a billion internet users, India is a critical market — and its approach could shape how global platforms design moderation systems for AI content far beyond its borders.
Whether the rules successfully curb deepfake harm without driving excessive censorship will become clearer once enforcement begins later this month.