India moves to tighten AI rules with mandatory content labelling

New draft rules mandate clear labels on AI-generated visuals and audio to curb misinformation and digital impersonation.
India has proposed new regulations requiring artificial intelligence and social media platforms to label AI-generated content, in a bid to curb misinformation, deepfakes, and election-related manipulation. The proposal, announced by the Ministry of Electronics and Information Technology on Wednesday, follows similar moves by the European Union and China, Reuters reported.
The draft rules would compel companies such as OpenAI, Meta, Google and X (formerly Twitter) to visibly mark synthetic content created through AI systems. According to the government’s proposal, labels must cover at least 10% of the surface area of an image or the first 10% of an audio or video clip’s duration. The government said the measure aims to ensure “visible labelling, metadata traceability, and transparency for all public-facing AI-generated media.”
India, home to nearly one billion internet users, has seen a surge in deepfake content and digitally altered videos circulating on social media. Officials have raised concerns about the potential misuse of such content during elections in the country’s highly diverse and politically charged environment. The government said the rules were designed to prevent harm, including the “manipulation of public opinion, impersonation of individuals, and the spread of misinformation.”
Under the draft, social media platforms would also need to collect user declarations confirming whether uploaded material is AI-generated. They would be required to deploy “reasonable technical measures” to detect and verify synthetic media before or during publication. Public and industry feedback on the proposal has been invited until 6 November.
India’s framework goes further than many global counterparts by introducing a quantifiable visibility standard for AI disclosures. Dhruv Garg, founding partner at public policy research firm Indian Governance and Policy Project, told Reuters that the rule “is among the first explicit attempts globally to prescribe a measurable labelling requirement.” If implemented, he added, it would push AI companies to build automated detection and labelling systems directly into their tools.
The proposed regulations come as India emerges as a key market for generative AI firms. OpenAI Chief Executive Sam Altman said earlier this year that India is the company’s second-largest market by user volume, with adoption tripling over the past year. Industry observers believe that the new framework, while stringent, could shape global standards for AI transparency and accountability.
Meanwhile, Indian courts are handling a string of lawsuits linked to deepfake content. Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan have filed a case in New Delhi seeking to block AI-generated videos that use their likenesses without consent, as well as to challenge YouTube’s AI training policies.
As AI adoption accelerates, governments are increasingly forced to balance innovation with safeguards against deception and digital harm. India’s draft rules, if finalised, could become a template for emerging economies seeking to regulate the fast-evolving AI ecosystem.