Government plans stricter IT rules for Meta, Google and X

Proposed changes may make advisories binding and link compliance to legal protection for tech platforms.
India has proposed stricter IT rules that could make government advisories legally binding on platforms such as Meta, Google and X, marking a significant shift in how the country regulates online intermediaries.
The move, outlined in a government notice inviting public feedback, would mean that failure to comply with directives issued by the IT ministry could lead to loss of safe harbour protections—legal immunity that shields platforms from liability for user-generated content. Reuters reported that the proposal aims to “strengthen enforceability” and improve legal certainty.
SHIFT FROM GUIDANCE TO ENFORCEMENT
At present, advisories issued by the IT ministry—covering areas such as content takedowns and deepfake labelling—serve as guidance without direct legal consequences.
Under the proposed changes, non-compliance would be treated as a breach of safe harbour conditions, fundamentally altering the regulatory risk for large technology firms operating in India.
The proposal reflects a broader tightening of digital oversight. Earlier this year, the government cut the deadline for platforms to remove flagged content from 36 hours to three hours, and introduced additional obligations around AI-generated content and deepfakes.
TIGHTER CONTROLS ON AI AND CONTENT
The latest move builds on increasing regulatory attention to emerging risks linked to artificial intelligence and online misinformation.
Authorities have sought greater accountability from platforms on how they label synthetic media and respond to harmful or misleading content. The proposed rules indicate a push to move from voluntary compliance to enforceable obligations.
Meta, Google and X did not immediately respond to requests for comment, according to Reuters.
INDUSTRY AND POLICY IMPLICATIONS
The potential loss of safe harbour protection represents a material escalation in regulatory pressure. Without this shield, platforms could face direct legal exposure for content hosted on their services.
This could lead to more aggressive content moderation, faster compliance mechanisms, and tighter internal governance frameworks across tech companies.
As People Matters has noted in its coverage of evolving digital regulation and AI governance, organisations are increasingly being held accountable not just for technology deployment, but also for how they manage its societal impact.