India's AI Regulation Rush: Techno-Solutionism Over Human Upliftment
Introduction: The Deepfake Catalyst and Political Response
The deepfake video involving popular Indian actress Rashmika Mandanna served as a shocking wake-up call for policymakers, highlighting how generative AI can be weaponized against individuals with terrifying ease. In response, the Indian government proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 on October 22, 2025. These changes aim to combat AI-enabled misinformation by mandating that all AI-generated content be clearly labeled and that online platforms must trace its origin. The Ministry of Electronics and Information Technology (MeitY) now requires Significant Social Media Intermediaries (SSMIs) to deploy “reasonable and proportionate technical measures” to verify user declarations about synthetic content. While the intent to protect citizens from privacy violations, defamation, and dignity breaches is commendable, the approach reveals a fundamental mismatch between policy ambition and practical implementation.
The Technical Reality Gap
The proposed rules assume a technological capability that simply doesn’t exist consistently across platforms. Current detection tools for AI-generated content remain perpetually behind generative technologies, creating an arms race that favors well-funded Western tech giants over Indian startups and innovators. The requirement that platforms verify whether content is synthetically generated imposes a vague and burdensome obligation that smaller Indian companies cannot realistically meet. What constitutes “reasonable and proportionate” technical measures? The rules don’t specify, creating legal ambiguity that will inevitably be interpreted to benefit those with the deepest pockets and most advanced laboratories—overwhelmingly American tech corporations. This regulatory approach risks stifling the very AI innovation that India has been championing while strengthening the dominance of Western technology platforms that already control global digital infrastructure.
The Human Factor: Misdiagnosing the Problem
Perhaps the most profound flaw in these amendments is the fundamental misdiagnosis of why deepfakes spread and cause harm. The government’s approach assumes the problem is technical—that users cannot distinguish real from fake—and thus proposes a technical solution: labeling. However, deepfakes don’t go viral because they look authentic but because they confirm pre-existing biases and emotional narratives. The Rashmika Mandanna deepfake resonated not because of its technical perfection but because it tapped into existing patterns of female objectification and celebrity culture. In India’s low-trust, high-context social environment, a label covering 10% of the display area indicating AI-generated content will prove laughably inadequate against content that validates deeply held beliefs and prejudices. This techno-solutionist approach outsources the state’s responsibility for public education and critical thinking development to private platforms, effectively abdicating governmental duty while increasing corporate control over information ecosystems.
Legal and Constitutional Concerns
The regulatory patchwork attempting to fit generative AI into the existing IT Rules framework represents a square peg in a round hole dilemma. The IT Rules were designed for social media intermediaries hosting third-party content, but AI ecosystems involve model developers, app creators, and interface providers—a much more complex landscape. By stretching the definition of “intermediary” to cover all AI actors, the amendments create significant legal ambiguity. More troubling is the constitutional question regarding separation of powers. The amendments attempt to redefine the safe harbor principle under Section 79 of the Information Technology Act, 2000 through executive notification rather than parliamentary deliberation. The government is essentially rewriting substantive legislation through subordinate rules, blurring the line between executive implementation and legislative authority. This approach lacks the democratic legitimacy and long-term stability that such fundamental changes to India’s internet governance framework require.
The Imperialistic Undercurrents in Tech Regulation
This regulatory response exemplifies how Global South nations are forced to react to technological disruptions originating from and primarily benefiting Western corporations. The deepfake phenomenon emerged from AI research dominated by American and Chinese institutions, yet India must scramble to create defensive measures without adequate technical capacity or research infrastructure. The proposed rules inadvertently reinforce technological dependence on Western platforms that can afford the compliance costs, while Indian innovators face yet another barrier to competing in their own digital ecosystem. This dynamic mirrors historical patterns where developing nations must adapt to systems and technologies designed elsewhere, often at the expense of their own technological sovereignty and development priorities. The rushed consultation period—from October 25 to November 6, 2025—further demonstrates how pressure from global technological developments forces hasty decisions without adequate stakeholder engagement or democratic deliberation.
Toward a Truly Empowering Alternative
The solution to AI-generated misinformation cannot be found in technical labeling requirements or verification mandates that privilege already-dominant platforms. Instead, India should pursue a two-pronged approach that addresses both the supply and demand sides of misinformation. First, any regulation must be technologically feasible and developed through genuine consultation with all stakeholders, particularly Indian startups, civil society, and digital rights organizations. Second, and most crucially, the government must launch a comprehensive national mission for digital literacy that empowers citizens to critically engage with information rather than passively consuming labeled content. Such an approach would recognize that resilience against misinformation comes not from technological fixes but from educated, critical citizens capable of navigating complex information environments. This human-centered approach would also align with India’s civilizational traditions of debate, critical thinking, and wisdom—values that transcend Western techno-solutionism.
Conclusion: Reclaiming Digital Sovereignty Through Human Development
India’s response to the deepfake challenge represents a missed opportunity to develop a distinctly Global South approach to technological governance—one that prioritizes human development over technical compliance, democratic deliberation over executive expediency, and indigenous innovation over dependence on Western platforms. The current amendments risk creating a compliance burden that benefits exactly those corporations whose technologies created the problem in the first place, while doing little to address the social and educational roots of misinformation vulnerability. As a civilization-state with ancient traditions of knowledge and critical inquiry, India should lead in developing regulatory approaches that empower people rather than platforms, that build human capacity rather than technical barriers, and that assert digital sovereignty rather than import regulatory models designed for different contexts. The challenge of deepfakes ultimately cannot be solved by rules alone—it requires building a society resilient enough to face technological disruptions with wisdom, critical thinking, and collective intelligence.