Karthik Nachiappan
7 May 2024

Summary
The Indian government’s issuance and subsequent withdrawal of a startling Artificial Intelligence (AI) advisory testifies to the agility of its approach to and plans for AI.
On 1 March 2024, the Indian government issued an advisory on Artificial Intelligence (AI). The notice decreed that social media and related digital platforms should seek the ‘explicit permission’ of the Ministry of Electronics and Information Technology (MEITy) before deploying ‘under-tested and unreliable’ AI models and systems for internet users. Moreover, the advisory called on platforms to guard against bias and to ensure that their algorithms do not privilege particular positions or ideas that threaten political stability. The advisory provoked sharp criticism from foreign technology firms, which questioned its viability and purpose. Yet, the moves that followed this initial advisory reflect the government’s broader tack on AI: unlocking the technology’s economic potential while guarding against its social and political risks.
The immediate response to the 1 March 2024 advisory was scathing. Startups, in particular, warned that the move could kill their ventures before they succeeded in developing AI applications, and that it would tilt the field in favour of bigger technology firms that can absorb heavy regulatory compliance costs. The backlash elicited clarifications from the government that muddied the waters further. It was unclear whether the advisory would apply equally to all digital platforms or only to those that could manage the regulatory burden. Moreover, questions lingered over the approval process that would ostensibly unfold between platforms and MEITy.
Fundamentally, the advisory’s critical problem was the lack of information on its scope and effects. In terms of scope, the advisory’s targets were not clearly identified. What does ‘platform’ mean? Does it include only the big social media platforms and intermediaries deploying AI for various purposes? If so, are all other digital ‘platforms’ with separate obligations exempt from this advisory? The directive did not sufficiently clarify this aspect. Second, what are the implications of not adhering to the advisory? Since the move is not legislative, it does not refer to an existing law or regulation that would delineate the roles and responsibilities of the government and other actors. Finally, the advisory did not clearly define or explain what it meant by ‘under-tested and unreliable’ AI applications, undermining any serious effect the measure might have had.
Then, on 15 March 2024, MEITy reversed its stance by issuing another advisory that revoked the initial 1 March 2024 order mandating government approval for AI models and systems. The updated advisory reflected a change in tactics: self-regulation was emphasised. The onus was back on AI firms to develop and use applications responsibly; the government signalled that it would not breathe down their necks and would instead entrust them with deciding how best to balance innovation against the risks produced by AI. The updated advisory, however, called on AI firms to guard against their algorithms being used to create and peddle deepfakes. New Delhi evidently sought to strike a balance between giving innovators and startups the freedom to develop new AI technologies and establishing broad guardrails for their deployment. Agility has been prioritised over regulation as India moves into the AI age.
MEITy’s revision will likely encourage AI heavyweights like OpenAI, Google, Amazon and Microsoft to develop and integrate their untested AI models into India’s digital ecosystem. This will likely accelerate AI absorption and adoption and make such services available to the population writ large. The focus, for now, is not on regulation that could stymie innovation but on pragmatism, moving as and when problems arise. That said, the government’s advisory does indicate that India’s AI ecosystem will be heavily scrutinised if AI applications facilitate the creation or modification of text or audio-visual content that could generate and fuel misinformation. In other words, the hammer will be kept nearby should the need for it emerge.
India’s AI landscape is evolving rapidly. Regulatory confusion and uncertainty, however, have not deterred local and international interest in entering this space. Almost all major American Big Tech firms are moving into India to develop and deploy local AI applications. In September 2023, the ascendant chip firm NVIDIA announced partnerships with leading Indian firms like Reliance and Tata to develop local cloud infrastructure and language models, which could result in a cloud AI infrastructure platform powered by NVIDIA chips. These investments testify to the nimble, pro-innovation approach adopted by the Indian government, one that is not devoid of an understanding of the risks and concerns involved in mainstreaming AI. India’s approach also appears to be domestically grounded, driven by local market conditions, innovation ecosystems and the preferences of domestic AI firms, rather than by AI governance frameworks largely defined in the Global North, which set out one-size-fits-all strategies for all countries investing in AI. That is a clear and unalloyed positive.
. . . . .
Dr Karthik Nachiappan is a Research Fellow at the Institute of South Asian Studies (ISAS), an autonomous research institute in the National University of Singapore (NUS). He can be contacted at karthiknach@nus.edu.sg. The author bears full responsibility for the facts cited and opinions expressed in this paper.
Pic credit: @OfficialINDIAai Twitter.