
Digital jurisprudence in India, in an AI era
The law around Generative AI remains hazy and demands a comprehensive re-evaluation of existing digital jurisprudence
While Generative AI (GAI) is a transformative force with the power to revolutionise society in ground-breaking ways, existing legal frameworks and judicial precedents, designed for a pre-AI world, may struggle to govern this rapidly evolving technology effectively.
One of the most persistent and contentious issues in Internet governance has been the fixing of liability on “intermediaries” for content hosted by them. The landmark Shreya Singhal judgment (2015) addressed this by upholding Section 79 of the IT Act, which grants intermediaries ‘safe harbour’ protection for third-party content they host, contingent upon their meeting the due diligence requirements outlined in Rule 3(1)(b) of the Information Technology (Intermediaries Guidelines) Rules. However, applying this framework to Generative AI tools remains challenging.
There are contrasting views on the role of GAI tools. Some argue that they should be considered intermediaries since they are used almost like search engines, even though they do not host links to third-party websites. Others argue that they are mere “conduits” for user prompts: since altering the prompt changes the output, the generated content is akin to third-party speech and therefore attracts lesser liability for the platform.
In Christian Louboutin SAS vs Nakul Bajaj and Ors (2018), the Delhi High Court held that safe harbour protection applies solely to “passive” intermediaries, that is, entities functioning as mere conduits or passive transmitters of information. In the context of Large Language Models (LLMs), however, distinguishing between user-generated and platform-generated content is increasingly difficult. Additionally, in the case of AI chatbots, liability arises only once the user reposts the generated information on other platforms; a mere response to a user prompt is not considered dissemination.
Generative AI outputs have already led to legal conflict in various jurisdictions. In June 2023, a radio host in the United States filed a lawsuit against OpenAI, alleging that ChatGPT had defamed him. The ambiguity in classifying GAI tools, whether as intermediaries, conduits, or active creators, will complicate courts’ ability to assign liability, particularly in cases of user reposts.
Section 16 of the Indian Copyright Act, 1957 specifically provides that “no person” shall be entitled to copyright protection except in accordance with the provisions of the Act. Globally, as in India, there is reluctance to extend copyright protection to works generated by AI.
The critical questions are: should existing copyright provisions be revised to accommodate AI? If AI-generated works gain protection, would co-authorship with a human be mandatory? Should recognition extend to the user, to the programme (and, by extension, the programmer), or to both? The 161st Parliamentary Standing Committee Report found that the Copyright Act, 1957 is “not well equipped to facilitate authorship and ownership by Artificial Intelligence”.