In a surprising twist, a student has sued his university over an AI-related exam dispute. The Punjab and Haryana High Court issued a notice to OP Jindal Global University following a petition filed by Kaustubh Shakkarwar, an LLM student at Jindal Global Law School. Shakkarwar’s plea challenges the university’s decision to fail him over alleged AI-generated content in his examination answers.
Shakkarwar, who is enrolled in the Intellectual Property and Technology Laws program, completed his first-term exam in May for the subject Law and Justice in the Globalizing World. On June 25, the university’s Unfair Means Committee found his responses “88% AI-generated,” leading to his failure in the course. This decision was subsequently upheld by the Controller of Examinations.
Petitioner Denies Using AI, Questions University’s Policy
The law student’s suit against the university argues that it lacks clear policies on AI use. The petition notes that the university has neither explicitly prohibited the use of AI tools nor categorized AI-generated content as “plagiarism.” Represented by advocate Prabhneer Swani, Shakkarwar contends that the absence of formal guidelines on AI use should bar disciplinary action against him. He further insists that his work was original and not produced by AI.
Shakkarwar also claims the university failed to provide any concrete evidence supporting the allegations. He argues that the absence of a policy against AI use makes the committee’s decision arbitrary and unfair.
Justice Jasgurpreet Singh Puri has asked OP Jindal Global University to submit a response and scheduled the next hearing for November 14.
Ambiguities in AI Policies and Student Accountability
The case filed by Kaustubh Shakkarwar against OP Jindal Global University highlights a key issue in academia: the unclear stance on AI-generated content. As AI tools become widely accessible, students increasingly use them as aids. However, educational institutions often lag in establishing clear guidelines regarding AI’s role in academic submissions. Shakkarwar’s argument hinges on this gap, asserting that he cannot be penalized for something not explicitly forbidden by university policy.
This situation raises questions about fairness in academic integrity enforcement. If AI-generated content is grounds for failure, universities should say so in explicit policy. Without clear regulations, students are left in a vulnerable position, uncertain of what constitutes a violation. Moreover, this lack of clarity could create inconsistencies in how academic dishonesty cases are handled across institutions, potentially impacting students’ futures unfairly.
Evolving Definition of Plagiarism in the AI Era
As legal battles over academic standards evolve, this case of a student suing his university over an AI exam submission could influence future policies. Traditionally, plagiarism involves copying another person’s work without proper credit. However, the rise of AI tools complicates this definition: AI-generated content is not directly “copied” from existing sources but may still be classified as unoriginal. This case reveals the urgent need for institutions to redefine plagiarism in the AI context. Clear rules would not only protect students but also encourage ethical AI use as a supplemental tool rather than a shortcut.
For future cases, universities could adopt transparent AI policies that spell out when and how students may use these tools.