The academic world is grappling with the unprecedented integration of artificial intelligence (AI) into education, leading to complex questions about academic integrity and the reliability of AI detection tools. The case of Haishan Yang, a PhD student expelled from the University of Minnesota for allegedly using AI to cheat on an exam, has become a focal point in this debate. Yang’s lawsuit against the university and faculty members alleges manipulated evidence and raises critical concerns about due process and the potential for bias in AI-related disciplinary actions.
The Exam in Question: A Remote Assessment Under Scrutiny
In August 2024, while traveling in Morocco, Yang remotely took a preliminary exam, a crucial step toward commencing his thesis research. The eight-hour assessment required students to respond to three essay questions, permitting access to course materials but explicitly prohibiting the use of AI assistance.
Yang believed he had performed well on the exam. However, weeks later, he received notification that he had failed and was accused of employing AI, specifically ChatGPT, to generate his responses. This accusation triggered a student conduct review hearing, where Yang was compelled to defend himself against the claims made by his professors.
The Faculty’s Case: A Mosaic of Allegations
The case against Yang rested on four principal arguments presented by the faculty members:
- Stylistic Discrepancy and Unfamiliar Concepts: The four professors evaluating Yang’s exam uniformly asserted that his writing style deviated significantly from his previous work and incorporated concepts not explicitly covered in the course.
- ChatGPT Comparison Test: One professor, Dr. Hannah Neprash, conducted a comparison test, inputting the exam questions into ChatGPT and comparing the AI-generated responses to Yang’s submissions. She concluded that certain phrases, sentence structures, and core ideas bore striking resemblances.
- Acronym Usage: Yang’s use of the acronym PCO (Primary Care Organization) in one answer was cited as further evidence. The faculty argued that this acronym was uncommon in the field and also appeared in ChatGPT’s output.
- Prior AI-Related Incident: The university referenced a previous incident from a year prior, when Yang submitted a homework assignment containing a “note to self” that read: “re write it (sic), make it more casual, like a foreign student write but no ai.” While Professor Susan Mason initially suspected AI use, she later withdrew the allegation. Nevertheless, Yang received a formal warning from the university at that time.
Adding to the complexity of the case, the university also ran Yang’s exam responses through GPTZero, an AI detection tool. The reliability of such detectors has been widely questioned because of inconsistent results and false positives, casting doubt on the weight this evidence should carry.
Yang’s Defense: A Rebuttal of the Allegations
Yang adamantly denies using AI on the exam, offering the following counterarguments:
- Shared Sources, Similar Outputs: Yang contends that the similarities between his answers and ChatGPT’s responses are a natural consequence of both drawing from the same body of publicly available literature on health economics. He argues that ChatGPT, trained on an extensive dataset, will inevitably produce outputs aligned with standard academic sources.
- Alleged Manipulation of ChatGPT Responses: Yang suspects that one of the professors may have altered the ChatGPT-generated responses to make them appear more similar to his own answers. He claims to have identified ten key discrepancies between the ChatGPT responses used in the hearing and earlier versions shared among faculty members.
- Personal Conflicts and Alleged Bias: Yang alleges that prior disagreements with certain faculty members influenced the accusations against him. A year before the exam, the university had rescinded his funding, citing unsatisfactory performance and behavior as a research assistant. However, Yang successfully appealed this decision with the support of his advisor, Professor Brian Dowd, who denounced the university’s actions as “an embarrassment.” The university subsequently apologized and reinstated his funding in exchange for Yang’s agreement not to pursue legal action. Yang now believes this prior conflict fueled bias against him during the cheating investigation.
While the majority of the faculty involved in the case supported Yang’s expulsion, his advisor, Professor Brian Dowd, strongly defended him. Dowd described Yang as “the best-read student” he had encountered and dismissed the evidence against him as “inconclusive.” In a letter to the panel, he expressed his bewilderment at the “level of animosity directed at a student” and suggested that there was no justification for such treatment. Despite Dowd’s support, the university panel unanimously voted against Yang.
Following the hearing, Yang was officially expelled from the University of Minnesota, which also revoked his student visa, effectively forcing him to leave the United States. In January 2025, Yang filed lawsuits in both state and federal court against Dr. Hannah Neprash and other university faculty members, accusing them of manipulating ChatGPT responses, denying him due process during the hearing, and unfairly targeting him due to prior conflicts. As of now, the defendants have not formally responded in court, and the university has declined to comment on the specific allegations.
The University of Minnesota has largely remained silent about the case, citing student privacy regulations. However, Senior Public Relations Director Jake Ricker issued a brief statement asserting that the university followed its policies and procedures and that its actions were appropriate. He indicated that the university’s perspective on the matter would be detailed in court filings. The case has ignited a fierce debate regarding AI policies in academia. Some argue for clearer guidelines and improved detection methods, while others express concerns about false accusations and the potential for AI bias in academic assessments.
The lawsuit filed by Haishan Yang has the potential to establish a significant precedent for how universities address AI-related academic misconduct. A ruling in Yang’s favor could compel universities to revise their AI detection policies and demand stronger evidence before penalizing students. Conversely, a ruling in favor of the university could reinforce stricter AI policies and legitimize AI detection tools as admissible evidence in academic misconduct cases. With AI becoming increasingly integrated into education, institutions globally are closely monitoring this case, as its outcome could significantly shape the future of AI ethics, student rights, and academic integrity.