The Dispatch


AI Litigation

The next great challenge for the higher education landscape.

By Niamh Ollerton


AI represents a fundamental shift in the way higher education institutions function, ushering in a new era of possibilities and potential impacts on teaching, learning, operations and more.

In study design, “AI as Oracle” can search, evaluate and summarise massive literatures; in data collection, “AI as Surrogate” allows scientists to generate accurate stand-in data points; in data analysis, “AI as Quant” tools seek to surpass the human intellect’s ability to analyse vast and complex datasets; whereas “AI as Arbiter” applications aim to objectively evaluate studies for merit and replicability, replacing humans in the peer-review process.

But as in many other areas where AI is used, ethical challenges and questions arise - and the latest risk is AI litigation.

Higher education is increasingly integrating AI into teaching and research, and because of this, legal disputes are emerging - disputes likely to accelerate rapidly without proper guidelines in place.

The integration of AI into teaching, learning and research means that universities now face challenges surrounding academic misconduct and intellectual property when AI tools are used without clear guidelines - and recent controversies in the higher education space highlight these risks.

A rise in cases

Since ChatGPT became widely available in late 2022, students and instructors alike have been apprehensive about the possibility of repercussions over AI use - with the worst outcome being expulsion.

In a case in the US, a student’s family contested disciplinary actions after AI-generated work was flagged under unclear policies.

Haishan Yang, a third-year health economics PhD student at the University of Minnesota, was expelled after faculty accused him of using AI on an exam. Yang later sued the university, arguing that current AI detection methods are unreliable and biased and had led to his wrongful expulsion.

According to university documents shared by Yang, all four faculty graders of his exam expressed “significant concerns” that the exam was not written in his voice.

Yang, however, denies using AI for this exam and says the professors took a flawed approach to determining whether AI was used. He says the AI detection methods used are known to be unreliable and biased, particularly against people whose first language isn’t English.

An anonymous student at the Yale School of Management brought a similar case, suing the university in the first lawsuit of its kind involving AI use, academic honesty and potential bias in higher education.

The lawsuit specifically addresses biased AI detection methods that unfairly target non-native English speakers in particular.

The plaintiff, a French entrepreneur and investor living in Texas who was enrolled in a 22-month executive MBA programme, alleges that Yale’s Honour Committee process for academic discipline was mismanaged and discriminatory.

According to the lawsuit, the plaintiff’s final exam paper in his “Sourcing and Managing Funds” course was flagged and referred to the Honour Committee for “further investigation,” specifically into improper use of AI.

The student alleges that he "has been falsely accused of using artificial intelligence on a final exam"; Yale, for its part, relied on an AI detection program called GPTZero to flag the suspected AI use. The plaintiff sued Yale University, its Board of Trustees and individual defendants involved in the disciplinary process.

This lawsuit alone demonstrates the blurred lines and direct contradictions between student and university over supposed AI use.

“Such instances underscore that overreliance on AI-detection tools and the absence of AI governance policies can lead to inequitable outcomes and expose institutions to litigation,” says Vangelis Tsiligkiris, Professor of International Education at Nottingham Business School.

AI and university research

Of course, it isn’t just examinations that are under scrutiny; the use of AI in research further complicates the higher education landscape.

One of the primary concerns with generative AI is its ability to source ideas from unpublished online materials without proper attribution. Similarly, the presentation of AI-generated outputs as if they were the original work of the researcher is a notable concern for HEIs.

“As researchers increasingly employ AI to co-author papers or generate research data, questions over intellectual property and authorship rights have intensified,” Tsiligkiris tells QS Insights Magazine.

“Many publishers now insist that only human contributors are credited as authors, leaving a grey area for undisclosed AI input. Without clear policies on disclosure, researchers risk authorship disputes and potential retractions, which in turn could harm institutional reputation.”

HEIs must adopt robust, transparent policies to mitigate these risks.

“Clear guidelines should define permissible AI use and establish fair procedures for handling AI-generated content. Importantly, institutions should invest in comprehensive training for both staff and students, ensuring that all understand AI’s benefits and limitations.

“Emphasis must be placed on the necessity of human oversight in high-stakes academic decisions, rather than relying solely on automated detection tools,” Tsiligkiris says.

How to sustainably adopt AI in higher education

Many administrators and faculty have been apprehensive about generative AI’s potential effects on campuses and classrooms, but few disagree that, when used effectively, it can be a powerful and intelligent collaborator for students, faculty and staff.

And given the near-impossibility of avoiding AI tools in a digital world, banning the technology in education isn’t a viable solution - which is why HEIs must develop approaches that align with academic integrity principles.

“Higher ed leaders need to foster a culture of change - one that guides students, faculty and staff to embrace the transformative power of gen AI over its perceived threats,” advises Tsiligkiris.

The case from Yale demonstrates the importance of well-written AI governance policies and the responsible implementation of AI tools at universities and business schools.

Before AI technologies are signed off by higher education institutions, they must be vetted for any inherent bias that could affect student success.
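To make that vetting concrete, the short Python sketch below shows one check an institution might run before approving a detector: comparing false-positive rates on essays known to be fully human-written, split by whether the author is a native English speaker - the disparity the lawsuits above allege. The detector scores, threshold and group labels are invented for illustration only and are not drawn from GPTZero or any real tool.

# Hypothetical bias check for an AI-text detector.
# All scores, thresholds and groups below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Essay:
    author_group: str      # e.g. "native" or "non_native" English speaker
    detector_score: float  # detector's probability that the text is AI-written

FLAG_THRESHOLD = 0.8  # hypothetical cut-off above which an essay is flagged

def false_positive_rate(essays: list[Essay], group: str) -> float:
    """Share of human-written essays from one group wrongly flagged as AI."""
    group_essays = [e for e in essays if e.author_group == group]
    if not group_essays:
        return 0.0
    flagged = sum(e.detector_score >= FLAG_THRESHOLD for e in group_essays)
    return flagged / len(group_essays)

# Illustrative data: every essay here is human-written, so any flag is a false positive.
sample = [
    Essay("native", 0.12), Essay("native", 0.35), Essay("native", 0.81),
    Essay("non_native", 0.83), Essay("non_native", 0.90), Essay("non_native", 0.41),
]

for group in ("native", "non_native"):
    print(f"{group}: false-positive rate = {false_positive_rate(sample, group):.0%}")

# A large gap between the two rates would signal that the tool disadvantages
# non-native English writers and should not be relied on as the sole evidence.

A gap of that kind, measured on the institution’s own writing samples rather than a vendor’s benchmark, is exactly the sort of evidence leaders would need before allowing a detector to inform disciplinary decisions.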

Institutions across the globe will continue to adopt and rely on AI technologies throughout the educational ecosystem, which is why higher education leaders must evaluate legal risks and carefully select AI detection methods.

The sustainable adoption of AI in higher education ultimately lies in balancing innovation with accountability, according to Tsiligkiris.

“By proactively developing coherent policies, promoting a culture of transparency, and ensuring equitable practices, universities can benefit from AI’s potential while avoiding the pitfalls of widespread litigation,” he says.

“Waiting for legal action to set precedents is too risky; instead, thoughtful governance now will safeguard academic integrity and institutional credibility in the long term.”