Disclosure or Disarray?

Universities are grappling with the complex ethical challenges of AI in academia, moving beyond detection to encourage transparent and responsible student use.

By Gauri Kohli

"Disclosure can encourage transparency and critical reflection, but it is not a viable long-term strategy for ensuring academic integrity."

Talking points

  • Universities face confusion and low student compliance with AI disclosure policies.
  • Defining legitimate AI use versus academic dishonesty remains a significant challenge.
  • True academic integrity in the AI era requires systemic pedagogical redesign, not just declarations.

The widely debated question of what constitutes original student work is being redefined across universities, leading many to explore a new frontier: AI disclosure. At some universities, the focus is not only on detecting AI use but also on encouraging transparency and responsible use. However, early signs show confusion, inconsistency and growing ethical concerns.

This is the ethical grey area universities are grappling with, where the lines between assistance and academic dishonesty are becoming harder to define.

Policies and Early Challenges

The core of this evolving strategy is requiring students to explicitly declare and explain how AI tools assisted their assignments. This typically involves detailing which tools were used, what prompts were entered, how the output was edited and whether the student verified the results. Universities like Princeton and Georgetown in the US follow this approach, asking for detailed explanations of AI integration in student submissions. Similarly, the University of Melbourne asks students to follow their lecturers’ guidance on acceptable use of AI, if any, and to provide a written acknowledgement of any generative AI use and its extent.

However, the implementation of these policies has been anything but smooth. Dr Chahna Gonsalves, a Senior Lecturer in Marketing (Education) at King’s Business School, King’s College London, conducted a study in 2024 that revealed a significant hurdle: 74 percent of students who used AI tools did not declare this use, despite being prompted by mandatory forms. This stark figure points to a disconnect between institutional intent and student behaviour.

“At the time, around 2023, we simply did not have clarity on how students were using AI tools to complete assessments,” she explains, highlighting that the initial declaration process was an “exploratory step toward cultivating a culture of transparency”, rather than a definitive solution.

“Disclosure can encourage transparency and critical reflection, but it is not a viable long-term strategy for ensuring academic integrity,” she tells QS Insights Magazine.

This low uptake, as Marc Watkins, Assistant Director of Academic Innovation at the Mississippi AI Institute for Teachers, suggests, could indicate a “communication gap or deeper issues of trust or clarity”. Students “respond favourably and recognise the importance of what is being asked” when given a clear pathway to disclose usage, he notes. Yet, in practice, as Dr Gonsalves’s research shows, a significant portion of students are either unaware, uncertain or hesitant to comply.

Professor Jeannie Marie Paterson, Director of the Centre for AI and Digital Ethics at the University of Melbourne, echoes this sentiment, stating that universities currently have “very little understanding about the overall use by students of generative AI and how they are using that technology”. For her, disclosure serves as an early response, offering “some insight to lecturers about student use” and acting as “a prompt to students to reflect on their own practices, and perhaps some friction in overuse.”

The Ethical Grey Zone

Experts say that one of the most pressing challenges arising from the use of AI in education is defining the ethical boundaries of collaboration. “Students and faculty increasingly don’t have shared experiences using this technology,” says Watkins. “Faculty resist AI while students embrace AI. We need to develop a more nuanced understanding of how and when this technology is employed in order to get to a place where AI best practices can arise. We aren’t there yet and won’t be for quite a while.”

“It is increasingly difficult to draw the line between legitimate and illegitimate uses,” observes Professor Paterson, noting that AI is “now embedded in operating systems and search engines”. She highlights that there is a normative and practical difference between authentic output and AI-generated work. “The problem for universities is that it is very difficult to identify AI-generated work with confidence,” she adds.

Dr Gonsalves argues that instead of focusing on “tiered disclosure”, universities should adopt “tiered use instructions”, clearly communicating the acceptable scope of AI for each task.

The lack of consistent guidelines creates a navigation nightmare for students, a point several academics repeatedly flagged during interviews. “In the US, most universities allow individual faculty freedom to decide what course-level AI policies should be,” explains Watkins. “This is challenging for students to navigate as different professors will have conflicting views about AI and academic integrity along with different levels of understanding about what AI can do.” This inconsistency, while allowing for flexibility, simultaneously leads to confusion and raises concerns about fairness.

Over-regulation, a Risk?

While the impulse to regulate AI use is understandable, experts caution against overly rigid or unclear policies that could inadvertently penalise students and stifle innovation. Dr Gonsalves highlights the significant risks for student equity: “students from non-traditional backgrounds, those with learning differences, or those working in a second language often rely on AI tools for support in ways that are legitimate and necessary”. Without clear distinctions between legitimate support and unacceptable substitution, these students risk being unfairly penalised.

Moreover, overly strict AI bans could have a “chilling effect on experimentation and curiosity”. In today’s professional landscape, AI is already deeply integrated into numerous fields. Prohibiting or discouraging its use in educational settings might leave students ill-prepared for the realities of the workforce. “As a marketing educator, I see an additional risk in that overly restrictive policies may actively undermine the type of innovation we seek to encourage,” says Dr Gonsalves.

Professor Paterson views disclosure as an “educational response” that informs both staff and students about generative AI use. In practice, it helps clarify for students what is acceptable, and it can give staff better insight into how well students understand academic integrity policies on AI use.

She argues that disclosure “should not be the basis, or at least the sole basis, for penalising students who have used AI in ways that contravene university policies”. If it were, dishonest students who choose not to disclose would be at an advantage over those who are transparent.

Who’s Doing It Well

Despite the pervasive challenges, some institutions are taking thoughtful and practical approaches to AI disclosure, offering valuable lessons for the wider international education community.

Monash University in Australia is highlighted by Watkins as having “excellent overall frameworks for AI disclosure”, providing comprehensive guidance for students. Similarly, the University of Melbourne offers extensive guidance for students on acknowledging AI use, alongside information on responsible and ethical AI, the meaning of academic integrity in the age of AI, and permitted/prohibited uses across disciplines, points out Professor Paterson.

Dr Gonsalves brings up other examples: Newcastle University in the UK requires students to submit appendices detailing tool use, rationale and prompts. Fellow UK institutions, Lancaster and Birkbeck, also encourage explicit and reflective disclosures. Carnegie Mellon in the US mandates full prompt histories in some courses, and Austria’s University of Graz suggests a layered disclosure model incorporating citations, logs, and commentary. She also highlights the PAIR framework, developed at King’s College London, which emphasises pedagogical intention and reflective articulation of AI use.

However, Dr Gonsalves remains cautious about the effectiveness of these models in eliciting honest declarations. “There remains no real mechanism to verify disclosure; students have no compelling incentive to be truthful, and we lack the infrastructure to check compliance in any robust way,” she admits.

The Imperative of Standardisation

The question of standardising AI disclosure policies across departments or campuses is a contentious one. Watkins firmly believes that “inconsistency will create confusion”, advocating for open disclosure that extends beyond education into public life, where the use of AI by public officials, healthcare providers, and legal professionals should be “open and scrutable by the public, not opaque.”

Dr Gonsalves agrees that “standardisation is crucial for building institutional clarity and student trust”. However, she stresses that “standardisation should not mean rigidity”: a centralised institutional framework with clear definitions and expectations can coexist with local, discipline-specific adaptations. For example, AI use in creative writing will naturally differ from its application in medical education or law. Policies must be flexible enough to reflect these nuances while remaining consistent enough to avoid a “patchwork of contradictory rules”.

Beyond Disclosure

Ultimately, the consensus among experts is that disclosure, while a necessary initial step, is not a comprehensive fix for academic integrity in the AI era. “I don’t think it will be where we end up,” says Watkins, who believes that what’s truly needed are “reliable detection mechanisms (there really aren’t any) and a type of social contract about expectations and AI”.

Dr Gonsalves argues that sustainable academic integrity will require systemic pedagogical redesign, not procedural declarations. This means a fundamental re-evaluation of what is valued in student work, how learning is assessed, and how critical engagement with digital tools is supported. She emphasises that institutions should assume widespread student use of AI tools (outside of invigilated exams) and redesign assessment practices accordingly, prioritising, among other approaches, reflection-based assessments and AI-use appendices that resonate with the realities of different disciplines.

Professor Paterson concludes by highlighting the student perspective: “Students are interested and largely responsive” when they understand the expectations. Her students report valuing the “integrity of their learning journey at university and the importance of developing real skills”, alongside a genuine interest in AI upskilling.

However, students have also reported anxiety about what AI means for their future careers. In other words, students are not inherently resistant to responsible AI use, but rather seek clarity, guidance, and a framework that supports their learning and professional futures.

As the conversations suggest, the answer lies not in simple declarations, but in reimagining how universities weave integrity, innovation and AI use into their academic culture.