The academic panopticon
Is the hunt for academic integrity violations turning institutions into surveillance states and hurting learning outcomes?
Policing AI-generated content may not be improving learning outcomes; instead, it may be ignoring the root problems at hand.
By Eugenia Lim, Contributing Writer
"If we focus too much attention on detection, we’ll probably waste quite a lot of time."
Since the launch of OpenAI’s ChatGPT, anxiety has hit the higher education sector, especially in areas of study that rely on essay writing as a means of assessment.
At first, this new AI frontier seemed to demand that educators find the best way to detect cheaters who use AI in their work. In the months that followed, various AI detection tools entered the fray. Players such as ZeroGPT, Content at Scale and Turnitin now offer subscription models for teachers to detect AI involvement in student work.
The promise is that teachers can tell how “human” or “AI” an essay is in about as much time as it takes a student to produce an AI-generated one. As with plagiarism detection tools, the teacher runs the student’s work through the platform’s web dashboard, which produces a probability score reflecting the likelihood that AI was used to generate the essay. It may sound effective, but experts say this is the wrong way to go.
Catching the cheater
“The predominant challenge with an over-reliance on AI detection tools is that they can instil an environment of surveillance, rather than nurturing genuine academic engagement,” says Sam Illingworth, an Associate Professor of Learning and Teaching at Edinburgh Napier University.
Speaking to QS Insights Magazine, he says relying on detection tools can potentially reduce academic pursuits to mere games of 'catch the cheater', undermining the richer relationship students should have with knowledge. “Academic integrity cannot be solely upheld by monitoring tools or software that detect plagiarism.”
Instead, he believes there must be a shift towards an understanding of assessment as a tool for learning, rather than simply as a certification of knowledge. “If the aim of higher education is to simply ensure that students can reproduce a body of knowledge, AI indeed poses a threat. But such a narrow focus fails to capture the transformative essence of education.”
"An over-reliance on AI detection tools... can instil an environment of surveillance."
One too many false positives
Confidence in the accuracy of these detection tools has also wavered.
Turnitin says its software has a false positive rate of less than 1 percent for documents containing more than 20 percent AI writing. It acknowledges a “small risk” of false positives, where students could be wrongly accused of misconduct. However, there is growing evidence that AI detection tools are less accurate than advertised.
A Stanford study released in April found that AI detectors frequently misidentify writing by non-native English speakers as AI-generated, raising questions about their reliability and highlighting potential biases.
In June, Washington Post columnist Geoffrey Fowler reported that detecting AI-generated writing might “simply be impossible” and that false positives remain a bigger risk than earlier projected. His investigation found that more than half of his 16 samples were identified at least partly incorrectly.
“If people take cases to a board of discipline, then they are likely to get very stuck because the burden of proof will be difficult,” says Rachel Forsyth, project manager in the strategic development office at Lund University. “I think it’s the wrong way to go. If we focus too much attention on detection, we’ll probably waste quite a lot of time if we do that.”
Forsyth, who has over 30 years’ experience in assessment design and as an education developer, says that if students think that what teachers value is only the essay, they are more likely to take a shortcut and use AI tools to get their work done.
“Even if we could detect [AI-generated content], it’s not what we want to do, and there are lots of professions that students are going to go into where they are going to use these sorts of tools all the time,” she says.
Her sentiment is echoed by Cecilia Chan, Professor in the Faculty of Education at The University of Hong Kong (HKU). “How do you ensure students don’t cheat? We can’t do that. Since exams appeared, there has always been cheating, and it has nothing to do with GenAI,” says Professor Chan.
According to Professor Chan, this means teachers must place emphasis on the learning process and not just deliverables. “We need to explain to our students: you might not get caught, but did you learn anything?”
"How do you ensure students don’t cheat? We can’t do that".
Embracing AI in the classroom
There is a growing consensus that teachers must incorporate AI into the curriculum or risk becoming obsolete. As a result, there is more pressure on teachers to zero in on the intended learning objectives and the best methods of assessing them.
Edinburgh Napier University’s Professor Illingworth says institutions ought to guide students in the exploration of AI, enabling them to appreciate the nuances, possibilities and limitations of such technology. He calls on universities to reimagine assessment in a way that recognises the broader role of AI tools in our society.
“With ChatGPT, for instance, students might be guided to dissect its responses, discerning the underlying sources and potential inaccuracies,” says Illingworth, pointing to the threat of misinformation. “Rather than returning to pre-ChatGPT, higher education must seize the moment, capitalising on the capabilities of AI to foster innovative, inclusive and deeply transformative teaching, learning and assessment approaches.”
Professor Chan also sees this as an opportunity to redesign and rethink the way higher education institutions test their students. She says teachers should expect that students are using GenAI in their assignments because it is simply unavoidable.
“Is it a game changer or a Pandora's box? I think that kind of depends on how we move as humans and as educators,” says Professor Chan.
The new normal
To navigate this era of pervasive AI, Professor Chan says institutions must provide support for teachers, who simply cannot forge ahead alone.
In her second role as Director of the HKU Centre for the Enhancement of Teaching and Learning (TALIC), Professor Chan plays an instrumental role in crafting the university’s AI policy, one that embraces AI with open arms.
For one, HKU, like other Hong Kong universities, ensures all its students have equal access to AI tools. This way it can guarantee that students are on a level playing field. Professor Chan also ensures staff get acquainted with the world of generative AI for the purposes of lesson planning and assessment.
TALIC conducted AI clinics for staff during the past summer term to allow teachers to understand the AI offerings available on the market. It also prepared AI literacy modules to ensure both staff and students are up to date with the latest developments and the ethical quandaries of using such tools.
She says this is key to ensuring that teachers do not fear redesigning their assessment in the age of AI. Teachers also have access to “AI help hotlines” to ensure they get the technical help they need, when they need it.
Since AI cannot and should not be ignored in higher education, Lund University’s Forsyth says teachers will likely end up employing a combination of methods to assess students.
Old-school examinations held under controlled conditions still work. Additionally, Forsyth points to “authentic assessments”, such as a business case study, a clinical assessment of a patient or an architectural analysis, as excellent ways to test for understanding.
“These are not cheat-proof but can be supplemented with work-in-progress assessments or an oral examination to check that the student understands the whole piece,” says Forsyth.