The impact of generative AI on assessment practices in higher education will be both a challenge and an opportunity, but probably neither a utopia nor a dystopia
By Claudia Civinini, contributing writer
"I think that hyperbole on this topic... is probably off base."
Extreme reactions to technological advances have always been the norm. Over two thousand years ago, debate raged over the introduction into education of a technology that was itself already over a millennium old: the written word. “[Learners] will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks,” Socrates commented, according to Plato.
Today, as AI, and particularly generative AI and Large Language Models (LLMs) such as ChatGPT, captures the public’s imagination, utopian and dystopian takes on what it will do to education have been no different. Some have declared the technology the death of higher education as we know it, while others see it as a way to finally achieve fully personalised learning.
"I think that hyperbole on this topic, on both ends of the spectrum, is probably off base,” says Dr James Genone, Vice President of Academic Solutions and Innovation at Minerva Project. “People are talking about how things will never be the same, how this changes everything and education is dead, or that writing is dead. I don't believe that.”
The truth is, we don’t yet know precisely how education will be affected. Dr Genone says that while some very important concerns warrant attention, and AI will change education just as it will change the knowledge economy, its impact will take time to unfold.
Requiem for an essay
Mike Sharples is Emeritus Professor of Educational Technology at the Open University in the UK. An essay he wrote using GPT-3 made headlines in December last year. According to him, institutions are "acutely aware" of the problem of students using generative AI to write standard assignments, but equally know that banning the technology is not the right solution.
With the advent of GPT-4, he believes AI’s ability to write effectively and plausibly has improved, not only in replicating academic style but also in referencing accurately, an art humans never quite mastered in the first place. "In the future, tools will become not only more accurate, but more embedded into the workflow, and so institutions can't rely on students writing a 3000-word essay and knowing it's their original work,” he says. "I have talked to universities, and they have said in so many words that the standard essay is dead now, for coursework.”
Generative AI’s ability to confidently produce writing across a broad range of topics has, inevitably, raised concerns around plagiarism. Author and AI expert Calum Chace understands those concerns, although he sees the situation as an opportunity. "If I was in the business of testing knowledge with home assessments, I would be changing that immediately because half of my students would be using ChatGPT to write the assignment,” he says.
"It only ‘knows’ that 'I eat an apple' is far more probable than 'I eat a spaceship'."
Others, however, are more sceptical. Dr Laura Chaubard, Director General of the École Polytechnique in Paris, says that ChatGPT can only enable “a very bad student to pass as mediocre”. “I don't think that the content created on various subjects by ChatGPT can pass as the work of an excellent student in any field. But maybe it will come," she concedes. Her scepticism is due, in part, to LLMs being relatively limited systems.
Generative AI produces content, such as sentences and images, based on what the next most probable word or pixel is. Crucially, Dr Chaubard explains, LLMs like ChatGPT don’t have a conceptual understanding of the content, or output, they produce. “It has no knowledge, for example, of grammar or syntax, it only ‘knows’ that ‘I eat an apple’ is far more probable than ‘I eat a spaceship’,” she says. Improbable, but not impossible.
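To make that mechanism concrete, here is a minimal Python sketch of next-word prediction, with invented counts standing in for what a real LLM learns from billions of examples. Actual models use neural networks over tokens rather than a lookup table, so treat this purely as an illustration.

```python
# Toy illustration of next-word probability (counts are invented).
# Real LLMs learn these probabilities with neural networks trained on
# vast corpora; the mechanism of "pick the likely next word" is the same.
next_word_counts = {"apple": 950, "orange": 49, "spaceship": 1}

total = sum(next_word_counts.values())
probabilities = {w: c / total for w, c in next_word_counts.items()}

for word, p in sorted(probabilities.items(), key=lambda kv: -kv[1]):
    print(f"'I eat an {word}': p = {p:.3f}")
# 'apple' dominates, but 'spaceship' keeps a non-zero probability:
# improbable, not impossible.
```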
Trained on the notoriously uncurated internet, LLMs have been known to confidently produce incorrect information, known within the sector as “hallucinations”. This is one of the main factors behind the current scepticism of some educators. Beyond the real extent of AI’s capabilities to enable cheating, however, plagiarism isn’t exactly a new problem.
Many are still cheating
In 2021, Dr Sarah Eaton, an Associate Professor of Education at the University of Calgary, Canada, published Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity. The book touches on AI in its final chapter, which reflects on the future of plagiarism.
Dr Eaton’s view is that the sector should find a way to incorporate paraphrasing software, machine translation and other emerging technologies in an ethical way, instead of banning them. Bans would be difficult to enforce and make little sense educationally, as students will need to be able to work with AI once they graduate. Professor Sharples further warns that bans would create a new digital divide between students who find a way to use the technology anyway and those who don’t.
Dr Eaton also puts forward the idea of a "post-plagiarism world", a world in which hybrid writing, partly generated by AI and partly by humans, becomes the norm. "Cut and paste plagiarism would no longer apply then," she explains. “Over time, AI tools will become as everyday as those things."
Despite the sector’s plagiarism concerns, however, short-term solutions have their own downsides. The move by some institutions to invigilated and oral exams is fairly straightforward, but also costly and unsustainable in the long term, Professor Sharples explains.
"Generative AI most likely enhances a problem that already exists,” say Margaret Bearman, a professor at the Centre for Research in Assessment and Digital Learning at Deakin University in Australia, comments.
In her view, reverting to exam halls and handwriting is “retrograde”. Assessment, she adds, is also a resource issue. “Assurance is really expensive. We need to think carefully: what is the priority area for the resource? What do you need to have the guarantee that the student can do?”
"Certainly, qualitative assessments can be valid and can be reliable, credible and trustworthy."
With plagiarism not an easy, or a new, problem to solve, bans being untenable, and invigilated exams too costly, it seems the most suitable solution is to tweak and adapt assessment strategies.
Systemic barriers to change, such as lack of time, resources and professional development, will need to be tackled, according to Dr Eaton, who also believes universities will need to take an institution-wide approach. If staff are not trained on generative AI and its impact on the sector, she argues, this will create a vicious circle: the parameters for using the technology won't be outlined, assessment practices won't be updated, and plagiarism will continue as a consequence, "because it makes it really easy for students to misbehave".
Another struggle for universities is a mindset barrier that she calls "paradigm wars". Oral examinations, for example, clash with efforts towards assessment standardisation because they can be more qualitative, she says.
"We have two different camps for assessment. One is a very positivistic, quantitative-driven way, and the other is more qualitative. And so, we are getting back into these paradigm wars of quantitative versus qualitative, which is more valid?” she explains.
"Certainly, qualitative assessments can be valid and can be reliable, credible and trustworthy. However, if somebody is firmly entrenched in a positivist way of thinking, they may simply reject that notion because they don't believe that anything qualitative can be trustworthy.”
"You can’t be a good editor if you are not a good writer."
Better writers
Essays may be dead or "on life support," as Dr Eaton puts it, but writing skills aren’t likely to become redundant. Professor Bearman observes that writing is not just about writing; it also helps people think and express emotions.
There are other things that AI can't do well. One is developing a line of argument. Another is coming up with original ideas. "AI is based on things that have already happened,” Professor Bearman says. “It can do recombination, it might even propose something that looks slightly different, but it's ultimately statistically based on what's happened before."
Equally, some believe that in a world flooded with AI-generated content, writing standards will become more sophisticated. "Students are going to go into a world where there's a tsunami of AI-generated text, and to be able to prosper and survive in that world, they will have to be able to write better than a computer,” says Professor Sharples.
“AI will be a repertoire tool, and AI-generated text is going to be a part of the new writing. We have got to make sure that students know how to write effectively with AI tools, and this is going to up the game."
It’s also not only a question of being able to write better than AI, but of being able to evaluate what AI creates. "You can't evaluate whether or not the AI has done a good job of producing some writing if you don't understand what the components of good writing are and how you produce them. So, everyone will need to continue learning those mechanics,” says Dr Genone.
“You can’t be a good editor if you are not a good writer.”
The qualities of a good editor, such as a deep knowledge of the mechanics of writing, a flair for ruthless fact-checking, and a healthy dose of scepticism, will become all the more important for students to acquire, especially as it’s already clear that AI has a problematic relationship with the truth, something that is feeding into our existing disinformation problem.
Living with the truth
With the amount of information available possibly expanding drastically as generative AI develops, Dr Genone says that the cost of information will become practically zero, but the premium on truth will rise. Previous generations of AI, he explains, were in some ways more reliable in their outputs than generative AI, but that reliability came at the cost of limited creativity.
“It didn't 'hallucinate', it didn't come up with these misleading or factually false statements that generative AI tools do,” he says.
"I don't think people are talking enough about the issue of deep fakes. There are going to be technological and policy solutions to this problem. But we need to orient people to be much more cautious about what they believe.”
Professor Bearman has a vision of how we can take AI with a pinch of salt: treat it as if it were a human. While we tend to assume that machines are cold and rational, we have no problems believing that people can lie and have cognitive biases. We need to treat AI in the same way, she says.
Another consideration is who controls the generative AI tools that will become part of our work routine. “We had the same issues with the internet and social media,” says Professor Sharples. “These tools were originally designed to help us communicate, and yet we haven't been able to place effective safeguards and controls on them. And with AI that is magnified.”
"[With the internet] teachers were supposed to become useless."
If anything, the rise of AI could expand education opportunities rather than limit them. Chace warns that people have not yet developed the ability to discern truth from fiction in the current world. "There's going to be more sophisticated chains of evidence that are going to have to be assembled in order to let us know whether important videos are true or not, and maybe something similar will happen in education,” he says.
"We need to keep our thinking skills. But the way we do our thinking has changed throughout history." However, in a possible future when we don't write the first draft of anything anymore, Chace thinks writing, editing and critical thinking skills will be acquired by critiquing the output of the AI. "I am reasonably confident that we will find ways to train young minds in critical thinking, but again, I don't really know the details of how that's going to happen. "
History repeating
All interviewees observe that neither the dystopian nor the utopian view of generative AI in education is likely to come true. Despite the unforeseeable changes that AI will bring to the labour market, the skills that universities will need to teach their students will remain broadly the same.
“I think there are some short-term skills that students need to learn, such as prompt engineering,” says Professor Sharples. “But in the longer term, the important skills will be the ones that we want students to achieve anyway: to express yourself clearly, to construct an argument in a clear and precise way, to give instructions whether it's to a human being or to an AI, to evaluate evidence.”
The nature of education is unlikely to change, either. Chace explains that the principles of education, such as preparing people to lead the most flourishing lives they can, and equipping them to make the most of the opportunities that the world throws their way, will remain, even in a future when the labour market may look very different from now. "We will have to become very different kinds of creatures for that to no longer be the case," he says.
But higher education has been here many times before. The mass adoption of the internet produced concerns from educators reminiscent of Socrates’ thoughts on written text in education two millennia earlier. "Students were condemned not to learn anything anymore and teachers were supposed to become useless because knowledge was available anywhere anytime on the internet,” says Dr Chaubard.
"We now know that students keep learning and that teachers keep teaching and keep being very useful to students. Of course, teaching has adapted to the existence of a massive encyclopaedia available on our smartphone, and I am convinced that it will adapt again to the existence of generative AI."
And while labour markets do change with technological advances, Professor Bearman observes that the worst assumptions don’t necessarily come to fruition. “I think we need to understand that this is what happens when new technology comes,” she says.
“We have strong stories that we tell ourselves about technology being disruptive and difficult and dangerous. I appreciate the fear that some people have, but reminding everyone that we have had those fears before is probably quite useful."
Despite all the uncertainty, we can at least be fairly certain that overreacting, both positively and negatively, to technological advances will remain a quintessentially human ability.
ChatGPT is here
A beginner's guide to AI in higher ed
ChatGPT took the world by storm and caught higher education by surprise. Now that it is here and not going away, what is it, how does it work, and where are things headed? A beginner's guide to understanding the new technology.
By Gopika Kaul, Head of Content at TCGlobal, India
British computer scientist Alan Turing’s seminal 1950 paper, "Computing Machinery and Intelligence", sought to answer a deceptively simple question: can machines think? Like all good simple questions, its answer was, and still remains, complicated, leading to philosophical debates about what constitutes consciousness and feeling.
Interestingly, Turing chose to sidestep the consciousness and feeling debate entirely, and instead reframed the question that eventually led to the formulation of what is now known as the Turing Test. Originally called the Imitation Game, the test shifted focus from “thinking” to “behaviour” and explored whether machines could “act” like humans, rather than “thinking” like them. Turing thought they could. More than 70 years later, we’re faced with a technology that is not only seen as “intelligent” but also “creative”, qualities that once distinguished humans from machines.
What is artificial intelligence and machine learning?
Put simply, AI is a machine’s ability to perform tasks and functions we usually associate with human beings, such as learning, problem-solving, reasoning and creativity.
Machine learning (ML) is a branch of AI in which a system can imitate intelligent human behaviour, as Turing predicted. ML models are trained on massive amounts of data to mimic the way humans think and learn. Much like the human brain, ML algorithms are able to identify patterns in data, learn from them, and make predictions.
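As a minimal sketch of that idea, the snippet below shows a model finding the pattern in a handful of invented data points and then predicting for an input it has never seen. It assumes the widely used scikit-learn library; the numbers are made up for illustration.

```python
# Learning from patterns: fit a model to example input/output pairs,
# then predict for unseen input. Data points are invented for illustration.
from sklearn.linear_model import LinearRegression

hours_studied = [[1], [2], [3], [4], [5]]   # inputs (features)
test_scores = [52, 58, 67, 71, 80]          # outputs (labels)

model = LinearRegression()
model.fit(hours_studied, test_scores)       # "training": find the pattern

print(model.predict([[6]]))                 # predict for unseen input (~86)
```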
The most popular form of AI right now, generative AI, uses the data it has been trained on to create new outputs, such as text, images and audio. It uses deep learning to find patterns within its data and then replicates them to generate highly realistic and complex content, imitating human creativity.
The sophistication of the latest models, and the potential advancements future models could make, have opened up possibilities that once seemed unimaginable. It seems to be the genie that can do anything: solving complex maths problems, writing “original” text, creating art, drafting legal documents. But appearances can be deceiving.
There are a number of important points to consider, however. Firstly, generative AI breaks everything down into sets of data points. It’s a strange way to look at the world, but for generative AI, images are just series of pixels, texts are just characters and words, and sounds are just audio frequencies.
Secondly, generative AI recognises the patterns that exist across that data, and then determines what is likely to come next. In an image of a basketball, for example, an orange pixel is likely to be followed by a second orange pixel.
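The snippet below is a toy sketch of that view: a sentence and a tiny "image" reduced to nothing but numbers. Real systems use learned tokenisers and numerical tensors rather than character codes and nested lists, so this is only meant to show the everything-is-data-points idea.

```python
# To a generative model, everything is numbers.
text = "I eat an apple"
print([ord(ch) for ch in text])   # text as a sequence of character codes

# A tiny 2x3 "image": each pixel is an (R, G, B) value.
orange, dark = (255, 140, 0), (30, 30, 30)
image = [
    [orange, orange, dark],
    [dark, orange, dark],
]
print(image[0][0], image[0][1])   # adjacent pixels often share a colour,
                                  # the pattern behind "orange follows orange"
```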
What is ChatGPT?
In basic terms, ChatGPT is an AI chatbot that can generate human-like text. Developed by OpenAI, it generates responses based on the prompts provided. GPT stands for Generative Pre-trained Transformer, a type of large language model (LLM).
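For readers who want to see what "prompt in, text out" looks like in practice, here is a minimal sketch using OpenAI's official Python SDK. Method names have shifted between SDK versions, so treat the exact calls as illustrative; it also assumes an API key is set in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch of prompting a GPT model programmatically.
# Assumes: `pip install openai` (v1.x SDK) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model family behind the original ChatGPT
    messages=[
        {"role": "system", "content": "You are a concise study assistant."},
        {"role": "user", "content": "Explain the Turing Test in two sentences."},
    ],
)
print(response.choices[0].message.content)
```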
From being a virtual assistant and a bot that can carry out human-like conversations and create content, to acting as a virtual tutor, writing code and helping with translation, it has sparked people’s imaginations about what it could do.
However, it has flaws. For one, it can generate wrong answers, and do so with incredible confidence, which makes it dangerous. This is due to the way it puts sentences together, choosing the next most probable word. If, for example, it is tasked with describing a classroom, there is a pattern across all its data that “classroom” will usually, but not always, be followed at some point by “desk”.
Its dataset is also limited to the internet as it was up to September 2021. That means if ChatGPT is tasked with describing something that isn’t readily available online, or is given a vague prompt, it will generate what is most probable. If given the text prompt “Describe my friend”, for example, it will generate text based on what a friend is most likely to be like.
It has also failed many exams when tested, famously the Indian Civil Services Exam. Nor can it search the internet like a search engine, although that will likely change soon, and other LLMs are already trying to integrate this feature.
How will AI impact education?
We don’t know yet. Some speculate generative AI will irreversibly alter the way we learn or even think about education. Others, however, see limitations in its capabilities. Some areas of consideration include whether students want to use AI as a learning tool and how the workplace and therefore graduate employability might change.
There is a lot of potential, however. From algorithms that create personalised learning to virtual tutors and new assessment methods, AI is revolutionising learning experiences for students in unimaginable ways. AI could also democratise higher education and increase accessibility to millions of students. AI-powered online learning platforms can provide flexible education options to those who may not have the ability or adequate resources to attend in-person classes.
What educators now need to understand is how to use AI-powered technologies to aid student learning and, more importantly, what the pitfalls are.
Personalised learning
One of the most profound impacts that generative AI will have on education will be through personalisation. AI-powered tools can adapt better to each student’s individual needs and abilities, be it speed, learning style, or level of understanding. This level of personalisation will lead to better learning outcomes, especially for students with greater needs. What does this mean in real terms?
AI will help create assignments and learning resources based on every student's needs. Imagine having a customised lesson plan for every student based on their unique strengths and weaknesses, no matter the class size.
The fact that AI will aid in personalised learning does not mean it will make the educator’s role obsolete or less important. Instead, it will empower educators who adapt and use it as a tool to their advantage. AI will help educators better analyse students' performance and abilities, and create customised learning resources, lesson plans and assessments.
There are already many AI teaching tools that educators can use to provide better learning experiences. Apart from helping to create more customised (and, in some cases, superior) content, these tools can also help free up teachers’ time, saving them from repetitive administrative tasks.
Improving assessment
While the way students are assessed has changed little over the years, artificial intelligence could transform how they are graded, by identifying patterns in learning that may not be apparent to teachers, making assessment more personalised and accurate. AI-powered tools will use machine learning algorithms that can dive deep into each student’s paper and then come up with individualised and objective feedback.
This method might make assessments more objective and less prone to individual biases. However, on the flip side, the data it's been trained on could have inherent, and invisible, biases.
Despite the fears, some institutions are already adopting automated scoring. The University of Michigan's M-Write program is one example. Powered by generative AI, the automated scoring helps to identify the strengths and weaknesses of a student's writing, and then provides valuable feedback that is aimed at helping students improve.
Inaccuracy, misinformation and bias
For all its magic, ChatGPT also comes with its set of limitations and challenges. It's essential to know the pitfalls and not trust it blindly. AI models, no matter how “intelligent” and “creative”, are not the human brain. They provide responses based purely on patterns in the data they've been trained on and lack a deep understanding of the subject.
The fact that ChatGPT itself warns you about the possibility of giving out wrong information is telling. The input box of ChatGPT lays out a caveat before you enter a prompt: “ChatGPT may produce inaccurate information about people, places, or facts”.
The chatbot can generate inaccurate information, and do so in a way that makes it hard to identify. Because of its polished and authoritative tone, some tend to take its output as truth.
AI, like humans, has biases, and they sit within its dataset. If we consider bias as a pattern, AI will pick up those patterns and replicate them. For example, ChatGPT has noticed a pattern within its data that writing about certain professions more often chooses one gender over another. When asked to write about someone working in such a career, it may then choose the most probable gender based on its data, although that appears to have been rectified recently.
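A toy sketch of how that happens, with invented counts: if a probability-driven model always picks the most frequent option in its data, any skew in the data becomes a rule in the output.

```python
# Bias as a pattern (counts invented for illustration): a model that always
# picks the most probable option will reproduce any skew in its training data.
pronoun_counts = {
    "nurse": {"she": 90, "he": 10},
    "engineer": {"she": 15, "he": 85},
}

for profession, counts in pronoun_counts.items():
    total = sum(counts.values())
    pick = max(counts, key=counts.get)
    print(f"{profession}: picks '{pick}' ({counts[pick] / total:.0%} of mentions)")
```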
Its safeguards can also be sidestepped with cleverly worded prompts. OpenAI has worked to remove biases and block certain offensive or illegal content. For example, if given the prompt to list pirating sites, ChatGPT will refuse. Until recently, however, if asked to provide the addresses of pirating sites a person should avoid, it would list them. This specific “reverse psychology” prompt has since been patched.
Another significant concern with AI is the “black box” challenge, whereby input goes in and output comes out, but we don’t know how or why the model reached its conclusions. To combat this, some are pushing for more transparency in AI.
Pitfalls in grading
When it comes to assessments and grading, there's a risk that ChatGPT could be biased against certain groups of students. Moreover, academics have warned that the AI model could lean towards awarding higher grades to students who write in a style the software is more familiar with.
Here’s another challenge: most automated grading tools work off the dataset they’ve been fed, including past papers. What happens when that data includes papers a previous assessor graded incorrectly or unfairly, or papers that are simply outdated?
Then there is the argument, which also needs addressing, that automated grading misses finer nuances and context.
The way forward
The good news is that education has been here many times before with other technological breakthroughs. Here is a tool that can be immensely useful and revolutionise the way we learn, so use it we must, with caveats and balance.
Our response to ChatGPT should be much like it was to Google when it first came out: it’s a great resource, use it with caution, use it fairly and do not trust it blindly.
Generative AI is intelligent and creative. Turing would have been pleased. It has the potential to help us learn and work better and smarter, reduce our workload, and help us collaborate like never before. However, it cannot, and must not, replace our efforts and critical thinking.
Generative AI must not replace higher thinking skills, it must augment them.
Love it, hate it, ignore it we can’t
AI is now an integral part of our lives and, whether we like it or not, of a student’s academic work. Institutions need to evaluate their environments and develop a set of rules that will enable students to use AI responsibly and develop a sense of information literacy, especially in a climate of frequent misinformation.
This is a big opportunity for educators to collaborate and work with AI. We don’t need to fear it, nor do we need to embrace it blindly. We need to use it as an assistive tool that can support us in all we do, from teaching and evaluating to processing vast amounts of information and uncovering patterns that humans alone could not.
With the arrival of ChatGPT, we’re asking the same question that Alan Turing did seventy years ago: “If a machine can think, it might think more intelligently than we do, and then where should we be?”
Only time will tell.