Formulating a reasonable response to generative AI has been a challenge for higher education institutions. From total ban to total embrace, universities have yet to strike the right balance for students, educators and researchers. Enter the Russell Group, which has established its own guiding principles
By John O'Leary, Contributing Writer
"The transformative opportunity provided by AI is huge and our universities are determined to grasp it."
Universities are agonising about how to react to artificial intelligence (AI) programs like ChatGPT and other forms of generative AI. The Russell Group of leading UK research universities, including Oxford and Cambridge, believe they are showing the way with a set of guiding principles that all 24 members have agreed to abide by.
The five principles are necessarily general, but cover student and staff responsibilities, as well as institutional collaboration and academic standards. In a press release, the group says: “Our universities wish to ensure that generative AI tools can be used for the benefit of students and staff – enhancing teaching practices and student learning experiences, ensuring students develop skills for the future within an ethical framework, and enabling educators to benefit from efficiencies to develop innovative methods of teaching.”
The initiative was prompted by the rapid adoption of ChatGPT by students. A survey by Varsity, the Cambridge student newspaper, found that almost half of the 400 respondents admitted to using it in work set by their tutors. Similar proportions have been reported at universities in the United States and elsewhere.
Rather than try to ban ChatGPT and other programs, Russell Group vice-chancellors hope to capitalise on the opportunities offered by AI without sacrificing academic rigour and integrity. All group members reviewed their policies and guidance in drawing up the guidelines. Dr Tim Bradshaw, the Russell Group Chief Executive, said: “The transformative opportunity provided by AI is huge and our universities are determined to grasp it. This statement of principles underlines our commitment to doing so in a way that benefits students and staff and protects the integrity of the high-quality education Russell Group universities provide.”
"The question on everyone’s lips is exactly how universities will ensure academic integrity is upheld, particularly in non-exam assessment which plays such a significant role in higher education."
More specifics needed
The five principles cover a broad range of commitments to students and staff, including supporting AI literacy, providing guidance on the effective and appropriate use of generative AI tools, and adapting teaching and assessment with an emphasis on ethical use and equal access. A commitment to academic rigour and integrity, including transparency about the use of generative AI, and a pledge to deepen collaboration with key stakeholders, such as other universities and employers, to develop best practice round out the document.
So far, the principles have been generally well received, but some AI experts have questioned whether they are detailed enough to dispel the confusion surrounding the use of AI programs. Mary Curnock Cook, Network Chair at Emerge Education, a European investment fund, as well as Chair of Pearson Education, says: "The Russell Group principles are coherent and useful. But the question on everyone’s lips is exactly how universities will ensure academic integrity is upheld, particularly in non-exam assessment which plays such a significant role in higher education. The sector will need to be more specific on this point, and soon."
Richard Mulholland, a South African-Scottish entrepreneur and the author of a recent study on the impact of AI on higher education, meanwhile sees the principles as too narrow in scope. “I feel that while the Russell Group is taking an important and necessary step with regards to generative AI, it is my belief that universities should not simply be using these tools to evolve how they educate, but how they operate,” he says. “The question should not just be ‘how can AI work within our syllabus?’, but, ‘can AI allow us to deliver our syllabus in a more efficient way, empowering our youth to get into the workplace faster, and with better and more usable knowledge?’.
“That's when this becomes transformative and exciting.”
"The question on everyone’s lips is exactly how universities will ensure academic integrity is upheld, particularly in non-exam assessment which plays such a significant role in higher education."
Looking elsewhere
The Russell Group initiative is likely to be the first of many. In the UK, Education Secretary Gillian Keegan has launched a call for evidence on the use of generative AI in education as a whole, while the University and College Union is making its own assessment of AI’s impact in further and higher education. Researchers based at the universities of Bristol, Swansea and Keele are seeking the views of academics and professional services staff on the ways in which generative AI tools are being used in UK universities and how they might affect working practices.
Meanwhile in Japan, the education ministry has asked universities and technical colleges across the country to come up with their own policies or guidelines on using generative AI. The ministry had already issued interim guidelines for schools, but said it was preferable for universities and colleges to make their own decisions depending on their courses and programs.
Harvard University is the latest to publish initial guidelines for the use of ChatGPT and other generative AI programs. A five-point email to staff asked them to “protect confidential data”, defined as all information that is not already public, and warned them that they are responsible for any content they produce that includes AI-generated material, as AI models can violate copyright laws and spread misinformation. The email reiterated academic integrity policies, and warned against AI phishing attempts, adding: “The University supports responsible experimentation with generative AI tools, but there are important considerations to keep in mind when using these tools, including information security and data privacy, compliance, copyright, and academic integrity.”
Artificial Intelligence will forever change how we teach, learn and work in the education sector. While it might be tempting to underestimate AI's impact by comparing it to earlier technological developments such as radio, television and computers, make no mistake: the era we have just entered will fundamentally change information generation and consumption as we know them.
By Cato Rolea, Senior Global Partnerships Manager (AMEA), NTU Global
How is generative AI any different?
Firstly, the current technology we see unfolding has been designed to mimic humans in writing, speaking, listening, and even singing and smelling. However well designed previous gadgets were, you could hardly have been fooled by a toaster or a calculator into thinking you were interacting with a human being, unless an actual human was controlling it in real time. With generative AI, things have taken a massive turn.
Secondly, and probably most importantly, a significant difference between previous technological advances and generative AI is that creative power now lies in the hands of the everyday user. Whilst computers and mobile phones can be used for generative purposes, the generative act itself is highly constrained by one's talent, training and technical capabilities. Moreover, most information produced is easily reproducible and, at best, of below-average quality. In more concrete terms, anyone can write a below-average book using a computer, draw an image using a phone or build an app using a piece of software; still, only the exceptionally talented or skilled will stand out and make a real-life impact. Generative AI models disrupt this 'natural order' and significantly reduce the gap in technical knowledge, talent and capability.
In the past eight months, generative AI has challenged the need for a copywriter, designer, calculator or programmer to produce at least average results. Whilst the word "average" in itself might not bear much weight, the difference between "barely below average" and "at least average" is immense. In simpler terms, it can be the difference between a fail and a pass at school, between securing and missing an interview for a job application, or between pleasing your boss with a report or not.
"At the current adoption rate of AI services, it is only a short time until we see whole sectors gradually change, and one of the first to be affected will be education."
With great power comes great creativity
ChatGPT's growth to 1 million users in just five days, and to 100 million users within two months of its launch, is further testament to this technology's stark uniqueness. By contrast, it took Instagram 2.5 months to reach 1 million users and 30 months to reach 100 million. And it is not just access to ChatGPT that has empowered everyday users with no tech knowledge, but also OpenAI opening up the APIs behind it, with which any developer can now create their own version of ChatGPT at minimal cost.
Moreover, anyone can become a developer by using ChatGPT itself, simply by asking the chatbot to teach them how to build their own version of it. The technology can then instruct them, step by step, on how to build it, and even provide the required lines of code where necessary.
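To see how low the barrier has become, here is a minimal sketch of such a self-built chatbot in Python. It is an illustration only: it assumes the official openai package (`pip install openai`) and an `OPENAI_API_KEY` environment variable, and the model name is a placeholder for whichever chat model is available.

```python
# Minimal command-line chatbot built on the same API that powers ChatGPT.
# Assumptions: the `openai` Python package is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,       # passing the full history gives the bot memory
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```

A couple of dozen lines, at a cost of fractions of a penny per exchange, is enough to stand up a working conversational assistant.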
Apart from OpenAI's ChatGPT, which currently generates text, copy and code and visualises data, other engines can generate graphics, such as Adobe’s Firefly, or images and art, such as Midjourney, DALL-E and Stable Diffusion. There are also voice generators and synthesisers such as Uberduck, which DJ David Guetta famously used to clone Eminem's voice for one of his songs, without his permission and without facing any copyright consequences. Taken together, these generative tools unleash a whole new dimension of creation available to the broad public, and the ramifications are still hard to foresee.
At the current adoption rate of AI services, it is only a short time until we see whole sectors gradually change, and one of the first to be affected will be education.
"ChatGPT and AI were not developed for education. It is our duty to see how it is possible to use this technology to our advantage rather than be disrupted and confused by it"
AI's impact on higher education and accessibility
In the UK, the Social Mobility Commission warns that there is already a big gap between privileged and disadvantaged students. Access to ChatGPT and other AI tools will only widen this gap, mainly because the better-quality platforms operate, or plan to operate, on a paid model. Such a scenario will affect every stage of education, from admissions to performance to graduation. For example, privileged average-scoring students could dramatically improve their performance in class or assessments by using AI in the right way, such as for proofreading, mind mapping and corrections, while a disadvantaged student might not have access to the same tools due to financial or time constraints.
For instance, WordTune Read is a premium AI summarisation tool that can read and summarise entire books of up to 300 pages in real time. ChatGPT has recently introduced its Plus version, which, at $20 a month, can process larger datasets, create graphs and charts, and access various plugins that browse the internet, process documents and deliver better output overall at a faster pace. Such premium tools can be game changers for how students conduct research and, indirectly, how they perform in class; if they are not widely accessible to all, they will contribute to a growing inequality gap and to the emergence of a new term: digital poverty.
AI tech companies often grapple with understanding their own algorithms, lacking a robust process for managing AI architectures. Coupled with biased datasets or false information, AI models have the potential to turn into authoritative misinformation ambassadors. This phenomenon, known as AI hallucination, occurs when a model deviates from its data parameters and confidently produces false statements. Despite ongoing experimentation and research, the cause remains concerningly unclear.
Stefan Popenici, an education expert at Charles Darwin University, warns starkly in his latest book about AI’s potential to undermine democratic principles and increase bias, and urges the education sector to find equitable ways of reforming and adopting it. Presenting at a PebblePad seminar, he stated eloquently: “ChatGPT and AI were not developed for education. It is our duty to see how it is possible to use this technology to our advantage rather than be disrupted and confused by it.”
"AI tools could widen skill gaps, potentially rendering certain skill sets obsolete."
Picture by Emiliano Vittoriosi, Unsplash
Higher expectations
The landscape is not all bleak. If adopted sensibly, AI can revolutionise teaching and learning and narrow the inequality gap. Universities currently hold massive datasets about their students, and, used correctly, AI technology can help interpret and apply them to improve the learning experience.
Imagine a world where teaching is genuinely tailor-made and standardised testing is removed entirely through the use of AI: each student is assigned an AI buddy that constantly adapts the teaching methodology and tracks progress based on multiple factors, such as learning pace, individual skills and traits, or personal commitments, that human beings cannot possibly account for in the current environment. On that basis, testing also becomes completely personalised, ensuring that the outcome aligns thoroughly with both the student's and the university's goals and expectations.
Academics’ research and collaborations worldwide could reach unprecedented dissemination capacity. With advances in real-time translation and transcription, primary sources will be considerably easier to access and examine, and research outputs easier to disseminate, summarise and adapt to different audiences, transcending language, knowledge and cultural barriers. International research collaboration, too, could reach unprecedented levels of success.
On the other hand, as roles evolve due to AI, staff may face challenges, particularly those who are less tech-savvy. AI tools could widen skill gaps, potentially rendering certain skill sets obsolete. This mirrors past waves of industrialisation in which unskilled workers were displaced. White-collar roles, like university administrators performing basic tasks, are at risk: a tech-savvy staff member could use AI to automate data collection, processing and visualisation (see the sketch below), reducing the need for specialised admin assistants. With AI models like Microsoft's Copilot integrated into the entire Microsoft 365 suite, which is widely used in the sector, the need for upskilling is more pressing than ever.
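As a concrete illustration, here is a hypothetical sketch of the kind of routine reporting script a chatbot can now draft on request; the file name and column names are invented for the example.

```python
# Hypothetical admin-report script of the sort generative AI can write on demand.
# "enrolments.csv" and its columns ("department", "year", "students") are
# invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("enrolments.csv")

# Keep the most recent year and total up students per department.
latest_year = df["year"].max()
summary = (
    df[df["year"] == latest_year]
    .groupby("department")["students"]
    .sum()
    .sort_values()
)

# Produce a bar chart ready to drop into a report.
summary.plot(kind="barh", title=f"Enrolment by department, {latest_year}")
plt.xlabel("Students")
plt.tight_layout()
plt.savefig("enrolment_report.png")
```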
Next Steps
Most universities, unsurprisingly, advise engaging with generative AI and establishing trust with students over its use. The Quality Assurance Agency in the UK warns against AI detection tools, as they can be unreliable both in their output and in protecting the data fed into them. It also cautions against banning AI technology, as a ban is likely to damage students' trust in assessment methods. Thus far, the UK consensus is that education will need to rethink assessment methods and find creative ways of implementing generative AI technologies without endangering authentic, critical-thinking-based coursework.
While academics and teachers will be challenged to redesign and adapt teaching, learning and assessment methodologies, administrative and professional services will be challenged to adapt in the face of automation and the potential replacement of many of their current duties. Now is the best time to start upskilling: anticipating the changes to come will make the transition far easier and could help futureproof employment.