The Road
It’s time to take the reins
The rapid pace of AI development pushes universities to innovate around AI ethics, as discussed by a panel at the recent EduData Summit 2024.
By Rohan Mehra
The disruptive capacity of AI is no longer merely inevitable; it is already being felt, and the higher education sector is not immune. While there are unique challenges ahead, there are also opportunities to shape how universities deal with the ethical and legal issues AI raises. A panel at the 2024 QS EduData Summit in Washington DC brought together four leaders from different higher education institutions and organisations to share their thoughts and experience on the ever-changing intersection between AI and academia, with a special focus on generative AI.
Prompted by questions from moderator and QS regional director Dr Ashwin Fernandes, the panellists took turns giving examples of how AI tools impact study and research. Dr Ramakrishnan Raman, Vice Chancellor of Symbiosis in India, explains that since the dawn of Google in 1998, students and researchers alike have found it relatively trivial to search for and collate research content for publication, compared with the days when one had to visit a library in person and search using its systems. This naturally gave rise to a wave of plagiarism, though tools to detect it were quickly established.
Then, barely a few years ago, generative AI came on the scene, making it technically harder to verify originality. Detection remained more or less possible, ironically by using large language models similar to those used to generate the content in the first place. But now there are even generative assistant tools to humanise generated content, which heightens the challenge further. It is a technical arms race in which each stage of advancement arrives faster than the last. While Dr Raman points out that this arms race is responsible for ever-increasing retractions of papers by the academic press, he is not concerned, as it shows that the rigour at the heart of the academic process is alive and well.
In fact, Dr Raman and the panel highlight ways that AI, especially generative AI, can play a positive role in reducing inequalities in academia. Professor Ghassan Aouad, Chancellor of Abu Dhabi University, describes how completing a PhD inevitably takes time, but AI can cut this down without compromising the quality of the output by helping at certain stages, such as the literature review. As many students work part-time to support themselves during study, this saved time could ease that burden. There are more speculative benefits too, such as the ability of less affluent institutions to provide more personalised education at scale, via chatbots for example, though that term already feels outdated in the age of ChatGPT. And of course, the decades-old promise of technology to reduce administrative overheads could also lighten universities' financial burden, lowering costs such as tuition fees.
"Even those in the higher education sector see AI as just another tool of convenience or malice to be governed like any other. But the generative nature of AI is different, and we can’t just look at historic technical adoption curves"
Cameron Mirza, Chief of Party at IREX, the International Research & Exchanges Board, reinforces the idea but also offers counterpoints on how the use of AI can increase inequalities, as ultimately it is not without material expense. The 'have and have not' disparity could widen further because some factors used by ranking agencies correspond to the sheer volume of research output, and generative AI has already started driving down the time to publish for those with access and an impetus to use it. Mirza's overarching point is that institutional oversight needs to be holistic and careful not to simply champion this emerging and evolving technology. Indeed, it is the concept of institutional governance of AI that coalesced as the core topic of the conversation: how are universities supposed to manage AI?
Among the panellists there is broad consensus that governance is needed, though ideas vary about how it might be achieved. Professor Aouad suggests universities will soon require dedicated AI committees and vice presidents for AI to address these challenges effectively, analogous to how many institutions now have environmental or equality committees. Jenny Cooke Smith, a Senior Director at the US-based Council for Advancement and Support of Education, shares findings from her research on general ethics committees at universities: institutions with established ethical policies see reassuringly high adherence to their guidelines, indicating the effectiveness of such measures when implemented. However, at institutions with any kind of AI ethics policy, fewer than a third of those working in marketing and communications were even aware such policies existed, a lack of visibility that arguably renders the policies non-existent in practice.
Part of the problem, she feels, is that even those in the higher education sector see AI as just another tool of convenience or malice, to be governed like any other. But the generative nature of AI is different, and, as she puts it, we can't just look at historic technical adoption curves. Dr Raman adds to this by stressing that the exponential pace of AI development pushes universities not just to create AI ethics boards but to adapt them to a rapidly changing landscape. Universities are not famous for moving quickly, but in this situation adaptability will be key, and they may have to revise policies throughout the year to address new and unforeseen tools and techniques.
Smith helps draw things to an interesting conclusion by describing the successful case of the Instituto Sorocaba in Brazil, which chose to recontextualise AI, its impacts and its stakeholders as IA – Intelligence Augmentation. Through this lens, the institution focuses on empowering students, teachers and even parents with training and guidance on how to make AI work for them, while avoiding the pitfalls and temptations that can come with it. Though it is a small institution and just one example of how things might look, the panel seems inspired by this approach. Given the speed of change in AI right now, ethical frameworks, especially those relating to curriculum design and assessment methods, will likely appear where they haven't already, and sooner than you might think.