The View
Effective & Ethical AI: Asking the right questions
AI promises to revolutionise classrooms, but educators must pause, ponder and pave a path towards responsible implementation.
By Borja Santos Porras, Associate Vice Dean for Learning Innovation, IE University
This past year has been incredibly exciting – and fun – in the world of education, thanks to the rapid emergence of new and relevant technologies. ChatGPT arguably caused some angst within the academic community, but it has also given the sector a renewed vigour. It has forced educators, students and institutions to look at learning in a new way.
The integration of AI tools in higher education, however – as in all industries – cannot be done with wild abandon, regardless of how tempting that might be. No, it requires significant forethought to ensure that these technologies and tools are used in an effective and, most importantly, ethical manner.
The use of AI in education presents challenges alongside many opportunities, so educators must take stock before implementing it within curricula, programmes and teaching. There are still many questions that educators and educational organisations must address in order to properly navigate and capitalise on an ever-evolving technological landscape. Here are some suggestions:
1. How can we avoid the implications of AI’s current monoculturalism?
Given that half of the content on the Internet is in English, followed by Russian (less than 6 percent) and Spanish (4 percent), and that AI is trained using this data, it makes sense that the cultural norms associated with those prominent languages bias the content. Furthermore, as Jill Walker Rettberg of the University of Bergen in Norway argues, while ChatGPT may be trained on a variety of multilingual data, it tends to reflect and promote American norms and values, potentially perpetuating that culture. There are several initiatives dedicated to developing AI resources in other languages, yet there remains ample room for research into resolving these implications.
2. How much of our resources is going to teaching machines how to learn vs teaching humans how to learn?
The evolution of large language models (LLMs) has focused the debate on how to feed and teach these machines. Fair enough, but we should be careful not to neglect the fundamental aspect of education: teaching humans how to learn effectively and independently. In his book ¿Cómo Aprendemos? (How Do We Learn?), Héctor Ruiz Martín of the International Science Teaching Foundation argues that it is essential for teachers to understand the cognitive and emotional mechanisms that enable learning so as to teach students how to study, learn, and memorise. As LLMs continue to learn and evolve, enhancing educational practices, it’s important that the essential learning needs of our students are kept front and centre.
3. What skills are necessary for students to develop in order to lead and work in this era of AI?
Most leaders today will need to work with AI in some way or another. Therefore, I see four basic and necessary competencies to do so: a) general knowledge of AI, which includes understanding what it is and how it works – the hardest part being learning how to keep up to date in a constantly changing technological environment; b) an understanding of data-driven decision-making, including the ability to interpret and use data, and to pinpoint and mitigate the biases within it; c) a steadfast commitment to the ethical and responsible use of AI, addressing issues such as privacy and fairness; and finally, d) critical thinking to be able to ask the right questions.
4. How broad is the scope and potential of AI in education?
Ethan Mollick and Lilach Mollick of Wharton have proposed a wide variety of cases where AI could be very useful. For example, it can function as a personal tutor, catering to the individual levels and needs of each student, or generate personalised feedback for learners. However, there are other dilemmas ahead. Is it appropriate for an LLM to assign the final grades for academic assessments? How can we identify when this is unfair or wrong? Can it assess creativity? In what cases would students consider the automation of their results legitimate?
5. How do we address plagiarism and academic integrity in regard to AI?
There are applications that can detect instances of plagiarism, but they still have a high error rate, producing both false positives and false negatives. Thus, would it not be more effective to focus on teaching ethical values for the responsible use of AI? How do we develop that awareness, and how do we recognise and act when integrity is not part of the equation?
6. Is it possible to become overdependent on ChatGPT?
Excessive use of LLMs could curtail the development of research, critical thinking and problem-solving skills. The ease and speed with which ChatGPT provides information could also reduce student motivation and the perceived need to learn and retain information. Students may even feel insecure about their creativity and originality once they get used to relying on AI-generated responses. It’s essential that educators learn how to guide students in using AI responsibly. For instance, in my public speaking and speechwriting course, I demonstrate to students how they can use chatbots and AI for idea generation, while also highlighting their limitations and ethical implications. Additionally, I guide them in crafting more personalised and effective speeches by using their unique language, incorporating their original personal stories, and developing their own distinct style – elements that contribute significantly to their charisma. This approach emphasises the value of individual creativity, which can be more challenging to achieve with AI.
7. How do biases affect AI, especially in LLMs that identify patterns using large datasets?
If the data used to train an LLM contains, for example, gender or racial stereotypes, or dominant cultural and geographic perspectives, these biases will be reflected in the model’s results. The question then becomes how to ensure that minority perspectives, which bring diversity and sometimes highlight less acknowledged and hidden truths, remain part of the education process.
8. What should be done about algorithmic hallucinations?
We would not have guessed it a year ago, but the AI hallucination is now a well-understood concept in public discourse. Not only do chatbots get things wrong, they can sometimes fabricate information altogether. If such poorly grounded content circulates, it can lead to disinformation in searches, distorting reality and creating general confusion. How can we reduce AI-generated distortions to ensure they don’t impact the quality of learning materials or applied research content?
Without a doubt, AI has already had a transformative effect on education and this trend will only continue. We therefore find ourselves at the intersection of unprecedented opportunities and daunting challenges. It’s important to ask the right questions and to move forward with careful consideration of purpose, ethics and resources – always with the student in mind. It is the role of educators and academic institutions at large to lead the discourse around learning and AI so that we can advance with integrity, inclusivity and sustainability.
*This article was originally published on IE Insights, the thought-leadership publication of IE University.