AI-powered research
At a panel at the QS Higher Ed Summit: Europe 2024, speakers discuss the long-standing integration of AI in research and the potential challenges of its use within the higher education sector.
By Julia Gilmore
AI is revolutionising the way researchers conduct their work, enabling them to analyse vast amounts of data, identify patterns and make new connections that would be difficult or impossible using traditional methods. Last month, at the QS Higher Ed Summit: Europe in Barcelona, a panel of speakers discussed the transformative power of AI in research, exploring how it is accelerating discovery, enhancing communication with the public and shaping the future of scientific inquiry.
The panel begins with moderator Eng. Sofia Ribeiro, Operations Lead at DareData, raising the key issue that "a lot of people mistake AI for the large language models that are behind ChatGPT and similar programs". Although these tools are very useful for research, she says, they are mainly for communication: to enhance text or to help us write papers discussing ideas. "But these are not the tools that we use to research," she adds, asking the panel for their thoughts on how AI is being used in research.
Mercè Crosas, Head of the Computational Social Sciences Programme at the Barcelona Supercomputing Center, highlights the long-standing integration of AI in research: "Even though we talk a lot about large language models and ChatGPT, if we're talking about artificial intelligence in general, this has been used in research for quite some time, and especially in the last decade or two". However, she points out that, more recently, large language models (LLMs) are being used in almost all fields.
Crosas shares one of the social science projects for which her institution is using AI, funded by Horizon Europe, which is "helping to model the communication environment". The project looks at how people share information across digital platforms and how this fuels disinformation, polarisation and hate speech, she explains. "It uses an agent-based model, traditionally used in AI, using observational data to train LLMs."
"The issue right now is that artificial intelligence is going to cause deeper inequality in higher education, the equity is changing."
The panel also discusses the potential issues that AI raises in higher education, with Prof Dr José M. Martínez-Sierra, Director General and Provost of UPF Barcelona School of Management, commenting: "The issue right now is that artificial intelligence is going to cause deeper inequality in higher education; the equity is changing. Big research universities like Harvard, MIT and Stanford that are connected with aerospace and military funding will speed up and extend the gap with other higher education institutions. Whilst we all have the same challenges [with AI], the difference is that some are very prepared because they are in the right ecosystem."
Ribeiro also raises the issue of "fake research" resulting from AI, with concerns that the research used may not have been validated. Imran Ali-Farzal, Co-CEO of KEATH.ai, agrees that this is a legitimate concern: "We don't really know what data sets AI models are being trained on. Within institutions you have a bit more control, if you're training your own AI models. But if you're relying on OpenAI or a private entity, you don't know what's being used.
"On top of that, AIs tend to hallucinate when they don't know the answer. So, if you're querying an AI to help you with a piece of research and it doesn't know the answer, it's going to try to give you an answer. Unless you know what you're looking for, you're going to end up building on top of something that's completely made up."
Ribeiro questions the possibility of assembling a system where the authenticity of each step of the research is validated, for example using blockchain. Ali-Farzal takes a positive approach to this, while acknowledging the controversies of such a system, "largely because [blockchain] is public, which could hinder the publishers in terms of income". However, he suggests that "if all universities and research institutions were to agree that there's a public ledger to put any research that is valid and has gone through peer review, then everybody can access that and use that as your basis to check". Such a system would be "a very clear way to potentially do it, but you'd have to get everybody to agree", he adds.
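To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of append-only public ledger Ali-Farzal describes, where each peer-reviewed record is hash-linked to the one before it, so anyone can verify the chain has not been tampered with. The names (ResearchLedger, add_record, verify_chain) are invented for illustration; this is not a system the panel or KEATH.ai has built, and a real deployment would need distributed consensus rather than a single in-memory list.

```python
import hashlib
import json
import time

class ResearchLedger:
    """Hypothetical append-only ledger of peer-reviewed research records."""

    def __init__(self):
        self.blocks = []  # each block links to the previous one via its hash

    def _hash(self, payload: dict) -> str:
        # Deterministic hash of the block contents
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def add_record(self, doi: str, reviewed_by: str) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"doi": doi, "reviewed_by": reviewed_by,
                   "timestamp": time.time(), "prev_hash": prev_hash}
        block = {**payload, "hash": self._hash(payload)}
        self.blocks.append(block)
        return block

    def verify_chain(self) -> bool:
        # Recompute each hash and check the links; any tampering breaks the chain
        prev = "0" * 64
        for b in self.blocks:
            payload = {k: b[k] for k in ("doi", "reviewed_by", "timestamp", "prev_hash")}
            if b["prev_hash"] != prev or b["hash"] != self._hash(payload):
                return False
            prev = b["hash"]
        return True

# Usage: register a validated paper, then confirm the ledger is intact
ledger = ResearchLedger()
ledger.add_record("10.1000/example.doi", "Journal X editorial board")
assert ledger.verify_chain()
```

The hash chain is the part that makes the ledger auditable: a researcher checking a citation only needs the public record, not the goodwill of whoever hosts it, which is precisely the "basis to check" Ali-Farzal envisages, and also precisely why, as he notes, every institution would have to agree to use it.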
Crosas, however, disagrees with this assessment, saying that "blockchain is in some ways similar to what's happening with large language models". "We need to be careful about using it for general purposes when they are not fine-tuned by experts. Blockchain has for many years promised a lot, but it's an overkill technology for many of the things it has been applied to, and it hasn't been shown to work well for many of them," says Crosas.
She goes further to say that there needs to be "an analysis much more complete than just machine learning itself". "There could be arguments about how, with machine learning, you could still build a causality diagram and try to get causal inference from that, but the focus of machine learning in the majority of cases is prediction."
Crosas adds: "Just because we use AI in science, it doesn't mean that we only use AI in science; you combine it with many methods to have a complete picture of the world."
Another concern the panel raises is the potential cost of AI, not just from a financial perspective (ChatGPT costs around €700,000 per day to run on hardware alone, or roughly €0.36 per question), but from a social one. Dr Anita Patankar, Director, Symbiosis International University, agrees that "the training is expensive, the hardware is expensive, the whole gamut is expensive". For a higher education institution, she points out, this is going to become a matter of privilege, just as higher education at a college with a programme of one's choice already is in India.