The Cover
Making Decisions
With more admissions teams implementing AI tools in their work, there are concerns to address and myths to dispel.
By Claudia Civinini
In 2023, a group of scientists tested the ability of an AI model to assess personal qualities in college applicants’ essays.
The results were encouraging: the model, trained using admissions officers’ ratings, was efficient and capable of replicating human ratings “with uncanny precision”. However, the authors concluded that an AI approach to measuring personal qualities “warrants both optimism and caution”.
Among other reasons for caution, the authors explained that algorithms make mistakes because they look for patterns, and this makes them, “by design”, blind to exceptions.
“For instance, our fine-tuned RoBERTa model gives the sentence ‘I donated heroin to the children’s shelter’ an extremely high score for prosocial purpose,” the paper reads.
“Thus, we recommend AI be used to augment, not replace, human judgment.”
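To make that failure mode concrete, the sketch below shows how a fine-tuned classifier of the kind the study describes might be called to score essay sentences. It is a minimal illustration only: the checkpoint name “example-org/prosocial-roberta” is a hypothetical placeholder, not the researchers’ actual model.

```python
# A minimal sketch of scoring essay sentences with a fine-tuned text
# classifier, in the spirit of the study described above. The checkpoint
# "example-org/prosocial-roberta" is a hypothetical placeholder, not the
# researchers' actual model.
from transformers import pipeline

scorer = pipeline("text-classification", model="example-org/prosocial-roberta")

sentences = [
    "I volunteered at the children's shelter every weekend.",
    "I donated heroin to the children's shelter.",  # the paper's adversarial example
]

for sentence in sentences:
    result = scorer(sentence)[0]
    # Because the model keys on surface patterns ("donated", "children's
    # shelter"), both sentences can receive similarly high prosocial scores.
    print(f"{result['label']} ({result['score']:.2f}) {sentence}")
```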
When we think about AI in admissions, says Tiffany Blessing, a College Admissions Counsellor with IvyWise, the quickest association is always the same: are candidates using ChatGPT to write the personal essay?
But AI tools have made inroads on the other side of the fence as well.
Half of the respondents to a 2023 survey of 399 education professionals by Intelligent.com were already using AI tools in admissions, and 82 per cent planned to use them in 2024. Two in three respondents said they were concerned about ethics.
"While that ‘augment, not replace’ mindset towards the use of AI tools in admissions seems to still be prevalent, there are some concerns to be addressed."
“Using AI tools to understand a student's profile, finding correlations between what their application file looked like and how they performed at the school, can help admissions officers inform decisions."
“It’s easy to ask ChatGPT to help answer an email – but then I am thinking: is it really me answering?”
“The only dangerous thing I see is letting AI make decisions on our behalf. It’s about fairness and ethical use of AI.”
Efficiency measures
In a profession with decision-making at its heart, determining how to use AI tools touches on delicate questions about which skills and tasks need to remain human.
While that ‘augment, not replace’ mindset towards the use of AI tools in admissions still seems to prevail, there are some concerns to be addressed. And, perhaps, some myths to be dispelled.
Rick Clark, Executive Director, Strategic Student Access at Georgia Institute of Technology, says that, in his experience, students and families are concerned and usually think that AI is used far more than it actually is. The scary narrative, he says, is that institutions are going to let algorithms make their admissions decisions with no human checks.
“Of course, I think that that would not be appropriate,” he explains.
“[But] that's not happening.”
What is most commonly happening at the moment is that institutions are starting to use AI to help with time-consuming tasks and exploit its data analysis power.
Institutions with an admissions process requiring essays and letters of recommendation, Clark explains, are using AI to pull out summaries and produce bullet points, while other institutions with a more formulaic type of process use AI to analyse the numbers, with staff members checking the results. “That’s obviously an efficiency measure,” he says.
Some parts of the process lend themselves better than others to the use of AI tools. For holistic admissions, Blessing explains, AI tools can be useful for data processing but less so for other tasks.
“If there are systems that use an admissions officer’s perspective or judgement of the context, if they put a lot more emphasis on letters of recommendation and support, or they are measuring things like engagement, then it’s really hard to integrate AI,” she explains.
At ESCP Business School in France, AI helps with logistics, explains Muriel Grandjean, the school’s European Associate Director for Admissions and Business Development. During the institution’s recruitment examination process, AI tools are used to put together jury panels, taking into account the availability of faculty, staff and alumni to provide a balanced group of representatives.
Grandjean explains that her team spends a long time on the organisation of the jury panel, with one staff member dedicated to it. “She could be looking at the big Excel file and trying to come up with jury panels from it, but the AI can do it in 10 seconds, and then we review the output. This means she can spend more time on quality tasks – such as liaising with jury panel members,” she explains.
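A toy version of that panel-assembly task might look like the sketch below. The roles, names and availability data are invented for illustration; a production tool would juggle many more constraints.

```python
# A toy sketch of assembling balanced jury panels from an availability
# table: one faculty member, one staff member and one alumnus per slot.
# All names and data here are invented for illustration.
availability = {
    "faculty": {"Prof. A": {"mon", "tue"}, "Prof. B": {"tue"}},
    "staff": {"Ms. C": {"mon", "tue"}, "Mr. D": {"mon"}},
    "alumni": {"Dr. E": {"tue"}, "Ms. F": {"mon"}},
}

def build_panel(slot: str) -> dict | None:
    """Pick one available person per role for the slot, if possible."""
    panel = {}
    for role, members in availability.items():
        candidates = [name for name, slots in members.items() if slot in slots]
        if not candidates:
            return None  # no balanced panel can be formed for this slot
        panel[role] = candidates[0]
    return panel

for slot in ("mon", "tue"):
    print(slot, build_panel(slot))
```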
The institution, Grandjean adds, is exploring a collaboration with OpenAI to simplify tedious tasks such as verifying documents, checking test scores and detecting AI-generated content.
Predicting success
Blessing’s colleague Katie Burns, a College Admissions Counsellor with IvyWise, says she has observed that more companies supporting the admissions and enrolment process are launching tools to help institutions track the student journey from application to graduation.
“Using AI tools to understand a student's profile, finding correlations between what their application file looked like and how they performed at the school, can help admissions officers inform decisions,” she says.
This system, Burns explains, could identify qualities that are correlated to student success in applicants who did not fit the preferred student profile, helping institutions find more of what she calls “diamonds in the rough”.
This is something Clark also thinks is going to become more common, as institutions use the data analysis power of AI to build student profiles to make more informed admissions decisions and provide targeted support to students.
“Let’s say we went to the same high school, and I am two years older than you,” he explains.
“I am now studying at Georgia Tech. What my academic preparation and profile looked like, and how I'm performing, are some of the best possible predictors of how you will also perform.”
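In code, the simplest version of that idea is a model fitted on past applicants’ profiles and outcomes. The sketch below is a deliberately tiny illustration with invented features and data, not any institution’s actual system.

```python
# A deliberately tiny sketch of profile-based success prediction:
# fit a simple model on past applicants' features and first-year
# outcomes, then score a new applicant. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: high-school GPA, test-score percentile, same-school flag
X_past = np.array([
    [3.9, 0.95, 1],
    [3.2, 0.60, 0],
    [3.7, 0.80, 1],
    [2.9, 0.40, 0],
])
y_past = np.array([1, 0, 1, 0])  # 1 = completed first year in good standing

model = LogisticRegression().fit(X_past, y_past)

applicant = np.array([[3.6, 0.75, 1]])
print(f"Predicted probability of success: {model.predict_proba(applicant)[0, 1]:.2f}")
```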
Asked whether profiling could be misused, with schools recruiting only the same type of student with “successful characteristics”, Clark says it is possible.
“There could be misuse or abuse of any number that is already occurring with the data we currently have. But I also see some real benefits for those institutions trying to have a more balanced class,” he adds.
“I see this helping us to say, ‘All right, here’s a student who does fall outside of our highest profile, but we also have great confidence in their ability to make it if we put the right things in place for them.’”

Insights
At IÉSEG School of Management in France, AI features in the asynchronous video interview international applicants are required to undertake.
The interview is recorded on camera: students face approximately five to six questions tailored to the programme, with one minute to read each question and about one and a half minutes to reply, Emilie Lagorsse, the institution’s Head of International Recruitment, Promotion and Admissions, tells QS Insights Magazine.
“We are not looking for standardised or well-prepared answers. We tell candidates that we want a natural conversation. We want to see the candidate’s personality,” Lagorsse explains.
The system flags up when candidates are speaking too fast, as if they were reciting a text or reading from the screen. But this is not the most important role AI plays in the interview.
“The system is equipped with what we call ‘insights’,” Lagorsse says.
“This means that if an applicant is speaking for more than five minutes, the system is able to identify the five OCEAN traits according to the vocabulary they use.”
“The candidates don’t know about this personality test,” she adds.
OCEAN stands for openness, conscientiousness, extraversion, agreeableness and neuroticism, commonly known as the Big Five personality traits. The model has been widely researched, and, while it has received some criticism, such as oversimplifying personality and the potential for cultural bias, it is the basis of personality tests commonly used in a range of sectors, including research and human resources.
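The vendor’s actual method is not public, but closed-vocabulary scoring is a common baseline for this kind of analysis. The sketch below counts trait-associated words in a transcript; the word lists are invented for the example.

```python
# A much-simplified illustration of closed-vocabulary Big Five scoring:
# count how often words associated with each trait appear in a transcript.
# Real systems use far richer lexica and models; these lists are invented.
TRAIT_LEXICON = {
    "openness": {"curious", "imagine", "creative", "explore"},
    "conscientiousness": {"organised", "plan", "careful", "deadline"},
    "extraversion": {"team", "energetic", "talk", "outgoing"},
    "agreeableness": {"help", "kind", "share", "support"},
    "neuroticism": {"worry", "stress", "nervous", "afraid"},
}

def score_traits(transcript: str) -> dict:
    words = transcript.lower().split()
    total = len(words) or 1  # avoid dividing by zero on empty input
    return {
        trait: sum(words.count(w) for w in lexicon) / total
        for trait, lexicon in TRAIT_LEXICON.items()
    }

print(score_traits("I plan every project and love to explore creative ideas"))
```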
The video interviews, Lagorsse explains, are reviewed by human eyes. Asked if staff members ever disagreed with the AI’s perspective on a candidate’s personality, Lagorsse says that it hasn’t happened yet.
Applicants, she adds, value the flexibility of being able to record the interview at their own convenience, while the members of the admissions team now have more time for other tasks, including dedicated slots for students to book meetings and ask questions.

No time for experiments
Not every institution uses AI tools in admissions, and definitely not to the same extent.
Privacy, discrimination, bias and losing the personal touch are usually the main concerns around using AI in admissions, according to Dr Tamim Elbasha, Associate Dean – Learning and Quality Development at Audencia Business School in France.
Dr Elbasha, whose institution has not yet experimented with AI in admissions, adds that pressure in the sector sometimes works against innovation.
“I think people under pressure try to do their best, exploiting what they have to get better results, and have less time for exploration and experimenting with something new,” he says.
He explains that AI could be used more to spot missing translations, summarise large documents, and help candidates fill out forms, keeping admissions decisions to humans. This could take away some of the ethical concerns, he says, as AI tools would only summarise factual documents with no judgment involved.
Other challenges to implementing AI tools, according to the interviewees, include costs, the availability of technology, and the fact that the higher education sector is notoriously slow to move. In some instances, though, limiting AI use is also a conscious decision to ensure communication remains human and students receive quality responses.
“It’s easy to ask ChatGPT to help answer an email – but then I am thinking: is it really me answering?” Grandjean says.
The automation of certain tasks may also make the process more impersonal and harder to navigate for students.
A lot of colleges, Blessing explains, may have students enter their grades or classes in the system. A small mistake – such as putting a poetry class under performing arts instead of English – could go unnoticed and be harder to fix.
“That’s the only place where I may just say to students: ‘slow down, because a human would catch that poetry may not be a performing arts class, but that first pass may be done by non-human eyes’,” she says.
“Many things feel automated – think about the bots answering questions on websites. Sometimes, it’s tougher to find someone to help you troubleshoot in case you make a mistake.”

No-go areas
All the admissions professionals interviewed for this article have some red lines, tasks for which they say they would never use AI. Some of those red lines are more technical – for example, converting academic performance for international students or assessing international experiences.
Other red lines touch on more existential questions.
Dr Elbasha says he is pro-AI, but adds: “The only dangerous thing I see is letting AI make decisions on our behalf. It’s about fairness and ethical use of AI.”
For Grandjean, the existential red line is reviewing or pre-screening applications. She admits it would be easy to do – it’s a question of setting up the right criteria – but, she says, it would “kill the soul of the school.”
“Because if you are not even looking at the applications, who are you selecting? Is it the AI that’s selecting your candidates? For me, that’s the no-go,” she says.
And, she adds, human eyes may notice factors that could escape AI tools: “We make decisions the machines cannot make, because sometimes maybe candidates are a little below what we are expecting, but we may decide to give them a shot because of other factors we detect – factors that the machine cannot detect.”
What humans do best
One common concern around the dizzying development of AI tools is job loss. But the interviewees don’t share that concern.
“I think we should take a moment and look at the process of recruitment and say, okay – which process and workflow can benefit from an optimisation using AI?” Dr Elbasha says.
“But I don’t think any job will disappear in the next few years because of AI. The nature of our jobs will change using these tools.”
Grandjean says she wants to use AI tools only to support her team, especially in the tasks they may like less – such as checking that all the documents are there or calculating GPAs by hand – but certainly not to replace them.
“Once AI can perform those tasks for them, then they can spend more time with the candidates. Something the machines cannot do. I think this is the key: spending more human time with people instead of doing tasks a machine can do,” she explains.
“And because we have AI now, we need the human connection even more; we need to ensure that our candidates know it’s us behind the screen, not just some ChatGPT tool answering.”
Freeing staff members from more tedious admin tasks to spend more time with the candidates is a goal shared by all interviewees.
Dr Elbasha explains that there are emotional touchpoints in the process that are just as important as, or even more important than, other factors. This is where humans could take more time.
For example, he explains, if students encounter difficulties when applying for a visa, it would be helpful for them to have a conversation with a staff member from the institution. And the acceptance letter could be followed by a phone call.
“I think it would be much more impactful if a member of the admissions team called on the phone, even just for five minutes, to say: ‘Hey, you have been accepted, welcome!’ This would mean the world to them,” he explains.
During the admissions process itself, more time could be devoted to dialogue and interaction. “You can get the candidates to talk about not their GPA and CV, but about what they want to achieve and their aspirations, for example,” Dr Elbasha explains.
Extra human time would be devoted not only to communication and relationships but also to another human ability: making nuanced decisions.
Clark says there is a certain gut-feeling distrust of AI tools, fuelled by the belief that humans should do all the work, from the more formulaic to the more complex elements of it.
However, he maintains, the likelihood is that, if trained well, machines could do certain basic tasks better than humans.
“We can let the advent of AI help with things that sometimes, let's just acknowledge, humans don't do best,” he says.
“And then let's let humans do what humans do best. And that is making nuanced decisions and looking at context.”
Whose intelligence?
There is one area of concern about using AI tools in admissions for which humans don’t have a great track record either: bias and discrimination.
While AI biases mirror human biases, it’s crucial to acknowledge the potential for AI tools to entrench the problem.
In 1988, a British medical school was found guilty of discrimination. The computer programme used to determine which applicants would be invited for an interview discriminated against women and those with non-European names.
However, as the authors of a 2019 article for the Harvard Business Review wrote, that programme had been developed to match human admissions decisions, and the school also had a higher proportion of non-European students compared to other medical schools in London.
“Using an algorithm didn’t cure biased human decision-making. But simply returning to human decision-makers would not solve the problem either,” the authors wrote.
“Thirty years later, algorithms have grown considerably more complex, but we continue to face the same challenge. AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.”
Part of the solution could be developing strategies to use AI tools to address bias. With AI streamlining tasks, time could be freed so that applications are reviewed by more admissions officers. This, Burns says, would be a way to use AI to push back on human bias.
An essential part of addressing the issue boils down to making decisions in line with the sector’s values.
“It’s incumbent on us as decision-makers to train our team or our tools,” Clark says. “When it comes to hiring our staff, developing our rubrics, norming our process – if we can have diverse perspectives and mission-minded inputs, it’s never going to be perfect, but we can make progress.”
But to ensure that, with the advent of AI, the sector can make progress in tackling discrimination and bias, rather than going backwards, fundamental questions need to be asked.
“When you think about AI, it’s hard to remove that conversation around – whose intelligence? AI tools reflect the standard of someone who trained the bot to sound like you; but does it sound like me?” Blessing says.
The risk of bias, she explains, is part of the reason why admissions officers are hesitant to use AI for tasks such as evaluating letters of recommendation and essays.
There is a concern that DEI is going to be lost or not acknowledged, she adds: this is particularly important in a college campus environment, a community that is carefully curated to engage a diversity of thoughts and ideas.
“With the use of AI, are we really getting a diversity of voices?” she asks.
“And how do we train the bots to discern all the voices they can encounter?”