The Dispatch


Meet your AI applicant

Some university applicants are using AI in their applications, and most universities can’t even tell. Can the practice be tackled, and is heavy AI use hurting applicants in the long run?

By Seb Murray

"When it comes to school-specific research, we find that AI misleads applicants toward outdated, impersonal and/or overly generic information that can be a dealbreaker in admissions."
"As soon as human and AI writing gets mixed, it gets more complicated to draw a clear line."

In Brief

  • Students are increasingly using generative AI to draft and refine application essays, but universities and detection software struggle to reliably identify its influence in the final writing.
  • Heavy AI reliance risks creating generic, dull essays, stripping away the authentic voice and originality sought by admissions officers. A sizable majority of institutions currently lack formal rules on AI use in the admissions process.
  • Even sophisticated AI detectors are not a lasting solution, so universities must strengthen AI literacy for all stakeholders. Alternative paths include securing assessments or requiring applicants to disclose AI use.

This year, many would-be university students have found themselves in a new kind of collaboration: drafting essays with the help of generative AI. Admissions consultants and educators say a growing share now ask ChatGPT or similar tools to “make it sound more mature” or “tighten the grammar”.

Many do so without their institutions knowing. While hard data on AI use in admissions is scarce, broader research shows how these tools have become part of everyday student life. A 2025 survey by the Higher Education Policy Institute (HEPI) found that nearly nine in 10 undergraduates now use AI tools for assessments, up from about half the year before.

And many do so regularly: almost a quarter of students reported using it daily for various tasks, while more than half said they use it at least once a week, according to the Digital Education Council. This familiarity is likely to carry over into the admissions process.

For many university applicants, AI is simply the newest aid: a natural extension of the spellcheckers and grammar apps they already rely on.

“We often encourage checking AI for specific tasks such as proofreading, revisions and identifying word choices (e.g. synonyms), as long as the applicant independently reviews and validates the recommendations,” says Stacy Blackman, an admissions consultant who advises business-school candidates.

Her firm, Stacy Blackman Consulting, discourages using algorithms to shape personal stories. “When it comes to school-specific research, we find that AI misleads applicants toward outdated, impersonal and/or overly generic information that can be a dealbreaker in admissions,” notes Blackman.

At Dickinson College, a liberal arts institution in Pennsylvania, Seth Allen says AI use in applications is hard to spot, but is likely happening behind the scenes anyway.

“It’s rare to find an admissions application with writing that can be identified with high confidence as having originated through AI,” notes the Vice President of Enrollment Management. “Though just because we can’t see it in the finished writing, doesn’t mean it is not being used to brainstorm, refine ideas [or] develop approaches.”

Essays may look authentic even when algorithms have shaped their tone and structure. For admissions offices, that makes it hard to see where the applicant ends and the software begins.

Rene Kizilcec, Associate Professor of Information Science at Cornell University, has examined this growing overlap closely. His study published in March compared 30,000 human-written admissions essays with AI-generated ones – including some produced with added demographic details about the applicants.

The machine-made essays still sounded different from the human ones, and giving the AI extra background details about the applicants made little difference to how natural or individual the writing seemed.

Professor Kizilcec doubts that software designed to detect AI-written text will work accurately. “AI detectors are not generally reliable, and what’s worse, they are known to be biased against non-native English writers,” he tells QS Insights. “As soon as human and AI writing gets mixed, it gets more complicated to draw a clear line.”

He adds that admissions officers themselves are often overconfident about spotting AI use. “They think they know but they really don’t; [their accuracy is] only slightly better than chance. That, again, is concerning if readers have a bias against AI, and they can’t even tell.”

Universities are taking mixed approaches to AI: some are drafting formal rules on its use in applications, while others are waiting to see how the technology evolves before deciding whether new policies are needed.

Others are holding out for sector-wide standards, wary of writing policies that could date within months.

Stacey Koprince, Director of Content and Curriculum at Manhattan Prep, says: “A sizable majority of programmes we spoke with still don’t have formal rules on how applicants can use AI in the admissions process… For now, it remains a bit of a ‘wild, wild west’ with many candidates essentially calling their own shots.”

The company offers prep for admissions tests, and its advice to prospective students is simple: “We strongly encourage applicants to check each school’s individual guidelines; no one wants an application tossed over an avoidable misunderstanding,” Koprince says.

Supporters see AI as a leveller, helping students with fewer resources compete on more equal terms; critics see it as a new divide between applicants who have the knowledge and support to use these tools well, and those left behind without them.

“LLMs can help students who don’t have great resources write better essays,” Cornell University’s Professor Kizilcec says. “It can also make anyone’s essay sound generic and dull.”

Admissions officers share this concern: that overreliance on AI can strip away the personal voice and originality they look for in an application. “Many business school admissions officers have told us that they view generative AI as a valuable tool – when it’s used to support and elevate an applicant’s own work,” says Koprince at Manhattan Prep. “The key, they say, is for AI to enhance the applicant’s authentic voice, not replace it.”

Other universities take a harder line, arguing that using AI to write an entire essay, for example, crosses an ethical boundary; Koprince herself calls that a “misuse” of the tech.

“Fully AI-generated essays often read as generic and detached. That lack of genuine voice is exactly what admissions officers are on alert for,” she adds. “And they really can spot it.”

David Rettinger, Associate Chair of Psychology at the University of Tulsa, says that AI-generated personal statements pose a new kind of integrity challenge. “While it has always been possible to plagiarise parts of a personal statement or falsify one’s accomplishments, it is now easy to create a document that gives a false impression of an applicant’s knowledge and/or writing ability,” he says.

For him, detection will never be a lasting solution because the tools are evolving so quickly. “Even if detectors catch up technologically for a period of time, the cycle will repeat,” Rettinger says.

That leaves two paths for universities deciding how to handle AI use in personal statements. “If colleges wish to retain personal statements… the choices are to allow AI and request disclosure, or secure the assessment in some way.”

That could mean, for example, having applicants write their essays in testing centres, where outside tools can’t be used. “I think universities would be wise to reconsider the validity of using personal statements in university admissions,” Rettinger adds.

For Allen at Dickinson College, what matters most is authenticity. “Personal statements are crucibles for the ideas and insights applicants wish to share,” he says. “While AI can efficiently concoct a result, it’s the applicants who provide the raw ingredients and personality in the writing.”

“The best personal statements are original works,” adds Allen. “Fresh insights, fresh ideas, the delight of a certain cadence or turn of phrase in the writing can add magic to the application. AI can’t do that.”

Blackman sees a clear distinction between applicants who understand the limits of AI and use it selectively, and those who rely on it so heavily that their essays lose individuality. “Top MBA programmes… admit applicants who understand the limits of AI, know how to dig deep… and have self-awareness to recognise the ethical lines,” she says.

As universities decide how to handle AI in admissions, experts warn against overly rigid rules on how applicants use these tools. Professor Kizilcec says that if universities require applicants to disclose use of AI but have no way to verify it, those who are transparent could face bias or penalties while others go undetected.

Some experts see education as the best safeguard for fairness and integrity. That means teaching applicants to use AI responsibly, and admissions staff to judge that use fairly. “Most important in admissions is to strengthen AI literacy for both students, parents, mentors and application readers,” says Professor Kizilcec.

AI is evolving faster than most universities can adapt. As of mid-2025, fewer than a third of institutions had a formal policy on AI use, and about the same share were still working on one.

“AI is the future, and we view it as a valuable tool,” says Blackman. Comparing it to a calculator, she notes that some applicants “may default to AI because it’s free and helps with instant gratification in an otherwise stressful application process”.

Students entering university will already be accustomed to using AI. Admissions officers, in turn, will have to decide how to evaluate work shaped by it.

For Professor Kizilcec, the spread of generative tools has already reached a point of inevitability. “We live in a world now where AI writing tools are just so common,” he says, “it will soon be more work to avoid them than to use their suggestions.”