The View
How Generative AI Is Challenging Traditional Notions of Assessment
By Krishna Mohan
“On one hand, GenAI unsettles long-standing ideas of originality and authorship. On the other, it creates new ways to assess deeper qualities: how students frame problems, interpret AI-generated material or bring their own perspective to improve on machine output.”
“The lesson is clear: what matters is not whether AI exists, but how systems are designed to work with and around it.”
Assessment has always been central to education. For decades, exams, essays and grades have been the trusted signals of student ability, shaping everything from admissions to funding to career opportunities. Universities built entire systems around these measures, and for the most part, those systems were steady and predictable.
That steadiness is now being tested. Generative AI has entered classrooms with unprecedented speed, raising questions that go to the heart of higher education: if machines can instantly produce essays, answers, code and even research summaries, what exactly are we measuring when we assess students? Are we capturing their true learning, or just their ability to use powerful new tools?
Traditional Notions of Assessment
For decades, assessments have followed a familiar pattern. They were built around memory and recall, rewarding students who could reproduce facts accurately. They were standardised and uniform, designed to measure large groups in the same way, even if that meant overlooking individual differences. Most were summative, acting as a final score or judgment at the end of a course, with little attention to what happened in between.
They remained faculty-centred, relying heavily on the authority of the instructor as the main evaluator. And they were output-focused, placing greater weight on the final essay or exam than on the process of thinking, experimenting or creating that went into it. The system had its flaws, but it was steady. Everyone knew how it worked. That steadiness, however, is now being disrupted.

When Machines Produce the Output
GenAI changes the equation entirely. Tools that can produce polished essays, answers, coding solutions or even research summaries in seconds force us to pause and ask: what exactly are we measuring when we assess students? If the final product can be generated by a machine, then traditional testing risks grading tool usage rather than genuine learning.
For universities, this is both a challenge and an opportunity. On one hand, GenAI unsettles long-standing ideas of originality and authorship. On the other, it creates new ways to assess deeper qualities: how students frame problems, interpret AI-generated material or bring their own perspective to improve on machine output.
Imagine a physics course where students are asked to test the assumptions in an AI-generated solution, or a business class where students stress-test AI-generated market analysis. In these cases, assessment moves beyond memorisation and starts to capture the critical reasoning, creativity and ethical judgment that define real expertise. The challenge is clear: if we continue to assess only the finished product, we risk mistaking machine fluency for human mastery.
But if we adapt, GenAI can help higher education build assessments that value originality, process and discernment as much as the final outcome.

Preparing Students for the Real World
This shift forces universities to think differently about what they are really preparing students for. Employers are less interested in whether a graduate can recall formulas or produce a polished report; they want people who can question assumptions, make decisions under uncertainty and use new tools responsibly. In that sense, GenAI is not just disrupting classrooms; it is aligning assessments more closely with the real skills needed in today’s economy.
Lessons From Other Industries
We have already seen this play out in other industries. In fraud prevention, for example, GenAI is used by both attackers and defenders. Fraudsters now generate fake identities, convincing documents and phishing messages at scale. At the same time, banks and regulators deploy AI systems to detect subtle patterns of deception that humans might miss. The lesson for higher education is clear: the same technology that challenges integrity can also safeguard it. Universities that lean into this duality—teaching students how to harness GenAI while also designing safeguards—will be the ones that stay relevant.
How Universities Can Adapt
The good news is that universities don’t have to start from scratch. Many of the tools they already use for teaching and evaluation can be rethought for a GenAI era. A physics instructor, for example, could assign students an AI-generated solution to a mechanics problem and ask them to identify errors, refine the logic or explain why the assumptions don’t hold in real-world scenarios.
A business professor could provide an AI-produced market forecast and challenge students to test its validity using actual data, considering risks the model may have overlooked. In both cases, the focus shifts from producing a polished answer to demonstrating judgment, originality and the ability to think beyond what the AI can do.
This also opens the door to more collaborative and project-based assessments. Students might be asked to work in teams where GenAI is part of the toolkit, just like spreadsheets or databases once were. The assessment would then focus on how well they frame the problem, integrate AI responsibly, and build on its outputs. By designing tasks that value creativity, process and critical engagement, universities can help students graduate with skills that mirror the challenges they will face in the workplace.

Rethinking the Credibility of Assessment
The credibility of higher education has always rested on the integrity of its assessments. Employers, governments and society trust that a degree reflects genuine capability. GenAI is now putting that trust under pressure. If universities continue to grade only the finished product, outsiders may begin to question whether those grades represent student ability or simply student access to AI.
But there is a positive path forward. Institutions that embrace GenAI-aware assessment models can actually strengthen credibility. They can show that their graduates are not just familiar with new tools but capable of using them responsibly and critically. This kind of disruption, where a technology is quickly adopted for both harmful and constructive purposes, is not new.
Fraud prevention, noted earlier, is a case in point: fraudsters have leveraged GenAI to create fake documents and convincing scams, yet the same technology, applied wisely, helps banks detect anomalies, flag suspicious activity and protect customers.
Education has faced similar moments before: the rise of open-book or take-home assessments once raised concerns about fairness, but over time they proved valuable for testing how students think, analyse and apply knowledge rather than just repeat it. The lesson is clear: what matters is not whether AI exists, but how systems are designed to work with and around it.
For universities, this means creating assessments that capture how students use GenAI, not whether they can avoid it. A degree should demonstrate the ability to question, test and apply AI outputs with discernment: skills that employers across the world are already demanding.
The Role of Policymakers
Policymakers have an equally important role to play. They don’t need to dictate exactly how a physics test or business case study should be redesigned. But they do need to set the guardrails: funding pilots for AI-integrated assessment, building frameworks for academic integrity in an AI era and ensuring that access to these tools is equitable across institutions. The credibility of higher education is a shared responsibility, and policy has to evolve in parallel with practice.
Done right, this moment is not a threat but an opportunity. GenAI can help universities and policymakers together make degrees more meaningful, not less. By focusing on originality, process and discernment rather than polished outputs alone, higher education can build a system of assessment that reflects the skills truly needed in the AI-driven economy.
Krishna Mohan is a leading voice in Generative AI, specialising in AI engineering and building ethical, future-ready AI solutions.
