The WUR 2026 Supplement
Emerging Metrics in University Success
While university rankings have long been a dominant voice in defining university prestige, alternative metrics are beginning to take shape and gain attention. What other ways can reputation be measured?
By Seb Murray
“As students seek clearer links between study and employment — and as employers look for signals beyond brand-name degrees — skills-based assessment is emerging as a parallel, if fragmented, measure of value.”
Over the past two decades, global university rankings have exerted a steady influence on international higher education. Universities have factored them into strategic planning, prospective students often use them as shorthand for prestige and governments in several countries have incorporated them into visa schemes, funding models and performance frameworks.
The appeal is obvious: rankings offer comparability, visibility and a form of international currency. But their influence has shaped a narrow definition of what a “good” university is — one that may not reflect the priorities of all students, communities or regions.
Critics argue that rankings such as those produced by QS Quacquarelli Symonds and Times Higher Education rely too heavily on traditional measures, such as employer name recognition, research citation counts and the proportion of international students and staff.
As the cost of higher education continues to rise, and students grow more focused on value and outcomes, the focus is shifting from whether rankings are flawed to whether better ways of measuring quality are starting to take shape.

If traditional university rankings reflect institutional prestige, a new class of credentials is quietly reframing the value proposition around individual skills.
Digital badges, stackable certificates and so-called “skills passports” are now edging into the mainstream, propelled by employer recognition and growing student interest in demonstrable outcomes from online learning.
Digital education platforms such as Coursera and edX have grown strongly, offering short courses aligned to industry needs and often co-branded with major employers.
Tech firms, in particular, have warmed to these formats. IBM, Google and Salesforce now offer professional certificates and digital badges that don’t require a university degree to earn — and are increasingly accepted by employers as alternatives to traditional qualifications.
Meanwhile, financial and professional services firms such as JPMorgan, PwC and Bank of America are beginning to experiment with hiring based on demonstrated skills rather than academic qualifications — particularly for entry-level and technical roles.
Universities have begun to respond, embedding their own micro-credentials into full degrees, enabling modular, skills-oriented progression through higher education. Advocates argue this approach offers flexibility, accessibility and closer alignment with the labour market, particularly in fast-moving sectors such as data science, cybersecurity and sustainability.
But obstacles remain. The lack of standardisation across platforms makes it difficult to compare credentials. Outside of certain industries, recognition remains uneven.
Still, the appeal is growing. As students seek clearer links between study and employment — and as employers look for signals beyond brand-name degrees — skills-based assessment is emerging as a parallel, if fragmented, measure of value.
As global rankings face questions over what they measure, a parallel effort is under way to capture what they often ignore: social impact, equity and local engagement.
QS’ Sustainability Rankings, introduced in 2022, and the THE Impact Rankings, introduced in 2019, have become visible attempts to do so. QS’ Sustainability Rankings seek to highlight the commitments universities are making to sustainability, particularly around environmental, social and governance indicators. Since its first edition, the methodology has evolved and participation has grown from 700 institutions to just over 1,700.
Based on universities’ alignment with the UN Sustainable Development Goals (SDGs), THE’s framework rewards efforts around social and economic impact. The number of participating institutions has grown steadily, from 450 in the first year to more than 2,000 in 2024, reflecting appetite for a broader definition of excellence.
Beyond these rankings, other measures are emerging. Some universities now publish first-generation student data, track regional graduate retention rates, or quantify their economic contribution to underserved areas.
Participation in higher education remains below the national average in north-east England, but Northumbria University has emerged as an outlier with 40 percent of its students coming from areas with low university participation.

Andy Long, Vice-Chancellor, has said that widening access is only part of the equation: Northumbria has set a 2030 target to ensure that students from underrepresented backgrounds are just as likely as their peers to enter skilled employment or further study within 15 months of graduating.
“It’s no good just chipping away at the gap in outcomes,” Long said. “We want to eliminate it.”
Universities are under growing pressure to demonstrate value not just globally, but locally. Whether rankings can keep pace with that shift remains to be seen.
Governments are similarly beginning to look more closely at graduate outcomes and equity benchmarks. Employers, particularly outside academia, are paying closer attention to demonstrable skills over reputational proxies.
Traditional rankings are unlikely to drive the shift, as their models rely on consistency for comparability over the years. Instead, change is more likely to come in stages, through government reforms, greater transparency from universities and skills-based hiring led by employers.
Over the past two decades, global rankings have helped to shape how excellence in higher education is defined. As new measures emerge, however, focused on outcomes, equity and skills, university rankings are no longer the whole story.