The Spotlight


Research security and the international collaboration dilemma

Universities must adopt "smart openness", shifting research security from a compliance exercise into a strategic governance priority against global espionage.

By Dr Brendan Walker-Munro, Associate Professor (Law), Southern Cross University, Australia


In brief

  • Academic research faces escalating threats worldwide from espionage, foreign interference, and intellectual property theft – yet nations and institutions differ sharply on what to protect and how.
  • The anti-money laundering approach offers a proven model: coordinated international standards, implemented nationally, with built-in accountability, all without a binding treaty.
  • Research security needs a similar risk-based approach to collaboration, mutual evaluation, and meaningful consequences for inadequate protections.

On 6 March 2026, Yunhai Li, a researcher at the University of Texas MD Anderson Cancer Center, pleaded guilty in a Harris County court in the US. His charge? Economic espionage – he had been arrested at George Bush Intercontinental Airport with nearly 90 gigabytes of unpublished cancer research and medical data on his personal devices. Li was due to fly back to China the day he was arrested.

Risks from spying in academia can cut both ways. In July 2024, Viacheslav Morozov, a professor of political science in Estonia, was convicted of espionage. His trial subsequently heard that he was in fact a Colonel in the GRU, Russia’s infamous military intelligence arm, and had been using his academic position as a cover for years, possibly decades. And just last year, a Swiss academic was arrested and deported for allegedly supplying navigation systems for drones and missiles to Iran.

These cases demonstrate an emerging trend of nations looking to boost their economic and technological competitiveness through any means possible – including stealing research from the university and higher education sector.

Globally, universities face escalating, deliberate efforts to steal commercially or militarily valuable research, repress views critical of foreign regimes, and hack sensitive databases of students, staff and research participants. This has prompted experts to ponder how to protect the university and higher education sector. The resulting discussions about “research security” – the protection of research and development (R&D) from foreign governmental access, interference or theft – are gathering pace in the US, the UK, the European Union and across the Indo-Pacific. In fact, 70 percent of OECD countries now have some form of national research security policy in place.

Research security requires a cross-border, coordinated approach between trusted nations. Here, we can draw lessons from how the world tackled money laundering: even without an international treaty, nations agreed on what is acceptable practice for banks and financial institutions, and used the world financial system to crack down on illicit behaviour. Through the Financial Action Task Force (FATF), nations that didn’t meet risk-based minimum standards were “greylisted” or “blacklisted”, meaning payments to or from those countries should be treated with caution. Countries that dealt with grey- or blacklisted countries might even end up on the list themselves.

Crucially, FATF members also submit to regular peer reviews of each other’s compliance, creating accountability without requiring any nation to surrender sovereignty over its own laws. Research security has no equivalent. There is currently no way for trusted nations to assess whether each other’s protections are adequate, or to learn systematically from each other’s approaches. Without that coordination, nations are going it alone, and neither governments nor higher education institutions are getting the settings quite right.

My publications examine research security as an emerging discipline, exploring where global approaches converge and diverge, and how nations are responding to increasingly complex geopolitical and research policy challenges.

Some countries approach research security as a form of prohibition. The US banned the export of advanced semiconductors to China in 2022, to stop China from using those chips to fuel its advances in emerging technologies with military applications, like hypersonics, genetic modification and artificial intelligence. The US also publishes annual lists of institutions with which university collaborations are explicitly prohibited. And in Canada, bans on federal research funding can be applied to universities that work on sensitive technologies in conjunction with entities on the “Named Research Organizations” list. Of the 103 organisations currently listed, 88 are foreign universities in China, Iran and Russia.

Others adopt a more collaborative and advisory model. The Netherlands, Denmark and the United Kingdom have established government contact points to help institutions assess research security risks and make informed decisions about sensitive projects.

Countries like Germany and France, meanwhile, have implemented institutional governance measures. Germany has adopted “security ethics committees” – modelled on human and animal research ethics committees – to scrutinise university proposals that pose potential risks to national security. And France has used a system called the “Protection of Scientific and Technical Potential” for nearly 15 years to safeguard its research. Academics wanting to work in “restricted zones” on the crown jewels of the nation’s research must undergo security screening by French intelligence.

Although these schemes were criticised early on for potentially damaging national innovation and competitiveness, Nature’s Research Leaders Index consistently places France and Germany among the top 10 research-producing countries in the world.

What emerges from these approaches is not a single model, but three distinct pathways: prohibition-based systems that restrict collaboration, advisory models that guide institutional decision-making, and governance-led approaches embedded within universities themselves. Each reflects different risk appetites and geopolitical realities, but all point to the same conclusion: research security is not a compliance exercise, it is a strategic capability. For universities, the question is no longer whether to engage, but how to build the governance, partnerships, and internal expertise required to do so responsibly at scale.

Research security is also an international imperative. In May 2024, the European Commission directed all 27 Member States to adopt laws and policies to “work together to safeguard sensitive knowledge from being misused”. In December 2024, the United Nations General Assembly called on nations of the world “to take concrete measures to promote international cooperation on materials, equipment and technology for peaceful purposes”, noting that they were concerned new laws were creating “undue and increasing restrictions on exports to Member States”.

So just how do we walk that fine line between security and openness? Between secrecy and transparency? Or protection and collaboration? The answer is designing systems that enable both simultaneously.

We can’t just stop collaborating with foreign nations. Some are far more scientifically advanced than we are, and we risk cutting ourselves off from developments in the latest technology. In other cases, we might be unfairly discriminating against researchers from other countries.

That’s why research security in higher education is such a tough sell. Our universities have long been considered independent institutions built on traditions of openness and intellectual independence where the freedom to publish, collaborate, and work together is a critical virtue. Researchers don’t like being told what to publish and when, and the usual tools of national security – secrecy, classification of information or data, threats of criminal offences or jail time – clash with academic principles of openness, transparency and global partnership. And we’ve made our universities into businesses, where the allure of a big foreign investment, a new international campus, or a bump in international student enrolments is too good to pass up.

But the “collaborate with everyone, everywhere” environment of the early 2000s is well and truly dead.

The first step is establishing common definitions. The US, Canada and most of the EU talk about research security, with some EU nations using “knowledge security”. The UK and New Zealand use “trusted research”, and in Australia, it’s all about “foreign interference”. It’s hard to make meaningful and practical strides in policy if we aren’t even speaking the same language.

Second, both governments and universities need to recognise that research security isn’t a choice between “open” and “secure” – it’s both. Denise Simon and Caroline Wagner, both Fellows of the Quincy Institute for Responsible Statecraft, have written about “smart openness”, which treats transparency and security as a spectrum rather than a binary.

Third, we need to think about exactly what the risk is, and to whom, in terms that academics can understand. This isn’t about harmonisation – what is considered risky research in Canada or Britain won’t be the same for Belgium or Australia, India or Taiwan. As my July 2025 report Shifting the Needle found, adversaries and competitors are playing hardball. They are deliberately stealing information, inserting malicious insiders, targeting researchers, and exploiting data, cyber and personnel vulnerabilities.

Research security must now be treated as a core governance priority. It should be embedded in institutional decision-making, from partnership approvals to research strategy and risk oversight. University leaders should be asking not only what research is being conducted, but with whom, under what conditions, and with what potential downstream implications. These questions are far easier to address proactively than the aftermath of a breach, particularly when routine research activity can carry significant strategic consequences.

Dr Walker-Munro is a Senior Lecturer in Law at Southern Cross University. His work focuses on research governance, national security law and academic freedom. He has published with ASPI, InnovationAus and Security Challenges.