The Headlines
What China’s new AI guidelines mean for researchers
China's new regulations on the use of AI-generated content in research aim to preserve academic integrity. But could they stifle innovation or hurt students?
By Rohan Mehra
For almost 20 years, China has had the highest number of internet users of any country. Given that it was, until recently, the world's most populous country, this is perhaps unsurprising.
Even so, the internet in China is an outlier compared with that of many countries. Restrictions often discourage or block services ubiquitous elsewhere, such as Google, and bar access to some international sites, including Wikipedia, the go-to homework helper for students everywhere.
Given the rules and censorship synonymous with digital life in China, it's no wonder there is now also regulation around everyone's favourite buzzword of late: AI, and in particular AI-generated content (AIGC) in the context of academic research. In December 2023, China's Ministry of Science and Technology published a set of regulations for researchers who might use AIGC in their work, some carrying legal consequences that could hinder their research or harm their careers.
The regulations essentially compel researchers to clearly state the means and motives behind any use of AIGC that contributes data to published research, as well as its use in citation formatting and certain other cases. They also outright forbid the use of AIGC in administrative matters such as funding processes, and bar researchers from crediting AI systems as co-authors.
Some may wonder why such regulations weren't already in place. After all, isn't transparency of methodology, for the sake of repeatability, an essential part of the academic process? Many big-name journals in the West voluntarily jumped on the AI-regulation bandwagon in mid-2023, yet it's only now that China has followed suit. So what could this mean for domestic research, and what might its broader impact be?
“There’s pressure on Chinese researchers to bring game-changing research to market as soon as possible to win plaudits for themselves, their institutions and their country,” says Jon Y, a former business scholar turned YouTuber who makes video essays often focusing on the technology sector in Asia. “Chinese institutions and the government must see this leading to problems emerging on the ground in order to act the way they did. It’s a good, sensible set of regulations to address a growing problem.
“Considering the issues of AIGC, for example, confirmation bias or setting false premises, it does not seem to be too strict. Think of ChatGPT generating fake responses and imagine how that could affect research. I feel the main concern is to ensure proper auditing takes place; there is some subjectivity at play, after all.”
There are implications for students as well. According to the regulations, if students are caught using AIGC without declaration, they could have their degrees revoked.
Though it's early days, those in and around the AI sector, especially in research, seem broadly aligned that smart AIGC regulation is a good thing, with some caveats. Different fields use AIGC in different ways, and broad, sweeping regulations might overlook this. In fields outside the natural sciences, such as the social sciences, AIGC might find a niche that doesn't apply to disciplines more commonly associated with Chinese research institutions, such as physics, medicine or engineering.
“Language fluency and, importantly, field-specific narrative styles have been a major bottleneck for Chinese social scientists. AIGC can help with proofreading and style,” says Igor Grossmann, a professor of psychology at the University of Waterloo in Canada, who recently co-authored a paper on the unique way AI is transforming social science research.
“I anticipate a democratising effect with more Chinese scholars submitting papers to international journals. That’s a great thing, and I don’t see regulations majorly hampering that.”
But Professor Grossmann says transparency with AI is essential, likening the current situation to the “wild West”.
“Some journals ask to acknowledge use of AI, much like the new Chinese regulations, whereas others outright forbid using AI. The terrain ahead is unclear, and new norms will emerge,” he says.
“Some fields sit in wait, while others are being proactive. Though outside of the research itself, there are other concerns. I worry about how AI will impact human-human interactions, so-called second-order effects. We’ve already started to conflate the view of AI as a tool with it as an agent, so what will it do to interactions between people?”
Though it seems a little esoteric at first, this last point is something the ministry doesn't address in its guidelines. While AI is at present seen as a tool, there is a small but growing voice speaking up on behalf of as-yet-unrealised autonomous AI systems, which, some say, might one day deserve rights, or at the very least recognition as a co-author on a research paper, something the guidelines expressly prohibit.
Jacy Reese Anthis, co-founder of the AI rights organisation the Sentience Institute, notes that the recent progress of AI has taken questions raised by philosophers and legal scholars, such as what it means to be a person, and catapulted them into everyday life and government regulation.
“Spell checkers and search engines have been a part of research and writing for a long time. Now AI is becoming less of a tool and more what we might call a ‘digital mind’. What happens when they can create meaningful content without human instruction, or even have careers of their own?” he asks.
“Nobody has the answers yet, but it's essential we address these with thoughtful consideration. Otherwise we might find ourselves in a future where regulations that infringe on the self-determination or property rights of AI could become tantamount to discrimination.”
As time goes on and AI tools become both ubiquitous and indispensable, researchers barred from using certain methods might find themselves left behind while the outside world powers on. Meanwhile, those allowed to race ahead may find themselves flung far off in the wrong direction. Which path is the right one? Probably neither extreme. Thankfully rules can be amended, and guidelines updated. China is not the first to step into the AI regulation arena, so we can expect to see a lot more of it.
Footnote: In the interest of transparency, and perhaps in solidarity with affected researchers, no AI-generated content was used to write this article, though ChatGPT 3.5 was used to translate sections of the original document from Chinese into English as part of the background research.