Constantly evolving and ever more popular, generative AI is being used by more individuals and organisations. Text, digital art, code and even music are being created with the help of Large Language Models (LLMs) and AI chatbots such as ChatGPT. But who is the author, and importantly, who is the owner?
By Gauri Kohli, contributing writer
"As long as ChatGPT does not quote or use sources materially unchanged, then the AI is the owner."
Things move quickly in the generative AI world. This fast-paced evolution, driven by the launch of new tools like DALL-E 2, Bard and the recently released GPT-4, is creating complex issues around the output generated by these LLMs, such as ownership, attribution and copyright. One perspective is that ownership resides with the companies that invest their resources and expertise in creating these LLMs and are responsible for their development. However, LLM users also play a key role in determining the output: they feed in the queries, prompts or context that guide the model’s responses. It can be argued that their input is essential to generating the output, and that they should therefore be granted some level of ownership.
Answering the question of who owns the output is tricky, according to Professor David Epstein, Executive Director, Susilo Institute for Ethics in the Global Economy at the Questrom School of Business, Boston University. “There is no law against integrating things that we have read and interpreting that information and creating a new narrative. And that is what is newly copyrighted,” he says.
“Using that model, as long as ChatGPT does not quote or use sources materially unchanged, then the AI is the owner of that. If a user produces a paper or statement lifted directly or materially unchanged from the AI, then [the AI] should be referenced as the rightful owner of that material.”
Other experts also suggest that the question of ownership is entirely dependent on where you are located. Dr Andres Guadamuz, Reader in Intellectual Property Law at the University of Sussex, notes that in some countries, the outputs have no copyright and they’re in the public domain, whereas in others, the question remains open to interpretation. In the UK, for instance, the output belongs to the person who made it possible for the work to be created.
University of Oxford researchers, in collaboration with international experts, recently published a study in Nature Machine Intelligence addressing the complex ethical issues surrounding responsibility for outputs generated by LLMs. The study, co-authored by an interdisciplinary team of experts in law, bioethics, machine learning and related fields, delves into the potential impact of LLMs in critical areas such as education, academic publishing and intellectual property.
"People will become the editors of the AI instead of the other way around"
While other studies have focused primarily on harmful consequences and AI responsibility, this paper diverges. To claim intellectual ownership credit, or authorship, for a creation, a person has to put in a certain amount of skill and effort, explains the paper’s joint first author Dr Brian D Earp. Therefore, they must also be able to take responsibility for the creation, producing what he sees as a paradox for the outputs of human-prompted generative AI.
“Suppose you instruct an LLM to write an essay based on a few keywords or bullet points. Well, you won’t have expended much effort, or demonstrated any real skill, and so as the human contributor to the essay, you can’t really claim intellectual ownership over it,” he says.
“But at the same time, the LLM that actually produced the essay can’t take moral responsibility for its creation, because it isn’t the right kind of agent.”
As a consequence, neither entity can take full ownership of the output. Accordingly, much of the creative work generated in the coming years may, strictly speaking, be author-less.
What Dr Earp and fellow joint first author Dr Sebastian Porsdam Mann, with their collaborators, are now considering is the question of credit or blame for outputs of LLMs that have been specifically trained on one’s own prior, human-generated writing. “We argue that if a human uses a personalised LLM, trained on their own past original writing, to generate new ideas or articles, then, compared to using a general-purpose LLM, the human in such a case would deserve relatively more credit and should be able to claim at least partial ownership of the output,” observes Dr Earp.
"The LLM that actually produced the essay can’t take moral responsibility for its creation."
Can AI replace writers and researchers?
While there are issues with the accuracy and bias of the materials that AI platforms generate, there is growing speculation that these platforms could replace some of the work of writers, analysts, and other content creators. “We need to start considering that an increasing number of works are going to be generated with AI, and short of widespread use of AI detectors, this is a reality that we will have to contend with,” says the University of Sussex’s Dr Guadamuz.
Experts like Professor Epstein at Boston University believe it will replace much of the work now done by humans. “All those jobs of writers, analysts and other content creators are at risk, and it is unclear that we will need much more content that would employ those replaced. In other words, it is unlikely that work products will expand at the rate that people are replaced to take up the slack,” he says.
As far as inaccuracies and biases are concerned, ideally humans will provide oversight of the AI content generated to ensure the message they are trying to get out is both accurate and unbiased, or biased in the way they want to communicate their opinions. “People will become the editors of the AI instead of the other way around,” adds Professor Epstein.
Experts are also debating whether LLMs should be used in processes and fields that require critical decisions, such as medical care, law or finance.
"The one who publishes this content is liable, however it is generated."
Who gets the blame for damaging content?
As frequent users of generative AI already know, LLMs will at times confidently provide inaccurate or outright false information, known as “hallucinations”. Image generators, meanwhile, have tended to struggle with details like fingers. More sinisterly, however, these tools provide opportunities for fraud and hoaxes. In March, an image of Pope Francis wearing a white, puffy jacket went viral, and many believed it was real before later discovering it had been created with the AI image generator Midjourney. A month later, music artist Drake appeared to have another hit single on his hands, except he didn’t write or perform it. An AI doppelganger did.
The question of who is responsible when generative AI produces unwanted or harmful output, whether intentionally or unintentionally, remains open-ended. Should the generative AI itself, the company behind it, or the user who posed the query be liable?
“I believe that the one who publishes this content is liable, however it is generated. The publisher and author are responsible for what is published now, so that should not change just because it is generated by AI,” says Professor Epstein.
However, Dr Guadamuz, whose main areas of research are artificial intelligence and copyright, says the answer will depend on the situation. OpenAI, for its part, claims in its terms of service that it is not liable. With the consistent growth of generative AI and its expanding use, the issue of LLM output and IP ownership is set to grow even more complex.
Impacting the university business model
There has been global concern about the effects of generative artificial intelligence on the traditional model of higher education, from the reliability of essays to continuous assessment of all kinds. But less frequently debated is the impact on universities as institutions and on the people who work in them.
By John O'Leary, contributing writer
For every optimist who sees ChatGPT as the modern equivalent of the calculator, easily absorbed into a mildly tweaked curriculum, there are others who fear undergraduate education could be usurped by AI and suffer a collapse in demand from employers and prospective students alike. As ever, the truth probably lies somewhere in the middle, but few are confident about exactly where.
The impact has already been felt at some private-sector companies specialising in education. Chegg, the American online learning service, suffered a drop of almost 50 percent in its share price after disclosing a five percent fall in its subscribers following the launch of ChatGPT. Pearson, a more diversified education company based in London, saw its share price fall 15 percent immediately afterwards, although it has since recovered most of its value.
With the sudden popularity of ChatGPT, the first reaction of universities in many parts of the world was defensive. In the UK, for example, a third of the 24 Russell Group universities, including Oxford and Cambridge, banned its use for official assignments by the end of March. Now, however, many institutions are trying to strike a balance, having concluded that students are going to use it anyway and that, increasingly, employers will demand familiarity with AI for all types of roles.
New facilities
In the United States, universities with strong financial capabilities are planning for advanced AI facilities and competing for the best computer scientists to staff them. The University of Southern California is investing more than $1 billion in computing, including $10 million on a new Center for Generative AI and Society, with leaders from journalism, education, cinematic arts and engineering engaging in a review of AI’s benefits and challenges.
More US universities are following suit. The University of Florida, for example, has announced an Artificial Intelligence Academic Initiative Center to promote AI and integrate it across the curriculum. Purdue University in Indiana will recruit 50 employees for a new Institute of Physical AI, and Emory University in Atlanta, Georgia, has already employed 19 faculty for its AI.Humanity initiative, with around 50 more to come.
Beyond the US, the world’s first AI university has opened in Abu Dhabi. The Mohamed bin Zayed University of Artificial Intelligence has fewer than 100 students but the postgraduate, research-based institution has ambitious plans and the financial backing to achieve them.
The US and Chinese governments have been generous with funding for research initiatives involving generative AI, leaving universities in other parts of the world to fear that they will be unable to compete at a crucial stage in its development. The US’ National Science Foundation has allocated $140 million to establish seven national research institutes in specialist areas such as cybersecurity and AI-augmented learning.
"You are putting your data into someone else’s service."
Bumpy road ahead
Universities trying to compete in this rapidly expanding field face obstacles, however, given the global shortage of highly qualified computer scientists and the salary levels they can command outside the education sector. There are also uncertainties in most countries about the status of intellectual property both mined and produced by chatbots.
Publishers, among others, immediately pushed back against the British Government’s plans to relax IP legislation to promote innovation. A government white paper published in March, “A pro-innovation approach to AI regulation”, remains under consultation. Aiming to make the country an "AI superpower", the paper promises guidance to support AI firms' ability to access copyrighted work as an input to their models, while ensuring there are protections, such as labelling, on generated output to support the rights holders of copyrighted work.
Melissa Highton, Director of Learning, Teaching and Web Services and Assistant Principal for Online Services at the University of Edinburgh, told a conference last month that ownership issues still remained unclear when it comes to AI. If chatbots were working on a set of data owned by a university, for example, there would be the issue of whether the institution retained ownership, or whether “you are putting your data into someone else’s service”. She doubted whether ChatGPT and similar tools would continue to be free to use in the long term, adding that their high-power usage would also become an environmental issue.
Another area of inevitable controversy is the scale of job losses within universities as chatbots take over some of the functions of academics and administrators. Two-thirds of US businesses responding to a ResumeBuilder.com survey said they expected to reduce the size of their workforce in the next five years as AI takes over more roles, and unions are concerned that the same will apply to universities, especially those that are struggling financially.
The counter-argument is that AI can relieve academics of bureaucratic tasks, allowing them to guide students better and/or spend more time on research. But even this is little consolation to those in subject areas that may be regarded as superfluous as AI makes further inroads into more fields of employment.
"Most AI systems need data. Lots of it. This brings in issues of privacy and ethics, data ownership, copyright, GDPR and bias."
"The value of spending three years earning a degree... is rapidly diminishing."
Professor Nick Jennings, Vice-Chancellor of Loughborough University and an AI researcher himself, admits that there are “significant challenges” for universities in areas such as the accuracy of chatbots’ output and their relationships with humans. He told the university community in his blog: “Most AI systems need data. Lots of it. This brings in issues of privacy and ethics, data ownership, copyright, GDPR and bias. These are all genuine showstoppers if handled incorrectly.”
But he concluded that the advantages outweigh the disadvantages. “I firmly believe AI will revolutionise all aspects of university life and that we should be in the vanguard of this in Loughborough,” he says.
Nic Newman, the Founder of London-based Emerge Education, which funds education technology ventures, is confident that universities will adapt and survive. But already he sees signs that some will do better than others. “The value of spending three years earning a degree that doesn't prepare you for work is rapidly diminishing,” he says. “Higher level qualifications are absolutely needed, but the way we earn them will rapidly become more stackable, more flexible and more integrated with work.
“AI and change go together. AI won't discourage students going to university, but it will disrupt which ones people go to. We are about to have a two-tier university system - those that have responded well to the changing nature of jobs and have implemented AI in all aspects of the student experience, and those that don't.
“The unis that respond well to the opportunity AI presents to teach students more effectively - an AI coach for every student - will be the winners. Those that don't respond well have everything to lose.”