The Dispatch
Ditching cognitive convenience
Adopting mindful and responsible AI in universities
To harness the benefits of AI while mitigating its environmental impact, institutions need to consider the energy cost per insight and guide usage carefully.
By Claudia Civinini
"We need to be conscious of the way we use AI tools, not just for data privacy but also for sustainability."
“AI has potential for supporting sustainability goals, but the devil is in the caveats, as usual.”
"The highest negative impact on sustainability with the lowest benefits is what I call cognitive convenience.”
"We can collect data on how people are using it and then create frameworks to help students use AI in a more sustainable manner.”
In brief
- As AI integrates into campuses, educators warn of its environmental footprint, arguing that high energy and water usage may clash with university sustainability goals.
- Many users rely on AI for “cognitive convenience”: simple tasks like routine emails, incurring high energy costs for low-value insights that are often hidden within cloud services.
- Universities must lead by adopting clear usage policies, vetting green energy sources, and encouraging students to prioritise human critical thinking over unnecessary, resource-heavy machine shortcuts.
At the time of filing this feature, a manifesto for conscientious objection to AI had been signed by over 2,700 educators in France. Launched in December 2025, it has gathered signatures from lecturers, researchers and teachers across the education sector.
The manifesto rests on three arguments against AI, touching on its social impact and the threat of misinformation. But its first argument is environmental: using AI is incompatible with international efforts to tackle climate change such as the Paris Agreement, the manifesto says. This argument alone, it maintains, is enough to reject its use.
The environmental impact of AI use has become a more common topic of discussion in the sector. While data on the impact of commonly used tools is not always consistently available, the energy use and water consumption of AI cannot be ignored.
Driven by growth in AI usage, data centres are making headlines for their impact on the environment and the communities around them. And the production, maintenance, disposal and transportation of AI’s hardware components require additional energy use and consumption of natural resources, argues Dr Yuan Yao, Associate Professor at Yale University.
According to an explainer on the London School of Economics website detailing the impact of AI infrastructure and AI-powered applications on the environment, AI regulation often focuses on privacy and ethics, neglecting the environmental aspect.
The authors argue that the future of AI is a political decision: “the choice could be made to build a future in which AI benefits society without causing irreparable damage to the environment,” they write. “Democratic deliberation and government regulation will be critical to promoting this choice.”
This is a choice and a question the sector is already grappling with.
Dr Rabab Haider, an Assistant Professor at the University of Michigan, said in an interview that we need to be conscious of the way we use AI tools, not just for data privacy but also for sustainability.
“There’s a tension in terms of trying to meet our U-M sustainability goals and decarbonisation goals while still pushing out this technology,” she said.
Roundtable discussions on AI and sustainability in higher education, held by UK non-profit organisation Jisc in 2025, also emphasised both benefit and risk. While AI’s resource usage can be substantial, it has the ability to contribute positively to environmental goals, such as climate prediction, energy efficiency and carbon footprint tracking, according to a blog summarising the events.
A need for balance emerged: universities must weigh the potential of AI tools against their commitments to sustainability.
Invisible footprints and scale creep
Arif Gasilov, Partner, Sustainability Strategy & ESG Compliance at consulting company Gasilov Group, says that a lot of his work now overlaps with how organisations adopt AI. He says he keeps seeing higher education roll out AI faster than the systems that track energy or emissions can keep up.
He notes several issues that can determine whether and how the environmental impact of AI tools can be managed by a university.
One issue is scale creep. An AI tool could be implemented with a specific function but then become standard across a programme. When this happens, Gasilov explains, staff and students rely on the tool, making it difficult to cut back even if the usage pattern is wasteful.
“Broad assistants that plug into everyday workflows tend to sprawl, because they can be used for almost anything. More task-shaped tools tend to have clearer boundaries and better levers for governance,” he says.
The environmental footprint of AI usage can be invisible, at least internally, since it sits in cloud services rather than on campus. If ownership or control of AI tools within a university is fragmented, he says, that environmental footprint becomes difficult to manage.
“IT can enable access, the department pays for the licensing, but no single group has full accountability, and because of that, when the responsibility is split, the teams cannot manage what they don’t control,” he says.
At the source
Dan Graf is CEO and Co-Founder of the AI-native carbon reporting platform earthchain. He says energy sources must also be vetted.
Cloud services need to be procured responsibly, for example from data centres which are powered by renewables and are not in areas affected by water shortages.
“The university could also host some [smaller] models on its own, managed infrastructure, whether that's their own data centre, or one that they contract directly… then they will have complete transparency over how that data centre is powered, where the energy sources are coming from,” he adds.
He says the University of Oxford, for example, has developed a Generative AI usage policy which favours running smaller models locally instead of relying on cloud services where the use of resources is opaque.
But beyond institution-wide governance, another, perhaps simpler, factor is key to mitigating AI’s environmental impact: questioning how we use AI tools and whether their use is necessary or justified.
“There are a lot of great case studies where AI is really helping companies and universities reduce their carbon footprint,” says Dr Mark McNees, Director of Social and Sustainable Enterprises at Florida State University, Jim Moran College of Entrepreneurship.
“It’s an amazing tool if it’s leveraged for the right purposes.”
The “right purposes” part is key here. AI has potential for supporting sustainability goals, but the devil is in the caveats, as usual.
If used carefully, AI can improve progress towards sustainability goals, particularly around measurement and data tracking, identifying hotspots and trends, as well as scenario planning, Graf says.
“But the energy impact and water impact are potentially huge, and the use of AI without guidance, policies and documented best practices will do damage. This impact is also difficult to account for and track,” he adds.
Apples and oranges
“A lot of people just throw AI at things now. They use very advanced models to solve really simple things… AI can be used lazily, and I think that that's a shame,” Graf observes.
In certain circumstances, he explains, it is plausible that AI can make tasks more efficient and less resource-intensive. “But we need to be really cautious about ensuring that the task that you're giving the AI to do is appropriate and is beneficial.”
The public debate around AI can at times lack nuance. It’s not uncommon to be called a Luddite for pointing out the safeguarding gaps in some commonly used tools, or the problems around privacy, bias and employment trends that AI is forcing us to confront. The same goes for its environmental risks.
But AI is not a monolith. There are extremes, such as those Graf points out: on one side, for example, the frivolous creation of deepfakes for social media – which he equates to burning crude oil in the road – and on the other using AI tools to monitor rainforest and canopy cover – which is plausibly less resource-intensive than sending research teams in person.
And in between these extremes, there is a range of different tools and ways to use them; making distinctions is key. While the technical differences are their own discussion altogether, how we implement and use AI is something we can monitor, evaluate and regulate, both at the individual and institutional levels.
As Dr Lucy Gill-Simmen and Dr Will Shüler of Royal Holloway, University of London, said about how the institution developed its policy on AI, “If we are committed to sustainability, then adoption must be weighed against environmental impact.”

Give your brain a chance
While what can be considered beneficial is subjective and debatable, a clear principle can guide the decision.
Admitting that the sector is still “figuring this out”, Dr McNees explains that a decision framework is necessary for staff and students. “Really ask the question: does this task genuinely require AI capabilities, or am I just using this out of convenience?
“The highest negative impact on sustainability with the lowest benefits is what I call cognitive convenience, and that’s just using AI as a shortcut instead of taking 30 minutes to brainstorm, for example, or using AI to generate routine emails. Then the energy cost per insight is really poor,” he says.
“I don’t think people realise how energy-intensive AI is, and I think a lot of people just use it for cognitive convenience rather than using it for complex tasks.”
Cognitive convenience is a very common aspect of AI usage – a student blog on the topic of AI and sustainability invited readers to give their brain “a chance” before using AI tools. Acknowledging and tackling it could be an opportunity to limit AI’s environmental footprint.
“AI should be used where it really supports learning and access. But you’ll find you don’t always need to use AI, at least Generative AI, in order to produce a similar kind of value,” Gasilov says.
“It shouldn’t be run by default because it’s convenient. And I’d say eliminating AI use completely is quite unlikely, but reducing wasteful or unnecessary AI use is realistic, and that’s where most of the opportunity lies currently.”

Awareness
According to Hazel Horvath, Founder and CEO of sustainability platform Ecolytics, building awareness is a first step. Through her work with Ecolytics’ Offset AI, a browser extension that helps track and manage the carbon footprint of AI tools, she says she has observed a broader awareness of digital emissions.
“I think the extremely environmentally intensive AI tools have brought more awareness in the modern conversation about digital emissions,” she says.
“We may not always think about it, but being on a Zoom call has an environmental impact. And it’s often a hidden emission.”
Awareness can spark behavioural change. Similar to putting a sign on paper towels saying ‘every one of these was a tree once’ to curb waste, ensuring people are aware of the environmental footprint of the AI tools they use could make them think about whether using them is appropriate.
Offset AI takes an analogous approach, she explains: by showing the emissions and water consumption resulting from using an AI tool, it nudges students to consider whether a search, for example, is really worth the emissions and water footprint.
But beyond individual awareness, policy is needed to provide explicit guidance.
“There isn't just one AI. There are all sorts of different models and all sorts of capabilities, and I don't think the ordinary user necessarily understands the differences between them and why you would be selective about which models you choose,” Graf says.
This is something, he says, an institution can provide specific guidance on – which models are allowed, how they are sourced and provided, and how they can be used. He envisages detailed guidance helping staff and faculty pick the best solution for each task, instead of having them do it themselves, which makes it difficult to maintain oversight of what tools are used and what their impact is.
And while guidance is essential, more data on how students and staff are using AI is also needed to build an evidence base to inform policy.
Dr McNees says: “The way I have set it up with my students is I told them, ‘just let me know how you are using it’.
“I believe, right now, transparency is the most important thing. That way, we can collect data on how people are using it and then create frameworks to help students use AI in a more sustainable manner.”
Policies that set guidance on responsible, ethical and sustainable use of AI in universities will have an impact well beyond academia.
As they are at the forefront of technology adoption while having net zero and sustainability goals in place, universities can be leaders in balancing innovation and sustainability, Horvath points out.
“The rest of the world looks towards academia to be a leader in that and to be able to provide responsible AI usage guidance and study the impacts of AI usage, whether that be the environmental piece, the social piece, what that looks like for students, and how to effectively implement that in the classroom,” she says.
“Because I think that cascades into how we educate our young people, and then obviously it cascades into the workforce and beyond.”

