Artificial intelligence

Is our embrace of AI naïve and could it lead to an environmental disaster?

26 Jan 2026

Johan Hansson says it is dangerous to treat artificial intelligence as a magic wand and thinks researchers should create their own AI tools that they can control better

Jumping on the bandwagon: Despite the potential benefits of artificial intelligence, are we being too quick to embrace the technology? (Courtesy: Shutterstock/khunkorn Studio)

According to today’s leading experts in artificial intelligence (AI), this new technology is a danger to civilization. A statement on AI risk published in 2023 by the US non-profit Center for AI Safety warned that mitigating the risk of extinction from AI must now be “a global priority”, comparing it to other societal-scale dangers such as pandemics and nuclear war. It was signed by more than 600 people, including the winner of the 2024 Nobel Prize for Physics and so-called “Godfather of AI” Geoffrey Hinton. In a speech at the Nobel banquet after being awarded the prize, Hinton noted that AI may be used “to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim”.

Despite signing the statement, Sam Altman of OpenAI – the firm behind ChatGPT – has said that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years and to “win the AI race”. AGI is predicted to surpass human cognitive capabilities at almost all tasks, but the real danger comes if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, such a system would be impossible to control. AI companies resist any regulation, and their business model is for AGI to replace most employees at all levels. This is how firms are expected to profit from AI, since wages are most companies’ biggest expense.

AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanisms make even today’s AI work – just as one cannot tell what you are thinking from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned for the near future?

AI has some practical benefits, but too often it is put to meaningless – sometimes downright harmful – uses, such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as one without. AI already consumes 5% of all electricity in the US, and by 2028 this figure is expected to reach 15% – more than a quarter of what all US households consume. AI data centres are also more than 50% more carbon intensive than the rest of the US’s electricity supply.

Those energy needs are why some tech companies are building AI data centres very quickly – often under confidential, opaque agreements – for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil fuels, completely contrary to the Paris Agreement’s goal of limiting global warming. We must allocate Earth’s strictly limited resources wisely; what is wasted on AI should instead go towards vital needs such as tackling the climate crisis.

To solve the climate crisis, there is no need for AI at all. The solutions have been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade and reforming the economic system away from its dependence on growth. The problem is that these solutions are not implemented, because of short-term, selfish profiteering – which AI only exacerbates.

Playing with fire

AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast and total embrace of AI. Universities are known for wise reflection, yet worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be heading in the wrong direction – or crash and burn entirely.

Why, then, should universities and other organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can, hopefully, control better? Today, ever more money and power are transferred to a few AI companies that transcend national borders – which is also a threat to democracy. Democracy only works if citizens are well educated, committed and knowledgeable, and have influence.

AI is like using a sledgehammer to crack a nut. Sometimes a sledgehammer may be needed, but most of the time it is not – and is instead downright harmful. Happy-go-lucky people at universities, in companies and throughout society are playing with fire without knowing the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.

Wise reflection today on our relationship with AI is more important than ever.

Copyright © 2026 by IOP Publishing Ltd and individual contributors