Developments in AI are moving fast, but Matin Durrani is not convinced that top-down regulation is the best approach
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That was the terrifying-sounding statement signed by more than 350 business leaders and researchers at the end of May. Released by the non-profit Center for AI Safety, the statement counted among its signatories the astronomer Martin Rees, who is known for his deep thoughts about the future of humanity.
Rees later explained that he was less worried about “some super-intelligent ‘takeover’” and more concerned about the risk of us relying too much on big interconnected systems. “Large-scale failures of power grids, Internet and so forth can cascade into catastrophic societal breakdown,” he warned in the Times. Rees – like many others who take an interest in such matters – believes we need regulation to control AI.
But who should do the regulating? I’m not sure I particularly trust tech firms to act in our best interests, while politicians are notorious for creating rules that are cumbersome, arrive too late and miss the point. Some say we should leave it to international bodies such as the United Nations – indeed, the EU is already planning what it calls “the world’s first comprehensive AI law”. And anyway, is regulation even possible now that the AI genie is out of the bottle?
Sure, there are concerns. Large language models like ChatGPT are ultimately trained on data and information created by people, but it’s unclear what sources they have used and credit is rarely given. There have been instances of ChatGPT “making up” journal references, which can erode trust in what we see, watch and read online.
With events moving at such a fast pace – the latest version of ChatGPT is due out shortly – I’m not sure anyone really knows what the future holds. Many UK universities have reacted by banning students from using AI tools, worried that students might gain an unfair advantage in their coursework. Essentially, universities are trying to buy themselves breathing space while they work out what to do in the long term.
But banning AI isn’t wise, especially as it offers so many exciting possibilities. As well as addressing pressing global issues such as climate change, AI could help with day-to-day tasks: imagine a student taking a photo of a lab experiment and asking if they’ve set it up properly. AI tools could create videos or podcasts as teaching aids. Or write code. Or spot patterns in data. They could suggest titles for research papers or summarize talks.
AI is here to stay; ultimately, it’s up to us to use it wisely.