Demis Hassabis, chief executive of Google DeepMind, the company’s AI unit, has warned that the world should treat the risks of artificial intelligence (AI) as seriously as the climate crisis. He argues that urgent action is needed to address the potential dangers posed by AI, including its possible role in creating bioweapons and the existential threat from super-intelligent systems.
Hassabis, renowned for his work on AlphaFold, a program that predicts protein structures, acknowledges the transformative potential of AI but stresses the importance of oversight and regulation. He suggests that an organisation similar to the Intergovernmental Panel on Climate Change (IPCC) could be established to oversee the AI industry. This organisation would focus on scientific research and reporting, laying the foundation for future regulatory efforts.
In the long term, Hassabis envisions the creation of an AI safety counterpart to CERN (European Organization for Nuclear Research) and the International Atomic Energy Agency (IAEA). Such an entity would conduct international research and audits to ensure the responsible development and use of AI technology.
Hassabis believes that while direct regulatory analogies for AI might not exist, valuable lessons can be drawn from existing international institutions. Last week, Eric Schmidt, former Google CEO, and Mustafa Suleyman, co-founder of DeepMind, also called for the establishment of an IPCC-style panel on AI, a concept that has received support from UK officials who believe the United Nations should spearhead this initiative.
AI offers tremendous opportunities in fields such as medicine and science, but concerns about the development of artificial general intelligence (AGI) systems with human or superhuman levels of intelligence have prompted calls for proactive discussion and research. Hassabis acknowledges that AGI systems may still be far from realisation but urges starting these conversations now.
Hassabis points out that while current AI systems do not pose significant risks, future generations of systems, with enhanced capabilities such as planning and memory, might introduce new challenges. These advanced systems have the potential for both exceptional benefits and serious harms.
A summit on AI safety is scheduled for November 1 and 2, where experts will focus on the threats posed by advanced AI systems, including their potential involvement in bioweapon creation, cyber-attacks, and evading human control. Hassabis, along with other AI industry leaders, will participate in this event.
The AI industry has gained significant political attention, especially since the public release of ChatGPT, a chatbot capable of generating highly plausible text responses to a wide range of prompts. Concerns have also arisen over AI image-generating tools that can produce realistic but misleading images. To address these challenges, Hassabis suggests the possibility of a Kitemark-style certification system for AI models, with the UK government’s Frontier AI Taskforce working on guidelines for testing cutting-edge AI models.