A Stanford University survey reveals that over one-third of AI researchers believe artificial intelligence could trigger a “nuclear-level catastrophe,” highlighting growing concerns about the risks posed by this rapidly evolving technology. This alarming finding is part of the 2023 AI Index Report, published by the Stanford Institute for Human-Centered Artificial Intelligence, which explores the latest developments, risks, and opportunities in AI.
AI’s Growing Capabilities and Ethical Challenges
The report underscores the remarkable advances AI has made over the past decade, with systems now answering questions and generating text, images, and even code at levels previously thought impossible. However, these systems remain prone to hallucination, bias, and manipulation, raising important ethical questions about their deployment. These risks are compounded by incidents and controversies surrounding AI, such as a chatbot-related suicide and the creation of deepfake videos.
Concerns Over AI’s Potential Impact on Society
The Stanford survey found that 36% of researchers believe decisions made by AI could cause a nuclear-level disaster, and 73% expect AI to bring “revolutionary societal changes” in the near future. The survey, conducted in May and June 2022, polled 327 experts in natural language processing (NLP), the field underpinning chatbots such as GPT-4. The report arrives as calls for AI regulation grow louder, particularly following high-profile controversies involving AI misuse.
Public Warnings and Calls for AI Pause
Amid these concerns, prominent figures including Elon Musk and Steve Wozniak signed an open letter in March 2023 calling for a six-month pause on the development of AI systems more advanced than GPT-4. They argued that powerful AI systems should be developed only once their effects can be assured to be positive and their risks manageable.
Public Opinion on AI and Global Regulation Efforts
A public survey conducted by IPSOS, highlighted in the AI Index Report, shows that Americans are especially cautious about AI. Only 35% of U.S. respondents believe AI-based products and services offer more benefits than drawbacks, in stark contrast to 78% in China, 76% in Saudi Arabia, and 71% in India.
The report also notes that AI-related incidents and controversies have increased 26-fold over the past decade. In response, global efforts to regulate AI are intensifying. The Cyberspace Administration of China recently released draft regulations governing generative AI technologies like GPT-4 and domestic competitors such as Baidu’s ERNIE, requiring that they align with “core socialist values.” Meanwhile, the European Union has proposed the Artificial Intelligence Act, which would regulate AI applications by classifying uses as acceptable or banned.
Regulation and Control of AI in the US
While public skepticism about AI is widespread in the U.S., federal regulation is still taking shape. The Biden administration has opened public consultations on how to ensure AI systems are legal, ethical, effective, and trustworthy, a step toward more comprehensive rules to address the societal risks these technologies pose.