A spectre is approaching us, and we are looking past it: the threat of a global nuclear war triggered by artificial intelligence (AI). United Nations Secretary-General António Guterres has sounded the alarm about this danger. Yet the nuclear-armed states have so far held no discussions about this potentially catastrophic risk.
They point to an informal understanding among the five major nuclear-weapon states on the "human in the loop" principle: none of them claims to use AI in its nuclear launch command systems. That claim is true, but it is misleading.
AI is already used for threat detection and target selection. AI-driven systems process enormous volumes of data from sensors, satellites and radar in real time, analyze possible incoming missile attacks and recommend options for response. Human operators then cross-check the threat against multiple sources before deciding whether to intercept the incoming missiles or launch retaliation. At present, humans have roughly 10 to 15 minutes to respond; by 2030, that window could shrink to just five to seven minutes. Final authority will remain with human decision-makers, but their choices will increasingly be shaped by AI's predictive analytics. AI could become a driving force behind launch decisions as early as the next decade.
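To make the distinction between decision support and decision-making concrete, here is a minimal sketch of such a pipeline. It is purely illustrative: the sensor names, confidence scores and threshold are invented for explanation, and no real early-warning system is claimed to work this way.

```python
# Illustrative sketch only: all names, scores and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str        # e.g. "satellite" or "radar"
    confidence: float  # 0.0-1.0: how strongly this source indicates a launch

def fuse(readings):
    """Naive fusion: average the confidence of independent sources."""
    return sum(r.confidence for r in readings) / len(readings)

def recommend(readings, threshold=0.8):
    """The AI side only *recommends*: it flags a possible attack."""
    return fuse(readings) >= threshold

def authorize(alert, human_confirms):
    """The 'human in the loop': no response without explicit consent."""
    return alert and human_confirms

readings = [SensorReading("satellite", 0.9), SensorReading("radar", 0.85)]
alert = recommend(readings)
print("AI assessment: possible launch" if alert else "AI assessment: no threat")
print("Response authorized:", authorize(alert, human_confirms=False))
```

The article's worry lives in the last line: as the response window shrinks from 15 minutes toward five, the human confirmation step risks becoming a rubber stamp of the machine's score.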

The problem with AI is its propensity for error. Threat-detection algorithms can indicate a missile attack where none exists, whether because of coding errors, hacking or environmental interference. If human operators cannot cross-verify a false alarm against other sources within two to three minutes, they may trigger retaliation. AI applications in civilian fields such as crime prediction, facial recognition and cancer prognosis run error rates of around 10%; systems used for nuclear early warning are estimated to err in about 5% of cases. Over the next decade, advances in image recognition could cut that margin to 1-2%. Yet even a 1% error rate could set off a global nuclear war.
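A back-of-the-envelope calculation shows why even a 1% error rate is alarming once it compounds over repeated evaluations. The 1% figure comes from the paragraph above; the assumption of 100 threat evaluations per year, and the independence of errors, are illustrative assumptions, not sourced statistics.

```python
# The 1% per-evaluation error rate is from the text above; the count of
# 100 evaluations per year, and the independence of errors, are assumptions.
def p_false_alarm(per_eval_error, evaluations):
    """Chance that at least one evaluation raises a false alarm."""
    return 1 - (1 - per_eval_error) ** evaluations

for years in (1, 5, 10):
    n = 100 * years
    print(f"{years:>2} year(s), {n:>5} evaluations: "
          f"{p_false_alarm(0.01, n):.3%} chance of at least one false alarm")
```

Under these assumptions, the chance of at least one false alarm passes 63% within a single year and 99% within five: the issue is not the per-alert accuracy but the number of chances the system gets to be wrong.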
Over the next two to three years, the risk will escalate with the emergence of agentic malware: worm-like programs that can breach defenses, mutate to evade detection, identify targets on their own and infect them automatically, without any external command.
There were several near misses during the Cold War. In 1983, a Soviet satellite falsely registered five missiles launched from the United States. Stanislav Petrov, the officer on duty at the Serpukhov-15 command center, judged it to be a false alarm and did not alert his superiors, averting possible retaliation. In 1995, the Olenegorsk radar station detected what appeared to be a missile attack off Norway's coast. Russian strategic forces went on high alert, and President Boris Yeltsin was handed the nuclear briefcase. He, too, suspected an error and did not authorize a response; the "missile" turned out to be a scientific rocket. Had AI been used to determine the response in either case, the result could have been catastrophic.

Today's hypersonic missiles rely on conventional automation rather than AI. They can fly at speeds of Mach 5 to Mach 25, evade radar detection and maneuver in flight. Major powers plan to augment hypersonic missiles with AI to locate and instantly destroy mobile targets, shifting the decision to kill from human operators to machines.
There is also a race to develop artificial general intelligence, which could produce AI models that operate beyond human control. Once that threshold is crossed, AI systems could improve and replicate themselves, taking over decision-making processes. If such an AI were integrated into decision-support systems for nuclear weapons, machines would gain the capacity to set off catastrophic wars.
We may have only five to ten years before the fusion of algorithms and plutonium puts humanity's survival at risk. Averting that threat requires a comprehensive accord among the major powers, one that goes beyond the slogan of keeping a 'human in the loop'. It should include measures for transparency, explainability and cooperation; international standards for testing and verifying the technology; crisis communication channels; national oversight boards; and rules prohibiting autonomous AI systems capable of overriding human control.
Geopolitical shifts have created an unexpected opening for such a treaty. Leading AI experts from China and the United States took part in several track-two dialogues on AI risks, which helped pave the way for a joint statement by then US president Joe Biden and Chinese President Xi Jinping in November 2024.

Elon Musk is a strong proponent of safeguarding humanity from the existential risks of AI. He could urge President Donald Trump to turn the Biden-Xi joint statement into a treaty. Russia would have to come on board. Until January this year, Russia refused to discuss any nuclear-risk reduction measures, including the convergence of AI with nuclear weapons, unless Ukraine was on the table. With Trump engaging Russian President Vladimir Putin in talks aimed at improving relations and ending the war in Ukraine, Russia may now be receptive to discussions.
The question is who will bell the cat. China could initiate trilateral talks, or neutral countries such as Turkey and Saudi Arabia could open new avenues. This is an unprecedented chance for a breakthrough to save humanity from annihilation. We must not squander it for want of political vision, courage and leadership.
Sundeep Waslekar is the president of Strategic Foresight Group, an international think tank, and the author of 'A World Without War'. This article first appeared under the Asian Peace Programme, an initiative to promote peace in Asia housed at the NUS Asia Research Institute.
This article originally appeared on the South China Morning Post (www.scmp.com), the leading news outlet reporting on China and Asia.
Copyright © 2025. South China Morning Post Publishers Ltd. All rights reserved.