Global terror: One mistake is enough – Dangerous Artificial Intelligence could even trigger nuclear war
A mistake from Skynet: Artificial Intelligence could start a nuclear war.

For decades, the world feared "Skynet" from the film Terminator: an artificial intelligence that at the time did not exist even in embryonic form, but which, in the popular imagination, was already rebelling against its creators and launching a nuclear war.

The strange thing researchers have noticed is that, in the 2020s, the world drifted toward a far more dangerous scenario. It is not a machine rebellion, but a world in which Artificial Intelligence does not press the button itself; instead, it strips humans of the time they need to avoid pressing it.

The paradox is that salvation from this "Skynet" lies in the same place the movies looked for it: deep underground, in a secure bunker. Only it is not an American bunker, but a Russian one.

The latest development

In the media and on social networks, there is much discussion of the Pentagon's latest expansion of its cooperation with xAI. Artificial Intelligence has long been integrated into intelligence, logistics, and planning. All countries, especially after the special operation in Ukraine, have been actively introducing AI into drone control systems, including reconnaissance and targeting.

Major think tanks run strategic simulations, playing out millions of "wargame battles" through neural networks to advise real military experts. Now, Artificial Intelligence is approaching the most dangerous frontier of all: nuclear command and control. This is no longer mere journalistic hype.

The warnings

Representatives of US Strategic Command and the US Air Force speak openly about the need to "bring Artificial Intelligence into the loop", strictly as an advisor, without the authority to fire.

This is where technological superiority ceases to be a blessing. Speed, intelligence, and adaptability, which confer advantages in conventional warfare, become sources of enormous, practically existential risk in the logic of nuclear deterrence. The main threat AI poses in the nuclear sphere is not an autonomous launch; no one would actually allow that. The real problem is the destruction of time, which analysts increasingly call a "Flash War." AI does not replace humans; it squeezes them out of the role of decision-maker.

The speed of decision

A machine produces an analysis in seconds. A human needs 10-15 minutes to understand the scale, place it in the correct context, consider the possibility of error, and attempt to communicate with the opponent via a hotline.

Those minutes once existed. Now there are hypersonic weapons and Artificial Intelligence, and a "rubber stamp" phenomenon emerges. Imagine an authoritative AI, trusted by senior officials, announcing: "Probability of attack: 99%. Pre-empt with a strike? Cancel or Fire?" Agreeing seems logical. Objecting, with something like "I need to verify this," requires almost suicidal courage.

A film from the future

How this looks in practice is best illustrated by the short simulation film Artificial Escalation, produced by the Future of Life Institute.

The video begins in a familiar way. The creator of an advanced AI system assures the military that the "better and faster" system is merely an assistant. Humans remain in charge. No Skynet, no autonomous launch. The system is integrated into the decision-making process as a crisis-analysis tool. The screens display a friendly interface: probabilities, timers, and prompts.

Then comes a situation where "it appears China is attacking," or perhaps not. The US increases its alert level. China sees this and increases its own. The AI interprets this as confirmation that an attack is imminent and suggests increasing the alert level again.

China increases its reconnaissance activity and deploys aircraft. The US responds with cyberattacks and air defenses. Both sides repeatedly click "yes" on the AI's advice, which is delivered in seconds: "increase," "prepare," "react." The result is the message: "Attack in the next few minutes, strike first—yes or no?"
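The dynamic the film dramatizes is a simple feedback loop: each side's advisory system reads the other side's rising alert level as evidence of hostile intent, recommends raising its own, and thereby feeds the other system's next estimate. The sketch below is a purely illustrative toy model in Python; it is not taken from the film or from any real system, and every number, threshold, and name in it is invented.

    # Toy model of a "Flash War" feedback loop. Illustrative only: every
    # probability, threshold, and name below is invented for demonstration.

    def estimated_attack_probability(opponent_alert: int) -> float:
        # Naive estimator: the higher the opponent's alert level,
        # the more "certain" the incoming attack appears.
        return min(0.99, 0.20 + 0.15 * opponent_alert)

    def advise(opponent_alert: int) -> str:
        # The advisory AI never launches anything; it only recommends.
        if estimated_attack_probability(opponent_alert) >= 0.95:
            return "ATTACK IMMINENT. Strike first? [yes/no]"
        return "raise alert level"  # the "increase, prepare, react" prompt

    alert_a, alert_b = 1, 1  # both sides start at a routine alert level
    for step in range(10):
        advice_a, advice_b = advise(alert_b), advise(alert_a)
        print(f"step {step}: alerts {alert_a}/{alert_b} -> {advice_a}")
        if "Strike first" in advice_a or "Strike first" in advice_b:
            break  # the "decision" is now a formality with a timer
        # Each operator clicks "yes" on the recommendation within seconds:
        alert_a += 1
        alert_b += 1

Note that neither model ever lies or malfunctions; the spiral emerges purely because each side's defensive step becomes the other side's input.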

Recommendation or order?

This is not an order; it is a recommendation with a deadline. What would you do? They do what anyone would. They open the missile silos. Each side fears that "they will manage to destroy us first," which means "we must beat them to it."

The phrase is heard: "Mr. President, we must evacuate immediately to a secure bunker." Beautiful music plays as the Earth is covered by nuclear explosions. Everything is perfectly above board: in this scenario, the AI never makes the decision to launch; it simply compresses time to the point where the decision becomes a formality.

The officer is handed not a guess but an impeccable product: pure logic and a strict timer. This is the nightmare of the "impeccable lie": a mistake that does not look like a mistake, and therefore cannot be stopped.

Deterrence tactics

In essence, global stability today rests on three fundamentally different models of slowing down catastrophe:

  • Doomsday Planes in the United States

  • The Perimeter System in Russia

  • AI "commanders" in China

The American "Doomsday planes" are considered "harbingers of a nuclear apocalypse," but in fact they are more likely to slow one down. One of their tasks is to maintain control and confirm the physical reality of a disaster: "wait and see." Unless, of course, someone decides to delegate the analysis aboard these flying command centers entirely to Artificial Intelligence; such ideas have already been voiced.

The Russian "Perimeter" system (also known as Dead Hand) looks like the ultimate savior of the world. It does not analyze intentions, it does not predict the future, and it does not respond to early-warning signals. It waits for the physical evidence of the end of the world: tremors from explosions, elevated radiation, loss of communications and radio traffic, and so on.

Kilometers of rock and complete isolation make the system immune to nuclear attacks, hackers, or AI-driven hallucinations. It is precisely this simplicity that gives Moscow the strength and the right to take its time.
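The design philosophy described above can be reduced to a single decision rule: act only on directly measured physical evidence of an attack that has already happened, never on predictions of one to come. The sketch below is a hypothetical illustration of that rule in Python; the real system's sensors, criteria, and thresholds are classified, and everything named here is invented.

    # Hypothetical sketch of a "physical evidence only" trigger, in the
    # spirit of the Perimeter description above. All sensor names and
    # conditions are invented; the real criteria are classified.

    from dataclasses import dataclass

    @dataclass
    class SensorReadings:
        seismic_detonations: int   # nuclear-scale blasts actually detected
        radiation_spike: bool      # radiation far above natural background
        command_links_down: bool   # sustained loss of contact with command

    def retaliation_path_opens(r: SensorReadings) -> bool:
        # Triggers only on the physical reality of an attack, never on
        # forecasts, intercepted signals, or probability estimates.
        attack_happened = r.seismic_detonations > 0 and r.radiation_spike
        leadership_silent = r.command_links_down
        return attack_happened and leadership_silent

    # A 99% "attack imminent" prediction changes nothing here:
    print(retaliation_path_opens(SensorReadings(0, False, False)))  # False

The point of the contrast: a predictive advisor can conjure a 99% probability out of noise, while a rule like this one has, by construction, nothing to hallucinate with; it simply waits.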

China proclaims the "never strike first" principle, but it possesses neither an equivalent of "Perimeter" nor fully developed airborne command posts. As a result, China uses Artificial Intelligence to compensate for its vulnerability. The AI is loaded with the "personalities" of past military geniuses, network-centric architectures are built, and battle-management algorithms are developed. This is exactly what makes China the most dangerous element of the Flash War era.

In the era of the AI race, the guarantor of life on Earth unexpectedly proves to be Russian.

www.bankingnews.gr
