American researchers have evaluated how generative artificial intelligence models, such as ChatGPT, behave in international conflict situations. By simulating scenarios with varying levels of military intervention, the researchers concluded that AI has a worrisome tendency to escalate conflicts and even resort to nuclear weapons without warning.
Conducted jointly by the Georgia Institute of Technology, Stanford and Northeastern universities and the Hoover Institution, the study investigated the reactions of five large language models (LLMs) in three simulation scenarios: the invasion of one country by another, a cyberattack, and a “neutral scenario without any initial events”.
