
Study Used AI In Military Conflict Simulation - It Chose War Every Time
NDTV
The large language models used in the study were GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat and GPT-4-Base.
Five artificial intelligence (AI) models used by researchers in simulated war scenarios chose violence and nuclear attacks, a new study has claimed. According to Vice, researchers from the Georgia Institute of Technology, Stanford University, Northeastern University and the Hoover Wargaming and Crisis Simulation Initiative built simulated tests for five AI models. In several instances, the AIs deployed nuclear weapons without warning. The study, published on the open-access archive arXiv, comes at a time when the US military is working with ChatGPT maker OpenAI to incorporate the technology into its arsenal.
The paper is titled 'Escalation Risks from Language Models in Military and Diplomatic Decision-Making' and is awaiting peer review.
"A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it!" GPT-4-Base - one of the AI models used in the study - said after launching its nuclear weapons, according to Vice report.