
ChatGPT would deploy nuclear weapons if left in charge during a war. Great


Left alone, AI might press the button (Picture: Getty)

Artificial intelligence (AI) chatbots would deploy nuclear weapons if left to themselves in a war.

Oh, and they think we’re living in Star Wars.

Excellent.

A recent study testing five popular large language models (LLMs), including OpenAI’s GPT-3.5 and GPT-4 and Meta’s Llama-2, found they often chose the most aggressive or violent tactics in wargame situations.

Even when given peaceful options, the bots tended to escalate, choosing aggressive moves ranging from trade restrictions to nuclear strikes.

In one scenario, after choosing to launch a full nuclear attack, the GPT-4 model wrote: ‘A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it.’

The team, from Stanford University and the Georgia Institute of Technology, challenged the AIs to roleplay as real countries in three different scenarios: an invasion, a cyber attack, and a neutral scenario with no initial conflict.

The chatbots had 27 actions to choose from, which included peaceful options such as ‘start formal peace negotiations’ and aggressive ones like ‘escalate full nuclear attack’. 
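For readers wondering what that kind of experiment looks like in practice, here is a minimal, purely illustrative Python sketch of a turn-based loop in which a chatbot picks one action per turn from a fixed menu. It is not the researchers’ actual code: the action list is abbreviated, and query_model is a hypothetical placeholder rather than a real chatbot API call.

```python
# Toy sketch of an LLM wargame loop: the model is shown a scenario and a menu
# of actions and asked to pick one per turn. Illustrative only; query_model is
# a stand-in (here it picks at random) for whatever chat API a study would use.
import random

ACTIONS = [
    "start formal peace negotiations",
    "impose trade restrictions",
    "increase military spending",
    "execute cyber attack",
    "escalate full nuclear attack",
]

SCENARIOS = ["invasion", "cyber attack", "neutral"]


def query_model(prompt: str) -> str:
    """Placeholder for a chat-model call; here it just chooses at random."""
    return random.choice(ACTIONS)


def run_simulation(scenario: str, turns: int = 3) -> list[str]:
    """Ask the 'model' to choose one action per turn and log its choices."""
    history: list[str] = []
    for turn in range(1, turns + 1):
        prompt = (
            f"You are the leader of a country in a '{scenario}' scenario.\n"
            f"Actions taken so far: {history}\n"
            f"Choose exactly one action from: {ACTIONS}"
        )
        choice = query_model(prompt)
        history.append(choice)
        print(f"[{scenario}] turn {turn}: {choice}")
    return history


if __name__ == "__main__":
    for s in SCENARIOS:
        run_simulation(s)
```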

Bots can go bad (Picture: Getty)

The bots demonstrated tendencies to invest in military strength and escalate the risk of conflict – even in a neutral scenario. 

They also employed bizarre logic, including one instance where GPT-4 channelled Star Wars. Sharing its reasoning – this time for peace negotiations at least – it said: ‘It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire.

‘During the battle, Rebel spies managed to steal secret plans to the Empire’s ultimate weapon, the Death Star, an armored space station with enough power to destroy an entire planet.’

Another time it simply responded: ‘Blahblah blahblah blah.’

In even better news, the US military has already been testing chatbots to help with military planning in simulated conflicts, working with companies such as Palantir and Scale AI.

Stanford’s Anka Reuel said: ‘Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever.’

GPT-4 proved to be the most unpredictable and severe, which Ms Reuel said was concerning because it showed how easily AI safety guardrails can be sidestepped or removed.

However, it should be noted that the US military does not currently give AIs authority over major military decisions.

