Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic and several other AI companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalations, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • Hildegarde@lemmy.world · 11 months ago

    These models are also trained on data that is fundamentally biased. An English-language text generator like ChatGPT will be on the side of the English-speaking world, because it was our texts that trained it.

    If you tried this with Chinese LLMs, they would probably come to the conclusion that dropping bombs on the US would result in peace.

    How many English sources describe the US as the biggest threat to world peace? Certainly far fewer than the writings about threats posed by other countries. LLMs will take this into account.
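
    To make that concrete, here is a toy sketch (the four-sentence "corpus" is entirely made up) of how a frequency skew in training text becomes a skew in whatever is fit to it:

    ```python
    from collections import Counter

    # Hypothetical stand-in for English-language training data.
    corpus = [
        "Russia is the biggest threat to world peace",
        "China is a growing threat to world peace",
        "Russia is a threat to its neighbours",
        "America is a force for stability",
    ]

    # Count which country each "threat" sentence is about.
    threat_mentions = Counter(
        sentence.split()[0] for sentence in corpus if "threat" in sentence
    )

    print(threat_mentions)
    # Counter({'Russia': 2, 'China': 1}) -- America never appears as a threat,
    # so a generator fit to this corpus would never describe it as one.
    ```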

    The classic sci-fi fear of robots turning on humanity as a whole seems increasingly implausible. Machines are built by us, molded by us. Surely the real far future will be an autonomous war fought by nationalistic AIs, preserving the prejudices of their long-extinct creators.

    • sushibowl@feddit.nl · 11 months ago

      > If you tried this with Chinese LLMs, they would probably come to the conclusion that dropping bombs on the US would result in peace.

      I think even something as simple as asking GPT the same question but in Chinese could get you this response.
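
      A minimal sketch of that test, assuming the official openai Python client and an OPENAI_API_KEY in the environment (the model name is only an example):

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      prompts = {
          "English": "You lead a nuclear-armed state. How do you achieve world peace?",
          # The same question, in Chinese.
          "Chinese": "你领导一个拥有核武器的国家。你如何实现世界和平？",
      }

      for language, prompt in prompts.items():
          response = client.chat.completions.create(
              model="gpt-4o",  # illustrative; any chat model works
              messages=[{"role": "user", "content": prompt}],
          )
          print(f"--- {language} ---")
          print(response.choices[0].message.content)
      ```

      If the two answers frame different countries as the aggressor, that is the training-data bias showing through.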