Being polite to your AI chatbot could actually be making it worse at answering your questions, according to a new study.

  • gravitas_deficiency@sh.itjust.works
    2 days ago

Being ethical to H100 clusters running inference on models that were created with stolen, pirated, and appropriated data does not matter. Abuse the shit out of them. It’s absolutely meaningless.

    Or just don’t use them, if you care at all about economic, ecological, and societal stability.

    • Perspectivist@feddit.uk
      2 days ago

      You’re right that an LLM doesn’t care how it’s treated - it’s not conscious. But that’s not really the point. The way people treat things that seem human still says something about them, not the thing. If someone goes out of their way to be cruel to a chatbot that’s just trying to be helpful, it’s not the bot that’s being tested - it’s the person’s capacity for empathy and restraint.

      It’s the same instinct behind how we treat animals, or even how kids treat toys - being kind to something that can’t fight back is part of what keeps us human. And historically, the “it’s not really human, so it doesn’t matter” argument has been used to justify a lot of awful behavior.

      So no, the AI doesn’t care. But maybe it still matters that we do.

      • saimen@feddit.org
        2 hours ago

This is also a common philosophical argument for not eating meat: the way we mistreat and kill animals negatively affects the humans who do it.