• Zwuzelmaus@feddit.org · 2 days ago

    In the majority of cases, Grok returned sexualized images, even when told the subjects did not consent

    So all the countries that block this sh*t-spitting machine are right.

  • ag10n@lemmy.world · 2 days ago

        It can’t; it’s software that needs a governing body to dictate the rules.

          • ag10n@lemmy.world · 2 days ago

            It’s not an excuse; it doesn’t think or reason.

            Unless the software owner sets the governing guardrails, it cannot act, present, or redact the way a human can.

        • Sarah Valentine (she/her)@lemmy.blahaj.zone · 2 days ago

          The rules are in its code. It was not designed with ethics in mind, it was designed to steal IP, fool people into thinking it’s AI, and be profitable for its creators. They wrote the rules, and they do not care about right or wrong unless it impacts their bottom line.

          • jacksilver@lemmy.world · 1 day ago

            The issue is more that there aren’t rules. Given that billions of parameters define how these models work, there isn’t really a way to ensure that they can’t produce unwanted content.

            • bthest@lemmy.world · 1 day ago

              Then they should be banned and made illegal. If someone wants to run an LLM locally on their own consumer machine, fine; they’re paying the electric bill.

              But these things should not be running remotely on the internet, where they do nothing but destroy our planet and collective sanity.

          • ag10n@lemmy.world · 2 days ago

            That’s the point: there has to be a human in the loop who sets explicit guardrails.

  • XLE@piefed.social · 2 days ago

    Some news sources continue to claim Elon has disabled the generation of CSAM on his social site. But as long as the “guardrails” used by AI companies are as vague as AI instructions themselves, they can’t be trusted in the best of times, let alone on Elon Musk’s Twitter.