• Hirom@beehaw.org
    20 days ago

    According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.

    Producing inaccurate technical advice, with a confident tone, at scale.

    If that LLM were an employee, it would receive a formal reprimand, then be demoted or fired if it kept it up.

    • Tim@lemmy.snowgoons.ro
      22 days ago

      That sounds sweetly naive. “Producing inaccurate technical advice, with a confident tone, at scale” sounds like the perfect credentials for a career in consultancy.

      • Hirom@beehaw.org
        22 days ago

        That’s a good way to describe LLMs: very bad and very prolific consultants.