A screenshot of this question was making the rounds last week, but this article covers testing it against all the well-known models out there.

It also includes outtakes from the ‘reasoning’ models.

  • HugeNerd@lemmy.ca · 2 months ago

    “I think the ‘fancy auto complete’ meme is disingenuous”

    “LLMs don’t have human understanding or metacognition”

    Then what’s the (auto-completing) fucking problem? It’s just a series of steps on data. You could feed it white noise and it would vomit up more noise. And keep doing it as long as there’s power.

    Intelligent?

      • HugeNerd@lemmy.ca · 2 months ago

        “Instead it tries to make sense of it. Why? Because it learned strong language priors from us, and it leans on them when the prompt is meaningless.”

        No, it doesn’t. You’re in sci-fi land. There is no “it” “trying to make sense”. That cogitation is happening in YOU, not the motherboard.

          • Iconoclast@feddit.uk · 2 months ago

            Sure, there’s no ghost in the machine - but that’s true of your neurons too.

            Touché.

            Intelligence doesn’t require a “self”, and we’re living proof of that. The way LLMs and humans operate has far more in common than people like to admit. We’re just holding AI to a higher standard.
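
For anyone who wants to actually run the “feed it white noise” experiment from upthread instead of arguing about it, here’s a rough sketch. It assumes the Hugging Face transformers library and GPT-2 as a stand-in model (both choices are mine, not from the article): sample random token IDs as the prompt and look at what comes back. A continuation that drifts toward fluent English is the “language priors” behaviour one commenter describes; one that stays garbage is the “more noise” prediction.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: assumes torch and transformers are installed.
# GPT-2 is just a small, public stand-in for "any causal LM".
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# "White noise" prompt: 32 uniformly random token IDs.
torch.manual_seed(0)
noise_ids = torch.randint(0, tokenizer.vocab_size, (1, 32))

with torch.no_grad():
    out = model.generate(
        noise_ids,
        max_new_tokens=60,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

print("prompt :", tokenizer.decode(noise_ids[0]))
# Only print the newly generated tokens, not the noise prompt.
print("output :", tokenizer.decode(out[0][noise_ids.shape[1]:]))
```

Whether you read a fluent-looking continuation as “trying to make sense of it” or just as the statistically likeliest next tokens is, of course, exactly the disagreement in the thread above.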