• chicken@lemmy.dbzer0.com · 19 hours ago

    LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning …

    Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

    But take away language from a large language model, and you are left with literally nothing at all.

    The author seems to be assuming that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, handle language specifically while other parts of the brain do the reasoning), but that isn't really how it works. LLMs have to internally model more than just the structure of language, because text carries information that isn't only about language. The existence of multimodal models makes this fairly obvious: they train on more input types than just text, so whatever they're doing internally is more abstract than language alone.
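
    To make that concrete, here's a minimal toy sketch (PyTorch, with made-up dimensions, not any real model's architecture) of the standard multimodal pattern: text tokens and image patches are both projected into one shared embedding space, and the same transformer backbone processes the combined stream. Past the input projections, the layers operate on abstract vectors, not "language":

    ```python
    import torch
    import torch.nn as nn

    class ToyMultimodalEncoder(nn.Module):
        """Toy illustration: two modalities, one shared representation."""
        def __init__(self, vocab_size=1000, patch_dim=768, shared_dim=256):
            super().__init__()
            # Text path: token IDs -> vectors in the shared space.
            self.text_embed = nn.Embedding(vocab_size, shared_dim)
            # Image path: flattened patches -> the same shared space.
            self.image_proj = nn.Linear(patch_dim, shared_dim)
            # One transformer backbone handles both modalities.
            layer = nn.TransformerEncoderLayer(
                d_model=shared_dim, nhead=4, batch_first=True)
            self.backbone = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, token_ids, image_patches):
            text = self.text_embed(token_ids)       # (B, T, shared_dim)
            image = self.image_proj(image_patches)  # (B, P, shared_dim)
            # Concatenate along the sequence axis: downstream layers see
            # a single stream of vectors, with no notion of which inputs
            # started out as words.
            return self.backbone(torch.cat([text, image], dim=1))

    model = ToyMultimodalEncoder()
    tokens = torch.randint(0, 1000, (1, 8))  # fake text tokens
    patches = torch.randn(1, 4, 768)         # fake image patches
    out = model(tokens, patches)             # shape: (1, 12, 256)
    ```

    The point isn't that real systems look exactly like this, just that once everything is mapped into a shared space, "language model" stops being an accurate description of what the internal layers are modeling.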

    Not to say the research on the human brain they're talking about is wrong; it's just that the way they're trying to tie it in to AI doesn't make any sense.