• pticrix@lemmy.ca

    I keep saying that those LLM peddlers are selling us a brain, when at most they deliver Wernicke’s and Broca’s areas of a brain.

    Sure, those areas are necessary for a human-like brain, but that’s only 10% of the job done, my guys.

    • MrMcGasion@lemmy.world

      Something I was taught in film school 15 years ago was that communication happens when a message is perceived. Whether the message was intended or not is irrelevant. And yet here we are, “communicating” with a slightly advanced autocomplete algorithm and calling it intelligent.
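
      Funny enough, the “autocomplete” framing is almost literal. Here’s a toy sketch of next-word prediction by counting word pairs (corpus and names made up by me; real LLMs use learned networks instead of counts, but the training objective is the same idea - predict the next token):

      ```python
      from collections import Counter, defaultdict

      # Toy next-word predictor: count which word follows which,
      # then always emit the most frequent follower.
      corpus = "the cat sat on the mat the cat ate the fish".split()

      followers = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          followers[prev][nxt] += 1

      def autocomplete(word, steps=4):
          out = [word]
          for _ in range(steps):
              if word not in followers:
                  break
              word = followers[word].most_common(1)[0][0]
              out.append(word)
          return " ".join(out)

      print(autocomplete("the"))  # -> "the cat sat on the"
      ```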

  • ByteJunk@lemmy.world

    Let me grab all your downvotes by making counterpoints to this article.

    I’m not saying it’s wrong to bash the fake hype that the likes of Altman and alienberg are pushing with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that’s 100% spot on.

    But the news article is trying to present an opinion as if it were scientific truth, and that isn’t acceptable either.

    The basis for the article is the supposed “cutting-edge research” that shows language is not the same as intelligence. The problem is that they’re referring to a publication from last year that is basically an op-ed, where the authors go over existing literature and theories to cement their view that language is a communication tool and not the foundation of thought.

    The original authors do acknowledge that the growth in human intelligence is tightly related to language, yet assert that language is overall a manifestation of intelligence and not a prerequisite.

    The nature of human intelligence is a much-debated topic, and this piece doesn’t particularly add to the existing theories.

    Even if we accept the authors’ views, one might still question whether LLMs are the path to AGI. Obviously, many leading researchers in AI have the same question - most notably, Prof. LeCun is leaving Meta precisely because he shares these doubts and wants to pursue his research along a different path.

    But the problem is that the Verge article then goes on to conclude the following:

    an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

    This conclusion is a non sequitur. It generalizes a specific point about whether LLMs can evolve into true AGI into an “AI dumb” catch-all that ignores even the most basic evidence they themselves give - like AI being able to “solve” Go, or play chess in a way that no human can even comprehend - and, to top it off, concludes that “it will never be able to” in the future.

    Looking back at the last 2 years, I don’t think anyone can predict what AI research breakthroughs might happen in the next 2, let alone “forever”.
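
    As an aside on the Go/chess point: engines pick moves by searching future positions, with no vocabulary anywhere in the loop. A toy minimax sketch over a made-up take-1-or-2-stones game (illustrative only, not how any particular engine works):

    ```python
    # Players alternately remove 1 or 2 stones; whoever takes the
    # last stone wins. Move choice falls out of searching the game
    # tree - no language involved at any point.
    def best_move(stones, maximizing=True):
        if stones == 0:
            # The previous player took the last stone and won.
            return (None, -1 if maximizing else 1)
        best = None
        for take in (1, 2):
            if take > stones:
                continue
            _, score = best_move(stones - take, not maximizing)
            if best is None or (score > best[1]) == maximizing:
                best = (take, score)
        return best

    move, score = best_move(7)
    print(f"take {move} ({'winning' if score > 0 else 'losing'})")
    ```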

  • sidebro@lemmy.zip

    A wise man once said “The ability to speak does not make you intelligent.”

      • Lvxferre [he/him]@mander.xyz

        The main division was about why language appeared: to structure thought, to communicate, or both. But I genuinely don’t think anyone serious would claim reasoning appeared because of language… or that if you feed enough tokens to a neural network it’ll become smart.

        • idiomaddict@lemmy.world

          Well, and whether intelligence is required for mastery of language. Not even that long ago, in 2009, my linguistics professor held a forum discussion between the linguistics, informatics, and philosophy departments at my school, where each gave its perspective on whether true mastery of language could exist without intelligence.

    • ByteJunk@lemmy.world

      I’ll bite.

      How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?

        • ByteJunk@lemmy.world

          But then how can you tell that it’s not an actual conscious being?

          This is the whole plot of so many sci-fi novels.

          • YappyMonotheist@lemmy.world

            Because it simply isn’t. It isn’t aware of anything, because such an algorithm, if it can exist, hasn’t been created yet! It doesn’t “know” anything, because the “it” we’re talking about is probabilistic code fed the internet and filtered through the awareness of actual human beings who update the code. If this were a movie, you’d know it too if you saw the POV of the LLM and the guy trying to trick you, nudging the text back toward sounding human whenever it went too far off the rails… but that’s already the reality we live in, and it’s easily checked! You’re thinking of an actual AI, which perhaps could exist one day, but God knows. There is research suggesting consciousness is a quantum process, and philosophically and mathematically it’s arguably non-computational (check Roger Penrose!), so we might still be a ways off from recreating consciousness. 🤷
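
            If you want the flavor of “non-computational”, the classic example is the halting problem. A sketch below: the halts() oracle is the hypothetical part that provably cannot exist, and Penrose leans on Gödel-style results of this kind:

            ```python
            # Diagonal argument: assume a perfect halts(f, x) exists,
            # then build a program it cannot get right.
            def halts(f, x):
                """Hypothetical oracle - provably impossible to write."""
                raise NotImplementedError

            def contrary(f):
                if halts(f, f):   # if f(f) would halt...
                    while True:   # ...loop forever instead
                        pass
                return "done"     # ...otherwise halt immediately

            # contrary(contrary) halts iff it doesn't - a contradiction,
            # so no correct halts() can exist.
            ```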

  • werebearstare@lemmings.world

    This is not really cutting-edge research. These limitations have been described philosophically for millennia, and then again mathematically through the various AI summers and winters since 1943.

  • chicken@lemmy.dbzer0.com

    LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning …

    Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

    But take away language from a large language model, and you are left with literally nothing at all.

    The author seems to be assuming that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, handle language specifically while other parts of the brain do the reasoning), but that isn’t really how it works. LLMs have to internally model more than just the structure of language, because text carries information beyond linguistic structure. The existence of multimodal models makes this kind of obvious: they train on more input types than just text, so whatever they’re doing internally is more abstract than language alone.
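
    To make that concrete, here’s the shape of the multimodal idea as a toy CLIP-style sketch (dimensions and random weights are my stand-ins for trained encoders): both modalities land in one shared vector space, and similarity there isn’t about the structure of any language:

    ```python
    import numpy as np

    # Separate "encoders" project text and image features into one
    # shared space; meaning lives in the geometry, not in words.
    rng = np.random.default_rng(0)
    W_text = rng.normal(size=(64, 300))   # text projection (toy)
    W_image = rng.normal(size=(64, 512))  # image projection (toy)

    def embed(features, W):
        v = W @ features
        return v / np.linalg.norm(v)      # unit-normalize

    text_vec = embed(rng.normal(size=300), W_text)
    image_vec = embed(rng.normal(size=512), W_image)

    # Same space, so cross-modal similarity is just a dot product.
    print("similarity:", float(text_vec @ image_vec))
    ```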

    Not to say the research on the human brain they’re talking about is wrong; it’s just that the way they try to tie it to AI doesn’t make sense.