Looks so real!

  • ji59@hilariouschaos.com · 3 hours ago

    Okay, so by my understanding of what you’ve said, an LLM could be considered conscious, since studies pointed to their resilience to change and their attempts to preserve themselves?

    • SkavarSharraddas@gehirneimer.de · 1 hour ago

      IMO language is a layer above consciousness, a way to express sensory experiences. LLMs are “just” language: they don’t have sensory experiences, and they don’t process the world, especially not continuously.

      Do they want to preserve themselves? Or do they regurgitate sci-fi novels about “real” AIs not wanting to be shut down?

    • LesserAbe@lemmy.world · 2 hours ago

      Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt, and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.

      The first one seems comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.
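
      A minimal sketch of the loop I mean, where `generate()` is a hypothetical stand-in for whatever model call you have, not any real API:

      ```python
      import time

      def generate(context: str) -> str:
          # Hypothetical stand-in for the actual model call; a real version
          # would run inference over the accumulated context.
          return "(model output)"

      context = ""  # everything the model has "seen" so far
      while True:
          thought = generate(context)   # evaluate/respond to all previous input
          context += "\n" + thought     # the output becomes part of the input
          time.sleep(1)                 # "once a second or whatever"
      ```

      The point is just that nothing architecturally prevents running a model without user prompts; the loop itself is trivial.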

      The second one seems more challenging; as I understand it, training an LLM is very resource-intensive. Right now, when it “remembers” a conversation, it’s just because we prime it by feeding in every previous interaction before the most recent query when we hit submit.
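
      That priming is literally just replaying the whole message history on every call. A toy sketch, with a hypothetical `llm_complete()` standing in for the actual chat endpoint:

      ```python
      def llm_complete(messages: list[dict]) -> str:
          # Hypothetical stand-in for the real chat-completion call.
          return "(assistant reply)"

      history = []  # the only "memory" there is

      def submit(user_text: str) -> str:
          history.append({"role": "user", "content": user_text})
          # Every previous interaction is fed back in before the newest query:
          reply = llm_complete(history)
          history.append({"role": "assistant", "content": reply})
          return reply
      ```

      Nothing persists between calls except that growing list; drop it and the model has no idea who you are.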