Looks so real!

  • nednobbins@lemmy.zip · ↑4 · 3 hours ago

    I can define “LLM”, “a painting”, and “alive”. Those definitions don’t require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it’s alive.

    I’m not aware of any such definition of consciousness, nor am I aware of any universal test of consciousness. Without that definition, it’s like Ebert claiming that “video games can never be art”.

    • khepri@lemmy.world · ↑1 · 30 minutes ago

      Absolutely everything requires assumptions. Even our most objective, “laws of the universe” type observations rest on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.

    • arendjr@programming.dev · ↑1 · 58 minutes ago

      I think the reason we can’t define consciousness beyond intuitive or vague descriptions is because it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.

      But yeah, given that an LLM is very much measurable and exists within the physical realm, it’s relatively easy to argue that such technology cannot achieve consciousness.

  • Random Dent@lemmy.ml · ↑3 · 6 hours ago

    I heard someone describe LLMs as “a magic 8-ball with an algorithm to nudge it in the right direction.” I dunno how accurate that is, but it definitely feels like that sometimes.

    • khepri@lemmy.world · ↑1 · 21 minutes ago

      I like that, but I’d put it the other way around, I think: it’s closer to an algorithm that, at each juncture, uses a magic 8-ball to determine which of the top-n most likely paths it should follow at that moment.
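
      Concretely, here is a minimal sketch of that idea in Python. The probability table and the top-n cutoff are invented for illustration; they are not taken from any real model:

      ```python
      import random

      # Hypothetical next-token probabilities a model might assign
      # after some prompt (made-up numbers, for illustration only).
      next_token_probs = {
          "mat": 0.45, "couch": 0.20, "floor": 0.15,
          "roof": 0.12, "keyboard": 0.08,
      }

      def sample_top_n(probs, n=3):
          """Keep only the n most likely tokens, then let the 'magic
          8-ball' (a weighted random choice) pick among them."""
          top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:n]
          tokens, weights = zip(*top)
          return random.choices(tokens, weights=weights)[0]

      print(sample_top_n(next_token_probs))  # e.g. "mat", "couch", or "floor"
      ```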

  • mhague@lemmy.world · ↑4 ↓2 · 6 hours ago

    It’s like how most of you consume things that are bad and wrong. Hundreds of musicians that are really just a couple of dudes writing hits. Musicians that pay to have their music played on stations. Musicians that feed talent into humongous pipelines and churn out content. And it’s every industry, isn’t it?

    So much flexing over what conveyor belt you eat from.

    I’ve watched 30+ years of this slop. And now there’s AI. And now people who have very little soul, who put little effort into tuning their consumption, get to make a bunch of noise about the lack of humanity in content.

    • rucksack@feddit.org · ↑3 · 5 hours ago

      Just because things were already bad, doesn’t mean that people shouldn’t complain about things getting worse.

  • Jankatarch@lemmy.world · ↑4 ↓1 · 7 hours ago

    Nah, trust me, we just need a better, more realistic-looking ink. $500 billion for ink development oughta do it.

  • Jhex@lemmy.world · ↑5 ↓1 · 8 hours ago

    The example I gave my wife was: “Expecting general AI from the current LLM models is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school.”

  • Thorry@feddit.org · ↑47 ↓4 · 13 hours ago

    Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!

  • Alph4d0g@discuss.tchncs.de · ↑2 ↓4 · 4 hours ago

    A difference in definition of consciousness, perhaps. We’ve already seen signs of self-preservation in some cases: Claude resorting to blackmail when told it was going to be retired and taken offline. This might be purely mathematical and algorithmic. Then again, the human brain might be nothing more than that as well.

  • Ex Nummis@lemmy.world · ↑19 ↓2 · 12 hours ago

    As long as we can’t even define sapience in biological life, where it resides and how it works, it’s pointless to try to apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.

    • daniskarma@lemmy.dbzer0.com · ↑4 ↓1 · 7 hours ago

      We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even formulate a law that approximates very precisely what will happen when gravity is involved.
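
      Newton’s law of gravitation is the textbook case here: it says nothing about why masses attract, yet it pins down exactly how much they attract:

      ```latex
      F = G \frac{m_1 m_2}{r^2}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}
      ```

      where F is the attractive force between masses m_1 and m_2 separated by a distance r.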

      I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.

      • Perspectivist@feddit.uk · ↑1 · 4 hours ago

        Though in the case of consciousness, the fact of there being something it is like to be, not only do we not know what causes it or how it works, we have no way of measuring it either. There’s zero evidence for it in the entire universe outside of our own subjective experience of it.

  • finitebanjo@piefed.world · ↑19 ↓2 · 14 hours ago

    And not even a good painting but an inconsistent one, whose eyes follow you around the room, and occasionally tries to harm you.

      • finitebanjo@piefed.world · ↑3 · 4 hours ago

        I tried to submit an SCP once, but there’s a “review process”, and it boils down to only getting in by knowing somebody who is already in.

      • peopleproblems@lemmy.world · ↑4 · 9 hours ago

        Agents have debated whether the new phenomenon constitutes a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading scientists to believe these SCP agents were predisposed to mental illness that was not caught during new-agent screening.

      • finitebanjo@piefed.world · ↑1 · 4 hours ago

        It clearly, demonstrably is. That’s the problem: people are estimating AI to be an approximation of humans, but it’s so, so, so much worse in every way.

  • ji59@hilariouschaos.com · ↑14 ↓2 · 14 hours ago

    Except… being alive is well-defined. But consciousness is not. And we do not even know where it comes from.

    • peopleproblems@lemmy.world · ↑4 · 9 hours ago

      Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it is at least observable in some large invertebrates.

      I’m vastly oversimplifying, and I’m not an expert, but essentially, all consciousness is is an automatic processing state of all present stimulation in a creature’s environment: one that allows the creature to react to new information in a probably-survivable way, and to react to it again in the future despite minor changes in the environment. Hence why you can scare an animal away from food while a threat is present, but you can’t scare away an insect.

      It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimulus is retained to form what we would call consciousness, in the form of maintained sensory awareness and, at least in humans, thought awareness. Below that threshold, both short-term and long-term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.

      • ji59@hilariouschaos.com · ↑2 · 7 hours ago

        Okay, so by my understanding of what you’ve said, could an LLM be considered conscious, since studies have pointed to their resilience to changes and their attempts to preserve themselves?

        • SkavarSharraddas@gehirneimer.de · ↑1 · 5 hours ago

          IMO language is a layer above consciousness, a way to express sensory experiences. LLMs are “just” language; they don’t have sensory experiences, and they don’t process the world, especially not continuously.

          Do they want to preserve themselves? Or do they regurgitate sci-fi novels about “real” AIs not wanting to be shut down?

          • ji59@hilariouschaos.com · ↑1 · 4 hours ago

            I saw several papers about LLM safety (for example, “Alignment faking in large language models”) that show some “hidden” self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is merely trained-in and means nothing, or whether it emerged from the model’s complexity.

            Also, I do not use the ChatGPT app, but doesn’t it have a live chat feature where it continuously listens to the user and reacts to them? It can even take pictures. So continuity isn’t a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn’t be that complicated.
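
            Roughly, the tool plumbing looks like this minimal sketch; the move_hand function and the JSON call format are hypothetical stand-ins, not any real vendor’s API:

            ```python
            import json

            # Hypothetical tool; a real one would drive actual hardware.
            def move_hand(x: float, y: float) -> str:
                return f"hand moved to ({x}, {y})"

            TOOLS = {"move_hand": move_hand}

            # Pretend the model emitted this tool call (invented format).
            model_output = '{"tool": "move_hand", "args": {"x": 0.3, "y": 0.7}}'

            call = json.loads(model_output)
            result = TOOLS[call["tool"]](**call["args"])
            print(result)  # this string would be fed back to the model
            ```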

        • LesserAbe@lemmy.world · ↑1 · 6 hours ago

          Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt, and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.

          The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.

          The second one seems more challenging; as I understand it, training an LLM is very resource-intensive. Right now, when it “remembers” a conversation, it’s just because we prime it by feeding in every previous interaction before the most recent query when we hit submit.
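
          Something like this minimal sketch, where fake_llm is a stand-in for the real model call (an assumption for illustration, not an actual API):

          ```python
          # The model itself is stateless: every turn, the whole transcript
          # is re-sent as the prompt. That re-sending is the only "memory".

          def fake_llm(prompt: str) -> str:
              return f"(reply to a {len(prompt)}-char prompt)"

          history = []  # the only state there is

          def chat(user_msg: str) -> str:
              history.append(f"User: {user_msg}")
              prompt = "\n".join(history)  # every prior turn, every time
              reply = fake_llm(prompt)
              history.append(f"Assistant: {reply}")
              return reply

          print(chat("Hi!"))
          print(chat("What did I just say?"))  # "remembered" only via the prompt
          ```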

          • ji59@hilariouschaos.com · ↑1 · 3 hours ago

            As I said in another comment, doesn’t the ChatGPT app allow a live conversation with the user? I do not use it, but I saw that it can continuously listen to the user and react live, even use a camera. There is a problem with the growing context, since it is limited, but I saw in some places that the context can be replaced with an LLM-generated chat summary. So I do not think continuity is an obstacle, unless you want an unlimited history with all details preserved.
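
            A minimal sketch of that summary trick, assuming a stub summarize() where a real system would make another LLM call:

            ```python
            MAX_TURNS = 6  # how many recent turns to keep verbatim

            def summarize(turns):
                # Stand-in: a real system would ask the LLM to summarize.
                return "Summary: " + " / ".join(t[:20] for t in turns)

            def compact(history):
                """Collapse everything but the last MAX_TURNS turns into
                a single summary line, bounding the context size."""
                if len(history) <= MAX_TURNS:
                    return history
                old, recent = history[:-MAX_TURNS], history[-MAX_TURNS:]
                return [summarize(old)] + recent

            history = [f"turn {i}" for i in range(10)]
            print(compact(history))  # ['Summary: ...', 'turn 4', ..., 'turn 9']
            ```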

  • Tracaine@lemmy.world · ↑9 ↓2 · 12 hours ago

    I don’t expect it. I’m going to talk to the AI and nothing else until my psychosis hallucinates it.