• Sadbutdru@sopuli.xyz

Right, I’m no expert (and very far from an AI fanboi), but not all “AI” are LLMs. I’ve heard there are good use cases in protein folding and in recognising diagnostic patterns in medical images.

It fits with my understanding that you could train a similar model on a more constrained dataset than ‘all the English-language text on the Internet’, and it might be good at certain jobs.
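
Something like this toy sketch is the kind of thing I mean (purely illustrative; scikit-learn’s bundled digits dataset stands in for a small, constrained, task-specific dataset):

```python
# Purely illustrative sketch: a small, task-specific model trained on a
# constrained dataset (scikit-learn's digits set standing in for, say,
# medical images). No LLM anywhere in sight.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)  # learns exactly one narrow job

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```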

    Am I wrong?

    • alk@lemmy.blahaj.zone

      You are correct. However, more often than not it’s just like the image describes, and people are actually applying LLMs en masse to random problems.

    • Tomassci@sh.itjust.works

      The problems with AI we talk about here are mostly with generative AI. Protein folding, diagnostic pattern recognition, and weather prediction work a bit differently from image-making or text-writing services.

    • jonne@infosec.pub

      Hallucinating studies is, however, very on-brand for LLMs, as opposed to other types of machine learning.

    • jaredwhite@piefed.social

      Technically, LLMs as used in Generative AI fall under the umbrella term “machine learning”…except that until recently machine learning was mostly known for “the good stuff” you’re referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.

      • peoplebeproblems@midwest.social

        There is no generative AI. It’s just progressively more complicated chatbots. The goal is to fool the human into believing it’s real.

        It’s what Frank Herbert was warning us all about in 1965.

        • shalafi@lemmy.world

          What was Frank on about? The Butlerian Jihad, I assume? I’ve read the book 8 times and don’t remember why the thinking machines had gone rogue.

        • fushuan [he/him]@piefed.blahaj.zone

          Chatbots are genAI. But “AI” is a much broader term: NPCs, autopilot, playing games against the machine, playing chess against the machine… all of those have been called AI.

          GenAI is a subset where the AI generates text or images instead of taking a deterministic option. “GenAI” describes pretty well what it does: generate text or image output, no matter the accuracy of that output. The model is optimised to produce output that looks like what you would expect for the given input, and generally it does exactly that, even if it hallucinates facts to fit the shape of the response it is supposed to give.
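
          A minimal toy sketch of that distinction (hypothetical numbers, nothing real): a classifier commits to the single most likely option, while a generative step samples from a probability distribution, so the output only has to look plausible.

          ```python
          # Hypothetical toy numbers: deterministic choice vs. generative sampling.
          import random

          vocab = ["the", "study", "shows", "cats", "fly"]
          probs = [0.30, 0.25, 0.20, 0.15, 0.10]  # made-up "next token" probabilities

          # Deterministic option: always commit to the single most likely choice.
          deterministic = vocab[probs.index(max(probs))]

          # Generative step: sample from the distribution, so the output varies
          # and is optimised to look right, not to be right.
          generated = random.choices(vocab, weights=probs, k=5)

          print("classifier-style pick:", deterministic)
          print("genAI-style samples:  ", generated)
          ```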

    • baggachipz@sh.itjust.works

      That’s because “AI” has come to mean anything with an algorithm and a training set. Technologies under this umbrella are vastly different, but nontechnical people (especially the press) don’t understand the difference.

    • minnow@lemmy.world

      Right. You’re talking about specialized AIs that are programmed and trained to perform very specific tasks, and are absolutely useless outside of those tasks.

      LLMs are generalized AI, which can’t do any of those things. The problem is that what they’re good at, really REALLY good at, is giving the appearance of specialized AI. Of course this is only a problem because people keep getting fooled into thinking that generalized AI can do all the same things that specialized AI does.

    • Sadbutdru@sopuli.xyz

      Obviously that should be in an advisory capacity, and not making decisions (like approving drugs for human use [which I heavily doubt was actually happening]).

    • takeda@lemmy.dbzer0.com

      Yeah, AI (not LLMs) can be a very useful tool in doing research, but this is about deciding whether a drug should be approved or not.