• hex@programming.dev · 59 points · 4 months ago

    Facts are not a data type for LLMs

    I kind of like this because it highlights the way LLMs operate: kind of blind and drunk, just really good at predicting the next word.
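
    A rough sketch of what “predicting the next word” means mechanically (just a toy bigram counter over a made-up corpus, nothing like a real transformer):

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up corpus; a real model trains on vastly more text.
    corpus = ("the cat sat on the mat . the cat chased the dog . "
              "the cat sat on the rug .").split()

    # Count which word tends to follow which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Greedy choice: always take the most frequent continuation.
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat" (it followed "the" most often)
    print(predict_next("sat"))  # -> "on"
    ```

    A real LLM scores continuations with a neural net over tokens instead of raw counts, but the loop of “pick a likely next piece, append, repeat” is the same idea.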

    • CleoTheWizard@lemmy.world · 26 points · 4 months ago

      They’re not good at predicting the next word; they’re good at predicting the most common next word while excluding most of the unique choices.

      The result is essentially what you’d get if you made a Venn diagram of human language and only ever used the center of it.
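
      Toy numbers to show what I mean (made-up scores for “The sky is ___”, not taken from any real model): once the scores are turned into probabilities, the already-common word grabs almost all of the mass, and lowering the sampling temperature squeezes the unusual choices out entirely.

      ```python
      import math

      # Made-up next-word scores (logits) for the prompt "The sky is ___".
      logits = {"blue": 9.0, "clear": 7.5, "overcast": 5.0, "cerulean": 2.0}

      def softmax(scores, temperature=1.0):
          exps = {w: math.exp(s / temperature) for w, s in scores.items()}
          total = sum(exps.values())
          return {w: e / total for w, e in exps.items()}

      for t in (1.0, 0.5):
          probs = softmax(logits, temperature=t)
          print(t, {w: round(p, 3) for w, p in probs.items()})
      # 1.0 {'blue': 0.805, 'clear': 0.18, 'overcast': 0.015, 'cerulean': 0.001}
      # 0.5 {'blue': 0.952, 'clear': 0.047, 'overcast': 0.0, 'cerulean': 0.0}
      ```

      Top-k and top-p sampling trim the tail like that on purpose, which is why the output keeps landing in the center of that Venn diagram.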

      • hex@programming.dev · 13 points · 4 months ago

        Yes, thanks for clarifying what I meant! AI will never create anything unique unless it’s prompted uniquely, and even then it will tend to revert to what you’d expect most.