By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically from an engineering perspective?

If not, learning how to write code seems a tad trivial now.

  • DeLacue@lemmy.world · 4 months ago

    That all depends on where the data set comes from. The code you’ll get out of an LLM is the average code of the data set. If it’s scraped from the internet (which is very likely), the code you’ll get will be an amalgam of concise examples from one website, incorrect examples from another, and bits from blogs with all the typos, gunk and garbage that’s out there.

    Getting LLM code to work well takes an understanding of what the code it gives you actually does and why it’s bad. It will always be bad because it cannot be better than the dataset, and for a dataset to be big enough to train an LLM it has to include everything they can get, trash and all. But it can be good for providing you a framework to start with. It is, however, never going to replace actual programming or an understanding of programming. The talk of LLMs completely replacing programmers is mostly coming from people who do not understand coding or LLMs at all.

    • GrammarPolice@lemmy.world · 4 months ago

      Can’t LLMs eventually gain some form of “sentience” and be able to self-correct? A sort of thinking-before-speaking kind of situation.

      • DeLacue@lemmy.world · 4 months ago

        This question right here perfectly encapsulates everything wrong with LLMs right now. They could be good tools, but the people pushing them have no idea what they even are. LLMs do not make decisions. All the decisions an LLM appears to make were made in the dataset. All those things an LLM does that make it seem intelligent were done or said by a human somewhere on the internet. It is a statistical model that determines what output is most likely to come next. That is it. It is nothing else. It is not smart. It does not and cannot make decisions. It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.

        Listen, think of it like this: a man decides to take exams to become a doctor in France, but for some reason he doesn’t learn either French or medicine. No, no, instead he studies every former exam and all the answers to them. He gets very good at regurgitating those answers, so much so that he can even pass the exam. But at no point does he understand what any of it means, and when asked new and novel questions he provides utter nonsense answers. No matter how good he gets at memorising those answers, he will never get any better at medicine. LLMs are as likely to gain sentience as my Excel spreadsheets are.
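
        To make the “statistical model” point concrete, here is a deliberately tiny sketch (nothing like a real LLM’s architecture, just the same idea in miniature): a bigram model whose entire “knowledge” is word-pair counts taken from its training text. Every “decision” it appears to make is already sitting in those counts.

        ```python
        from collections import Counter, defaultdict

        # Toy "training set": this text is the model's entire world.
        corpus = "the cat sat on the mat and the cat chased the dog".split()

        # "Training": count which word follows which.
        next_counts = defaultdict(Counter)
        for current, following in zip(corpus, corpus[1:]):
            next_counts[current][following] += 1

        def most_likely_next(word):
            """Return the word seen most often after `word` in the training text."""
            return next_counts[word].most_common(1)[0][0]

        # Prints 'cat' -- purely because of the counts, not because it "knows" anything.
        print(most_likely_next("the"))
        ```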

        • hikaru755@lemmy.world · 4 months ago

          It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.

          This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does; that’s its only mode of operation. The key is that the gibberish that comes out of today’s models is so convincing that it actually becomes broadly useful.

          That also means that no, not everything an LLM produces has to have been in its training dataset; they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of creating actual internal models of real-world concepts, which suggests a deeper kind of understanding than the “stochastic parrot” moniker would have you believe.
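
          A toy way to see that (again a deliberately tiny analogy, nothing like a real transformer): even a model that only knows word-pair statistics from its training text can chain those statistics into a sequence that appears nowhere in that text.

          ```python
          from collections import Counter, defaultdict

          # Toy training text for a bigram model.
          corpus = "the dog chased the cat and the dog chased the ball".split()

          next_counts = defaultdict(Counter)
          for current, following in zip(corpus, corpus[1:]):
              next_counts[current][following] += 1

          # Generate greedily: always pick the statistically most common next word.
          word = "the"
          output = [word]
          for _ in range(6):
              word = next_counts[word].most_common(1)[0][0]
              output.append(word)

          generated = " ".join(output)
          print(generated)                      # "the dog chased the dog chased the"
          print(generated in " ".join(corpus))  # False: this exact sequence never appears in the training text
          ```

          The output is brand new in the sense that it was never written anywhere, yet every step of it came straight from the training statistics.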

          LLMs do not make decisions.

          What do you mean by “decisions”? LLMs constantly make decisions about which token comes next; that’s all they do, really. And in doing so, on a higher, emergent level they can make any kind of decision that you ask them to. The only question is how good those decisions are going to be, which in turn entirely depends on the training data, how good the model is, and how good your prompt is.
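
          As a rough illustration of what a per-token “decision” looks like mechanically (heavily simplified: a real model scores tens of thousands of tokens with a neural network rather than a hand-written table, and the numbers below are made up): the model assigns a score to every candidate token, turns those scores into probabilities, and picks one.

          ```python
          import math
          import random

          # Made-up scores ("logits") a model might assign to candidate next tokens
          # after a prompt like "The capital of France is".
          logits = {"Paris": 9.1, "Lyon": 4.0, "a": 2.5, "banana": 0.3}

          def softmax(scores):
              """Turn raw scores into a probability distribution."""
              exps = {tok: math.exp(s) for tok, s in scores.items()}
              total = sum(exps.values())
              return {tok: e / total for tok, e in exps.items()}

          probs = softmax(logits)
          print(probs)  # "Paris" ends up with nearly all of the probability mass

          # The per-step "decision": sample one token from that distribution.
          tokens = list(probs)
          choice = random.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]
          print(choice)
          ```

          How good that decision is depends entirely on how sensible the scores are, and the scores are exactly where the training data, the model and the prompt come in.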