• Lvxferre@mander.xyz · 6 months ago

    *slow clapping*

    I’m actually quite interested in machine learning and generative models, especially LLMs. But… frankly? I wish I were the one saying everything the author said, including his dry humour. More importantly, I think he’s spot on.

    People are selling generative models as if they were a magical answer for everything and then some. They’re not. It’s just a bloody tool, dammit. Sometimes the best one for the job, sometimes helpful, sometimes even harmful. And the output is not trustworthy, which is a practical problem: it means you need to cross-check every bloody iota of the output for potential errors.


    I think that I’ll join in and drop my own “angry” rant: I want to piledrive the next muppet who claims that the current models are intelligent.

    inb4:

    1. “But in the fuchure…” - Vomiting certainty over future events.
    2. “Do you have proofs it is not intellijant?” - Inversion of the burden of proof. Prove to me that there’s no dragon orbiting Pluto, or that your mum didn’t get syphilis from sharing a cactus dildo with Hitler.
    3. “Ackshyually wut u’re definishun of intellijans?” - If you’re willing to waste my time with the “definitions game”, I hope that you’re fine wasting hours defining what a “dragon” is, while I “conveniently” distort the definition to prevent you from proving the above.
    4. “y u a sceptic? I dun unrurrstand” - shifting the focus from the topic to the person voicing it. Even then, let’s bite: what did you expect, F.A.I.TH. (filthy assumptions instead of thinking)? Go join a temple dammit. And don’t forget to do some silly chanting while burning an effigy.
    5. “Ackshyually than ppl r not intelljant” - you’re probably an example of that. However, that does not address your claim. Sucks to be you.

    Based on real discussions. Misspelled for funzies.

    • Lvxferre@mander.xyz · 6 months ago

      From HN comments:

      I just used Groq / llama-7b to classify 20,000 rows of Google sheets data (Sidebar archive links) that would have taken me way longer… Every one I’ve spot checked right now has been correct, and I might write another checker to scan the results just in case. // Even w/ a 20% failure it’s better than not having the classifications

      I classified ~1000 GBA game roms files by using their file names to put each in a category folder. It worked like 90% correctly. Used GPT 3.5 and therefore it didn’t adhere to my provided list of categories but they were mostly not wrong otherwise.

      Both are best case scenarios for the usage of LLMs: simple categorisation of stuff where mistakes are not a big deal.
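      That cross-check is easy to automate, too. A minimal sketch of the idea (the category list and the keyword-based `classify()` stub are invented for illustration; a real setup would call an LLM API in its place): anything the model labels outside the allowed list lands in a review pile instead of being trusted blindly.

```python
# Allowed categories - the list the model is *supposed* to stick to.
ALLOWED = {"rpg", "platformer", "puzzle", "sports", "other"}

def classify(filename: str) -> str:
    """Stand-in for an LLM call (hypothetical). Guesses from keywords
    so the sketch stays self-contained and runnable."""
    name = filename.lower()
    if "mario" in name:
        return "platformer"
    if "tetris" in name:
        return "puzzle"
    return "action"  # off-list label, like GPT-3.5 ignoring the provided categories

def classify_and_check(filenames):
    """Classify each file, then cross-check the output: labels outside
    the allowed set go to a review pile rather than being trusted."""
    ok, review = {}, []
    for f in filenames:
        label = classify(f)
        if label in ALLOWED:
            ok[f] = label
        else:
            review.append((f, label))
    return ok, review

ok, review = classify_and_check(["mario.gba", "tetris.gba", "contra.gba"])
```

      The same pattern scales to the 20,000-row case: trust nothing by default, and route every off-list or low-confidence answer to a human.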

      [A] I work at Microsoft, though not in AI. This describes Copilot to a T. The demos are spectacular and get you so excited to go use it, but the reality is so underwhelming.

      [B] Copilot isn’t underwhelming, it’s shit. What’s impressive is how Microsoft managed to gut GPT-4 to the point of near-uselessness. It refuses to do work even more than OpenAI models refuse to advise on criminal behavior. In my experience, the only thing it does well is scan documents on corporate SharePoint. For anything else, it’s better to copy-paste to a proper GPT-4 yourself.

      [C] lol I can’t help but assume that people who think copilot is shit have no idea what they are doing.

      [D] I have it enabled company-wide at enterprise level, so I know what it can and can’t do in day-to-day practice. // Here’s an example: I mentioned PowerPoint in my earlier comment. You know what’s the correct way to use AI to make you PowerPoint slides? A way that works? It’s to not use the O365 Copilot inside PowerPoint, but rather, ask GPT-4o in ChatGPT app to use Python and pandoc to make you a PowerPoint.
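      For what it’s worth, [D]’s workaround boils down to generating Markdown and letting pandoc do the conversion. A minimal sketch (the filenames and slide content are invented, and it assumes the `pandoc` binary is installed):

```python
import shutil
import subprocess

# Build the deck as plain Markdown: pandoc treats each top-level
# heading as the start of a new slide, and "%" sets the title.
slides_md = """\
% Quarterly Update

# Results

- Revenue up
- Costs down

# Next steps

- Ship the thing
"""

with open("slides.md", "w") as fh:
    fh.write(slides_md)

# Convert to .pptx - but only if pandoc is actually on the PATH.
if shutil.which("pandoc"):
    subprocess.run(["pandoc", "slides.md", "-o", "slides.pptx"], check=True)
```

      No Copilot involved at any step, which is rather the point.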

      A: see, it’s this kind of stuff that makes me mock HN as “Reddit LARPing as h4x0rz”. When a comment opens by asserting the author’s alleged authority over a subject and then makes a claim, odds are high that the claim is some obtuse shit. Like here - the problem is not just LLMs, it’s Copilot being extra shite.

      B: surprisingly sane comment for HN standards, even offering a way to prove their own claim.

      C: yeah, of course you assume = make shit up. Especially about things that you cannot reliably know. All while shifting the discussion from “what” is said to “who” says it. Muppet.

      Author makes good points but suffers from “i am genius and you are an idiot” syndrome which makes it seem mostly the ranting of an asshole vs a coherent article about the state of AI.

      Emphasis mine. It’s like “C” from the quote above, except towards the author of the article. Next~

      I didn’t find this article refreshing. If anything, it’s just the same dismissive attitude that’s dominating this forum, where AI is perceived as the new blockchain. An actually refreshing perspective would be one that’s optimistic.

      I’m glad to see that I’m not the only one who typically doesn’t bother reading HN comments. This commenter doesn’t either - otherwise they’d know that most comments run in the opposite direction, blinded by idiocy/faith/stupidity (my bad, I listed three synonyms for the same thing).

      I’m just going to say it. // The author is an idiot who is using insults as a crutch to make his case.

      I’m just going to say it: the author of this comment is an idiot who is using insults as a crutch to make his case.

      I’m half-joking by being cheeky with the recursion. (It does highlight the hypocrisy though; the commenter is whining about insults while insulting the author.)

      Serious now: if you’re unable to extract the argumentation from the insults, or to understand why the insults are there (it’s a rant dammit), odds are that you’d do a great favour for everyone on the internet by going offline. Forever.


      “But LLMs are intellig–” PILEDRIVE TIME!!!