The main use case for LLMs is writing text nobody wanted to read. The other use case is summarizing text nobody wanted to read. Except they don’t do that either. The Australian Securities and…
The problem is not the LLMs, but what people are trying to do with them.
They are currently spoons, but people are desperately wishing they were katanas.
They work really well for soup, but they can't cut steak. Yet they're being hyped as super ninja steak knives, and people are getting pissed when their spoon won't cut steak.
If you give them watery, soupy tasks they can do successfully, they can lighten your workload, as long as you’re aware of what they are and aren’t good at.
What people want LLMs to be able to do, i.e. "Steak" tasks:
write complex documents
apply complex knowledge/rules to a situation
write complex code and create entire programs based on a vague description
What LLMs can currently do, i.e. "Soup" tasks:
check this document and fix all spelling, punctuation and grammatical errors
summarise this paragraph as dot points
write a python program that sorts my photographs into folders based on the year they were taken
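That last "soup" task really is the kind of thing an LLM can one-shot. As a minimal sketch of what such a script might look like, here is one using only the standard library and the file's modification time as a stand-in for the capture date (reading the actual EXIF date would need a third-party library like Pillow); the function and folder names are just illustrative:

```python
import shutil
from datetime import datetime
from pathlib import Path

def sort_photos_by_year(src: Path, dest: Path) -> None:
    """Move .jpg files from src into dest/<year>/ based on modification time."""
    for photo in src.glob("*.jpg"):
        # Use the file's mtime as a proxy for when the photo was taken.
        year = str(datetime.fromtimestamp(photo.stat().st_mtime).year)
        target_dir = dest / year
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(target_dir / photo.name))
```

Whether an LLM gets this exactly right on the first try varies, but it's squarely inside the spoon-shaped zone.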
Half of Lemmy is hyping katanas, the other half is yelling “Why won’t my spoon cut this steak?!! AI is so dumb!!!”
Clearly this post is about LLMs not succeeding at this task, but anecdotally I've seen it work OK and also fail, just like humans, which is the benchmark, except LLMs are faster.
The entire point here is that they can’t?
humans are clearly faster at generating utterly banal shit, as proven by your posts in this thread
they don’t do any of that soup shit reliably either and reading the article might have told you that
They absolutely do, and I have no idea why you’re so angry
hahaha ok fuck off now
I’d offer congratulations on obfuscating a bad claim with a poor analogy, but you didn’t even do that very well.
more of a Trabant analogy than a Corvette analogy
“spoons and katanas” has got to be the most baby brained analogy. are you a child
Thanks Donald, good luck in November
I get that this is some sort of attempt at an election related Epic Comeback, but it doesn’t make sense
Who cares? It paints the correct picture and adds useful context.
you do realize steaks arriving purple or green are bad things, right
it doesn’t do either of those things
it is stupid and wrong, and i pity your inability to understand that fact
good god this entire post is the most tortured believer whataboutism I’ve encountered this month and there’s extremely strong competition here
you should make a youtube channel, The Katana Steak-Eater. I'd watch the shit out of that at least one Saturday afternoon
Why did this immediately give me a flashback to Donald Trump yelling, "when it comes to great steaks, I've just raised the stakes!"
This level of discourse wouldn’t fly on 4chan, how is it so popular with LLM fans?
needs to be a car analogy
more of a Power Wheels Barbie Jeep whose battery got left out in the sun too long, but I’ll allow it
don’t diss the course, this steak’s great
Actually, LLMs are syringes filled with brain-parasite-infested poop