And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that, I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling
In many of those applications, we are seeing that the required labor is not reduced. The translations need significant editing, just like machine translations of the past. The summaries contain mistakes that waste time and money. The same goes for code, and the maintenance burden increases even in the cases where the code is produced faster, which is often not actually the case. Companies lay people off, then hire them back because it doesn’t work. We can see this being proven in real time as the hype slowly collapses.
I’m not a liberal, and I’ve been extremely careful to avoid conflating different types of so-called “AI” in this thread. If you keep doing so, we’re just going to be talking past each other.
100% agreed, and it can’t come soon enough. Within a few years at most we’ll see where SNLT (socially necessary labor time) was actually, meaningfully reduced by these tools, and my prediction is that it will be very narrow (as you say). At that point, I’ll believe in the applications that have been proven. What’s tragic is not only the wasted resources involved, but the opportunity cost of the technologies not explored as a result, especially other forms of “AI” that are less hyped but more useful.
As I said, I agree. It’s not special, it’s just treated as special by the people who hype it, with potentially disastrous consequences. The use cases are unproven, and the mounting evidence indicates that a lot of them aren’t real and that AI doesn’t actually reduce SNLT.
Yes, in many instances LLMs make mistakes, and if improperly used they can raise the labor-time a company expends above what’s socially necessary. I’d even agree with you if you said this was the norm right now. However, SNLT will go down once the actual use cases of AI in general are narrowed down, and as AI improves. The sheer fact that the set of real use cases is non-zero makes that inevitable.
Regarding what may be more or less useful to develop, that’s what I mean when I say capitalism can’t effectively pick and choose what to develop. Once the AI bubble pops and the hysteria settles, we will see where the actual usefulness lies.
As for how liberals see it, I’m not necessarily addressing you, but describing how liberals perceive AI at present. My point was about that perception and tendency, which takes on a stance similar to that of the Luddites: correctly identifying how capitalism uses new machinery to alienate workers and destroy their living standards, but incorrectly identifying the machinery as the problem rather than the capital relations themselves.
I agree with everything you said here; I think I’m just more pessimistic about how narrow the actually useful applications of LLMs will be.
That’s fair; my desire is more to bridge the gap between comrades who I see as disagreeing more than I think they actually do.