And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that; I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling.
True, but like I said, companies don’t seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they’re not useful in general. That isn’t an issue of capitalism; the issue of capitalism is that, despite this, they still get a hugely disproportionate amount of resources for development and maintenance.
I do oppose the tool (LLMs, not AI) because I have yet to see any use case that justifies the development and maintenance costs. I’ll believe that this technology has useful applications once I actually see those applications in practice; I’m no longer giving the benefit of the doubt to technology we’ve seen fail repeatedly to be implemented in a useful manner. Even for the few useful applications I can think of, I don’t see how they could be considered proportional to the costs of producing and maintaining the models.
As others have explained in this thread, LLMs can be used for translation and summarization, and can at present produce rudimentary code. You also acknowledged use cases like medical imaging software, and I gave the example of stock images earlier. I lump these distinct tools together because they are all conflated anyway, and opposition to, say, generative AI and LLMs is often tied together by liberals.
When I talk about the problems capitalism has in finding use-cases for AI, I say so from the acknowledged position that capitalists are by and large idealists. They reject the law of value and believe AI to be capable of creating new value. This is why it’s being shoved everywhere it doesn’t actually help: capitalists are under the mistaken impression that they can profit off of AI itself, rather than off of surplus labor. We are experiencing a temporary flood of investment in a tool with far narrower use-cases than liberalism will acknowledge, and when this proves divorced from reality and the AI bubble crashes, we will be able to analyze use-cases more properly.
I’m against both the idea that AI has no utility and the idea that AI is anything more than just another tool that needs to be correctly analyzed for possible use-cases. Our job as communists is to develop a correct line on AI and agitate for that line within the broader working class, so that it can be used, in however big or small a capacity, for proletarian liberation. It will not be some epoch-changing tool, and will only be one tool in a much larger toolbox, but it does have utility and already exists whether we like it or not.
In many of those applications, we are seeing that the required labor is not reduced. The translations need significant editing, just like machine translations of the past. The summaries contain mistakes that lead to wasted time and money. The same goes for the code, and the maintenance burden increases even in cases where the code is produced faster, which is often not actually the case. Companies lay people off, then hire them back because it doesn’t work. We can see this being proven in real time as the hype slowly collapses.
I’m not a liberal and I’ve been extremely cautious to avoid conflating different types of so-called “AI” in this thread. If you keep doing so, we’re just going to be talking past each other.
100% agreed, and it can’t come soon enough. In a few years at most, we’ll see where SNLT was actually, meaningfully reduced by these tools, and my prediction is that it will be very narrow (as you say). At that point, I’ll believe in the applications that have been proven. What’s tragic is not only the wasted resources involved but also the opportunity cost of the technologies not explored as a result, especially other forms of “AI” that are less hyped but more useful.
As I said, I agree. It’s not special; it’s just treated as special by the people who hype it, with potentially disastrous consequences. The use cases are unproven, and the mounting evidence indicates that a lot of them aren’t real and that AI doesn’t actually reduce SNLT.
Yes, in many instances LLMs make mistakes, and if improperly used they can raise the labor-time a company uses above what’s socially necessary. I’d even agree with you if you said this is the norm right now. However, SNLT will go down once the actual use-cases of AI in general are narrowed down, and as AI improves. The sheer fact that the use-cases are non-zero necessitates that.
Regarding what may be more or less useful to develop, that’s what I mean when I say capitalism can’t effectively pick and choose what to develop. Once the AI bubble pops and the hysteria dies down, we will see where the actual usefulness lies.
As for how liberals see it, I’m not necessarily addressing you; I’m describing how liberals see AI at present. My point was about that perception and tendency, which takes a stance similar to the Luddites’: correctly identifying how capitalism uses new machinery to alienate workers and destroy their living standards, but incorrectly identifying the machinery as the problem rather than the capital relations themselves.
I agree with everything you said here; I think I’m just more pessimistic about how narrow the actually useful applications of LLMs will turn out to be.
That’s fair. My desire is more to try to bridge the gap between comrades who I see as disagreeing more than they actually do.