And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that, I’m talking about why the anti-GenAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling

  • LeninWeave [none/use name, any]@hexbear.net
    6 days ago

    In many of those applications, we are seeing that the required labor is not reduced. The translations need significant editing, just like machine translations of the past. The summaries contain mistakes that lead to wasted time and money. The same goes for code, and the maintenance burden increases even in cases where the code is produced faster, which is often not actually the case. Companies lay people off, then hire them back when it doesn’t work. We can see this being proven in real time as the hype slowly collapses.

    I lump these distinct tools together because they are all conflated anyway, and opposition to, say, generative AI and LLMs is often tied together by liberals.

    I’m not a liberal and I’ve been extremely cautious to avoid conflating different types of so-called “AI” in this thread. If you keep doing so, we’re just going to be talking past each other.

    We are experiencing a temporary flood of investment in a tool with far narrower use-cases than Liberalism will acknowledge, and when this proves divorced from reality and the AI bubble crashes, we will be able to more properly analyze use-cases.

    100% agreed, and it can’t come soon enough. In a few years at most we’ll see where SNLT was actually, meaningfully reduced by these tools, and my prediction is that it will be very narrow (as you say). At that point, I’ll believe in the applications that have been proven. What’s tragic is not only the wasted resources involved, but the opportunity cost of the technologies not explored as a result. Especially other forms of “AI” that are less hyped but more useful.

    I’m against both the idea that AI has no utility, and the idea that AI is anything more than just another tool that needs to be correctly analyzed for possible use-cases. Our job as communists is to develop a correct line on AI and agitate for that line within the broader working class, so that it can be used (in however big or small a capacity) for proletarian liberation. It will not be some epoch-changing tool, and will only be one tool in a much larger toolbox, but it does have utility and already exists whether we like it or not.

    As I said, I agree. It’s not special; it’s just treated as special by the people who hype it, with potentially disastrous consequences. The use cases are unproven, and the mounting evidence indicates that a lot of the use cases aren’t real and AI actually doesn’t reduce SNLT.

    • Cowbee [he/they]@lemmygrad.ml
      6 days ago

      Yes, in many instances LLMs make mistakes, and if improperly used they can raise the labor-time a company uses above what’s socially necessary. I’d even say I agree with you if you said this was the norm right now. However, SNLT will go down once the actual use-cases of AI in general are narrowed down, and as AI improves. The sheer fact that use-cases are non-zero necessitates that.

      Regarding what may be more or less useful to develop, that’s what I mean when I say capitalism can’t effectively pick and choose what to develop. Once the AI bubble pops and the hysteria settles, we will see where the actual usefulness lies.

      As for how liberals see it, I’m not necessarily addressing you, but talking about how liberals perceive AI presently. My point was about that perception and tendency, which takes a stance similar to the Luddites’: correctly identifying how capitalism uses new machinery to alienate workers and destroy their living standards, but incorrectly identifying the machinery as the problem rather than the capital relations themselves.