And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that. I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling

  • Cowbee [he/they]@lemmygrad.ml · 6 days ago

    Some people treat it that way, and I agree that it’s a problem. There are also people who take a dogmatically anti-AI stance that teeters into idealism as well. The real struggle around AI is in identifying how we as the proletariat can make use of it, identifying what its limits are, while using it to the best of our abilities for any of its actually useful use-cases. As communists, we sit at an advantage already by understanding that it cannot create new value, which is why we must do our best to take a class-focused and materialist analysis of how it changes class dynamics (and how it doesn’t).

    • LeninWeave [none/use name, any]@hexbear.net · edited · 6 days ago

      I agree with you here, although I want to make a distinction between “AI” in general (many useful use cases) and LLMs (personally, I have never seen a truly convincing use case, or at least not one that justifies the amount of development going into them). Not even LLM companies seem to be able to significantly reduce SNLT (socially necessary labor time) with LLMs without causing major problems for themselves.

      Fundamentally, in my opinion, the mistaken way people treat it is a core part of the issue. No capitalist ever thought a drill press was a human being capable of coming up with its own ideas. The fact that this is a widespread belief about LLMs leads to widespread decision-making that produces extremely harmful outcomes for all of society: a generation of workers much less able to think for themselves because they’re used to relying on the recycled ideas of an LLM, and a body of knowledge contaminated with garbage that’s difficult to separate from genuine information.

      I think any materialist analysis would have to conclude that these things have very dubious use cases (maybe things like customer service chatbots), and therefore that most of the labor and resources put into their development are wasted and would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.

      • CriticalResist8@lemmygrad.ml · 6 days ago

        would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.

        This is what China is developing currently, along with many other cool things with AI. Medical imaging AI was also found to have its limitations, though; maybe it needs a different neural-network approach.

        Just because capitalist companies say you can or should use their bot as a companion doesn’t mean you have to. We don’t have to listen to them. I’ve used AI to code stuff a lot, and it got results – all for volunteer and free work, where hiring someone would have been prohibitive, and AI (an LLM, specifically) was the difference between offering the feature and canceling the idea completely.

        There’s a guy on YouTube who bought Unitree’s top-of-the-line humanoid robot (yes, they ship to your doorstep from China lol) and codes for it with LLM help, because the documentation is not super great yet. Then with other models he can do real-time image detection, or use the LIDAR more meaningfully than he could without AI. I’m not sure where he’s at today with his robot; he was working on getting it to fetch a beer from the fridge. Baby steps, because at this stage these bots come with nothing in them except the SDK, and you have to code literally everything you want them to do, including standing idle. He showed an interesting demo of the image recognition: in just one second it can detect the glass bottles in the camera frame, and even their color, and draw a box around each one. It’s a new-ish model and I’m not entirely sure how it works, but I assume it has an LLM in it so it can detect and describe any object.
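
        If you’re curious what the “detect any object from a text prompt” part can look like, here’s a minimal sketch using OWL-ViT, a publicly available open-vocabulary detector on Hugging Face. To be clear, I have no idea what stack he actually runs on the robot; the checkpoint and queries below are just illustrative:

        ```python
        # Minimal open-vocabulary detection sketch (illustrative only; this is
        # a public OWL-ViT checkpoint, not necessarily what the YouTuber uses).
        import torch
        from PIL import Image
        from transformers import OwlViTProcessor, OwlViTForObjectDetection

        processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
        model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

        image = Image.open("camera_frame.jpg")  # one frame from the robot's camera
        queries = [["a green glass bottle", "a brown glass bottle"]]

        inputs = processor(text=queries, images=image, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)

        # Convert raw model outputs into pixel-space boxes above a confidence cutoff.
        target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
        results = processor.post_process_object_detection(
            outputs=outputs, threshold=0.2, target_sizes=target_sizes
        )[0]

        for box, score, label in zip(results["boxes"], results["scores"], results["labels"]):
            print(f"{queries[0][int(label)]}: score={score.item():.2f}, box={box.tolist()}")
        ```

        The text queries are free-form, which is what lets it box “any” object (and its color) without retraining.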

        I’m mostly on DeepSeek these days; I’ve completely stopped using ChatGPT because it just sucks at everything. DeepSeek hallucinates much less and keeps getting more reliable, although it still outputs nonsensical comparisons. But it’s like with everything you don’t know: double-check and exercise critical thinking. Before LLMs, we had Wikipedia to ask our questions, and it wasn’t any better (and still isn’t).

        edit - Like when DeepSeek came out with reasoning, which they pioneered: it completely redefined LLM development, and more work has been done from that new state of things, improving it all the time. They keep finding new methods to improve AI. If there’s a fundamental criticism I would make, it’s that perhaps it was launched too soon (though neural networks have existed for over a decade), and of course it was overpromised by tech companies who rely on their AI product to survive.

        OpenAI is dying because they have nothing to offer other than GPT; they don’t make money on cloud solutions or hardware or anything like that. If their model dies, they die along with it. So they’re in startup-philosophy mode, where they try to iterate as fast as possible and treat any update as a good update (even when it’s not) just to retain users. They bleed $1 billion a month and live entirely on investor money; startup mode just doesn’t scale that high up. It’s not their $20 subscriptions that are ever going to keep them afloat lol.
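
        Side note for anyone who wants to try it: DeepSeek’s API is OpenAI-compatible (per their public docs), so switching over is only a few lines of Python. A minimal sketch, with a placeholder key and prompt:

        ```python
        # Minimal sketch of calling DeepSeek through its OpenAI-compatible API.
        # Endpoint and model names are from DeepSeek's public docs; the key and
        # prompt are placeholders.
        from openai import OpenAI

        client = OpenAI(
            base_url="https://api.deepseek.com",
            api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder
        )

        response = client.chat.completions.create(
            model="deepseek-chat",  # or "deepseek-reasoner" for the reasoning model
            messages=[
                {"role": "system", "content": "Answer concisely and flag anything worth double-checking."},
                {"role": "user", "content": "Summarize the difference between value and exchange-value."},
            ],
        )
        print(response.choices[0].message.content)
        ```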

      • Cowbee [he/they]@lemmygrad.ml · 6 days ago

        I think that’s a problem general to capitalism, and the orientation of production for profit rather than utility. What we need to do as communists is take an active role in clarifying the limitations and use-cases of AI, be they generative images, LLMs, or things like imaging analysis. I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

        • LeninWeave [none/use name, any]@hexbear.net · edited · 6 days ago

          I think that’s a problem general to capitalism, and the orientation of production for profit rather than utility.

          True, but like I said, companies don’t seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they’re not useful in general. That isn’t an issue of capitalism; the issue of capitalism is that, despite this, they still get a hugely disproportionate amount of resources for development and maintenance.

          I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

          I do oppose the tool (LLMs, not AI in general), because I have yet to see any use case that justifies the development and maintenance costs. I’ll believe that this technology has useful applications once I actually see those applications in practice; I’m no longer giving the benefit of the doubt to a technology we’ve seen fail repeatedly to be implemented in a useful manner. Even for the few useful applications I can think of, I don’t see how they could be considered proportional to the costs of producing and maintaining the models.

          • Cowbee [he/they]@lemmygrad.ml · 6 days ago

            As others have explained in this thread, LLMs can be used for translation and summarization, and can at the present moment produce rudimentary code. You also acknowledged use cases like medical imaging software, and I gave the example of stock images earlier. I lump these distinct tools together because they are all conflated anyway, and opposition to, say, generative AI and LLMs is often tied together by liberals.

            When I talk about the problems capitalism has in finding use-cases for AI, I do so from the acknowledged position that capitalists are by and large idealists. They reject the law of value and believe AI to be capable of creating new value. This is why it’s being shoved everywhere it doesn’t actually help: capitalists are under the mistaken impression that they can profit off of AI itself rather than off of surplus labor. We are experiencing a temporary flood of investment in a tool with far narrower use-cases than liberalism will acknowledge, and when this proves divorced from reality and the AI bubble crashes, we will be able to analyze the use-cases more properly.

            I’m against both the idea that AI has no utility, and the idea that AI is anything more than just another tool that needs to be correctly analyzed for possible use-cases. Our job as communists is to develop a correct line on AI and agitate for that line within the broader working class, so that it can be used (in however big or small the capacity) for proletarian liberation. It will not be some epoch-changing tool, and will only be one tool in a much larger toolbox, but it does have utility and already exists whether we like it or not.

            • LeninWeave [none/use name, any]@hexbear.net · 6 days ago

              In many of those applications, we are seeing that the required labor is not reduced. The translations need significant editing, just like the machine translations of the past. The summaries contain mistakes that lead to wasted time and money. The same goes for the code, where the maintenance burden increases even in cases where the code is produced faster, which is often not actually the case. Companies lay people off, then hire them back because it doesn’t work. We can see this being proven in real time as the hype slowly collapses.

              I lump these distinct tools together because they are all conflated anyways, and opposition to, say, generative AI and LLMs is often tied together by liberals.

              I’m not a liberal, and I’ve been extremely careful to avoid conflating different types of so-called “AI” in this thread. If you keep doing so, we’re just going to be talking past each other.

              We are experiencing a temporary flood of investment in a tool with far more narrow use-cases than Liberalism will acknowledge, and when this proves divorced from reality and the AI bubble crashes, we will be able to more properly analyze use-cases.

              100% agreed, and it can’t come soon enough. In a few years at most, we’ll see where SNLT was actually, meaningfully reduced by these tools, and my prediction is that it will be very narrow (as you say). At that point, I’ll believe in the applications that have been proven. What’s tragic is not only the wasted resources, but the opportunity cost of the technologies not explored as a result – especially other forms of “AI” that are less hyped but more useful.

              I’m against both the idea that AI has no utility, and the idea that AI is anything more than just another tool that needs to be correctly analyzed for possible use-cases. Our job as communists is to develop a correct line on AI and agitate for that line within the broader working class, so that it can be used (in however big or small the capacity) for proletarian liberation. It will not be some epoch-changing tool, and will only be one tool in a much larger toolbox, but it does have utility and already exists whether we like it or not.

              As I said, I agree. It’s not special; it’s just treated as special by the people who hype it, with potentially disastrous consequences. The use cases are unproven, and the mounting evidence indicates that a lot of them aren’t real and that AI doesn’t actually reduce SNLT.

              • Cowbee [he/they]@lemmygrad.ml · 6 days ago

                Yes, in many instances LLMs make mistakes, and if improperly used they can raise the labor-time a company uses above what’s socially necessary. I’d even agree with you if you said this was the norm right now. However, SNLT will go down once the actual use-cases of AI in general are narrowed down, and as AI improves. The sheer fact that the use-cases are non-zero necessitates that.

                Regarding what may be more or less useful to develop, that’s what I mean when I say capitalism can’t effectively pick and choose what to develop. Once the AI bubble pops and the hysteria settles, we will see where the actual usefulness lies.

                As for talking about how liberals see it, I’m not necessarily addressing you, but talking about how liberals see AI presently. My point was about that perception and tendency, which takes on a stance similar to the Luddites’: correctly identifying how capitalism uses new machinery to alienate workers and destroy their living standards, but incorrectly identifying the machinery as the problem rather than the capital relations themselves.