And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that; I’m talking about why the anti-GenAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling.

  • LeninWeave [none/use name, any]@hexbear.net · +29/−4 · 8 days ago

    The cost to human knowledge and even thinking ability is huge

    100%.

    We are communists. We should understand the labor theory of value. Therefore, we should understand why GenAI does not create any new value: it’s not a person and it does no labor. It recycles existing knowledge into a lower-average-quality slurry, which is dispersed into the body of human knowledge used to train the next model, which is used to produce slop that is dispersed into the… and so on and so forth.
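
    To make that recycling loop concrete, here’s a toy sketch (a hypothetical illustration of the dynamic, not anything from this thread): each “generation” trains only on a sample of the previous generation’s output, so any idea that drops out of one generation can never reappear in the next.

    ```python
    # Toy model of recursive training on generated output. The vocabulary
    # size, Zipf-like weights, and sample size are arbitrary assumptions,
    # chosen only to make the collapse visible.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = np.arange(200)                 # 200 distinct "ideas"
    weights = 1.0 / (vocab + 1.0)          # Zipf-like: most ideas are rare
    corpus = rng.choice(vocab, size=500, p=weights / weights.sum())

    for generation in range(1, 11):
        # the next model's training data is a sample of the previous
        # model's output, so ideas absent here are gone for good
        corpus = rng.choice(corpus, size=500)
        print(f"gen {generation}: {np.unique(corpus).size} distinct ideas left")
    ```

    The count of distinct ideas can only shrink from one generation to the next, which is the “slurry” dynamic in miniature.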

    • Cowbee [he/they]@lemmygrad.ml · +11/−1 · 7 days ago

      I don’t think that’s the point Marxists who are less anti-AI are making. Liberals might, but they reject the LTV. If we apply the law of value to generative AI, then we know that it’s the same as all machinery: simply crystallized former labor that can lower the socially necessary labor time of certain commodities in certain conditions.

      Take, say, a stock image for a PowerPoint slide that illustrates a concept. We can either have people dedicated to making stock images for a broad and unique enough range of situations, plus people to search for and select the right image, or we can generate an image or two and be done with it. Side by side, the end products are near-identical, but the labor-time involved in each chain is different. The value isn’t higher for the generated image; it lowers the socially necessary labor time for stock images.

      We are communists here, and while I do think there’s some merit to the argument that misunderstanding the boundaries and limitations of LLMs leads some workers and capitalists to rely on them in situations they can’t handle, I also think the visceral hatred I see for AI is sometimes clouding people’s judgement.

      TL;DR: AI does have use cases. It isn’t creating new value, but it can lower SNLT in certain situations, and we as communists need to properly analyze those rather than dogmatically dismiss it whole-cloth. It’s over-applied under capitalism due to the AI bubble, but that doesn’t mean it’s never usable.

      • LeninWeave [none/use name, any]@hexbear.net · +7/−1 · 7 days ago

        I generally agree with you here; my problem is that despite this, people do treat AI as though it’s capable of thought and of labor. In this very thread there are some (luckily not many) people doing it. As you say, it’s crystallized labor, just like a drill press.

        • Cowbee [he/they]@lemmygrad.ml · +10/−1 · 7 days ago

          Some people treat it that way, and I agree that it’s a problem. There are also people who take a dogmatically anti-AI stance that teeters into idealism. The real struggle around AI is in identifying how we as the proletariat can make use of it and what its limits are, while using it to the best of our abilities for any of its actually useful use-cases. As communists, we sit at an advantage already by understanding that it cannot create new value, which is why we must do our best to take a class-focused and materialist analysis of how it changes class dynamics (and how it doesn’t).

          • LeninWeave [none/use name, any]@hexbear.net · +6/−1 · 7 days ago (edited)

            I agree with you here, although I want to make a distinction between “AI” in general (many useful use cases) and LLMs (personally, I have never seen a truly convincing use case, or at least not one that justifies the amount of development going into them). Not even LLM companies seem to be able to significantly reduce SNLT with LLMs without causing major problems for themselves.

            Fundamentally, in my opinion, the mistaken way people treat it is a core part of the issue. No capitalist ever thought a drill press was a human being capable of coming up with its own ideas. The fact that this is a widespread belief about LLMs leads to widespread decision making that produces extremely harmful outcomes for all of society, including the creation of a generation of workers who are much less able to think for themselves because they’re used to relying on the recycled ideas of an LLM, and a body of knowledge contaminated with garbage that’s difficult to separate from genuine information.

            I think any materialist analysis would have to conclude that these things have very dubious use cases (maybe things like customer service chat bots) and therefore that most of the labor and resources put into their development are wasted and would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.

            • Cowbee [he/they]@lemmygrad.ml · +7 · 7 days ago

              I think that’s a problem general to capitalism and its orientation of production for profit rather than utility. What we need to do as communists is take an active role in clarifying the limitations and use-cases of AI, be they generative images, LLMs, or things like imaging analysis. I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

              • LeninWeave [none/use name, any]@hexbear.net · +4/−2 · 7 days ago (edited)

                I think that’s a problem general to capitalism and its orientation of production for profit rather than utility.

                True, but like I said, companies don’t seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they’re not useful in general. This isn’t an issue of capitalism; the issue of capitalism is that, despite this, they still get a hugely disproportionate amount of resources for development and maintenance.

                I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

                I do oppose the tool (LLMs, not AI) because I have yet to see any use case that justifies the development and maintenance costs. I’ll believe that this technology has useful applications once I actually see those useful applications in practice; I’m no longer giving the benefit of the doubt to technology we’ve seen fail repeatedly to be implemented in a useful manner. Even the few useful applications I can think of, I don’t see how they could be considered proportional to the costs of producing and maintaining the models.

                • Cowbee [he/they]@lemmygrad.ml · +7/−1 · 7 days ago

                  As others have explained in this thread, LLMs can be used for translation and summarization, and can at the present moment produce rudimentary code. You also acknowledged use cases like medical imaging software, and I gave the example of stock images earlier. I lump these distinct tools together because they are all conflated anyway, and opposition to, say, generative AI and LLMs is often tied together by liberals.
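
                  As a minimal sketch of the translation use-case via an OpenAI-compatible chat API (my own illustration; the endpoint, model name, and environment variable are assumptions, not something anyone in the thread vouches for):

                  ```python
                  # Hedged sketch: translation through an OpenAI-compatible API.
                  # Endpoint, model name, and env var are assumptions.
                  import os
                  from openai import OpenAI

                  client = OpenAI(
                      base_url="https://api.deepseek.com",     # assumed endpoint
                      api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed env var
                  )

                  resp = client.chat.completions.create(
                      model="deepseek-chat",  # assumed model name
                      messages=[
                          {"role": "system",
                           "content": "Translate the user's text into English, preserving tone."},
                          {"role": "user",
                           "content": "La plusvalía es la fuente de la ganancia capitalista."},
                      ],
                  )
                  print(resp.choices[0].message.content)
                  ```

                  As the reply below notes, output like this still needs human review, which is exactly where the SNLT question gets contested.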

                  When I talk about the problems capitalism has in finding use-cases for AI, I do so from the acknowledged position that capitalists are by and large idealists. They reject the law of value and believe AI to be capable of creating new value. This is why it’s being shoved everywhere it doesn’t actually help: capitalists are under the mistaken impression that they can profit off of AI itself rather than off of surplus labor. We are experiencing a temporary flood of investment in a tool with far more narrow use-cases than Liberalism will acknowledge, and when this proves divorced from reality and the AI bubble crashes, we will be able to more properly analyze use-cases.

                  I’m against both the idea that AI has no utility, and the idea that AI is anything more than just another tool that needs to be correctly analyzed for possible use-cases. Our job as communists is to develop a correct line on AI and agitate for that line within the broader working class, so that it can be used (in however big or small the capacity) for proletarian liberation. It will not be some epoch-changing tool, and will only be one tool in a much larger toolbox, but it does have utility and already exists whether we like it or not.

                  • LeninWeave [none/use name, any]@hexbear.net · +4/−2 · 7 days ago

                    In many of those applications, we are seeing that the required labor is not reduced. The translations need significant editing, just like machine translations of the past. The summaries contain mistakes that lead to wasted time and money. The same goes for the code, plus the maintenance burden is increased even in cases where the code is produced faster, which is often not actually the case. Companies lay people off, then hire them back because it doesn’t work. We can see this being proven in real time as the hype slowly collapses.

                    I lump these distinct tools together because they are all conflated anyway, and opposition to, say, generative AI and LLMs is often tied together by liberals.

                    I’m not a liberal and I’ve been extremely cautious to avoid conflating different types of so-called “AI” in this thread. If you keep doing so, we’re just going to be talking past each other.

                    We are experiencing a temporary flood of investment in a tool with far more narrow use-cases than Liberalism will acknowledge, and when this proves divorced from reality and the AI bubble crashes, we will be able to more properly analyze use-cases.

                    100% agreed, and it can’t come soon enough. In a few years at most we’ll see where SNLT was actually, meaningfully reduced by these tools, and my prediction is that it will be very narrow (as you say). At that point, I’ll believe in the applications that have been proven. What’s tragic is not only the wasted resources involved, but the opportunity cost of the technologies not explored as a result, especially other forms of “AI” that are less hyped but more useful.

                    I’m against both the idea that AI has no utility, and the idea that AI is anything more than just another tool that needs to be correctly analyzed for possible use-cases. Our job as communists is to develop a correct line on AI and agitate for that line within the broader working class, so that it can be used (in however big or small the capacity) for proletarian liberation. It will not be some epoch-changing tool, and will only be one tool in a much larger toolbox, but it does have utility and already exists whether we like it or not.

                    As I said, I agree. It’s not special, it’s just treated as special by the people who hype it, with potentially disastrous consequences. The use cases are unproven, and the mounting evidence indicates that a lot of them aren’t real and AI actually doesn’t reduce SNLT.

            • CriticalResist8@lemmygrad.ml · +8/−1 · 7 days ago

              would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.

              This is what China is currently developing, along with many other cool things with AI. Medical imaging AI was also found to have its limitations, though; maybe it needs a different neural method.

              Just because capitalist companies say you can or should use their bot as a companion doesn’t mean you have to. We don’t have to listen to them. I’ve used AI to code stuff a lot, and it got the results – all for volunteer and free work, where hiring someone would have been prohibitive, and AI (an LLM specifically) was the difference between offering the feature and canceling the idea completely.

              There’s a guy on YouTube who bought Unitree’s top-of-the-line humanoid robot (yes, they ship to your doorstep from China lol) and codes for it with LLM help, because the documentation is not super great yet. Then with other models he gets real-time image detection, or can use the LIDAR more meaningfully than he could without AI. I’m not sure where he’s at today with his robot; he was working on getting it to fetch a beer from the fridge – baby steps, because at this stage these bots come with nothing in them except the SDK, and you have to code literally everything you want them to do, including standing idle. The image recognition has an LLM in it so that it can detect any object. He showed an interesting demo: in just one second, it can detect the glass bottles in the camera frame, and even their color, and draw a frame around each one. This is a new-ish model and I’m not entirely sure how it works, but I assume it has to have an LLM in it to describe the image.
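
              For flavor, here’s roughly what that bottle-detection demo might look like in code. This is a guess on my part: it assumes an off-the-shelf detector (ultralytics YOLO) rather than whatever model he actually used, and the “color” is just crude pixel averaging inside each box.

              ```python
              # Sketch of a detect-and-label demo: find objects in one camera
              # frame, draw boxes, and estimate each object's average color.
              # The detector choice (YOLO via ultralytics) is an assumption.
              import cv2
              import numpy as np
              from ultralytics import YOLO

              model = YOLO("yolov8n.pt")  # small pretrained detector (assumed)

              cap = cv2.VideoCapture(0)   # default camera
              ok, frame = cap.read()
              if ok:
                  for result in model(frame):
                      for box in result.boxes:
                          x1, y1, x2, y2 = map(int, box.xyxy[0])
                          # mean BGR over the detected region as a color estimate
                          b, g, r = np.mean(frame[y1:y2, x1:x2], axis=(0, 1))
                          name = result.names[int(box.cls[0])]
                          label = f"{name} rgb({int(r)},{int(g)},{int(b)})"
                          cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                          cv2.putText(frame, label, (x1, y1 - 5),
                                      cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
                  cv2.imwrite("detections.jpg", frame)
              cap.release()
              ```

              Whether his setup does anything smarter than that for color is unknown; the point is just that the detection-plus-box part is commodity tooling now.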

              I’m mostly on DeepSeek these days; I’ve completely stopped using ChatGPT because it just sucks at everything. DeepSeek hallucinates so much less and becomes more and more reliable, although it still outputs nonsensical comparisons. But it’s like with everything you don’t know: double-check and exercise critical thinking. Before LLMs, we had Wikipedia to ask our questions, and it wasn’t any better (and still isn’t).

              edit – when DeepSeek came out with reasoning, which they pioneered, it completely redefined LLM development, and more work has been done from this new state of things, improving it all the time. They keep finding new methods to improve AI. If there’s a fundamental criticism I would make, it’s that it was perhaps launched too soon (though neural networks have existed for over a decade), and of course it was overpromised by tech companies who rely on their AI product to survive.

              OpenAI is dying because they don’t have anything else to offer than GPT; they don’t make money on cloud solutions or hardware or anything like that. If their model dies, they die along with it. So they’re in startup-philosophy mode, where they try to iterate as fast as possible and consider any update a good update (even when it’s not), just to try and retain users. They bleed $1 billion a month and live entirely on investor money; startup mode just doesn’t scale that high up. It’s not their $20 subscriptions that are ever going to keep them afloat lol.

    • CriticalResist8@lemmygrad.ml · +7/−1 · 8 days ago

      I don’t follow. LLMs are a machine, of course, but what does that imply? That something needs to be productive to exist? By the same LTV, LLMs reduce socially necessary labor time, like all machines.

      • LeninWeave [none/use name, any]@hexbear.net · +5/−2 · 7 days ago

        LLMs are a machine, of course, but what does that imply?

        That they create nothing on their own, and the way they are used currently leads to a degradation of the body of knowledge used to train the next generation of LLMs because people treat them like they’re human beings capable of thought and not language recyclers, spewing their output directly into written works.

    • chgxvjh [he/him, comrade/them]@hexbear.net · +2/−2 · 7 days ago

      Sure, that tells us that some of the massive investments are stupid, because their end product won’t have much or any value.

      You still have a bunch of workers that used to produce something of value that required a certain amount of labor and is now replaced by slop.

      So the conclusion of the analysis ends up fairly similar; you just sound more like a dork in the process.

      • LeninWeave [none/use name, any]@hexbear.net · +6/−1 · 7 days ago

        You still have a bunch of workers that used to produce something of value that required a certain amount of labor and is now replaced by slop.

        A lot of the applications of AI specifically minimize worker involvement, meaning the output is 100% slop. That slop is included in the training data for the next model, leading to a cycle of degradation. In the end, the pool of human knowledge is contaminated with plausible-sounding written works that are wrong in various ways, the amount of labor required to learn anything is increased by having to filter through it, and the amount of waste due to people learning incorrect things and acting on them is also increased.