And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that; I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling

  • blobii@lemmygrad.ml
    link
    fedilink
    arrow-up
    2
    ·
    2 days ago

    genAI tries to computerise the only thing we can truly call human: abstract thought in creativity. So it’s bad because it feels cold and inhuman, and it doesn’t even do its job that well.

  • Twongo [she/her]@lemmy.ml
    link
    fedilink
    arrow-up
    11
    ·
    4 days ago

    genai turned the internet into a hellhole. nothing is genuine. information became worthless. facts don’t matter anymore.

    it carries itself into the world outside the internet. slopaganda, decision making and policymaking are affected by genai and will make your life actively worse.

    welcome to the post-fact world where you can’t even trust yourself.

  • BarrelsBallot@lemmygrad.ml
    link
    fedilink
    arrow-up
    9
    ·
    4 days ago

    Why would you want to outsource one of the last vestiges of being a human we have left (thinking) to a 3rd party of any kind?

    I don’t care if it’s an AI or an underprivileged person in another region of the world, get that shit out of here. The internet and similar tools of isolation are bad enough, now we’re being handed keys to an artificial friend keen on severing our social connections and ability to think on our own.

  • queermunist she/her@lemmy.ml
    link
    fedilink
    arrow-up
    35
    arrow-down
    2
    ·
    5 days ago

    It’s a toy. I’m not against toys, but the amount of energy and resources we are pouring into this toy is alarming.

  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml
    link
    fedilink
    arrow-up
    31
    ·
    5 days ago

    My impression is that a lot of people realize this tech will be used against them under capitalism, and they feel threatened by it. The real problem isn’t with the tech itself, but with capitalist relations, and that’s where people should direct their energy.

  • LeninWeave [none/use name, any]@hexbear.net
    link
    fedilink
    English
    arrow-up
    43
    arrow-down
    5
    ·
    5 days ago

    an alternative where you can get instant feedback when you’re journaling

    GenAI isn’t giving you feedback. It’s not a person. The entire thing is a social black hole for a society where everyone is already deeply alienated from each other.

  • knfrmity@lemmygrad.ml
    link
    fedilink
    English
    arrow-up
    40
    arrow-down
    9
    ·
    5 days ago
    • It’s a complete waste of resources
    • The economic fallout of the bubble bursting could be unprecedented. (Yes shareholder value ≠ quality of life, but we’ve seen how working people get fucked over when the stock market crashes)
    • The environmental fallout is rarely considered
    • The cost to human knowledge and even thinking ability is huge
    • The emotional relationships people form with these models are concerning
    • What’s the societal cost of further isolating people?
    • What opportunity cost is there? How many actually useful things aren’t being discovered because the big seven are too focused on LLMs?
    • Nobody even wants LLMs. There’s no path to profitability. GenAI is a trillion dollar meme.
    • Even when it does generate useful output sometimes, LLMs are probabilistic and therefore outputs are not reproducible
    • Why do you need instant feedback when you’re doing absolutely anything? (Sometimes it’s warranted but then talk with a person)
    • LeninWeave [none/use name, any]@hexbear.net
      link
      fedilink
      English
      arrow-up
      29
      arrow-down
      4
      ·
      5 days ago

      The cost to human knowledge and even thinking ability is huge

      100%.

      We are communists. We should understand the labor theory of value. Therefore, we should understand why GenAI does not create any new value: it’s not a person and it does no labor. It recycles existing knowledge into a lower-average-quality slurry, which is dispersed into the body of human knowledge used to train the next model which is used to produce slop that is dispersed into the… and so on and so forth.
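
      As a toy sketch of that recycling cycle (a stylized statistical illustration, not any real training pipeline): fit a “model” to data, generate a new corpus from it while under-representing the rare tails the way likelihood-trained generators tend to, retrain on that output, and repeat.

      ```python
      # Stylized illustration of training on recycled output (assumed setup,
      # not any real pipeline): each "model" is a Gaussian fit to the
      # previous model's generations, which under-represent the tails.
      import random
      import statistics

      random.seed(0)
      data = [random.gauss(0.0, 1.0) for _ in range(2000)]  # original human-made data

      for generation in range(1, 7):
          mu = statistics.mean(data)
          sigma = statistics.stdev(data)
          # "Generate" the next training corpus, dropping low-probability tails.
          data = [x for x in (random.gauss(mu, sigma) for _ in range(2000))
                  if abs(x - mu) < 1.5 * sigma]
          print(f"generation {generation}: corpus std = {statistics.stdev(data):.3f}")
      # The spread shrinks every round: each pass narrows what the next
      # model ever gets to see.
      ```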

      • Cowbee [he/they]@lemmygrad.ml
        link
        fedilink
        arrow-up
        10
        arrow-down
        1
        ·
        5 days ago

        I don’t think that’s the point Marxists that are less anti-AI are making. Liberals might, but they reject the LTV. If we apply the law of value to generative AI, then we know that it’s the same as all machinery, it’s simply crystallized former labor that can lower the socially necessary labor time of certain commodities in certain conditions.

        Take, say, a stock image for a powerpoint slide that illustrates a concept. We can either have people dedicated to making stock images in broad and unique enough situations, and have people search for and select the right image, or we can generate an image or two and be done with it. Side by side, the end products are near-identical, but the labor-time involved in the chain for each is different. The value isn’t higher for the generated image; it lowers the socially necessary labor time for stock images.

        We are communists, here, and while I do think there’s some merit to the argument that misunderstanding the boundaries and limitations of LLMs leads to some workers and capitalists relying on it in situations it cannot handle, I also think the visceral hatred I see for AI is sometimes clouding people’s judgements.

        TL;DR AI does have use cases. It isn’t creating new value, but it can lower SNLT in certain situations, and we as communists need to properly analyze those rather than dogmatically dismiss it whole-cloth. It’s over-applied in capitalism due to the AI bubble, that doesn’t mean it’s never usable.

        • LeninWeave [none/use name, any]@hexbear.net
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          1
          ·
          5 days ago

          I generally agree with you here, my problem is that despite this people do treat AI as though it’s capable of thought and of labor. In this very thread there are some (luckily not many) people doing it. As you say, it’s crystallized labor, just like a drill press.

          • Cowbee [he/they]@lemmygrad.ml
            link
            fedilink
            arrow-up
            10
            arrow-down
            1
            ·
            5 days ago

            Some people treat it that way, and I agree that it’s a problem. There’s also the people that take a dogmatically anti-AI stance that teeters into idealist as well. The real struggle around AI is in identifying how we as the proletariat can make use of it, identifying what its limits are, while using it to the best of our abilities for any of its actually useful use-cases. As communists, we sit at an advantage already by understanding that it cannot create new value, and is why we must do our best to take a class-focused and materialist analysis of how it changes class dynamics (and how it doesn’t).

            • LeninWeave [none/use name, any]@hexbear.net
              link
              fedilink
              English
              arrow-up
              6
              arrow-down
              1
              ·
              edit-2
              5 days ago

              I agree with you here, although I want to make a distinction between “AI” in general (many useful use cases) and LLMs (personally, I have never seen a truly convincing use case, or at least not one that justifies the amount of development going into them). Not even LLM companies seem to be able to significantly reduce SNLT with LLMs without causing major problems for themselves.

              Fundamentally, in my opinion, the mistaken way people treat it is a core part of the issue. No capitalist ever thought a drill press was a human being capable of coming up with its own ideas. The fact that this is a widespread belief about LLMs leads to widespread decision making that produces extremely harmful outcomes for all of society, including the creation of a generation of workers who are much less able to think for themselves because they’re used to relying on the recycled ideas of an LLM, and a body of knowledge contaminated with garbage that’s difficult to separate from genuine information.

              I think any materialist analysis would have to conclude that these things have very dubious use cases (maybe things like customer service chat bots) and therefore that most of the labor and resources put into their development are wasted and would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.

              • CriticalResist8@lemmygrad.ml
                link
                fedilink
                arrow-up
                8
                arrow-down
                1
                ·
                5 days ago

                would have been better allocated to anything else, including the development of type of “AI” that are more useful, like medical imaging analysis applications.

                This is what China is currently developing, along with many other cool things with AI. Medical imaging AI was also found to have its limitations, though; maybe they need to use a different neural method.

                Just because capitalist companies say that you can or should use their bot as a companion doesn’t mean you have to. We don’t have to listen to them. I’ve used AI to code stuff a lot, and it got results – all for volunteer and free work, where hiring someone would have been prohibitive, and AI (LLMs specifically) was the difference between offering this feature or canceling the idea completely.

                There’s a guy on youtube who bought Unitree’s top of the line humanoid robot (yes, they ship to your doorstep from China lol) and codes for it with LLM help, because the documentation is not super great yet. Then with other models he can have real-time image detection, or use the LIDAR more meaningfully than without AI. I’m not sure where he’s at today with his robot; he was working on getting it to fetch a beer from the fridge - baby steps, because at this stage these bots come with nothing in them except the SDK and you have to code literally everything you want it to do, including standing idle. He showed an interesting demo of the image recognition: in just one second, it can detect the glass bottles in the camera frame and even their color, and it adds a frame around them. This is a new-ish model and I’m not entirely sure how it works, but I assume it has to have an LLM in it to describe the image so that it can detect any object.

                I’m mostly on Deepseek these days; I’ve completely stopped using chatGPT because it just sucks at everything. Deepseek hallucinates so much less and becomes more and more reliable, although it still outputs nonsensical comparisons. But it’s like with everything you don’t know: double-check and exercise critical thinking. Before LLMs, to ask our questions we had wikipedia, and it wasn’t any better (and still isn’t).

                edit - like when deepseek came out with reasoning, which they pioneered, it completely redefined LLM development, and more work has been done from this new state of things, improving it all the time. They keep finding new methods to improve AI. If there was a fundamental criticism I would make, it’s that perhaps it was launched too soon (though neural networks have existed for over a decade), and of course overpromised by tech companies who rely on their AI product to survive.

                OpenAI is dying because they don’t have anything else to offer than GPT; they don’t make money on cloud solutions or hardware or anything like that. If their model dies, they die along with it. So they’re in startup philosophy mode, where they try to iterate as fast as possible and treat any update as a good update (even when it’s not) just to try and retain users. They bleed $1 billion a month and live entirely on investor value; startup mode just doesn’t scale that high up. It’s not their $20 subscriptions that are ever going to keep them afloat lol.

              • Cowbee [he/they]@lemmygrad.ml
                link
                fedilink
                arrow-up
                7
                ·
                5 days ago

                I think that’s a problem general to capitalism, and the orientation of production for profit rather than utility. What we need to do as communists is take an active role in clarifying the limitations and use-cases of AI, be they generative images, LLMs, or things like imaging analysis. I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

                • LeninWeave [none/use name, any]@hexbear.net
                  link
                  fedilink
                  English
                  arrow-up
                  4
                  arrow-down
                  2
                  ·
                  edit-2
                  5 days ago

                  I think that’s a problem general to capitalism, and the orientation of production for profit rather than utility.

                  True, but like I said, companies don’t seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they’re not useful in general. This isn’t an issue of capitalism, the issue of capitalism is that despite that they still get a hugely disproportionate amount of resources for development and maintenance.

                  I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.

                  I do oppose the tool (LLMs, not AI) because I have yet to see any use case that justifies the development and maintenance costs. I’ll believe that this technology has useful applications once I actually see those useful applications in practice, I’m no longer giving the benefit of the doubt to technology we’ve seen fail repeatedly to be implemented in a useful manner. Even the few useful applications I can think of, I don’t see how they could be considered proportional to the costs of producing and maintaining the models.

      • CriticalResist8@lemmygrad.ml
        link
        fedilink
        arrow-up
        7
        arrow-down
        1
        ·
        5 days ago

        I don’t follow. LLMs are a machine of course, what does that imply? That something needs to be productive to exist? By the same LTV, LLMs reduce socially necessary labor time, like all machines.

        • LeninWeave [none/use name, any]@hexbear.net
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          5 days ago

          LLMs are a machine of course, what does that imply?

          That they create nothing on their own, and the way they are used currently leads to a degradation of the body of knowledge used to train the next generation of LLMs because people treat them like they’re human beings capable of thought and not language recyclers, spewing their output directly into written works.

      • chgxvjh [he/him, comrade/them]@hexbear.net
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        5 days ago

        Sure that tells us that some of the massive investments are stupid because their end-product won’t have much or any value.

        You still have a bunch of workers that used to produce something of value that required a certain amount of labor and that is now replaced by slop.

        So the conclusion of the analysis ends up fairly similar, you just sound more like a dork in the process.

        • LeninWeave [none/use name, any]@hexbear.net
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          5 days ago

          You still have a bunch of workers that used to produce something of value that required a certain amount of labor and that is now replaced by slop.

          A lot of the applications of AI specifically minimize worker involvement, meaning the output is 100% slop. That slop is included in the training data for the next model, leading to a cycle of degradation. In the end, the pool of human knowledge is contaminated with plausible-sounding written works that are wrong in various ways, the amount of labor required to learn anything is increased by having to filter through it, and the amount of waste due to people learning incorrect things and acting on them is also increased.

    • CriticalResist8@lemmygrad.ml
      link
      fedilink
      arrow-up
      20
      arrow-down
      2
      ·
      5 days ago

      These are all historical problems of capitalism; we need to be able to cut through the veil instead of going around it, and attack the root cause, otherwise we are just reacting to new developments.

        • CriticalResist8@lemmygrad.ml
          link
          fedilink
          arrow-up
          14
          arrow-down
          1
          ·
          edit-2
          5 days ago

          I didn’t want to dump a point-by-point on you unprompted but if you let me know I can write one up happily. A lot of what is said about AI is just capitalism developing as it does, the technology might be novel and unprecedented (it’s not entirely, a lot of what AI and AI companies do was already commonplace), but the trend is perfectly in line with historical examples and the theory.

          Some less political people might say we just need better laws to steer companies correctly but of course we know where that goes, so the solution is to transform the class character of the state to transform the relations of production, and we recognized this long before AI existed. So my bigger point is that we need to keep sight on what’s important, socialism; not simply reacting to new developments any time they happen as this would only keep us running circles within the existing state of things.

          A lot of what happens in the western tech sphere is happening in other industries under late-stage capitalism, chasing shorter and shorter term profits and therefore shorter-term commodities as well. But there is also a big ecosystem of open-source AI that exists inside capitalism, though it’s again not unique to AI and open-source under capitalism has its own contradictions.

          It’s like… at this point I think a DotP is more likely than outlawing AI is lol. And I think it’s healthy to see it like this.

    • 10TH_OF_SEPTEMBER_CALL [any, any]@hexbear.net
      link
      fedilink
      English
      arrow-up
      12
      arrow-down
      1
      ·
      5 days ago

      Most of the harm comes from the hype and social panic around it. We could have treated it as the interesting gadget it is, but the crapitalists thought they finally had a way to get rid of human labour and crashed the work economy… again

  • Darkcommie@lemmygrad.ml
    link
    fedilink
    arrow-up
    18
    ·
    5 days ago

    Because we can see what it does without proper regulation, and also it’s very overhyped by tech companies in terms of how much utility it actually has

    • The Free Penguin@lemmygrad.mlOP
      link
      fedilink
      arrow-up
      2
      ·
      5 days ago

      Ye imo they’re not regulating it in the right places. They’re so uber-focused on making it refuse to write how-to guides for things they don’t like that they don’t see the real problem: technofascist cults like Palantir being able to kill random people with the press of a button

  • KalergiPlanner@lemmygrad.ml
    link
    fedilink
    arrow-up
    18
    ·
    5 days ago

    “And i don’t mean stuff like deepfakes/sora/palantir/anything like that” bro, we don’t live in a world where LLMs are excluded from those uses

    the technology itself isn’t bad, but we live in a shitty capitalist world where every instance of automation, rather than liberating mankind, fucks them over. a thing that can allow one person to do the labor of many is a beautiful thing, but under capitalism increases of productivity only lead to unemployment; though, on the bright side, it consequently also causes a decrease in the rate of profit.

  • infuziSporg [e/em/eir]@hexbear.net
    link
    fedilink
    English
    arrow-up
    17
    ·
    5 days ago

    Why would you want instant feedback when you’re journaling? The whole point of journaling is to have something that’s entirely your own thoughts.

    • The Free Penguin@lemmygrad.mlOP
      link
      fedilink
      arrow-up
      4
      arrow-down
      4
      ·
      5 days ago

      I dont like writing my own thoughts down and just having them go into the void lol and i want a real hoomin to talk to about these things but i dont have one TwT

      • ZWQbpkzl [none/use name]@hexbear.net
        link
        fedilink
        English
        arrow-up
        5
        ·
        4 days ago

        I would be extremely cautious about that sort of usage of AI. Commercial AIs are psychopathic sycophants and have been known to drive people insane by constantly gassing them up.

        Like you clearly want someone to talk to about your life and such (who doesn’t?) and I understand not having someone to talk to (fewer and fewer do these days). But you’re opting for a corporate machine which certainly has instructions to encourage your dependence on it.

        • The Free Penguin@lemmygrad.mlOP
          link
          fedilink
          arrow-up
          1
          ·
          4 days ago

          Also I delete my convos about these things after 1 prompt so I don’t have a lasting convo on that. But tbh exposure to the raw terms of the topic has let me go from tech allegories to T9 cipher to where I am now, where I can at least prompt a robot using A1Z26 or hex to obscure the raw terms a bit
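
          (For anyone unfamiliar, A1Z26 just maps each letter to its position in the alphabet; a minimal sketch of the mapping:)

          ```python
          # Minimal A1Z26 encoder/decoder: A=1, B=2, ..., Z=26.
          def a1z26_encode(text: str) -> str:
              return "-".join(str(ord(c) - ord("a") + 1) for c in text.lower() if c.isalpha())

          def a1z26_decode(code: str) -> str:
              return "".join(chr(int(n) + ord("a") - 1) for n in code.split("-"))

          print(a1z26_encode("journal"))              # 10-15-21-18-14-1-12
          print(a1z26_decode("10-15-21-18-14-1-12"))  # journal
          ```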

          • ZWQbpkzl [none/use name]@hexbear.net
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 days ago

            No idea. But I’d say it’s less likely, especially if you’re running a local model with Ollama.

            I think the key here is to prevent the AI from developing a “profile” on you, and self-controlled Ollama sessions are the surest bet for that.
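
            For example, here’s a minimal sketch of a one-off prompt against Ollama’s default local endpoint (the model name is whatever you’ve pulled locally; nothing leaves your machine, and no session state persists unless you build it):

            ```python
            # One-off prompt to a locally running Ollama server; no account,
            # no hosted service, no saved conversation. Assumes `ollama serve`
            # is running and the model has been pulled (model name is an example).
            import json
            import urllib.request

            def ask_local(prompt: str, model: str = "llama3") -> str:
                payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
                req = urllib.request.Request(
                    "http://localhost:11434/api/generate",  # Ollama's default local API
                    data=payload,
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    return json.loads(resp.read())["response"]

            print(ask_local("Reflect this journal entry back to me: today was rough."))
            ```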

      • infuziSporg [e/em/eir]@hexbear.net
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        1
        ·
        5 days ago

        What does “go into the void” mean? The LLM may use them as context for a while or it may not use them as context at all; it may even periodically erase its memory of you.

        I find talking about heavy or personal things way easier with strangers than with people you know. There are no stakes with a stranger; you can literally walk up to someone on the street or in a park who doesn’t look busy and ask them if they want to talk.

        • Fruitbat [she/her]@lemmygrad.ml
          link
          fedilink
          arrow-up
          4
          arrow-down
          1
          ·
          edit-2
          4 days ago

          Is it okay if I push back a bit? Since your last comment just feels a little dismissive. I don’t know The Free Penguin, but I will point out other reasons why someone might not be able to easily talk to someone. Like for example, if someone can’t walk or get around, they won’t be able to just talk to someone like that. I’m mainly speaking about my mom before she died, since she had COPD and her health declined after something happened to her at her former workplace. She really hurt her spine and couldn’t really get around. I remember her being very upset with how alone she felt.

          Then also, speaking for myself, I have a speech impediment plus anxiety, so it is really difficult for me to just approach someone and talk to them, depending on various factors. Along with that, some strangers can be outright hostile and make things worse, and someone else might just have had a lot of bad interactions with strangers. To go back to myself, people do judge how someone speaks and tend to see little of you, like if you have an accent or have trouble speaking.

          • infuziSporg [e/em/eir]@hexbear.net
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            4
            ·
            4 days ago

            Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.

            Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.

            I am a rather awkward person in many ways, I am instantly recognizable by many people as “weird”, and I have my own share of anxiety that I’ve gotten better at masking over the years. If I had spent ages 19-25 interacting with a digital yes-man instead of with humans, I would have no social skills.

            Your response sounds closely analogous to when car proponents use the disabled as a shield. We don’t need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.

            • Fruitbat [she/her]@lemmygrad.ml
              link
              fedilink
              arrow-up
              4
              ·
              edit-2
              4 days ago

              I feel like you might be taking me at bad faith here or misinterpreting me.

              Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.

              I agree? I’m very aware.

              Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.

              I would argue that depends. Not everywhere has a lot of different approaches to these things. If anything, all LLMs did was take inherent contradictions and bring them to new heights; these things were already there to begin with, maybe in smaller form.

              Your response sounds closely analogous to when car proponents use the disabled as a shield. We don’t need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.

              Again, where do I say that, besides being taken in bad faith or misread? All I’m trying to point out is that there are usually reasons why someone would turn to something like an LLM or might not easily talk to someone else. As you said, the root problem is not being addressed. To add, it also just leaves a bad taste in my mouth and kind of hurts to be told that what I said sounds closely analogous to using the disabled as a shield, especially when I was talking about myself or my mom.

              For example, when my mom was in the hospital in the last few weeks before she died, she had to communicate on a whiteboard because staff couldn’t understand her. I also had to use the same whiteboard, because staff couldn’t understand what I was saying either. Just to give you an idea of how much trouble I have speaking to others. I’m not saying someone shouldn’t try to interact with others they know and should just go talk to a chatbot. People should have another person to talk to.

          • infuziSporg [e/em/eir]@hexbear.net
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            5 days ago

            The ability to self-actualize and shape the world belongs to those who are willing to potentially cause momentary discomfort.

            Also the default status of many people is lonely and/or anxious; receiving social energy from someone often at least takes their mind off that.

            Advancements in material technology in the past half century have often ended up stunting our social development and well-being.

  • HakFoo@lemmy.sdf.org
    link
    fedilink
    arrow-up
    27
    arrow-down
    1
    ·
    5 days ago

    What I don’t like is that they’re selling a toy as a tool, and arguably as the One And Only Tool.

    You’re given a black box and told to just keep prompting it to get lucky. That’s fine for toys like “give me a fresh low-quality wallpaper every morning” or “pretend you’re Monkey D. Luffy and write a song from his perspective.”

    But it’s not appropriate for high-stakes work. Professional tools have documented rules, behaviours, and limits. They can be learned and steered reliably because they’re deterministic to a fault. They treat the user with respect and prioritize correctness. Emacs didn’t wrap it in breathless sycophantic language when the code didn’t compile. Lotus 1-2-3 didn’t decide to replace half the “7’s” in your spreadsheet with some random katakana because it was close enough. AutoCAD didn’t add a spar in the middle of your apartment building because it was statistically probable after looking at airplane wings all day.
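
    To make the contrast concrete, here’s a toy sketch (the token probabilities are made up, not from any specific model): a deterministic tool maps the same input to the same output every time, while temperature sampling can return something different on every call.

    ```python
    # Toy contrast between deterministic tools and temperature sampling.
    # The "model output" numbers below are invented for illustration.
    import math
    import random

    next_token_logits = {"7": 2.0, "seven": 1.2, "〇": 0.3}  # made-up scores

    def greedy(logits):
        # Deterministic: same input, same output, every single run.
        return max(logits, key=logits.get)

    def sample(logits, temperature=1.0):
        # Probabilistic: same input, possibly a different output each call.
        weights = [math.exp(v / temperature) for v in logits.values()]
        return random.choices(list(logits), weights=weights)[0]

    print(greedy(next_token_logits))                      # always "7"
    print([sample(next_token_logits) for _ in range(5)])  # varies run to run
    ```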

    • CriticalResist8@lemmygrad.ml
      link
      fedilink
      arrow-up
      13
      arrow-down
      1
      ·
      5 days ago

      I mean, software glitches all the time; some widespread software has long-standing bugs in it that its developers or even auditors can’t figure out, and people just learn to work around the bug. Photoshop is built on 20-year-old legacy code and also uses non-deterministic algorithms that predate AI (the spot healing brush, for example, which you often have to redo several times to get a different result). I agree that there’s a big black box aspect to LLMs and GenAI (can’t say for all AI), but I don’t think it’s necessarily inherent to the tech or means it shouldn’t be developed more.

      Actually, image AI is pretty simple in its methods. Provide it with the exact same inputs (including the seed number) and it will output the exact same image every time, with only very minor variations. Should it have no variations? Depends; image gen AI isn’t an engineering tool and doesn’t profess to have a 0.1mm margin of error like other machines might need to.
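
      You can see this with the open-source tooling; here’s a sketch using the diffusers library (the model id is just an example, and bit-exact reproducibility still depends on your hardware and library versions):

      ```python
      # Sketch: same prompt + same seed => (essentially) the same image.
      # Assumes the `diffusers` and `torch` packages; model id is an example.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      def generate(seed: int):
          g = torch.Generator("cuda").manual_seed(seed)
          return pipe("a cargo ship at dawn", generator=g).images[0]

      img_a = generate(42)
      img_b = generate(42)  # same seed: reproduces img_a
      img_c = generate(43)  # different seed: a different image
      ```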

      Back in 2023 China already used an AI (they didn’t say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy. It used to take a team of engineers one year to do this, and an AI did it in 24 hours. There are a lot of toy aspects to LLMs, but this is also a trap of capitalism, as this is what tech companies in startup mode are banking on; it’s not all that neural models are capable of doing.

      You might be interested to know that the Iranian government has recently published guidelines on AI in academia. Unfortunately I don’t have a source, as this comes from an Iranian compsci student I know. They say that if you use LLMs in university, you need to note the specific model used and the time of usage, and if you can prove you understand the topic, then that’s 100% clean by Iranian academic standards.

      Iran is investing a lot in tech under heavy sanctions, and making everything locally (it is estimated 40-50% of all uni degrees in Iran are science degrees). To them AI is a potential way to improve their conditions under this context, and that’s what they’re exploring.

      • Sleepless One@lemmy.ml
        link
        fedilink
        English
        arrow-up
        6
        ·
        5 days ago

        Back in 2023 China already used an AI (they didn’t say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy.

        Do you have a link to the story? I ask because AI is a broad umbrella that many different technologies fall under, so it isn’t necessarily synonymous with generative AI/machine learning (even if that’s how the term has been used the past few years). Hell, machine learning isn’t even synonymous with neural networks.

        Circling back to the Chinese ship, one type of AI I could plausibly see being used is a solver for a constraint satisfaction problem. The techniques I had to learn for these in college don’t even involve machine learning, let alone generative AI.
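
        For instance, here’s a toy constraint-satisfaction sketch in the spirit of those techniques (the cabling framing is hypothetical, just to echo the ship example): plain backtracking search, no machine learning anywhere.

        ```python
        # Toy constraint satisfaction (hypothetical cabling example): assign
        # each cable to a tray so capacity holds and power/data never mix.
        # Solved by plain backtracking -- classic "AI", zero machine learning.
        cables = {"power1": "power", "power2": "power", "data1": "data", "data2": "data"}
        trays = {"A": 2, "B": 2}  # tray -> capacity

        def solve(assignment=None):
            assignment = assignment or {}
            if len(assignment) == len(cables):
                return assignment
            cable = next(c for c in cables if c not in assignment)
            for tray, cap in trays.items():
                placed = [c for c, t in assignment.items() if t == tray]
                if len(placed) < cap and all(cables[c] == cables[cable] for c in placed):
                    result = solve({**assignment, cable: tray})
                    if result:
                        return result
            return None  # dead end: backtrack

        print(solve())  # {'power1': 'A', 'power2': 'A', 'data1': 'B', 'data2': 'B'}
        ```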

        • CriticalResist8@lemmygrad.ml
          link
          fedilink
          arrow-up
          8
          arrow-down
          1
          ·
          5 days ago

          I put the story into Perplexity and looked at its sources :P (people often ask me how I find sources; I just ask Perplexity and then look at its links and find one that fits)

          https://asiatimes.com/2023/03/ai-warship-designer-accelerating-chinas-naval-lead/ they report here that a paper was published in a science journal, though Chinese-language.

          I did find this paper: https://www.sciencedirect.com/science/article/abs/pii/S004579492400049X but it’s not from the same team and seems to be about a different problem, though still in ship design (hull specifically) and mentions neural networks.

          • Conselheiro@lemmygrad.ml
            link
            fedilink
            arrow-up
            3
            arrow-down
            1
            ·
            4 days ago

            This is sort of the issue with “AI” often just meaning “good software” rather than any specific technique.

            From a quick read the first one seems to refer to a knowledge-base or auto-CAD solution which is fundamentally different from any methods related to LLMs.

            The second one is some actually really impressive feature engineering used to solve an optimization problem with Machine Learning tools, which is actually much closer to a statistician using linear regressions and data mining than somebody using an LLM or a GAN.

            Importantly, neither method is as computationally intensive as LLMs, and the second one at least is a very involved process requiring a lot of domain knowledge, which is exactly the opposite of how GenAI markets itself.
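
            To illustrate the second one’s flavor of workflow, here’s a minimal sketch (synthetic data and hypothetical features): domain knowledge picks the features, and the “learning” is ordinary least squares.

            ```python
            # Sketch of the "feature engineering + regression" workflow: the
            # features come from domain knowledge; the fit is plain least squares.
            # Data and feature choices here are synthetic and purely illustrative.
            import numpy as np

            rng = np.random.default_rng(0)
            speed = rng.uniform(5, 25, 200)   # raw measurement
            draft = rng.uniform(4, 12, 200)   # raw measurement
            drag = 0.8 * speed**2 + 3.0 * draft + rng.normal(0, 5, 200)  # target

            # Domain knowledge says drag scales with speed squared, so engineer it.
            X = np.column_stack([speed**2, draft, np.ones_like(speed)])
            coef, *_ = np.linalg.lstsq(X, drag, rcond=None)
            print(coef)  # recovers roughly [0.8, 3.0, ~0]
            ```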

      • 10TH_OF_SEPTEMBER_CALL [any, any]@hexbear.net
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        5 days ago

        I mean, software glitches all the time; some widespread software has long-standing bugs in it that its developers or even auditors can’t figure out, and people just learn to work around the bug

        yeah my dad can kill a dozen people if something goes wrong at work. Yet they use windows and proprietary shit.

        If software isn’t secured it shouldn’t be used.

        • CriticalResist8@lemmygrad.ml
          link
          fedilink
          arrow-up
          8
          arrow-down
          2
          ·
          5 days ago

          We can make software less prone to errors with proper guidelines and procedures to follow, as with anything. Just to add that it’s not solely on software devs to make it failproof.

          I would make the full switch to Linux but I need Windows for photoshop and premiere lol. And I never got Wine to work on Mint, but if I could I would ditch windows today. I think helping people get acquainted with linux is something AI can really help with, and may help more people make the switch.

          • 10TH_OF_SEPTEMBER_CALL [any, any]@hexbear.net
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            1
            ·
            5 days ago

            yes. It’s a tool that can (and must) be seized and re-appropriated imo. But it’s not magic. Main issue is that capitalists are selling it as some kind of genius in a bottle.

          • Horse {they/them}@lemmygrad.ml
            link
            fedilink
            English
            arrow-up
            5
            ·
            5 days ago

            I never got Wine to work on Mint, but if I could I would ditch windows today.

            apologies if this is annoying, but have you tried Lutris?
            it’s designed for games, but i use it for everything that needs wine because it makes it easy to manage prefixes etc. with a nice gui

            • CriticalResist8@lemmygrad.ml
              link
              fedilink
              arrow-up
              7
              ·
              5 days ago

              No worries, I haven’t tried it but I also don’t have my Mint install anymore lol (Windows likes to delete the dual boot file when it updates and I never bothered to get it working again). I might give it another try down the line but I’m not ready to ditch Adobe yet. I’ll keep it in mind for if I make the switch in the future.

  • Pieplup (They/Them)@lemmygrad.ml
    link
    fedilink
    arrow-up
    8
    ·
    4 days ago

    The Kavernacle has videos on this. He talks about how it’s eroding emotional connection in society and having people offload their thinking onto chatgpt. I think this is a problem, but the issue I’m most passionate about is misinformation. In the process of writing this post I did an experiment and asked it some questions about autism. I asked them what autistic burnout is. They gave an explanation that’s incorrect, and furthers the incorrect assumption a lot of people make that it’s something specific to autistic people, when it’s a wider phenomenon of physiological neurocognitive burnout. I confronted them on this, they refined their position, then I asked them why they said it. It constantly contradicts itself and will just be like “yeah you are correct, I am wrong”, while continuing to repeat the same incorrect claim.

    https://i.imgur.com/KINH7lV.png https://i.imgur.com/EHtDwNj.png

    According to chatgpt, their own sentence contradicts itself. They also proceeded to invent a new usage of a very obscure medical term that is not widely used, then tried to gaslight me into believing it’s a commonly used term among autistic people when it isn’t: https://i.imgur.com/LStZdNg.png

    And what frustrates me even more is that a couple months ago I had someone swear to me up and down that the hallucinations in chatgpt were fixed and they ain’t that bad anymore. Granted, they were far worse in the past. It literally told me the autism level system was something that no longer exists, despite it being currently widely used.

    But here’s the problem. I am an expert on this topic. Most people aren’t asking chatgpt questions about things they are an expert in, and they also are using it as a therapist.

    All in all I wasn’t expecting it to have no hallucinations, but I was at least expecting them to not still be a massive issue in just basic information retrieval on topics that aren’t even super obscure and that information is widely available about.

    Ultimately here’s the issue. The vast majority of pro-genai people don’t know what genai actually is, and as a result why it is bad to use it in the way they are. GenAI is a very advanced form of predictive text. It just predicts what it thinks the words following that query are, based on the terabytes, maybe even petabytes, of information it’s scraped from the internet. Which means it’s not really useful for anything beyond very basic things like asking it to generate simple ideas, summarize an article or video, or do very basic coding. I only dabble very lightly in programming, but from what I’ve heard actual experienced programmers say, trying to use chatgpt for major coding just means having to rewrite most of the code.
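
    To make the predictive-text point concrete, here’s a toy sketch of the underlying idea (a bigram model; real LLMs are a vastly scaled-up, context-aware version of the same next-word guessing):

    ```python
    # Toy bigram "predictive text": count which word follows which, then
    # generate by repeatedly sampling a likely next word. No understanding
    # is involved anywhere -- only frequencies from the training text.
    import random
    from collections import Counter, defaultdict

    corpus = ("the model predicts the next word and the next word follows "
              "the word before it").split()

    following = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        following[a][b] += 1

    random.seed(1)
    word, output = "the", ["the"]
    for _ in range(8):
        choices = following[word]
        if not choices:
            break
        word = random.choices(list(choices), weights=choices.values())[0]
        output.append(word)
    print(" ".join(output))  # fluent-looking, understanding-free text
    ```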

    • The Free Penguin@lemmygrad.mlOP
      link
      fedilink
      arrow-up
      1
      arrow-down
      1
      ·
      4 days ago

      And honestly I think the main reason is that I feel like I worry too much about my own self-image, and I’ve had bad experiences with other hoomins on the interwebs being absolute assholes to me cuz I told them I’m a commie in confidence, only for them to share it with their friends who then went on to harass me. And idk, I just feel like because the robot won’t talk about me behind my back, I don’t feel as much weighing me down talking about more sensitive stuff to it

      • Pieplup (They/Them)@lemmygrad.ml
        link
        fedilink
        arrow-up
        3
        ·
        3 days ago

        I thought you were a penguin? You’re a fake penguin, a human pretending to be one.

        Mr. The Fake Penguin, the answer is to find better friends. Maybe try joining an online communist group or something. Also I’m kinda confused, cause you talk like you are referencing something but idk what you are referencing.

          • Conselheiro@lemmygrad.ml
            link
            fedilink
            arrow-up
            2
            ·
            3 days ago

            It was a joke, à la “on the internet nobody knows you’re a dog”. But adding to his point, yes, you need better friends.

            Maybe join the Genzedong matrix server? It was a pretty chill place back when I used it.

            Also if you have access, consider therapy. One of the greatest advantages psychologists have over LLMs is that they are able to disagree with you. That could help you with whatever thoughts you’re struggling with, without having to care about being judged.

            • The Free Penguin@lemmygrad.mlOP
              link
              fedilink
              arrow-up
              2
              ·
              3 days ago

              yeah i was in communist groups and still am, it was just that the roblox elevator community is a shithole filled with people who make being anticommunist their whole personality and harass anyone who dares to say anything positive about the CPC

        • The Free Penguin@lemmygrad.mlOP
          link
          fedilink
          arrow-up
          1
          ·
          3 days ago

          I had some drama in the past with someone from the Roblox Elevator Community (i alr made posts about what a shithole that place is) and i thought someone was cool but they turned on me

  • CoreComrade@lemmygrad.ml
    link
    fedilink
    arrow-up
    18
    ·
    5 days ago

    For myself, it is the projected environmental impact. The power demand for data centers has already been on the rise due to the growth of the internet. With the addition of AI and the training thereof, the amount of power is rising/will rise at an unsustainable rate. The amount of electricity used creates strain on existing power grids, the amount of water that goes into cooling the hardware for the data centers creates strain on water supply, and this all plays into a larger amount of carbon emissions.

    Here is a good link that speaks to the environmental impact: genAI Environmental Impact

    Beyond the above, the threat of people losing jobs within an already brutal system is a bit terrifying to me, though others have already written at more length here regarding this.

    • CriticalResist8@lemmygrad.ml
      link
      fedilink
      arrow-up
      14
      arrow-down
      1
      ·
      edit-2
      5 days ago

      We have to be careful how we wield the environmental arguments. In the first phase, it’s often used to demonize Global South countries that are developing. Many of these countries completely skipped the personal computer step and are heavy consumers of smartphones and 4G data because it came around the time they could begin to afford the infrastructure (it’s why China is developing 6G already). There are a lot of arguments people make against smartphones (how the materials for them are produced, how you have to recharge a battery, how they get disposed of, how much electricity 5G consumes, etc.), but if they didn’t have smartphones then these countries would just not have the internet.

      edit: putting it all under the spoiler dropdown because I ended up writing an essay anyway lol.

      environmental arguments

      In the second phase in regards to LLM environmental impact it really depends and can already be mitigated. I’ll try not to make a huge comment because I don’t want to write an essay, but the source’s claims need scrutiny. Everything consumes energy - even we as human bodies release GHG. Going to work requires energy and using a computer for work requires energy too. If AI can do in 10 seconds what takes a human 2 hours, then you are certainly saving energy, if that’s the only metric we’re worried about.

      So it has to be relativized, which most AI environmental articles don’t do. A chatGPT prompt consumes five times more electricity than a google search, sure, but both amounts are tiny in absolute terms. Watching Youtube also consumes energy; a minute of youtube consumes much more energy than an LLM query does.
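
      As a back-of-envelope sketch (every figure below is an assumed ballpark for illustration; all such numbers are contested):

      ```python
      # Back-of-envelope comparison. All constants are assumed ballparks,
      # chosen only to illustrate relative scale, not to be authoritative.
      SEARCH_WH = 0.3                  # assumed Wh per web search
      LLM_PROMPT_WH = 5 * SEARCH_WH    # the "five times a search" claim above
      STREAMING_WH_PER_MIN = 4.0       # assumed Wh per minute of video streaming

      prompts_per_minute = STREAMING_WH_PER_MIN / LLM_PROMPT_WH
      print(f"one LLM prompt      ~ {LLM_PROMPT_WH:.1f} Wh")
      print(f"one minute of video ~ {STREAMING_WH_PER_MIN:.1f} Wh "
            f"(~{prompts_per_minute:.1f} prompts)")
      ```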

      Some people will say that we need to stop watching Youtube, no more treats or fun for workers, which is obviously not something we take seriously (deleting your emails to make room in data centers was a huge thing on linkedin a few years ago too).

      And all of this pales in comparison to the fossil fuel industry that we keep pumping money into in the west or obsolete tech that does have greener alternatives but we keep forcing on people because there’s money to be made.

      edit - and the meat and animal industry… Beef is very water-intensive and polluting, it’s not even close to AI. If that’s the metric then those that can should become vegan.

      Likewise for the water usage, there was that article about Texas telling people to take fewer showers because it needs the water for data centers… I don’t know if you saw it at the time; it went viral on social media. It was a satirical article against AI that people used as a serious argument. Texas never said to take fewer showers; these datacenters don’t use a lot of water at all as a share of total consumption in their respective geographical areas. In the US a bigger problem imo is the damming of the Colorado River so that almost no water reaches Mexico downstream, and the water is given out to farmers for free in arid regions so they can grow water-intensive crops like rice or dates (and US dates don’t even taste good)

      It also has sort of an anti-civ conclusion… Everything consumes energy and emits pollution, so the most logical conclusion is to destroy all technology and go back to living like the 13th century. And if we can keep some technology how do we choose between AI and Youtube?

      Rather I believe investments in research make things better over time, and this is the case for AI too (and we would have much better, safe nuclear power plants too if we kept investing in research instead of giving in to fearmongering and halting progress but I digress). I changed a lot of my point of view on environmentalism when back in 2020 people were protesting against 5G because “microwaves” and “we don’t need it” and I was on board (4G was plenty fast enough) until I saw how in some places they use 5G for remote surgery and that’s a great thing that they couldn’t do with 4G because there was too much latency. A doctor in China with 6G could perform remote surgery on a child in the Congo.

      In China electricity is considered a solved problem; at any time the grid has 2-3x more energy than it needs. The west has decided to stop investing in public projects and instead concentrate all surplus value in the hands of a select few. We have stopped building housing, we stopped building roads and rail, but we find the money to build datacenters that could be much greener, but why would they be when that costs money and there’s no laws that mandate it?

      Speaking of China they use a lot of coal still (comparatively speaking) but they also see it just an outdated means of energy production that can be replaced by newer, better alternatives. It’s very different, they’re doing a lot of solar and wind - in the west btw chinese solar panels are tariffed to hell and back, if they weren’t every single building in europe would be equipped with solar panels - and even pioneering new methods of energy production and storage, like the sodium battery or gravity storage. Gravity battery storage (raising and lowering heavy blocks of concrete over the day) is not necessarily Chinese but in Europe this is still just a prototype. In China they’re already building them as part of their energy strategy. They don’t demonize coal as uniquely evil like liberals might, but rather that once they’re able to, they’ll ditch coal because there’s better alternatives now.

      In regards to AI in China there’s been a few articles posted on the grad and it’s promising. They are careful about efficiency because they have to be. I don’t know if you saw the article from a few days ago about Alibaba Cloud cutting the number of GPUs needed to host their model farm by 82%. The test was done on NVidia H20 cards which is not a coincidence, it’s the best China can get by US decree. The top of the line model is the H100 (the H20 having only 20% of the capabilities) but the US has an order not to export anything above the H20 to China, so they find creative ways to stretch it. And now they’re developing their own GPU industry and the US shot itself in the foot again.

      Speaking of model farms… it’s totally possible to run models locally. I have a 16GB GPU and I can generate realistic pictures (if that’s the benchmark) in 30 seconds; the model only needs 5GB of VRAM, but the architecture inside the card is also important for speed. For LLM generation I can run 12B models, rarely higher, and with new efficiency algorithms I think over time that will stretch to bigger and bigger models, all on the same card. They run model farms for the cloud service because so many people connect to it at the same time, but it’s not a hard requirement for running LLMs. In another comment I mentioned how Iran is interested in LLMs because, like 4G and other modern tech that lags a bit in the west, they see it as a way to stretch their material conditions more (being heavily sanctioned economically).

      There’s also stuff being done in the open source community; for example, LoRAs are used in image generation and help skew the generation towards a certain result. This means you don’t need to train a whole model: LoRAs are usually trained by people on their machines with like 100 images, and training one can be done in 30 minutes. So what we see is comparatively few companies/groups making full models (either LLM or image gen, called checkpoints) and most people making finetunes for these models.
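
      To give an idea of how lightweight this is, applying a community LoRA on top of a base checkpoint is a couple of lines with the diffusers library (the model id and LoRA path below are placeholder examples):

      ```python
      # Sketch: a LoRA is a small add-on that steers a big base model.
      # Model id and LoRA path are placeholders, not specific recommendations.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      pipe.load_lora_weights("./my-style-lora")  # a few MB, trained on ~100 images
      image = pipe("a portrait in the finetuned style").images[0]
      ```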

      Meanwhile in the West there’s a $500 billion “plan” to invest in the big tech companies that already have a ton of money; that’s the best they can muster. Give them unlimited money and expect that they won’t act like everything is unlimited. Deepseek actually came out shortly after that plan (called Stargate) and I think pretty much killed it before it even took off lol. It’s the destiny of capitalism to con the government into giving them money; of course they were not going to say “no actually if we put some personal investment we could make a model that uses 5x less energy”, because they would not get $500 billion if they did. They also don’t care about the energy grid, that’s an externality for them - the government will take care of it, from their pov.

      Anyway it’s not entirely a direct response to your comment because I’m sure you don’t believe in all the fearmongering, but it’s stuff I think is important to keep in mind and I wanted to add here. And I ended up writing an essay anyway lol.

  • fox [comrade/them]@hexbear.net
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    1
    ·
    5 days ago

    isn’t providing an alternative where you can get instant feedback when you’re journaling

    ELIZA was written in the 60s. It’s a natural language processor that’s able to have reflective conversations with you. It’s not incredible but there’s been sixty years of improvements on that front and modern ones are pretty nice.
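
    The core trick is just pattern matching and pronoun reflection; here’s a minimal sketch of the idea (the 1966 original was essentially this with a much larger script of patterns):

    ```python
    # A minimal ELIZA-style reflector: pure pattern matching, no model,
    # no training data. Toy version of the 1966 idea.
    import re

    REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(text: str) -> str:
        return " ".join(REFLECT.get(w, w) for w in text.lower().split())

    def respond(entry: str) -> str:
        m = re.match(r"i feel (.*)", entry.lower())
        if m:
            return f"Why do you feel {reflect(m.group(1))}?"
        return f"Tell me more about why {reflect(entry)}."

    print(respond("I feel stuck with my writing"))
    # -> Why do you feel stuck with your writing?
    ```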

    Otherwise, LLMs are a probabilistic tool: the input doesn’t determine the output. This makes them useless at things tools are good at, which is repeatable results based on consistent inputs. They generate text with an authoritative voice, but all domain experts find that they’re wrong more often than they’re right, which makes them unsuitable as automation for white-collar jobs that require any degree of precision.

    Further, LLMs have been demonstrated to degrade thinking skills, memory, and self-confidence. There are published stories about LLMs causing latent psychosis to manifest in vulnerable people, and LLMs have encouraged suicide. They present a social harm which cannot be justified by their limited use cases.

    Sociopolitically, LLMs are being pushed by some of the most evil people alive and their motives must be questioned. You’ll find oceans of press about all the things LLMs can do that are fascinating or scary, such as the TaskRabbit story (which was fabricated entirely). The media is culpable in the image that LLMs are more capable than they are, or that they may become more capable in the future and thus must be invested in now.

  • Conselheiro@lemmygrad.ml
    link
    fedilink
    arrow-up
    8
    arrow-down
    1
    ·
    4 days ago

    GenAI is the highest form of commodification of culture so far. It treats all text, images, videos, songs, speech and all other forms of organic cultural expression as slop to be generated over and over without its original context. It provides little to no serious improvement in industry, and is only propped up despite no profits due to either artificial growth in internet platforms or unrealistic expectations from the AGI folks.

    And it’s inefficient. We could easily have more therapists rather than wasteful chatbots that cost billions. Such technology can only exist as a bandage for the ailments of neoliberalism, and is not a solution to anything. And that’s not even going into the worsening impact of cultural imperialism due to the tendency of these models to reproduce Northwestern cultural hegemony.

    The alternative is actually pretty simple: measures to lower unemployment. Most capitalist countries have issues with unemployment or underemployment. And most tasks of Gen"AI" can be done by paid humans quite well, possibly even at actually lower costs than what the informatics cartel is tanking in order to ride the bubble.

    Human labour is what produces value. All else is secondary.