• amemorablename@lemmygrad.ml · 9 days ago

    But I guess this (AI topic) discussion needs more back-and-forth, and probably Q&A-style answers, to get anywhere long term. Otherwise, increasingly scared left reactionaries will scream at increasingly frustrated AI proponents.

    I think a few critical parts of the conversation surrounding AI, to keep things grounded in perspective, are:

    1. Recognizing that how art is perceived culturally is not a static, universal thing across cultures and time periods. (A good example of this is how Hula dance in Hawaiian culture is a means of telling stories and passing down history. It’s not merely some tool of “self expression” and commodity that western capitalist art tends to be.)

    2. The western investor class are not the only ones working on AI research. (Socialist China is playing an important part in its development too.)

    3. Generative AI (text, image, audio, etc.) is only a portion of AI research. Automation that could be called AI has been around for decades. The difference is that its level of capability is now being seriously compared to humans in some skillsets in limited contexts; anyone who tells you generative AI is overall nearing levels of capability comparable to humans is selling you a bridge. What people tend to hate in a reactionary way is generative AI, but they talk about it like it’s AI as a whole, which confuses the issue.

    I find there’s also just a lot of basic things about it that people don’t know, and this ignorance probably makes it harder for them to approach it in a grounded way. For example, even among people who use generative AI, it’s not uncommon to think a model has a “database” of information, as if it saved everything it was trained on intact and calls on it to make new things. It’s closer to something vaguely like the model having a Katamari Damacy ball of concepts glommed together by association, making probabilistic guesses on what should come next, depending on where in the glom of “things like what it was trained on” you have ended up.
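    A minimal sketch of that “probabilistic guessing by association” idea, as a toy bigram model in Python. This is nothing like a real LLM’s internals (the tiny corpus is invented for illustration), but it shows the key point: the model keeps only counts of associations, not a stored copy of the training text.

```python
import random
from collections import defaultdict

# Toy training text; a real model would see billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# No "database" of saved documents -- just counts of which word
# tends to follow which (associations glommed together).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng=random.Random(0)):
    """Probabilistic guess: sample a next word in proportion to
    how often it followed `prev` during training."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]
```

    Note that once `counts` is built, the original corpus could be thrown away; the model can only regenerate things *like* what it saw, not look anything up verbatim.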

    People also tend to associate text-gen AI with ChatGPT, but the underlying architecture of those models is just a continuation model; it tries to guess what “token” (which may be a whole word like “go” or a component of a word such as “-ly”) comes next. The chat-format AI are just designed with special UI and other tweaks to make sure it stops before writing your part of the conversation. If you removed that component, what you would observe is the model seemingly having a conversation with itself. That’s what it’s always doing, because it doesn’t truly know there’s a you and an it, but with the right presentation, it can appear as if it’s waiting for you to say your part.
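    That “continuation plus a stop marker” setup can be sketched like this. Everything here is hypothetical: the end-of-turn token name and the canned `fake_model` are invented stand-ins (real chat models use learned special tokens), but the control flow is the point.

```python
# Hypothetical end-of-turn marker; real chat models train on learned
# special tokens that play this role.
END_OF_TURN = "<|end|>"

def fake_model(tokens_so_far):
    """Stand-in for a real LLM: given the text so far, return the next
    token of the continuation. Scripted here for illustration -- note it
    would happily go on to write the user's side of the conversation."""
    reply = ["Hi", "there", "!", END_OF_TURN, "User:", "thanks"]
    n = len(tokens_so_far)
    return reply[n] if n < len(reply) else END_OF_TURN

def chat_turn(max_tokens=10):
    """The 'chat wrapper': keep asking for the next token, but cut
    generation off at the end-of-turn marker. Without this check, the
    model would keep continuing and appear to talk to itself."""
    out = []
    for _ in range(max_tokens):
        tok = fake_model(out)
        if tok == END_OF_TURN:
            break
        out.append(tok)
    return out
```

    The underlying model never “waits” for you; the wrapper just stops sampling at the right place and hands control back.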

    The point of all this info dumping is like… if someone is bent on hating it, at least understand what it is, ya know? And when you do understand, you might realize it’s a bit more complicated than you thought. Some people might hate it anyway, even if they understand fully what it is because of it threatening their livelihood, but I’d still rather them know than not know.

    • haui@lemmygrad.ml · 8 days ago

      I agree to all of this.

      What I think people hate is the capitalist way of treating it and selling it. As a world wonder that will solve everything and make thinking obsolete.

      But at the same time they hate the people who believe this, which feels like the majority, which points at the contradictions in society.

      Then what I personally hate on top of that is how we are made to fight about this constantly, and so on.

      And all these are the material conditions we are forced to exist in, which make my neighbor use ChatGPT in a discussion with me, which ultimately just breaks my spirit about any future for us human monkeys. It’s literally heartbreaking.

      So yes, I absolutely see the potential that AI has for leftists and people in general, I just totally refuse to have a discussion with what feels like AI evangelicals who seem to think it’s the best thing since bread.

      I mean, the contradictions show up in myself too. I positively wept looking at Rosa fucking Luxemburg singing a Marxist rock song, because I’m fucking easy to emotionally manipulate and I understand that this will work on goddamn everyone. But it ultimately means we are going to lose, because it means whoever has the bigger model will win the fight to manipulate the masses. Then again, China is showing that socialism, even in its infancy, is the fucking terminator of capitalism if properly applied. This does show strong promise and pretty much proves that they are right to play the AI game.

      This list can go on forever, going back and forth (which I alluded to before).

      • amemorablename@lemmygrad.ml · 8 days ago

        I can understand that, I have definitely had some back and forth on it myself. I think, like with anything, we have to keep firmly in view that it’s a tool distorted by the societal model it exists under, and that most of what it’s doing in the bad way is intensifying issues that were already there. For example, when someone uses ChatGPT as a source, is that bad because AI is bad, or is it bad because it highlights the problems with people individually turning to the internet for answers to questions (which has long been a problem with web searching and Wikipedia and so on; it just wasn’t as bad before)? Or when a publishing platform gets flooded with AI-generated low-effort crap, is that bad because AI is bad, or is it bad because it highlights the unsustainable nature of internet platforms that have little to no gatekeeping and can’t manage the volume of “content” that gets uploaded on a regular basis?

        I do think it’s contributing to the acceleration of some problems. But it’s not as anomalous as it’s made out to be, if that makes sense. If it didn’t exist, similar problems would still exist, because (I would argue) AI in its current form is an accelerated stage of automation rather than a wholly new form of development. There are aspects of it which are unprecedented as forms of automation, but automation as a whole is nothing new. So the favored response to it, for us, is also nothing particularly new: it’s a technology that, if it is going to exist, needs to be in the hands of the organized proletariat and the organized liberation forces of imperialized and colonized peoples, not in the hands of a capitalist class or other similarly exploitative classes.

        • CriticalResist8@lemmygrad.ml · 7 days ago

          I think the more you get into the AI ecosystem and use it, the more you internalize that it’s not actually as deep as you thought, “mentally” speaking; all the questions you might ask yourself about it before getting into the matter proper just disappear. It’s definitely put a lot of things in perspective for me. I think at this time, we’re still kind of seeing what comes out of it, with everyone scrambling to turn their idea into an AI startup. Down the line there will come best practices, i.e. “if you’re gonna use AI to do [task], then this is the only way you should do it”. It’s part of the bubble: it expands at first, but then after some time shrinks. The plethora of AI tools and models will probably shrink eventually.

          And on that, I think all of this was a long time coming. It was just slow until it wasn’t (you could say quantitative change turns into qualitative change, leaps and bounds, etc.). There have always been low-effort books on Amazon; we live in a world of 8 billion people who are increasingly getting access to the Internet, and everyone wants to make it out of capitalism alive in any way possible. There have always been shitty Sonic OCs on DeviantArt (not my qualifier, it’s what people on the website call them) and tons of “wtf is this” books that had absolutely zero editing done to them; Amazon accepts those no problem. In fact, I don’t know what the Amazon Kindle ecosystem is like now, but before AI, top sales were basically dominated by established authors who had a publisher behind them to put marketing money into their new book. It was very difficult to make ANY sale as an indie, artisanal writer who worked only by themselves, and AI hasn’t changed that at all, because it was always the case.

          If you look at the authors who bemoan AI books on Amazon, what they’re worried about is the perceived loss of sales. It’s the same old story. They think they’re losing out on something and they want protectionism where it helps them. Again, not making a value judgment; I don’t really care either way about either AI books or the petit-bourgeois authors lol, but that’s what their problem with AI books is. And certainly Amazon doesn’t worry about it either, as long as they sell books.

          Like you said, web searching wasn’t necessarily better before AI. I remember Google being pretty good up until 2018 or so, then they started mutating your search query so you’d spend more time on search. And before that, people were against AMP pages and snippets, as they don’t drive traffic to your website but keep it on Google. But again, that’s kind of a financial problem to have, because you’re trying to get clients or ad revenue; I’m just happy they see communist theory.

          And speaking of ads, back in the early 2000s you could get up to $2.50 per click on an ad banner lol, it was wild. Now everyone has an ad blocker and a click might net you 30 cents, if that. It’s just dialectics; that situation couldn’t go on forever.

          But I use Perplexity a lot too (LLM search engine) and it’s pretty good, because you can follow up on stuff you’ve already asked and go down the rabbit hole in the course of a single conversation instead of making fresh searches every time. But it could still be improved in many ways imo.

          I think one contradiction people against AI have is they say it’s both replacing your brain while also not being that good. It’s a complete contradiction because it can only be one or the other (is it better than human cognition or is it not?), and until one addresses the contradiction and resolves it, they will live ‘in utter chaos under heaven’ as Mao said (paraphrased lol), and it leads to problematic conclusions such as “people who use AI are lesser people because AI is not very good, so clearly if they use it, their brain must be worse than AI, that’s why they think they gain something from it”.

          • amemorablename@lemmygrad.ml · 7 days ago

            Good points, lot to think about.

            everyone wants to make it out of capitalism alive in any way possible.

            This part resonates with me in particular. I’ve had aspirations before to “make it out alive” via one artistic craft or another and it’s possible I still could “make it” well enough to live off of that (primarily if I got lucky), but generative AI may make it harder to do so. But I also understand that capitalism is unsustainable, as is much of the western internet landscape even pre-generative-AI, so it’s sorta like… yeah, some of my potential opportunities may be evaporating, but so is the stability of capitalism as a whole. And living in the US, the stability of the governance as a whole is in question with the stuff being done to the federal workforce, the seeming efforts to consolidate power behind a single neo-fascist(? for lack of a better term) faction, and so on. It comes out very individualist for me to be fretting about whether I can personally succeed in making a living out of some craft, while “the world burns”, so to speak.

            So yeah, I suspect some of the ire surrounding generative AI is due to individualism; people thinking about it like, “I was supposed to get [or had already gotten] mine and now I can’t get it [or it is going to be taken away].” Rather than thinking of it like, “This is a progression of automation that has long been happening, and much like in the past, the working class needs to organize, because it’s never going to get fundamentally better until they have the levers of power.”

            I think one contradiction people against AI have is they say it’s both replacing your brain while also not being that good. It’s a complete contradiction because it can only be one or the other (is it better than human cognition or is it not?), and until one addresses the contradiction and resolves it, they will live ‘in utter chaos under heaven’ as Mao said (paraphrased lol), and it leads to problematic conclusions such as “people who use AI are lesser people because AI is not very good, so clearly if they use it, their brain must be worse than AI, that’s why they think they gain something from it”.

            Yeah, I think there’s a fair bit of elitist tropes wrapped up in thinking about AI as well. Human beings still don’t even understand our own consciousness all that well, much less the entire brain and its functioning, so it’s easy to fill in the gaps with nonsense like “people are stupid”. Arising out of that (it seems; I can’t demonstrate the connection cleanly), you get stuff like the people who hype “AGI” as something that will replace “human intelligence.” But what I never see in that realm is any accounting for the fact that human capability derives from the human form, not from the ether (unless, I suppose, one believes in something metaphysical about it). So in order to believe a computer can reach the same capability, you have to believe it will be granted something metaphysical too. Otherwise, I’d think the only way for “AI” to get anywhere close to humanity is for some kind of bio-engineering to be able to create artificial human life. And at that point, we’re basically just talking about making babies without a woman needing to go through pregnancy.

            But I do think that when, like, China is getting into robotics, they are at least closer to understanding that particular problem: that for an AI to do certain human tasks, it needs to have a human-like form. Still though, none of that brings us fundamentally closer to a self-aware artificially-created lifeform (partly because we still don’t entirely know what that form develops out of in the first place, in our own case; what cluster of factors crosses over into what we call sapience). It just brings us closer to tools that require less direction and maintenance than previous forms of tools. Which could eventually be used to replace us at certain kinds of tasks and thus change the labor landscape somewhat, but isn’t replacing us fundamentally.