• CriticalResist8@lemmygrad.ml
    8 days ago

    I think the more you get into the AI ecosystem and actually use it, the more you internalize that it's not as deep as you thought, mentally speaking; all the questions you might ask yourself about it before getting into the matter proper just disappear. It's definitely put a lot of things in perspective for me. At this point we're still kind of seeing what comes out of it, with everyone scrambling to turn their idea into an AI startup. Down the line best practices will emerge, i.e. "if you're gonna use AI to do [task], then this is the only way you should do it." It's part of the bubble: it expands at first, then shrinks after some time. The plethora of AI tools and models will probably shrink eventually too.

    And on that note, I think all of this was a long time coming. It was just slow until it wasn't (you could say quantitative change turns into qualitative change, leaps and bounds, etc.). There have always been low-effort books on Amazon; we live in a world of 8 billion people who are increasingly getting access to the Internet, and everyone wants to make it out of capitalism alive in any way possible. There have always been shitty Sonic OCs on DeviantArt (not my qualifier, it's what people on the website call them) and tons of "wtf is this" books that had absolutely zero editing done to them; Amazon accepts those no problem. In fact, I don't know what the Amazon Kindle ecosystem is like now, but before AI, top sales were basically dominated by established authors who had a publisher behind them to put marketing money into their new book. It was very difficult to make ANY sale as an indie, artisanal writer working alone, and AI hasn't changed that at all, because it was always the case.

    If you look at the authors who bemoan AI books on Amazon, what they're worried about is the perceived loss of sales. It's the same old story: they think they're losing out on something, and they want protectionism where it helps them. Again, not making a value judgment; I don't really care either way about AI books or the petit-bourgeois authors lol, but that's what their problem with AI books is. And certainly Amazon doesn't worry about it either, as long as they sell books.

    Like you said, web searching wasn't necessarily better before AI. I remember Google being pretty good up until 2018 or so; then they started mutating your search query so you'd spend more time on search. And before that, people were against AMP pages and snippets because the traffic doesn't go to your website, it stays on Google. But again, that's kind of a financial problem to have, because you're trying to get clients or ad revenue; I'm just happy they see communist theory.

    And speaking of ads, back in the early 2000s you could get up to $2.50 per click on an ad banner lol, it was wild. Now everyone has an ad blocker and a click might net you 30 cents, if that. It's just dialectics; that situation couldn't go on forever.

    But I use Perplexity a lot too (an LLM search engine), and it's pretty good because you can follow up on stuff you've already asked and go down the rabbit hole in the course of a single conversation instead of making fresh searches every time. But it could still be improved in many ways imo.

    I think one contradiction people against AI hold is that they say it's both replacing your brain and also not that good. It's a complete contradiction, because it can only be one or the other (is it better than human cognition or is it not?), and until one addresses the contradiction and resolves it, they will live 'in utter chaos under heaven,' as Mao said (paraphrased lol). It leads to problematic conclusions such as "people who use AI are lesser people, because AI is not very good, so clearly if they use it, their brain must be worse than AI; that's why they think they gain something from it."

    • amemorablename@lemmygrad.ml
      7 days ago

      Good points, a lot to think about.

      everyone wants to make it out of capitalism alive in any way possible.

      This part resonates with me in particular. I've had aspirations to "make it out alive" via one artistic craft or another, and it's possible I still could "make it" well enough to live off of it (primarily if I got lucky), but generative AI may make that harder. But I also understand that capitalism is unsustainable, as is much of the western internet landscape even pre-generative-AI, so it's sorta like… yeah, some of my potential opportunities may be evaporating, but so is the stability of capitalism as a whole. And living in the US, the stability of governance as a whole is in question, with the stuff being done to the federal workforce, the seeming efforts to consolidate power behind a single neo-fascist (for lack of a better term) faction, and so on. It feels very individualist for me to fret about whether I can personally make a living out of some craft while "the world burns," so to speak.

      So yeah, I suspect some of the ire surrounding generative AI is due to individualism; people thinking about it like, "I was supposed to get [or had already gotten] mine, and now I can't get it [or it is going to be taken away]." Rather than thinking of it like, "This is a progression of automation that has long been happening, and much like in the past, the working class needs to organize, because it's never going to get fundamentally better until they have the levers of power."

      I think one contradiction people against AI hold is that they say it's both replacing your brain and also not that good. It's a complete contradiction, because it can only be one or the other (is it better than human cognition or is it not?), and until one addresses the contradiction and resolves it, they will live 'in utter chaos under heaven,' as Mao said (paraphrased lol). It leads to problematic conclusions such as "people who use AI are lesser people, because AI is not very good, so clearly if they use it, their brain must be worse than AI; that's why they think they gain something from it."

      Yeah, I think there are a fair few elitist tropes wrapped up in thinking about AI as well. Human beings still don't understand our own consciousness all that well, much less the entire brain and its functioning, so it's easy to fill in the gaps with nonsense like "people are stupid." Arising out of that (it seems; I can't demonstrate the connection cleanly), you get the people who hype "AGI" as something that will replace "human intelligence." But what I never see in that realm is any accounting for the fact that human capability derives from the human form, not from the ether (unless, I suppose, one believes in something metaphysical about it). So in order to believe a computer can reach the same capability, you have to believe it will be granted something metaphysical too. Otherwise, I'd think the only way for "AI" to get anywhere close to humanity is for some kind of bio-engineering to create artificial human life. And at that point, we're basically just talking about making babies without a woman needing to go through pregnancy.

      But I do think that when, say, China gets into robotics, they are at least closer to understanding that particular problem: that for an AI to do certain human tasks, it needs to have a human-like form. Still, none of that brings us fundamentally closer to a self-aware, artificially created lifeform (partly because we still don't entirely know what that form develops out of in the first place, in our own case; what cluster of factors crosses over into what we call sapience). It just brings us closer to tools that require less direction and maintenance than previous forms of tools. Those could eventually be used to replace us at certain kinds of tasks and thus change the labor landscape somewhat, but they aren't replacing us fundamentally.