And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that, I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling

  • CriticalResist8@lemmygrad.ml · 6 days ago

    would have been better allocated to anything else, including the development of types of “AI” that are more useful, like medical imaging analysis applications.

    This is what China is currently developing, along with many other cool things with AI. Medical imaging AI has been found to have its limitations too, but maybe it needs a different neural network approach.

    Just because capitalist companies say you can or should use their bot as a companion doesn’t mean you have to. We don’t have to listen to them. I’ve used AI to code stuff a lot, and it got results – all for volunteer and free work, where hiring someone would have been prohibitive, and AI (an LLM specifically) was the difference between offering the feature and cancelling the idea completely.

    There’s a guy on YouTube who bought Unitree’s top-of-the-line humanoid robot (yes, they ship to your doorstep from China lol) and codes for it with LLM help, because the documentation isn’t great yet. With other models he can do real-time image detection, or use the LiDAR more meaningfully than he could without AI. I’m not sure where he’s at today with his robot; he was working on getting it to fetch a beer from the fridge. Baby steps, because at this stage these bots come with nothing in them except the SDK, and you have to code literally everything you want them to do, including standing idle. The image recognition has an LLM in it so that it can detect any object. He showed an interesting demo: in about a second it detects the glass bottles in the camera frame, including their color, and draws a box around each one. It’s a new-ish model and I’m not entirely sure how it works, but I assume it has to have a language model in it to describe the image.
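    To give an idea of what that kind of open-vocabulary detection looks like in code, here’s a rough sketch using OWL-ViT (a vision-language detector you can run locally): you hand it text prompts and it returns bounding boxes for matching objects. To be clear, the model, prompts, and threshold here are just my guesses, not whatever he actually runs on the robot.

    ```python
    # Rough sketch of open-vocabulary detection: give the model text prompts
    # ("a glass bottle", ...) and it returns bounding boxes for matching objects.
    # Model choice, prompts, and threshold are assumptions, not the YouTuber's setup.
    import torch
    from PIL import Image
    from transformers import OwlViTProcessor, OwlViTForObjectDetection

    processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
    model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

    image = Image.open("camera_frame.jpg")  # a frame grabbed from the robot's camera
    texts = [["a green glass bottle", "a brown glass bottle", "a clear glass bottle"]]

    inputs = processor(text=texts, images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Convert raw outputs into pixel-coordinate boxes for this image size
    target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs, threshold=0.2, target_sizes=target_sizes
    )[0]

    for box, score, label in zip(results["boxes"], results["scores"], results["labels"]):
        print(f"{texts[0][int(label)]}: score={score.item():.2f}, "
              f"box={[round(v, 1) for v in box.tolist()]}")
    ```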

    I’m mostly on DeepSeek these days; I’ve completely stopped using ChatGPT because it just sucks at everything. DeepSeek hallucinates much less and keeps getting more reliable, although it still outputs the occasional nonsensical comparison. But it’s like with everything you don’t know: double-check and exercise critical thinking. Before LLMs we had Wikipedia to ask our questions, and it wasn’t any better (and still isn’t). edit – like when DeepSeek came out with reasoning, which they pioneered, it completely redefined LLM development, and more work has been built on that new state of things, improving it all the time. They keep finding new methods to improve AI.

    If there’s a fundamental criticism I would make, it’s that it was perhaps launched too soon (though neural networks have existed for over a decade), and of course it was overpromised by tech companies who rely on their AI product to survive. OpenAI is dying because they have nothing to offer besides GPT; they don’t make money on cloud solutions or hardware or anything like that. If their model dies, they die along with it. So they’re in startup-philosophy mode, trying to iterate as fast as possible and treating any update as a good update (even when it’s not) just to retain users. They bleed $1 billion a month and live entirely on investor money, and startup mode just doesn’t scale that high up. It’s not their $20 subscriptions that are ever going to keep them afloat lol.
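    If anyone wants to try DeepSeek’s reasoning model themselves, it’s exposed through an OpenAI-compatible API. Minimal sketch below; the endpoint, model name, and the separate reasoning_content field are how I understand their docs, so double-check against the official documentation before relying on it.

    ```python
    # Minimal sketch: query DeepSeek's reasoning model via its OpenAI-compatible API.
    # Endpoint, model name, and the reasoning_content field are my reading of
    # DeepSeek's docs -- verify against the official documentation.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["DEEPSEEK_API_KEY"],  # your own API key
        base_url="https://api.deepseek.com",
    )

    response = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[{"role": "user",
                   "content": "What are the trade-offs of LiDAR vs. cameras on a home robot?"}],
    )

    message = response.choices[0].message
    # The chain-of-thought comes back separately from the final answer.
    print("reasoning:", getattr(message, "reasoning_content", None))
    print("answer:", message.content)
    ```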