• Breve@pawb.social · 3 days ago

    They aren’t making graphics cards anymore, they’re making AI processors that happen to do graphics using AI.

    • daddy32@lemmy.world · 2 days ago

      Except you can’t use them for AI commercially, or at least not in a data center setting.

      • Breve@pawb.social · 2 days ago

        Oh yeah for sure, I’ve run Llama 3.2 on my RTX 4080 and it struggles, but it’s not obnoxiously slow. I think they are betting more software will ship with integrated LLMs that run locally on users’ PCs instead of relying on cloud compute.
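
        For anyone curious, running it locally doesn’t take much. Here’s a minimal sketch, assuming llama-cpp-python is installed and you’ve downloaded a quantized GGUF build of Llama 3.2 (the file name below is just a placeholder):

        ```python
        # Minimal local-inference sketch using llama-cpp-python.
        # Assumes a quantized GGUF copy of Llama 3.2 is on disk; the path is a placeholder.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./llama-3.2-3b-instruct-q4_k_m.gguf",  # placeholder file name
            n_gpu_layers=-1,  # offload all layers to the GPU (e.g. an RTX 4080)
            n_ctx=4096,       # context window size
        )

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Summarize why local LLMs matter in one sentence."}],
            max_tokens=128,
        )
        print(out["choices"][0]["message"]["content"])
        ```

        Quantized weights plus full GPU offload is what keeps it usable on a single consumer card; without enough VRAM it spills to system RAM and slows way down.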