simple@lemm.ee to Games@lemmy.world · English · 3 days ago
Nvidia Announces RTX 50 Blackwell Series Graphics Cards: RTX 5090 ($1999), RTX 5080 ($999), RTX 5070 Ti ($749), RTX 5070 ($549) (www.theverge.com)
Cross-posted to: hackernews@lemmy.bestiver.se, hardware@lemmit.online
inclementimmigrant@lemmy.world · 3 days ago
This is absolutely 3dfx-level screwing over of consumers, and it’s all about faking frames to hit their claimed “performance”.
TastyWheat@lemmy.world · 2 days ago
“T-BUFFER! MOTION BLUR! External power supplies! Wait, why isn’t anyone buying this?”
Breve@pawb.social · 3 days ago
They aren’t making graphics cards anymore, they’re making AI processors that happen to do graphics using AI.
daddy32@lemmy.world · 2 days ago
Except you cannot use them for AI commercially, or at least not in a data center setting.
Knock_Knock_Lemmy_In@lemmy.world · 2 days ago
What if I’m buying a graphics card to run Flux or an LLM locally? Aren’t these cards good for those use cases?
Breve@pawb.social · 2 days ago
Oh yeah, for sure. I’ve run Llama 3.2 on my RTX 4080 and it struggles, but it’s not obnoxiously slow. I think they’re betting that more software will ship with integrated LLMs that run locally on users’ PCs instead of relying on cloud compute.
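For anyone wondering what that kind of local setup actually looks like, here’s a minimal sketch using the ollama Python client. The model name, prompt, and a running Ollama install are all assumptions for illustration, not anything from this thread:

```python
# Minimal sketch of local LLM inference via the ollama Python client.
# Assumes Ollama is installed and running, and the model has already
# been pulled with `ollama pull llama3.2`. Inference runs on the local
# GPU (e.g. an RTX 4080) when one is available.
import ollama

response = ollama.chat(
    model="llama3.2",  # hypothetical choice; any locally pulled model works
    messages=[{"role": "user", "content": "Why run an LLM locally?"}],
)
print(response["message"]["content"])
```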