• Pimpmuckl@alien.topB
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 year ago

    So the leakers were correct.

    Means the top end Blackwell offerings will cost an arm and a leg. F for the consumers. That’s if Nvidia even puts out a 102 die as the 5090, given the run on AI and just how insane the margins are there. If they’re limited by fab capacity, they might just pull another 4070 and sell us a 103 die as the 5090, forcing it down our throats.

    • ThunderClap449@alien.topB

      I’m fine with it. Consumers reap what they sow, basically. AMD is likely gonna drop high end GPUs in general, if not dedicated GPUs completely.

        • ThunderClap449@alien.topB

          From the last few gens? Not even close to their best. Certainly not more than the Radeon 9000 series, HD 4000 series, HD 5000 series, HD 7000 series, or R9 200 series.

          The 7900 XTX, one year later, is at 0.19% on Steam.

          A year after the 6900 XT released, the 6900 series was at 1.19%.

          And their last high end before the 6900 was the 390X, which there’s no Steam hardware survey data for, though the 7970 was ahead… Yeah, not even close.

          • Pl4y3rSn4rk@alien.topB

            Their last “high end” before the RX 6900 XT was the Radeon VII, and before that the Vega 64. Yeah, they were only able to compete with the RTX 2080 and GTX 1080 respectively, but so does the RX 7900 XTX, which can only compete with the RTX 4080…

            PS.: Also, the high-end AMD GPU before the Vega 64 was the R9 Fury X (the R9 390X was a 290X refresh that launched in the same period). It was quite competitive with the GTX 980 Ti, but its 4 GB of HBM and the necessity of water cooling limited its sales…

              • Pl4y3rSn4rk@alien.topB

                Sadly they weren’t that impactful, besides the Vega 56 - it competed very well with the GTX 1070, and Nvidia launched the GTX 1070 Ti because of it. They consumed too much power at stock because of overvoltage, and they launched way too late…

                • ThunderClap449@alien.topB

                  Yep, they were honestly decent cards, just launched at the wrong time. Same issue nVidia had with the 400 series, minus the whole “overpriced to fuck, trying to scam customers” kind of deal they had with the benchmarking requirements for reviewers.

                • LittlebitsDK@alien.topB

                  Loved my Vega 56 - it performed well, and a little undervolting fixed the power issue big time… I didn’t feel like I was missing out for “not buying Nvidia”.

            • NoLikeVegetals@alien.topB

              “quite competitive with the GTX 980 Ti”

              The Fury X was an instant no-buy for high-end 4K gamers, due to the measly 4GB of VRAM.

              Just as the RTX 4080 should be a no-buy for high-end 4K gamers, due to the measly 16GB of VRAM. In a year’s time, AAA RT-enabled games will suck up >16GB at 4K.

              • Pl4y3rSn4rk@alien.topB

                Surely the 4080 might not age very well, but it’s very likely its RT performance will become insufficient before VRAM becomes an issue - even Alan Wake II limits its use of path tracing at max settings and still uses a decent amount of raster.

          • Pristine_Pianist@alien.topB

            Again, the Steam numbers aren’t accurate - the data is gathered from a pool of people who opt in to the survey, and that pool could be 500 or 5,000 people, we wouldn’t know.

            • NoLikeVegetals@alien.topB

              Steam is also heavily biased towards Nvidia users. I’d like to see stats that discount China, which is flooded with Nvidia GPUs, especially in its internet cafes. The other issue is that Steam seems to count the same cafe PC twice if two survey-opted-in gamers log onto the same PC.

          • chapstickbomber@alien.topB

            To be fair “highest selling high end GPU ever” for AMD is still not a lot compared to NV or their own midrange stuff.

            • ResponsibleJudge3172@alien.topB

              Sorry, I misread. I thought you said highest selling GPU, which is what I have also read elsewhere.

              Seems to me the 7800 XT is their best performer, but I’m not sure.

        • NoLikeVegetals@alien.topB

          “Which is hilarious because 7900 XT(X) is the highest selling high end GPU they’ve ever made afaik”

          That sounds plausible, but only because the total addressable market for GPUs is so much bigger now.

          The real measure is the ratio of 7900 XTX to RTX 4080 and also the 4090.

          I’m pretty sure the 4090 is outselling the 7900 XTX by something like 20:1…

      • DeeJayDelicious@alien.topB

        I mean, we can debate “high-end”. By RDNA 5, we should have 4K @ 120 fps as a baseline for all dedicated GPUs. Where do you go after that in consumer GPUs?

        While there will always be a small, enthusiast market for super-high-end GPUs, I’m not sure the mainstream will be interested in pushing 240 FPS. Maybe Nvidia sees the writing on the wall, which is why they’re pivoting away from consumer-focused GPUs.

        And if AMD continues to serve us solid $300-600 dGPUs until then, I think that’s still a win. I don’t think the market for >$1,000 dGPUs is that large anyway.

        • ThunderClap449@alien.topB

          I mean, all of that assumes requirements won’t keep increasing. Ray tracing artificially increases the performance requirements once you start reaching the top of what’s possible, and the same will be done once RT is maxed out.

      • lugaidster@alien.topB

        I doubt this. They just might drop TSMC for them and go with Samsung. TSMC is way too expensive for big dies, given people’s reluctance to pay Nvidia-like prices for similar performance from AMD. At the end of the day, AMD doesn’t have a significant edge from a cost perspective. Chiplet benefits are cool, but AMD needs an interposer, so the cost advantage might not be as impressive, and the 6nm dies might not be as cheap given that they’re still manufactured at TSMC.

        So TL;DR: I think they might shift their focus away from the higher-end market, but I doubt they will entirely abandon it. They might just take a break from it.

    • regionaltrain253@alien.topB

      So the leakers were correct: No top end RDNA4 cards (at least on launch).

      Where did you read that in that article?

      • Pimpmuckl@alien.topB

        It’s not confirmed verbatim, but it does match the leaks: instead of a halo Navi 41/N4C die for a top-end 8900 XTX (not that they couldn’t still give another chip that name), they will launch two different monolithic dies, but quicker and closer to each other time-wise.

        Since this is such a stark change of release cadence compared to N21 and N31, this points to the leaked release strategy being correct.

    • NoLikeVegetals@alien.topB

      So the leakers were correct: No top end RDNA4 cards (at least on launch).

      Remember the claims from MLID/RGT/etc. when AMD releases a halo desktop RDNA4 GPU… that is, unless they delete those particular videos.

  • FatBoyDiesuru@alien.topB

    That simply means AMD’s likely in the validation/testing stages of RDNA 4 at best. Nothing to it other than that.

  • Viandoox@alien.topB

    There were the same leaks before RDNA 2, showing that AMD wouldn’t do better than a 2080ti and we still had a 6900xt… So I’m waiting for the official release to get an idea.

    • Put_It_All_On_Blck@alien.topB

      There were also rumors claiming RDNA 3 would be when AMD surpassed Nvidia. Unfortunately it was a regression in competitiveness.

      Leaks about the structure of a GPU or CPU - the cores, memory type, architecture used - are typically more reliable far ahead of release, but leaks about performance are almost never reliable until a couple of months before release.

      • timorous1234567890@alien.topB

        Performance leaks more than a few months prior to launch will at best reflect targeted performance, and often in a very simple metric like TFLOPS.

        The 2.5x 6900 XT performance claim did not hold up in terms of FPS uplift, but it did in terms of TFLOP uplift. Even for people who knew it was a TFLOP figure, the expected FPS improvement from such a huge jump was higher than what we got.
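
        Those two claims can be reconciled with a quick paper-spec calculation. A minimal sketch (the shader counts and boost clocks are approximate public figures, and treating RDNA 3 dual-issue as a flat 2x is a simplification):

```python
# Paper FP32 TFLOPS = shaders * clock (GHz) * ops per cycle / 1000.
# An FMA counts as 2 ops; RDNA 3's dual-issue FP32 doubles that on paper.
def tflops(shaders, clock_ghz, ops_per_cycle=2):
    return shaders * clock_ghz * ops_per_cycle / 1000

rx6900xt = tflops(5120, 2.25)                   # ~23 TFLOPS
rx7900xtx = tflops(6144, 2.5, ops_per_cycle=4)  # ~61 TFLOPS on paper

print(rx7900xtx / rx6900xt)  # ~2.7x TFLOP uplift, far above the real FPS uplift
```

        On paper that is roughly a 2.7x jump, which is why the "2.5x" figure held up in TFLOPS while the actual FPS uplift landed much lower.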

    • lugaidster@alien.topB

      The only hint people had for that was the bus width, which was indeed 256-bit. It’s just that people couldn’t fathom that AMD had a fast GPU with so little memory bandwidth.

      I’m not sure this leak shares anything that would tell us what performance target this GPU could belong to.

      • RealThanny@alien.topB

        It wasn’t about bus width, it was about nVidia’s fictitious CUDA core counts with Ampere.

        At the last minute, the 4352 CUDA cores of the 3080 (same as the 2080 Ti) were changed to 8704 “CUDA cores”, because the INT32 ALU was replaced with a dual-function INT32/FP32 ALU. People who didn’t understand that (i.e. basically everyone who didn’t call out nVidia’s dishonesty in marketing those figures) thought, from the leaks, that it’d be 8704 shaders against 4608 shaders. It wasn’t. It was more like ~5200-5400 effective shaders, depending on resolution, against 4608, with the latter running at a substantially higher clock speed.

        Ironically, the reverse happened with RDNA 3, as the leaked values were incorrect - they said 12,288 ALUs for Navi 31, without mentioning that it was really 6144 FP32 ALUs plus 6144 INT32/FP32 ALUs that could only be partially used. So people thought it was 12,288 for Navi 31 versus 16,384 for the 4090, with those numbers meaning the same as 5120 did for the 6900 XT versus 10,496 for the 3090. But they didn’t mean the same thing at all. It was ~7,400 effective shaders for Navi 31 versus ~10,240 effective shaders for the 4090, with no real clock speed advantage.

        As it turns out, the 4090 scales pretty poorly though, so it’s not as far ahead of the 7900 XTX as it should be based on raw compute.
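
        A toy model makes the “effective shaders” arithmetic above concrete (this is an illustration only, not NVIDIA’s actual scheduler; the INT-per-FP ratio is an assumed workload parameter):

```python
# Toy model of Ampere's marketed "CUDA cores": half the ALUs are FP32-only,
# half are dual-function INT32/FP32. If a workload issues `int_per_fp` INT ops
# per FP op, the dual-function ALUs lose cycles to INT work. With equal counts
# of dedicated and dual ALUs, solving ded + dual*(1 - x) = fp and
# dual*x = int_per_fp * fp gives effective FP32 throughput total / (1 + r).
def effective_fp32_shaders(total_cores, int_per_fp):
    return total_cores / (1 + int_per_fp)

# At NVIDIA's often-quoted ~36 INT ops per 100 FP ops:
print(effective_fp32_shaders(8704, 0.36))   # ~6400
# The ~5200-5400 range quoted above implies a heavier INT mix (~0.6-0.67):
print(effective_fp32_shaders(8704, 0.65))   # ~5275
```

        The exact figure depends entirely on the shader workload’s INT share, which is why “effective shaders” varies by game and resolution.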

        • lugaidster@alien.topB

          Maybe you’re right, but I’m not sure the timelines agree with this. While people were trying to figure out whether Navi 21 was going to be competitive, Ampere was already a known quantity regardless of shader count. The cards released a couple of weeks before the RDNA 2 specs were announced.

          • RealThanny@alien.topB

            That’s kind of my point. Ampere was out, with known specs. RDNA 2 specs were leaked before AMD announced the cards. People who got this leaked information compared number A to number B without understanding that number A was manipulated by dishonest marketing. So they drew the wrong conclusions about performance, saying AMD would be lucky to match the 3070.

            Which made it pretty amusing when every one of the first three RDNA 2 cards that AMD released was faster than the 3070, from the 6800 to the 6900 XT.

      • ger_brian@alien.topB

        On the other hand, we had leaks saying RDNA 3 would be competitive at the high end and take the performance crown thanks to chiplets.

    • ResponsibleJudge3172@alien.topB

      There were also posts about Big Navi and Nvidia running scared, and the 6800 XT outperforming the 3090 (AMD really fed that idea before launch).

  • Systemlord_FlaUsh@alien.topB

    No 256-bit? If they really don’t care about high end, I will likely have to buy a 4090 when Blackwell launches, because it would be the only upgrade from my 7900 XTX. Buying Blackwell is unlikely to be an option, because it will be horribly overpriced, especially if AMD starts to suck again. The XTX feels like AMD can finally have a hold on the high end again. It doesn’t need to outperform NVIDIA outright; it needs to outperform them where they don’t: pricing.

    • RockyXvII@alien.topB

      You don’t have to buy anything next gen if the performance of the 7900 XTX is enough. Unless you want the best RT performance - but if you wanted that, you wouldn’t have bought from AMD to begin with. They can keep raising prices because of consumerism: if you always go out and buy the next thing, they can keep increasing prices because you’ll just keep buying.

      • Systemlord_FlaUsh@alien.topB

        More performance is always good. RT is more like a bonus - I would like it, but I won’t pay double to get it. I’m running 4K120.

        • Systemlord_FlaUsh@alien.topB

          GDDR7 should be 50% faster, but yes, it’s likely reaching similar bandwidths. Unfortunately it’s a trend now to reduce bus width. 256-bit GDDR7 is likely almost 1 TB/s.

    • kf97mopa@alien.topB

      We can’t say until we know when the cards launch. Right now, the low end has barely been updated since 2021. Navi 33/7600 is a very small update of Navi 23/6600 XT: it moved to 6nm, but there are no other relevant changes, because AMD did not increase the CU count and has utterly failed to make use of the dual-issue shaders in RDNA 3. That tier is thus far more important to update right now. Say AMD does that in early to mid 2024 and updates the high end in 2025: a new “7600XT” or “8600” with 48 CUs fills the gap between the 7700 XT and 7600, meeting the 4060 Ti more closely - that’s great. When Blackwell launches in early 2025, AMD can be ready to update the high end at the same time. They have done that before - after all, RDNA 2 was the only recent generation where they did the entire lineup in one go (RDNA 3 has only done 3 cards; Navi 24 is a holdover).

      Not sure I think that’s particularly likely, but please keep the timeline in mind when people talk about RDNA 4. Without release dates, those predictions are worthless.

      • Systemlord_FlaUsh@alien.topB

        I would upgrade if the expense was reasonable. But there’s only one game where I could use more performance, and that is Darktide. Otherwise I wish FSR3 would become a thing, but just like DLSS, it still lacks a solution for updating older games.

    • WeedSlaver@alien.topB

      From rumours, there will be an N31-refresh high end as the RX 8800 that should be around 20% faster than the 7900 XTX. As a refresh that does sound impressive; as a competitive graphics card it’s depressing. Let’s hope Nvidia won’t go crazy with prices.

      • onlyslightlybiased@alien.topB

        I literally haven’t heard a single thing about RDNA 3 refreshes, and with RDNA 4 probably launching at the end of next year, I really don’t see the point.

      • Put_It_All_On_Blck@alien.topB

        Those were the rumors around launch - that the flagship dies had issues and would be refreshed later for big performance gains - but I haven’t heard that rumor since. If AMD really had that much performance left on the table, I think they would’ve pushed to have the cards out this holiday season rather than waiting until 2024, so I don’t think those rumors are true.

    • bubblesort33@alien.topB

      First of all, those are rumors, and given that the leaker doesn’t even know if it’s 128-bit or 192-bit this late in the game - when the chip has been in development for 3 years and is 10 months from release - the leaks are pretty much completely made up. RedGamingTech has a pretty bad leak accuracy record.

      That being said, if it’s targeting 7900 XT to 7900 XTX performance and it’s using GDDR7, then 192-bit makes sense. That’s about the 7900 XT’s total memory bandwidth if you work out the math at 34 Gbps. Currently GDDR7 is aiming for 32 to 36 Gbps.
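
      The bandwidth math is straightforward to check (the 34 Gbps GDDR7 figure is a rumored midpoint, not a confirmed spec):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps per pin.
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(192, 34))   # rumored 192-bit GDDR7 part: 816 GB/s
print(bandwidth_gb_s(320, 20))   # RX 7900 XT (320-bit, 20 Gbps GDDR6): 800 GB/s
print(bandwidth_gb_s(256, 32))   # 256-bit GDDR7 at 32 Gbps: 1024 GB/s (~1 TB/s)
```

      So a 192-bit GDDR7 card at 34 Gbps would indeed land almost exactly on the 7900 XT’s 800 GB/s.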

  • Edgaras1103@alien.topB

    Well, maybe mid-range is where they’re gonna strike gold. So it’s smart to focus your resources and scale back the product stack. AMD always operates on value above all else, so making a product exclusively designed for that could be good.

      • Turbotef@alien.topB

          A 7900 XTX that uses less power, has more/better RT/AI cores (maybe), and costs less than a 7900 XT?

          I’m buying a 7800 XT first and would easily snatch up that kind of replacement afterwards.