Just wondering, what would AMD need to do to at least match Nvidia's offering in AI/DLSS/ray tracing tech?

    • lusuroculadestec@alien.topB · 1 year ago

      AMD got a lot of AI-related IP when they acquired Xilinx. It’s just a matter of them dedicating the die space to it.

      • dahauns@alien.topB · 1 year ago

        The die space is only one part of the puzzle. The other, AMD’s Achilles heel no less, is software support. I mean, Phoenix has XDNA already, but from everything I’ve read it’s a PITA to actually use and rather limited by its currently available driver API, and as a consequence there’s barely any ML library/framework support as of now.
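
        For context on how thin that support currently is: the documented route to the XDNA NPU on Phoenix goes through ONNX Runtime with AMD’s Vitis AI execution provider rather than a native PyTorch/TensorFlow backend. A rough, hedged sketch of what that path looks like (the model file and input shape below are placeholders, and the exact provider options come from AMD’s Ryzen AI docs and may change):

        # Hedged sketch: targeting the XDNA NPU via ONNX Runtime's Vitis AI
        # execution provider. "model.onnx" and the input shape are placeholders.
        import numpy as np
        import onnxruntime as ort

        session = ort.InferenceSession(
            "model.onnx",                      # an already-exported ONNX model
            providers=[
                "VitisAIExecutionProvider",    # XDNA/NPU path, if the driver stack is present
                "CPUExecutionProvider",        # fallback when it isn't
            ],
        )

        x = np.zeros((1, 3, 224, 224), dtype=np.float32)   # placeholder input
        outputs = session.run(None, {session.get_inputs()[0].name: x})

        Anything that doesn’t fit through that path today falls back to the CPU or GPU, which is what “barely any ML library/framework support” means in practice.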

    • doneandtired2014@alien.topB · 1 year ago

      They have their own equivalent in the CDNA line of compute products.

      They absolutely could bring matrix multiplication units to their consumer cards; they just refuse to do so.

    • GomaEspumaRegional@alien.topB · 1 year ago

      They don’t even need to make dedicated tensor units, since programmable shaders already have the necessary ALU functionality.

      The main issue for AMD is their software, not their hardware per se.

      • Vokasak@alien.topB · 1 year ago

        This. AMD struggles to make drivers that don’t crash or get you VAC banned. They’re going to have to clear that bar before they can really start competing.

        • lolatwargaming@alien.topB · 1 year ago

          Those VAC bans really kind of sum up AMD’s lack of software ability. AMD can’t even ship fluid frames without literally getting you banned.

          Stop for a moment and think about this: AMD can’t even catch up to Nvidia/Intel, much less be at the forefront.

          Really, AMD only exists so Nvidia doesn’t charge $2k for a 4090… so uh, thanks AMD for being a joke of a competitor but saving me $400.

          • Jensen2052@alien.topB · 1 year ago

            WTF are you talking about? What Intel GPU is better than AMD’s? No one is buying Intel’s trash video cards. Also, the 7800X3D is the fastest gaming chip.

          • plaskis@alien.topB · 1 year ago

            AMD is better than Intel on both the GPU and CPU front, lol. Not sure what you’re on.

            Indeed, I think AMD has had solid GPU products over the last decade. I’ve had several AMD GPUs as well as Nvidia ones. Just because Nvidia has been ahead for the last 3 years doesn’t invalidate AMD. It’s competition, and as long as they offer decent performance for the price, people will buy it. RDNA2/3 were definitely not bad architectures; the main gap at the moment is upscalers and frame generation, but that’s also reflected in the prices Nvidia charges.

        • GomaEspumaRegional@alien.topB · 1 year ago

          Well, sure, application-specific IP is always going to be more performant. But in a pinch, shader ALUs can do tensor processing just fine. Without a proper software stack, though, the presence of tensor cores is irrelevant ;-)
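
          To make that concrete: the bulk of “tensor processing” is just multiply-accumulate, which every programmable shader ALU already executes natively. A purely illustrative sketch of that inner loop (plain Python, nothing AMD-specific):

          # Illustrative only: the matrix multiply at the heart of an ML layer is
          # repeated multiply-accumulate (FMA). Shader ALUs execute FMAs natively;
          # dedicated tensor units just do many of them per clock, usually at
          # reduced precision (FP16/BF16/INT8).
          def naive_matmul(a, b):
              rows, inner, cols = len(a), len(b), len(b[0])
              c = [[0.0] * cols for _ in range(rows)]
              for i in range(rows):
                  for j in range(cols):
                      acc = 0.0
                      for k in range(inner):
                          acc += a[i][k] * b[k][j]   # one multiply-accumulate
                      c[i][j] = acc
              return c

          Turning that into competitive throughput on shader ALUs is a scheduling and software-stack problem, which is exactly where the “proper software stack” point bites.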

    • hibbel@alien.topB · 1 year ago

      Of course they could. Intel does on their graphics cards. Apple does on its latest silicon.

      The question is: do they have the people who could develop this, are they able and willing to spend the money on it, and are they able and willing to spend the money on the software side of it as well?

      Currently, it seems like they looked at it, did the math, and decided to try to get by without the effort. And to a degree that’s doable. FSR2 isn’t as good as DLSS, but it saves them from needing AI cores on the chip. Now they’ve done the same with frame generation. Generally, they seem able to be slightly worse for a lot less R&D budget.

      Of course, they will never leave Nvidia’s shadow this way, and should Intel or Nvidia ever manage to offer Microsoft and Sony an APU with more features to power the next generation of consoles, their graphics division might be well and truly fucked.

      • isotope123@alien.topB · 1 year ago

        Keep in mind too if they haven’t already made these decisions to inovate and invest 4+ years ago, then any solution they come up with is still years away. Chip development is a 5+ year cycle from concept to implementation.