Just wondering, what would AMD need to do… to at least MATCH Nvidia's offering in AI/DLSS/ray tracing tech?

  • Jorojr@alien.topB · 1 year ago

    How long did it take AMD to catch up to Nvidia with tessellation? My Google search pulls up threads from 8 years ago (2015) discussing whether AMD had caught up. The first game to use it was Messiah from 2000. Tessellation personally caught my attention in 2011 with Batman: Arkham City.

    With pressure from Sony/MS, I could possibly see AMD trying to ramp up RT performance or risk losing two lucrative partners.

  • ResponsibleTruck4717@alien.topB · 1 year ago

    If I were AMD, I would hire engineers and software developers to work on the software side.

    If third-party developers have access to libraries/APIs that fully support AMD's products and can deliver good performance, more people will purchase AMD. But when people want to start small or medium machine learning projects and the default choice is Nvidia, because almost all libraries have been written for CUDA, why should anyone choose AMD?

    So to solve this, AMD needs to make sure there are libraries written for their hardware.
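
    A minimal sketch of the "default choice is CUDA" problem, assuming PyTorch as the library: most tutorials and codebases are written against the torch.cuda API, and to my understanding an AMD card only runs this unchanged if the ROCm build of PyTorch is installed (which exposes AMD GPUs through that same API via HIP). That kind of drop-in library support is what the comment is asking for.

    ```python
    # Typical ML boilerplate that silently assumes an Nvidia-style stack.
    # On AMD this only works with the ROCm build of PyTorch, which (as I
    # understand it) reports the GPU through the same torch.cuda API.
    import torch

    def pick_device() -> torch.device:
        """Return a GPU device if a supported backend is available, else CPU."""
        if torch.cuda.is_available():    # True on CUDA, and on ROCm builds
            return torch.device("cuda")
        return torch.device("cpu")

    device = pick_device()
    x = torch.randn(1024, 1024, device=device)
    y = x @ x                            # runs on whichever GPU stack was found
    print(device, y.shape)
    ```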

    • theloop82@alien.topB · 1 year ago

      Ain’t that the truth. I remember reading a bunch of obituaries for AMD around that timeframe. Dr. Su is a very smart person, and AMD has proven that they have the technical chops to change the game. Anyone who loves computer hardware should be rooting for whoever is losing to win, because it drives down prices and pushes competitors to innovate. Look at Intel's pricing since Ryzen dropped. If AMD could manage to beat Nvidia outright for one generation in GPUs, it would benefit all consumers.

  • Astigi@alien.topB · 1 year ago

    AMD is decades away from catching Nvidia right now, and Nvidia is unstoppable.
    On the software ecosystem side, the gap is even wider.

  • BoltTusk@alien.topB · 1 year ago

    Don’t need to. The latest leak is that AMD is focusing on the mobile segment to push Intel out, so high-end RDNA 4 isn't needed for that effort. The latest Anti-Lag+ debacle showed it's not worth it for AMD to invest in software unless they can compete fully against Nvidia.

  • Plazmatic@alien.topB · 1 year ago

    I'm an expert in this area; I won't reveal more than that. My understanding, though I could be wrong, is that AMD's biggest issue with ray tracing is that they don't do workload rescheduling, which Nvidia reports gave a 25% performance uplift on its own, and which Intel has had from the get-go.

    Basically, RT cores determine where a ray hit and which material shader to use, but they don't actually execute material shaders; they just figure out which one to call for that bounce. The result then has to be fed back to the normal compute cores, but a group of compute cores needs to be running the same instruction to execute in parallel; otherwise each distinct instruction in the “subgroup” has to be executed serially (in sequence). So what Nvidia and Intel do is reorder the work before handing it off to compute subgroups, to increase performance.
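
    To make the divergence point concrete, here is a toy model (plain Python, not GPU code; the subgroup size and material count are just illustrative): the cost of shading a subgroup is roughly the number of distinct material shaders its lanes need, so sorting hits by material ID before packing them into subgroups, which is effectively what the reordering hardware does, collapses that cost.

    ```python
    # Toy model: each "lane" in a 32-wide subgroup holds one ray hit that needs
    # some material shader. Distinct shaders within a subgroup run serially, so
    # a subgroup's cost is roughly the number of distinct material IDs it holds.
    import random

    SUBGROUP = 32
    NUM_RAYS = 4096
    NUM_MATERIALS = 16

    hits = [random.randrange(NUM_MATERIALS) for _ in range(NUM_RAYS)]  # material ID per ray

    def serial_passes(material_ids):
        """Sum of distinct-shader counts over all subgroups (a proxy for shading cost)."""
        total = 0
        for i in range(0, len(material_ids), SUBGROUP):
            total += len(set(material_ids[i:i + SUBGROUP]))
        return total

    n_subgroups = NUM_RAYS // SUBGROUP
    print("unsorted:", serial_passes(hits) / n_subgroups, "shader passes per subgroup")
    print("sorted:  ", serial_passes(sorted(hits)) / n_subgroups, "shader passes per subgroup")
    # Unsorted comes out around 14 passes per subgroup; sorted drops to roughly 1.
    ```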

    I'm not sure why AMD didn't bother with this, but they have in recent history had hardware/driver bugs that caused them to scrap entire features on their GPUs.

    Now, the upscaling and AI tech thing is a different issue. While AMD isn't doing well power-efficiency-wise right now anyway, adding tensor cores, the primary driver of Nvidia's ML capabilities, means sacrificing power efficiency and die space. What I believe AMD wants to do is focus on generalized fp16 performance. This can actually be useful in non-ML workloads, like HDR and other generalized low-precision applications, or with sparse neural networks, where tensor cores don't help (IIRC they can't be used at the same time as CUDA cores, whereas at least on Nvidia, fp16 and fp32 can execute at the same time within the same CUDA core/warp/subgroup).
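
    As an aside on what "generalized fp16" buys outside ML, here is a hedged sketch of the HDR case mentioned above: a simple Reinhard tonemap run in float16 (NumPy on the CPU, purely to illustrate the data type; the frame size is arbitrary). This is ordinary shading math that double-rate fp16 hardware speeds up directly, whereas tensor cores only accelerate matrix multiplies.

    ```python
    # Plain fp16 math in a non-ML workload: Reinhard tonemapping of an HDR frame.
    import numpy as np

    hdr = (np.random.rand(2160, 3840, 3) * 8.0).astype(np.float16)  # fake 4K HDR frame

    def reinhard_tonemap(img: np.ndarray) -> np.ndarray:
        """Map HDR radiance into [0, 1) with the basic Reinhard operator, in fp16."""
        return img / (img + np.float16(1.0))

    ldr = reinhard_tonemap(hdr)
    print(ldr.dtype, float(ldr.max()))                               # float16, < 1.0
    print("fp16 bytes:", hdr.nbytes, " fp32 bytes:", hdr.astype(np.float32).nbytes)
    ```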

    We can see the power issues on the low end especially: Jetson Orins (Ampere) barely beat, or don't beat, Jetson TX2s (8-year-old Pascal hardware) at the same power draw, and more than doubled the “standard performance” power draw.

    In addition to the power draw, and tensor cores being dead weight for non-ML work, fully dedicated ASICs are the future for AI, not ML acceleration duct-taped to GPUs, which can already accelerate it without specialized hardware. See the Microsoft news, and Google, Amazon, Apple, and even AMD looking to put ML acceleration on CPUs instead as a side thing (like integrated graphics).

    AMD probably doesn't want to go down that route, since they will inevitably stop shipping it on GPUs in the future.

    Finally, DLSS 2.0-quality upscaling should now be possible at acceptable speeds using AMD's fp16 capabilities. GPUs are so fast now that the fixed cost of a DLSS-style pass is small enough to be carried out by compute cores. AMD's solution thus far has been pretty lacking given their own capabilities. Getting the data for this is very capital-intensive, and it's likely AMD still doesn't want to spend the effort to make a better version of FSR, despite it essentially being a software problem on the 7000 series.
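
    A back-of-envelope version of that fixed-cost argument, with every number an assumption chosen for illustration rather than a measured or published figure: if a DLSS-2-class network costs on the order of 100 GFLOPs per 4K frame and a high-end GPU sustains a few tens of TFLOPS of plain fp16, the upscaler eats only a small slice of a 60 fps frame budget even without tensor cores.

    ```python
    # Illustrative arithmetic only; all three inputs are assumptions, not measurements.
    assumed_network_gflops = 100      # hypothetical cost of the upscaling network at 4K
    assumed_fp16_tflops = 100         # rough peak fp16 of a current high-end GPU
    assumed_efficiency = 0.4          # fraction of peak throughput actually sustained

    cost_ms = assumed_network_gflops / (assumed_fp16_tflops * 1000 * assumed_efficiency) * 1000
    frame_budget_ms = 1000 / 60

    print(f"upscaler ≈ {cost_ms:.1f} ms of a {frame_budget_ms:.1f} ms frame at 60 fps")
    # ≈ 2.5 ms of 16.7 ms with these made-up numbers.
    ```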

    • estusflaskplus5@alien.topB · 1 year ago

      I'm not sure why AMD didn't bother with this, but they have in recent history had hardware/driver bugs that caused them to scrap entire features on their GPUs.

      What are some features they scrapped due to driver bugs?

      • Plazmatic@alien.topB · 1 year ago

        • AMD primitive shader issues on VEGA

        • AMD bugs with dynamic parallelism, to the point that the feature was later disabled, even though it is required for the ray tracing extension to work in the first place.

        • AMD's variable rate shading was supposed to be on Vega, which didn't happen, then was supposed to be on the 5000 series, which also didn't happen (I might be wrong about the 5000 series, but that's what I remember; AFAIK they tried to pass off dynamic resolution as a type of VRS as a consolation, which it isn't)

        • AMD only supporting hardware ROVs in their Windows DX12 drivers, despite them being implemented by the Mesa team for Vulkan.

  • Eastrider1006@alien.topB · 1 year ago

    Do people think they can’t catch up? Remember Ryzen?

    They can if they start taking their GPU division seriously, in terms of R&D and units produced, which they are not.

    • cstar1996@alien.topB · 1 year ago

      Ryzen caught up first and foremost because Intel stalled. And Intel's stall was vis-à-vis TSMC more than it was against AMD.

      • Eastrider1006@alien.topB · 1 year ago

        The opposite is also true. In terms of CPU design itself, that is, what AMD actually does, they have been able to match Intel's offering ever since. It could've been a flop if TSMC hadn't delivered, but the Ryzen architecture (which is what we're talking about in this thread: design) was up to the level of their competitor after lagging behind for something like half a decade.

        So, I insist: with enough R&D they'd be able to do something similar on the GPU side of things.

    • lolatwargaming@alien.topB · 1 year ago

      Yes, I remember Ryzen. Tell me more about how it took, what, 4 generations to best Intel's Skylake++++++++ in gaming?

  • randysailer@alien.topB · 1 year ago

    They can't; Nvidia is too big. If AMD put in as much as Nvidia does and it didn't pay off within a few years, they would go bankrupt.

  • softwareweaver@alien.topB · 1 year ago

    For AI, they need to invest in technologies that compete with CUDA, like DirectML, etc.

    Pay developers to develop AI apps using DirectML, which will run well on AMD GPUs, and market those apps to their customer base.
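
    A hedged sketch of what that looks like from the developer side, assuming the onnxruntime-directml package (its DirectML execution provider runs ONNX models on any DX12 GPU, including AMD, without CUDA); "model.onnx" and the input shape are placeholders for whatever model is being shipped.

    ```python
    # Running an ONNX model through DirectML, falling back to CPU if unavailable.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",                                             # placeholder model path
        providers=["DmlExecutionProvider", "CPUExecutionProvider"],
    )

    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)     # shape depends on the model
    outputs = session.run(None, {input_name: dummy})
    print([o.shape for o in outputs])
    ```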

  • colefinbar1@alien.topB · 1 year ago

    Competition drives innovation, so AMD catching up to Nvidia would only make both better. I’m hopeful they rise to the challenge for the benefit of us all.

  • Relevant-Cup2193@alien.topB · 1 year ago

    No; at 4K, FSR Quality maybe looks only as good as DLSS Performance. That makes a 7900 XTX only as fast as a 3080.
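
    For context on what that comparison means, the modes differ in internal render resolution: "Quality" renders at roughly a 1.5x scale factor per axis and "Performance" at 2x (common FSR/DLSS figures, stated here from memory), so at 4K output FSR Quality works from about 1440p of input while DLSS Performance works from about 1080p.

    ```python
    # Internal render resolution per upscaling mode at 4K output (approximate factors).
    OUTPUT = (3840, 2160)
    MODES = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}   # per-axis scale

    for mode, scale in MODES.items():
        w, h = (round(d / scale) for d in OUTPUT)
        print(f"{mode:<12} internal: {w}x{h}  ({1 / scale**2:.0%} of output pixels)")
    ```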

  • sascharobi@alien.topB · 1 year ago

    Of course, they could. Their hardware isn’t that bad; they are closer than anybody else. Their software stack is another story. AMD has been promising to do a better job at that for more than a decade. I don’t really trust their commitment to their software stack anymore. Actually, Intel might overtake them in that regard.