TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, by 2-3x.

  • FSR 3 can run on any GPU, including consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall; public launch will be by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well, I think we can expect tons of games (especially on console) to make use of it.

  • echo64@lemmy.world · 10 months ago

    You guys are talking about this as if it’s some new, super-expensive tech. It’s not. The massively cost-reduced chips they throw inside TVs do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation, and it works just fine even on super old GPUs with terrible compute.

    It’s really not that expensive.

    • kadu@lemmy.world · 10 months ago

      Frame generation is fundamentally different from TV interpolation. It takes into account motion vectors and Z-depth.

      If you think DLSS 3 is like an LG TV interpolating a movie, you’ve never used the feature and have a very superficial understanding of how it’s implemented.

      • echo64@lemmy.world · edited · 10 months ago

        Yeah, it does, and that’s exactly the information TV tech has to derive for itself. It’s actually less complicated, in a fun kind of way. But please do continue to explain how it’s more compute-heavy.

        • kadu@lemmy.world · 10 months ago

          You surely realize that a video decode doesn’t carry information like what’s a foreground object versus background, a UI element versus a world object, or a shadow versus an obstacle.

          TVs quite literally blend two frames, with some attempt at interpreting when large blocks change value. That’s it.

          DLSS 3.0 uses the actual geometry being handled by the GPU cores to come up with a prediction for the next frame, then merges this prediction with the previous frame.
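To make the distinction concrete, here is a minimal, purely hypothetical sketch (plain Python, 1-D "frames" of brightness values for brevity): blending just averages the two frames in place, while vector-based interpolation moves each pixel half-way along a motion vector the engine already knows.

```python
def blend(frame_a, frame_b):
    # Naive "TV-style" interpolation in its crudest form: average the
    # two frames pixel by pixel. Moving objects ghost, because their
    # pixels are averaged where they WERE, not where they are going.
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def reproject(frame_a, vectors):
    # Vector-based interpolation: shift each pixel half-way along a
    # known per-pixel motion vector (the kind an engine can supply
    # directly). A real implementation resolves occlusion with Z-depth;
    # this sketch simply overwrites.
    mid = list(frame_a)
    for x, v in enumerate(vectors):
        if v:
            mid[x] = 0  # the pixel leaves its old position
    for x, v in enumerate(vectors):
        if v:
            target = x + v // 2
            if 0 <= target < len(mid):
                mid[target] = frame_a[x]
    return mid
```

With a bright pixel moving two positions to the right, `blend` smears it across both positions, while `reproject` places it cleanly at the midpoint.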

          • echo64@lemmy.world · edited · 10 months ago

            No. TVs do not quite literally blend two frames. They use the same techniques as video codecs to extract rudimentary motion vectors by comparing frames, then do motion interpolation with them.

            Please, if you want to talk about this, we can talk about this, but you have to understand that you are wrong here. The Samsung TV I had a decade ago did this; it’s been standard for a very long time.

            Again, TVs do not "literally blend two frames," and if they did, they wouldn’t have the input-lag problems this feature has, since they need a few frames of derived motion vectors to make anything look good.

            They do not need to know what is foreground or background, and they don’t need to know what’s a UI element or not; they need to know which pixels moved between two frames, then generate intermediate frames that move those pixels along the estimated vectors.

            Modern engines already have this information available and use it for a few things, so they can provide it directly. A TV has to estimate it.
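The estimation step the TV has to do can be sketched as classic block matching (a hypothetical pure-Python illustration on 1-D frames; real hardware works on 2-D blocks): slide each block of the previous frame over the next frame and keep the offset with the lowest sum of absolute differences. This search is exactly the work an engine-supplied vector buffer makes unnecessary.

```python
def estimate_motion(frame_a, frame_b, block=2, search=3):
    # Block matching, as used by video codecs and TV interpolators:
    # for each block of frame_a, find the offset into frame_b with the
    # lowest sum of absolute differences (SAD). That offset is the
    # block's estimated motion vector.
    n = len(frame_a)
    vectors = []
    for start in range(0, n - block + 1, block):
        ref = frame_a[start:start + block]
        best_off, best_sad = 0, float("inf")
        for off in range(-search, search + 1):
            lo = start + off
            if lo < 0 or lo + block > n:
                continue  # candidate block would fall off the frame
            cand = frame_b[lo:lo + block]
            sad = sum(abs(r - c) for r, c in zip(ref, cand))
            if sad < best_sad:
                best_off, best_sad = off, sad
        vectors.append(best_off)
    return vectors
```

For a bright block that shifts two positions between frames, the search recovers a vector of 2 for that block; uniform regions are ambiguous, which is one reason estimated vectors are noisier than engine-supplied ones.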

            • kadu@lemmy.world · 10 months ago

              Deleted my previous comment, as we just got more data from reviewers who had early access to AMD’s tech:

              Exactly as I explained a billion times - and you tried to dismiss - FSR 3 does not work like a TV’s frame interpolation. However, AMD’s Fluid Motion Frames, which will be enabled as a fallback when FSR 3 isn’t supported by a game, is exactly like the TV tech. They made it pretty clear it will look worse, but it might still be worth enabling thanks to specific work done to avoid UI distortions.

              So here we go - it’s not the TV tech, what a massive surprise!

              • echo64@lemmy.world · edited · 10 months ago

                Rolling my eyes so hard at this entire thread.

                You: doing this on shader units is bad! Not possible! Uses too much compute!

                Me: this tech has existed for over a decade on TVs, and there is motion-interpolation software you can get today that will do the same thing TVs do on compute, and it works fine even on bad cards.

                You: TVs just blend frames. This is different; it uses motion vectors!

                Me: TVs use motion vectors. They compute them, whereas if you hook it up via AMD’s thing, you don’t need to compute them.

                You: No, this is different, because if you hook it up via AMD’s thing, you don’t need to compute them, and it can look better.

                <— We are here.

                You’ve absolutely lost the thread of what you’re mad about. You’re now agreeing with me, but you want to fixate on this as a marker of how it’s not the same thing as TVs, even though it’s the same thing as TVs minus the motion estimation, exactly like I’ve been saying this entire time. You’re desperate to find some way to say "no, I was right!" and win, even though you’ve lost the thread you were originally on.

                Maybe we need to reframe this. How is it not possible, or a bad idea, to do this on shader units? That’s what you were mad about. How is this tech - which you say is totally different from TV tech, but also the same as, and less compute-heavy than, TV tech - bad to run on shader units?

                • kadu@lemmy.world · 10 months ago

                  Love it when people write five paragraphs to try and distract from the fact that they were wrong all along, especially when the manufacturer confirms it.