TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can increase FPS by 2-3x.

  • FSR 3 can run on any GPU, including consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall; the public launch will be by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well I think we can expect tons of games (especially on console) to make use of it.

  • DarkThoughts@kbin.social

    Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

    So, no Vulkan?

    • Ranvier@sopuli.xyz

      I’m not sure; I’ve been trying to find the answer. But they’ve stated FSR3 will continue to be open source, and prior versions have supported Vulkan on the developer end. It sounds like this is a solution for using it in games that didn’t necessarily integrate it, though? So it might be separate. Unclear.

  • echo64@lemmy.world

    For anyone confused about what this is: it’s your TV’s motion smoothing feature, but less laggy. It may let 60fps fans on console get their 60fps with only a small drop in resolution or graphical features. But that’s yet to be seen.

    • NewNewAccount@lemmy.world

      Looks like there are two versions. One is built into the game itself and far more advanced than what your TV can do. The other, supporting all DX11 and DX12 games, is like the soap opera effect from your TV.

      • echo64@lemmy.world

        I don’t think so; there’s nothing I can see that suggests that. The only real differences are likely to do with lag. There’s nothing suggesting a quality difference between a game having it built in vs. you forcing it on a game.

        • simple@lemm.eeOP

          Eurogamer confirmed there is a difference:

          The principles are similar to DLSS 3, but the execution is obviously different as unlike the Nvidia solution, there are no AI or bespoke hardware components in the mix. A combination of motion vector input from FSR 2 and optical flow analysis is used.

          AMD wanted to show us something new and very interesting. Prefaced with the caveat that there will be obvious image quality issues in some scenarios, we saw an early demo of AMD Fluid Motion Frames (AFMF), which is a driver-level frame generation option for all DirectX 11 and DirectX 12 titles. […] This is using optical flow only. No motion vector input from FSR 2 means that the best AFMF can do is interpolate a new frame between two standard rendered frames, similar to the way a TV does it - albeit with far less latency. The generated frames will be ‘coarser’ without the motion vector data.
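For anyone curious what "interpolating a new frame between two rendered frames along motion vectors" means in practice, here is a minimal NumPy sketch. This is a deliberately simplified toy model, not AMD's or Nvidia's actual algorithm: given the previous and next frames plus a per-pixel flow field, it samples the earlier frame half a step backwards along the vector, the later frame half a step forwards, and averages the two taps. The key point from the Eurogamer quote is where that flow comes from: an engine can supply it directly, while a TV (or AFMF's optical-flow-only path) has to estimate it.

```python
import numpy as np

def interpolate_midframe(prev_frame, next_frame, flow):
    """Synthesize the frame halfway between prev_frame and next_frame.

    flow[y, x] = (dx, dy) is the motion, in pixels, of the content
    passing through (x, y) at the midpoint. In a game engine these
    vectors are known exactly; a TV must estimate them from the images.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dx = np.round(flow[..., 0] * 0.5).astype(int)
    dy = np.round(flow[..., 1] * 0.5).astype(int)
    # Sample half a step backwards in the earlier frame and half a step
    # forwards in the later frame, then average the two taps.
    px = np.clip(xs - dx, 0, w - 1); py = np.clip(ys - dy, 0, h - 1)
    nx = np.clip(xs + dx, 0, w - 1); ny = np.clip(ys + dy, 0, h - 1)
    return 0.5 * prev_frame[py, px] + 0.5 * next_frame[ny, nx]
```

With good vectors the moving content lands sharply at its halfway position; a plain blend (no vectors at all) would instead leave two half-bright ghosts, which is why the vector-less generated frames are "coarser".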

          • echo64@lemmy.world

            It’s part of their suite of tools, which includes other things like lag-reduction tech. In addition, if your game isn’t DX11 or DX12, you can still provide it to the user; the generic version only works with DX11/12.

            Also, just like Nvidia, they pay developers to add these things to games.

  • Carlos Solís@communities.azkware.net

    Given that it will eventually be open-source: I hope somebody hooks this up to a capture card, to get relatively lag-less motion smoothing for console games locked to 30.

  • cordlesslamp@lemmy.today

    Guys, what would be a better purchase?

    1. Used 6700 XT for $200

    2. Used 3060 12GB for $220

    3. None of the used - get a new $300 card for the 2-year warranty.

    4. Other recommendations?

    • simple@lemm.eeOP

      $200 for the 6700 XT is a pretty good deal. It’s up to you whether you’d prefer buying used or getting something with a warranty.

  • Blackmist@feddit.uk

    Anybody tried frame generation for VR? Does it work well there, or are the generated frames just out enough to break the illusion?

    • Brawler Yukon@lemmy.world

      DLSS 3 and FSR 2 do completely different things. DLSS 2 is miles ahead of FSR 2 in the upscaling space.

      AMD currently doesn’t have anything that can even be compared to DLSS 3. Not until FSR 3 releases (next quarter, apparently?) and we can compare AMD’s frame-generation solution to Nvidia’s.

  • kadu@lemmy.world

    Jesus Christ, doing frame generation on shader units… this will look horrible, considering that FSR 2 only needs to upscale and is already far behind DLSS or XeSS (on Intel hardware, not the generic fallback).

    • Hypx@kbin.social

      People made the same claim about DLSS 3. But those generated frames are barely perceptible and certainly less noticeable than frame stutter. As long as FSR 3 works half-decently, it should be fine.

      And the fact that it works on older GPUs, including those from Nvidia, really shows that Nvidia was just blocking the feature in order to sell more 4000-series GPUs.

      • CheeseNoodle@lemmy.world

        Frame generation is limited to 40-series GPUs because Nvidia’s solution is dependent on their latest hardware. The improvements to DLSS itself and the new raytracing stuff work on 20/30-series GPUs. That said, FSR 3 is fantastic news; competition benefits us all, and I’d love to see it compete with DLSS itself on Nvidia GPUs.

        • Hypx@kbin.social

          If FSR 3 supports frame generation on 20/30-series GPUs, it makes you wonder whether they’ll port it to older GPUs anyway.

      • kadu@lemmy.world

        That’s because DLSS 3 uses hardware that sits idle during normal rendering, plus actual neural networks.

        This is different from asking shader units that are already extremely busy to share resources with FSR. That’s why FSR can’t handle things like occlusion nearly as well as DLSS.

        So scale that up to generating entire frames, rather than upscaling, and you can expect some ugly results.

        And no - when the hardware is capable, Nvidia backports features. Video upscaling is available for older GPUs, and the newly announced DLSS Ray Reconstruction is also available. DLSS 3 is restricted because it actually does require extra hardware to allow the tensor cores to read the framebuffer, generate an image in VRAM, and deliver it without disrupting the normal flow.

        EDIT: downvotes don’t change how the technology works, but sure. This community is weirdly defensive of AMD. I have a Steam Deck; it’s in my best interest for FSR 3 to work well - it just can’t, fundamentally, due to how it works.

        • Hypx@kbin.social

          You aren’t going to use these features on extremely old GPUs anyway. Most newer GPUs will have spare shader compute capacity that can be used for this purpose.

          Also, all performance is based on compromise. It is often better to render at a lower resolution with all of the rendering features turned on, then use upscaling and frame generation to get back to the target resolution and FPS, than it is to render natively at that resolution and FPS. This is often a better use of existing resources even if you don’t have extra power to spare.
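The compromise described above is easy to put rough numbers on. A back-of-the-envelope sketch (illustrative pixel counts only, not benchmarks - real cost does not scale perfectly linearly with pixels shaded): targeting 4K60 by rendering internally at 1440p, upscaling, and generating every other frame shades far fewer pixels per second than native 4K60.

```python
# Rough pixel-shading arithmetic for the upscale + framegen tradeoff.
native_w, native_h, target_fps = 3840, 2160, 60

# Render internally at 1440p and generate every other frame,
# so only 30 of the 60 displayed frames are fully rendered.
internal_w, internal_h, rendered_fps = 2560, 1440, 30

native_load = native_w * native_h * target_fps        # pixels shaded/sec
upscaled_load = internal_w * internal_h * rendered_fps

print(f"native 4K60:            {native_load / 1e6:.0f} Mpix/s")
print(f"1440p30 + upscale + FG: {upscaled_load / 1e6:.0f} Mpix/s")
print(f"ratio: {native_load / upscaled_load:.1f}x less shading work")
```

That headroom is what lets the remaining rendering features stay turned on, which is the tradeoff being argued for here.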

    • hark@lemmy.world

      The hit will be less than the hit of trying to run native 4k.

      • kadu@lemmy.world

        Frame generation isn’t used to run at higher resolutions, usually.

          • kadu@lemmy.world

            Hopefully; let’s see how it scales. Using shaders to do it carries a heavy penalty. I’m hoping that at 40 FPS it’s good enough to be used on a Deck.

    • Edgelord_Of_Tomorrow@lemmy.world

      You’re getting downvoted, but this is likely to be correct. DLSS frame generation looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.

      • echo64@lemmy.world

        You guys are talking about this as if it’s some new, super-expensive tech. It’s not. The chips they throw inside TVs, which are massively cost-reduced, do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation; it works just fine even on super old GPUs with terrible compute.

        It’s really not that expensive.

        • kadu@lemmy.world

          Frame generation is fundamentally different from TV interpolation. It takes into account motion vectors and Z depth.

          If you think DLSS 3 is like an LG TV interpolating a movie, you’ve never used the feature and have a very superficial understanding of how it’s implemented.

          • echo64@lemmy.world

            Yeah, it does, which is something TV tech has to derive itself. It’s actually less complicated in a fun kind of way. But please, do continue to explain how it’s more compute-heavy.

            • kadu@lemmy.world

              You surely realize that a video stream doesn’t carry information like what’s a foreground object vs. background, a UI element vs. an object, or a shadow vs. an obstacle.

              TVs quite literally blend two frames, with some attempt at interpreting when large blocks change value. That’s it.

              DLSS 3.0 uses actual geometry being handled by the GPU cores to come up with a prediction for the next frame, then merges this prediction with the previous frame.

              • echo64@lemmy.world

                No. TVs do not "quite literally blend two frames." They use the same techniques as video codecs to extract rudimentary motion vectors by comparing frames, then do motion interpolation with them.

                Please, if you want to talk about this, we can talk about this, but you have to understand that you are wrong here. The Samsung TV I had a decade ago did this; it’s been standard for a very long time.

                Again, TVs do not "literally blend two frames," and if they did, they wouldn’t have the input lag problems they have with this feature, since they need a few frames of derived motion vectors to make anything look good.

                They don’t need to know what is foreground or background, or what’s a UI element; they need to know which pixels moved between two frames and generate intermediate frames that move those pixels along the estimated vector.

                Modern engines have this information available and already use it for a few things; a TV has to estimate it.
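The codec-style estimation being described can be sketched in a few lines. This is a toy illustration of block matching (a simplified stand-in for what TV processors and video encoders do, not any vendor's actual algorithm): for each block of the previous frame, search nearby offsets in the next frame for the best match by minimum sum of absolute differences (SAD), and use that offset as the block's motion vector.

```python
import numpy as np

def estimate_block_motion(prev_frame, next_frame, block=4, search=2):
    """Estimate per-block motion vectors by exhaustive block matching.

    Returns {(bx, by): (dx, dy)}: for each block of prev_frame, the
    offset within +/-search pixels whose patch in next_frame has the
    minimum sum of absolute differences (SAD).
    """
    h, w = prev_frame.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev_frame[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    # skip candidate patches that fall off the frame
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = next_frame[y:y + block, x:x + block]
                        sad = np.abs(ref - cand).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dx, dy)
            vectors[(bx, by)] = best
    return vectors
```

The exhaustive search over every offset for every block is the expensive part; an engine that already knows its motion vectors skips this step entirely, which is the asymmetry being argued about in this thread.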

                • kadu@lemmy.world

                  Deleted my previous comment, as we just got more data from reviewers who got early access to AMD’s tech:

                  Exactly as I explained a billion times - and you tried to dismiss - FSR3 does not work like a TV’s frame interpolation. However, AMD’s Fluid Motion Frames, which will be enabled as a fallback when FSR3 isn’t supported by a game, is exactly like the TV tech, and they made it pretty clear it will look worse, though it might still be worth enabling thanks to specific work done to avoid UI distortions.

                  So here we go - it’s not the TV tech. What a massive surprise!