- cross-posted to:
- pcmasterrace@lemmy.world
TL;DW:

- FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, up to 2-3x.
- FSR 3 can run on any GPU, including consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.
- Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.
- Games will start using it by early fall; the public launch will be by Q1 2024.
It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well, I think we can expect tons of games (especially on console) to make use of it.
Jesus Christ, doing frame generation on shader units… this will look horrible, considering that FSR 2 only has to upscale and is already far behind DLSS and XeSS (on Intel hardware, not the generic fallback).
People made the same claim about DLSS 3. But those generated frames are barely perceptible and certainly less noticeable than frame stutter. As long as FSR 3 works half-decently, it should be fine.
And the fact that it works on older GPUs, including those from Nvidia, really shows that Nvidia was just blocking the feature in order to sell more 4000-series GPUs.
Frame generation is limited to 40-series GPUs because Nvidia’s solution is dependent on their latest hardware. The improvements to DLSS itself and the new ray tracing stuff work on 20/30-series GPUs. That said, FSR 3 is fantastic news; competition benefits us all, and I’d love to see it compete with DLSS itself on Nvidia GPUs.
If FSR 3 supports frame generation on 20/30-series GPUs, it makes you wonder whether Nvidia will port its frame generation to older GPUs anyway.
If they did I’m pretty sure it would just be worse than FSR given the hardware requirements.
That’s because DLSS 3 uses dedicated hardware that sits idle and isn’t fighting for cycles during normal rendering, and it uses actual neural networks.
That’s different from making shader units that are already extremely busy share resources with FSR, which is why FSR can’t handle things like occlusion nearly as well as DLSS.
So scale that up from upscaling to generating an entire frame, and you can expect some ugly results.
And no - when the hardware is capable, Nvidia backports features. Video upscaling is available for older GPUs, the newly announced DLSS Ray Reconstruction is also available. DLSS 3 is restricted because it actually does require extra hardware to allow the tensor cores to read the framebuffer, generate an image in VRAM, and deliver it without disrupting the normal flow.
EDIT: downvotes don’t change how the technology works, but sure. This community is weirdly defensive of AMD. I have a Steam Deck, it’s in my best interest for FSR 3 to work well - it just can’t, fundamentally, due to how it works.
You aren’t going to use these features on extremely old GPUs anyways. Most newer GPUs will have spare shader compute capacity that can be used for this purpose.
Also, all performance is based on compromise. It is often better to render at a lower resolution with all of the rendering features turned on, then use upscaling & frame generation to get back to the same resolution and FPS, than it is to render natively at the intended resolution and FPS. This is often a better use of existing resources even if you don’t have extra power to spare.
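To put some toy numbers on that trade-off, here’s a quick back-of-envelope sketch in Python. Every millisecond figure in it is a made-up placeholder (not a measured FSR or DLSS cost), just to show why rendering low and reconstructing up can come out ahead:

```python
# Rough back-of-envelope sketch of the "render low, reconstruct up" trade-off.
# All numbers below are made-up placeholders, not measured FSR/DLSS costs.

NATIVE_4K_MS = 25.0      # hypothetical cost to render a native 4K frame
RENDER_1440P_MS = 12.0   # same scene rendered at 1440p
UPSCALE_MS = 1.5         # hypothetical upscaler cost per frame
FRAMEGEN_MS = 2.0        # hypothetical cost to synthesize one extra frame

# Native: one real frame every 25 ms -> ~40 FPS
native_fps = 1000.0 / NATIVE_4K_MS

# Upscaled: render + upscale every 13.5 ms -> ~74 FPS
upscaled_ms = RENDER_1440P_MS + UPSCALE_MS
upscaled_fps = 1000.0 / upscaled_ms

# Upscaled + frame generation: each rendered frame is followed by one
# generated frame, so two displayed frames per (render + upscale + framegen).
framegen_fps = 2.0 * 1000.0 / (upscaled_ms + FRAMEGEN_MS)

print(f"native 4K:            {native_fps:5.1f} FPS")
print(f"1440p + upscale:      {upscaled_fps:5.1f} FPS")
print(f"1440p + upscale + FG: {framegen_fps:5.1f} FPS")
```

With those placeholder numbers, the upscaled path nearly doubles the frame rate of native 4K and frame generation roughly doubles it again, which is the kind of math this argument rests on.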
The hit will be less than the hit of trying to run native 4k.
Frame generation isn’t used to run at higher resolutions, usually.
Either way, it pays for itself.
Hopefully. Let’s see how it scales; using shaders to do it is a heavy penalty. I’m hoping it’s good enough at 40 FPS to be used on a Deck.
You’re getting downvoted, but this will turn out to be correct. DLSS frame generation looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.
You guys are talking about this as if it’s some new, super-expensive tech. It’s not. The massively cost-reduced chips they throw inside TVs do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation, and it works just fine even on super old GPUs with terrible compute.
It’s really not that expensive.
Frame generation is fundamentally different from TV interpolation. It takes into account the engine’s motion vectors and the Z depth.
If you think DLSS 3 is like an LG TV interpolating a movie, you’ve never used the feature and have a very superficial understanding of how it’s implemented.
Yeah, it does, which is information TV tech has to derive itself. TVs have to figure that stuff out on their own; in a fun kind of way, that actually makes this less complicated. But please, do continue to explain how it’s more compute-heavy.
You surely realize that a decoded video stream doesn’t carry information like what’s a foreground object versus the background, or a UI element versus a world object, or a shadow versus an obstacle.
TVs quite literally blend two frames, with some attempt at interpreting when large blocks change value. That’s it.
DLSS 3.0 uses actual geometry being handled by the GPU cores to come up with a prediction for the next frame, then merges this prediction with the previous frame.
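As a rough illustration of the difference, here’s a toy NumPy sketch of what reprojecting the previous frame along engine-supplied per-pixel motion vectors could look like. The function name and the half-frame scaling are my own simplifications, and the real thing is far more involved:

```python
import numpy as np

def warp_with_motion_vectors(prev_frame, motion_vectors):
    """Toy sketch: reproject the previous frame along per-pixel motion
    vectors supplied by the engine (the key input a TV never gets).
    prev_frame:     (H, W, 3) float image
    motion_vectors: (H, W, 2) per-pixel (dx, dy) offsets in pixels
    Illustrative only: real frame generation interpolates between two
    rendered frames and also uses depth, occlusion masks and a smarter
    blend, all of which are omitted here.
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Where each pixel is predicted to land half a frame later.
    x_new = np.clip((xs + 0.5 * motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    y_new = np.clip((ys + 0.5 * motion_vectors[..., 1]).round().astype(int), 0, h - 1)

    # Forward-splat the old colors to their new positions (no hole filling;
    # overlapping pixels simply overwrite each other, which real code must handle).
    generated = np.zeros_like(prev_frame)
    generated[y_new, x_new] = prev_frame[ys, xs]
    return generated
```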
No. TVs do not "quite literally blend two frames." They use the same techniques as video codecs to extract rudimentary motion vectors by comparing frames, then do motion interpolation with them.
Please, if you want to talk about this, we can talk about this, but you have to understand that you are wrong here. The Samsung TV I had a decade ago did this; it’s been standard for a very long time.
Again, TVs do not "literally blend two frames," and if they did, they wouldn’t have the input-lag problems they have with this feature, since they need a few frames of derived motion vectors to make anything look good.
They do not need to know what is foreground or background, and they don’t need to know what is or isn’t a UI element; they need to know which pixels moved between two frames, and then generate intermediate frames that move those pixels along the estimated vectors.
Modern engines already have this information available (it’s used for a few other things), so they can simply provide it. A TV has to estimate it.
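For contrast, here’s a toy sketch of the block-matching motion estimation a TV (or a video codec) has to do because it only ever sees finished frames. The block and search-window sizes are arbitrary, and real implementations are far more refined (hierarchical search, overlapped blocks, and so on):

```python
import numpy as np

def block_match(frame_a, frame_b, block=16, search=8):
    """Toy block-matching motion estimation, roughly the kind of thing a TV
    has to do: for each block in frame_a, find the best-matching block in
    frame_b within a small search window, using a sum-of-absolute-differences
    (SAD) cost. frame_a and frame_b are 2D grayscale arrays of equal shape.
    Returns one (dx, dy) vector per block.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    h, w = a.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = a[by:by + block, bx:bx + block]
            best_cost, best_vec = np.inf, (0, 0)
            # Exhaustive search over a small window of candidate offsets.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = b[y:y + block, x:x + block]
                    cost = np.abs(ref - cand).sum()
                    if cost < best_cost:
                        best_cost, best_vec = cost, (dx, dy)
            vectors[by // block, bx // block] = best_vec
    return vectors
```

That estimation step is exactly the work a game engine can skip by handing its own motion vectors straight to the frame-generation pass.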
deleted by creator
Deleted my previous comment as we just got more data from reviewers that got early access to AMD’s tech:
Exactly as I explained a billion times - and you tried to dismiss - FSR 3 does not work like a TV’s frame interpolation. However, AMD’s Fluid Motion Frames, which will be enabled as a fallback when FSR 3 isn’t supported by a game, is exactly like the TV tech, and they made it pretty clear it will look worse, though it might still be worth enabling thanks to specific work done to avoid UI distortions.
So here we go - it’s not the TV tech, what a massive surprise!
Rolling my eyes so hard at this entire thread.
You: doing this on shader units is bad! Not possible! Uses too much compute!
Me: this tech has existed for over a decade on TVs, and there is motion-interpolation software you can get today that will do the same thing TVs do on compute, and it works fine even on bad cards
You: TVs just blend frames. This is different, it uses motion vectors!
Me: TVs use motion vectors. They compute them, whereas if you hook it up via AMD’s thing, you don’t need to compute them
You: No, this is different because if you hook it up via AMD’s thing, you don’t need to compute them, and it can look better
<— We are here.
You’ve absolutely lost the thread on what you are mad about. You’re now agreeing with me, but you want to fixate on this as a marker of how it’s not the same thing as TVs, even though it’s the same thing as TVs minus the motion estimation, exactly like I have been saying this entire time. You’re desperate to find some way to say "no, I was right" and win, even though you’ve lost track of what you were originally talking about.
Maybe we need to reframe this. How is this not possible, or a bad idea, to do on shader units? That’s what you were mad about. How can this be totally different from TV tech, yet also the same as TV tech and less compute-heavy than it, and still be bad to run on shader units?