Imagine if we could hook this into game rendering as well: have super high-resolution models, textures, shadows, etc. near where the player is looking, and use lower LoDs elsewhere.
It could really push the boundaries of detail and efficiency, if we could somehow do it real-time for something that complex. (Streaming video sounds a lot easier)
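Roughly what I have in mind, as a hedged sketch (the angular thresholds and LoD offsets are made up for illustration, not from any real engine):

    import math

    # Hypothetical gaze-driven LoD picker. Both directions are assumed to be
    # normalized 3D view-space vectors; thresholds are purely illustrative.
    def pick_lod(obj_direction, gaze_direction, base_lod=0):
        """Return a LoD level: 0 = full detail near the gaze, higher = coarser."""
        # Angle between where the player is looking and where the object sits.
        dot = sum(a * b for a, b in zip(obj_direction, gaze_direction))
        dot = max(-1.0, min(1.0, dot))          # clamp for acos safety
        eccentricity = math.degrees(math.acos(dot))

        if eccentricity < 5:      # foveal region: full-res models/textures/shadows
            return base_lod
        elif eccentricity < 20:   # near periphery: drop one level
            return base_lod + 1
        else:                     # far periphery: cheapest LoD
            return base_lod + 3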
Foveated rendering is already a thing, but since it needs to be coded for in the game, it's not really being used in PC games. Games designed for PlayStation with PS VR2 in mind do use foveated rendering, since the developers know the game will be played on hardware that provides eye tracking.
That's foveated rendering. Foveated streaming, which is newly presented here, is a more general approach which can apply to any video signal, be it from a game, movie or desktop environment.
They are complementary things. Foveated rendering means your GPU has to do less work, which means higher frame rates for the same resolution/quality settings. Foveated streaming is more about just being able to get video data across from the rendering device to the headset. You need both to get great results, as either rendering or video transport could be the bottleneck.
Not quite: you can use it for game rendering, but with a Wi-Fi adapter you more importantly want to use it for the video signal, and only transfer high-res in the area you're looking at. A 4K game (2048×2048, two screens) is roughly 25 Gbit/s uncompressed at 100 fps, which would stress even Wi-Fi 7. That way you can probably get it down to 8 Gbit/s easily.
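Rough arithmetic behind that figure (a minimal back-of-the-envelope sketch; the ~25 Gbit/s number assumes roughly 10-bit colour, i.e. ~30 bits per pixel uncompressed):

    # Uncompressed video bandwidth for 2 eyes at 2048x2048, 100 fps.
    # Bits-per-pixel is the main knob; these are illustrative values.
    width, height, eyes, fps = 2048, 2048, 2, 100

    def uncompressed_gbit_per_s(bits_per_pixel):
        return width * height * eyes * fps * bits_per_pixel / 1e9

    print(uncompressed_gbit_per_s(24))  # 8-bit colour  -> ~20.1 Gbit/s
    print(uncompressed_gbit_per_s(30))  # 10-bit colour -> ~25.2 Gbit/s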
Valve is applying it to the streamed view from the computer to reduce the bandwidth requirements; it's not actually doing foveated rendering in the game itself, because not all games support it.
Foveated streaming is just a bandwidth hack and doesn't reduce the graphics requirements on the host computer the same way foveated rendering does.
As a lover of ray/path tracing I'm obligated to point out: rasterisation gets its efficiency by amortising the cost of per-triangle setup over many pixels. This more or less forces you to do fixed-resolution rendering; it's very efficient at this, which is why even today with hardware RT, rasterisation remains the fastest and most power-efficient way to do visibility processing (under certain conditions). However, this efficiency starts to drop off as soon as you want to do things like stencil reflections, and especially shadow maps, to say nothing of global illumination.
While there are some recent-ish extensions to do variable-rate shading in rasterisation[0], this isn't variable-rate visibility determination (well, you can do stochastic rasterisation[1], but it's not implemented in hardware), and with ray tracing you can do as fine-grained a distribution of rays as you like.
TL;DR for foveated rendering, ray tracing is the efficiency king, not rasterisation. But don't worry, ray tracing will eventually replace all rasterisation anyway :)
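To make the "fine-grained distribution of rays" point concrete, here's an illustrative sketch (the thresholds and sample counts are invented, not from any real renderer):

    import math, random

    # Per-pixel ray budgets that fall off with angular distance from the
    # gaze point. Rasterisation can't easily vary visibility sampling
    # per pixel like this, but a ray tracer can.
    def rays_for_pixel(px, py, gaze_px, gaze_py, pixels_per_degree=20):
        eccentricity_deg = math.hypot(px - gaze_px, py - gaze_py) / pixels_per_degree
        if eccentricity_deg < 5:
            return 4        # foveal region: supersample
        elif eccentricity_deg < 20:
            return 1        # near periphery: one ray per pixel
        else:
            # far periphery: stochastically trace ~1 ray per 4 pixels
            return 1 if random.random() < 0.25 else 0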
I think you could do foveated rendering efficiently with rasterization if you "simply" render twice at two different resolutions: a low-resolution render over the entire FOV, and a higher-resolution render in the fovea region. You would have overlap, but overall fewer pixels would be rendered.
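Quick sanity check on the pixel counts for that two-pass idea (the resolutions and inset size here are arbitrary, just to show the ballpark):

    # One low-res full-FOV pass plus a high-res inset around the fovea,
    # compared against rendering the whole eye buffer at native res.
    full_w, full_h = 2048, 2048               # native per-eye target

    low_w, low_h = full_w // 2, full_h // 2   # full-FOV pass at half res
    inset_w, inset_h = 600, 600               # fovea inset at native res

    naive    = full_w * full_h
    foveated = low_w * low_h + inset_w * inset_h

    print(naive, foveated, foveated / naive)  # ~0.34x the pixels, despite overlap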
I believe the standard way is to downgrade the sampling density outside the area you're looking at, see https://docs.vulkan.org/samples/latest/samples/extensions/fr... . Optimally you could attach multiple buffers with different resolutions covering different parts of clip space, saving VRAM bandwidth. Sadly that's not currently supported to my knowledge, so you have to write to a single giant buffer with a lower sample resolution outside the detail area, and then just downsample it for the coarse layer.
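For illustration, a sketch of what building such a rate map could look like on the CPU side (the tile size and rate strings are placeholders, not the actual VK_KHR_fragment_shading_rate encoding):

    import math

    # One texel per framebuffer tile, with coarser shading rates the
    # further the tile is from the gaze point. Thresholds are arbitrary.
    def build_rate_map(fb_w, fb_h, gaze_x, gaze_y, tile=32):
        rates = []
        for ty in range(math.ceil(fb_h / tile)):
            row = []
            for tx in range(math.ceil(fb_w / tile)):
                cx, cy = tx * tile + tile / 2, ty * tile + tile / 2
                dist = math.hypot(cx - gaze_x, cy - gaze_y)
                if dist < 0.15 * fb_w:
                    row.append("1x1")   # full rate in the fovea
                elif dist < 0.35 * fb_w:
                    row.append("2x2")   # quarter rate in the near periphery
                else:
                    row.append("4x4")   # 1/16 rate in the far periphery
            rates.append(row)
        return rates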