
And the RT patches are making it to upstream, which together with userland scheduling (sched_ext) can make a huge difference in frame rates and latency when gaming.


RT is entirely about guarantees regarding response time. It is generally slower overall than non-real-time in basically any scenario where that guarantee doesn't matter. Video games are purposefully written so as not to be latency critical, so RT would be a net loss.
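
To make "latency critical" concrete: on Linux it usually means a thread that opts into a real-time scheduling class and locked memory, and the rest of the system pays for that guarantee. A rough sketch with the standard POSIX calls (needs root or CAP_SYS_NICE, not anything a game actually ships):

    /* Sketch of a latency-critical thread: lock memory to avoid page
       faults, then request a fixed real-time priority. On a PREEMPT_RT
       kernel this thread's wakeup latency is bounded; the rest of the
       system gives up throughput to make that possible. */
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        struct sched_param sp;
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 80;               /* SCHED_FIFO priorities run 1..99 */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ...time-critical work goes here... */
        return 0;
    }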


I reckon it might be interesting for esports, those types of games where people turn graphics settings way down and purchase very overpowered hardware to push FPS into the hundreds, resulting in lower latencies.


Since RT sacrifices throughput for latency, you would likely see a drop in FPS, which in turn would also reduce the input processing frequency. sched_ext is much more interesting in that regard, as IIRC lavd[1] is for example designed to (ab)use the frequent, repetitive access/processing patterns of game engines.

[1]: https://github.com/sched-ext/scx/tree/main/scheds/rust/scx_l...


Generally slower at completing long tasks, sure, but is that the real workload of the Deck?

I feel there are enough aspects and subtleties to consider that I wouldn't trust anyone, particularly not myself, to guess how things are really going to fare. We will likely have a phase of experimentation and tweaking that will improve things with and without RT patches as more workloads are considered and things are ironed out.


I doubt they wanna run an RT kernel in SteamOS. There are severe costs to doing so.


Are there some benchmarks for that, or a good discussion somewhere?

Because I was wondering about that myself. An RT kernel sounds at first like it might help achieve better latency in games, but the kernel sits so deep in the stack, and so much depends on its overall performance, that the costs of RT mode might hurt much more than one would gain. But I don't really know how that plays out in practice.
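
For hard numbers people usually run a wakeup-latency test (rt-tests' cyclictest is the standard tool) on both kernels, rather than game benchmarks. The idea, as a rough sketch assuming CLOCK_MONOTONIC and a 1 ms period:

    /* Rough sketch of what a wakeup-latency benchmark measures: sleep
       until an absolute deadline, record how late the wakeup was, and
       report the worst case. RT kernels aim to bound this number; they
       do not make the work between wakeups any faster. */
    #include <stdio.h>
    #include <time.h>

    #define NSEC_PER_SEC 1000000000L
    #define PERIOD_NS    1000000L   /* 1 ms */
    #define LOOPS        100000

    static void advance(struct timespec *t, long ns)
    {
        t->tv_nsec += ns;
        while (t->tv_nsec >= NSEC_PER_SEC) {
            t->tv_nsec -= NSEC_PER_SEC;
            t->tv_sec += 1;
        }
    }

    int main(void)
    {
        struct timespec next, now;
        long worst = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < LOOPS; i++) {
            advance(&next, PERIOD_NS);
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);
            long late = (now.tv_sec - next.tv_sec) * NSEC_PER_SEC
                      + (now.tv_nsec - next.tv_nsec);
            if (late > worst)
                worst = late;
        }
        printf("worst wakeup latency: %ld us\n", worst / 1000);
        return 0;
    }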


I don’t know. My understanding though is that RT Linux adds predictability at the cost of throughput. For games where CPU throughput isn’t important (like an action game where most of the work is pushed off onto the GPU) this would maybe help. Someone could certainly try. But for other games, like many strategy games and simulations where the CPU is the limiting factor, there would likely be a cost.

The problem is that it’s not a free win, and games are very varied. But theoretically, on something like a console, you could offer choice.


My understanding is that real-time isn't so much about lower latency (though that is also desirable), but rather about predictable, bounded latency. For the purposes of gaming, no amount of predictability at the kernel level will reduce the amount of work you need to perform to draw a frame, and that is the real limiting factor for most games.


> For the purposes of gaming, no amount of predictability at the kernel level will reduce the amount of work you need to perform to draw a frame, and that is the real limiting factor for most games.

It all depends on what you're trying to optimize for. You can maximize raw framerate numbers by having an unlimited frame queue depth, but it'll feel like shit because you won't have even frame pacing and your input latency will vary wildly. The ideal scenario in gaming is for your draw time to be perfectly deterministic, and be able to schedule things so you can read input and do your render calls just before vsync.
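
A sketch of that loop shape, with read_input / record_and_submit / present as hypothetical stand-ins for whatever the engine and graphics API actually do: rather than rendering as fast as possible into a deep queue, wake up shortly before the estimated vsync, sample input, and submit.

    /* Frame pacing sketch: budget_ns is a (made-up) estimate of how long
       CPU submit + GPU work takes; sleep until that long before the next
       vsync so input is sampled as late as possible and pacing stays even. */
    #include <time.h>

    #define NSEC_PER_SEC 1000000000L

    static void read_input(void)        { /* poll controller/mouse state */ }
    static void record_and_submit(void) { /* build and submit GPU work */ }
    static void present(void)           { /* hand the frame to the swapchain */ }

    static struct timespec at(struct timespec t, long ns)
    {
        t.tv_nsec += ns;
        while (t.tv_nsec >= NSEC_PER_SEC) { t.tv_nsec -= NSEC_PER_SEC; t.tv_sec += 1; }
        return t;
    }

    int main(void)
    {
        const long frame_ns  = NSEC_PER_SEC / 60;  /* 60 Hz display */
        const long budget_ns = 3000000L;           /* ~3 ms estimated frame cost */
        struct timespec vsync;

        clock_gettime(CLOCK_MONOTONIC, &vsync);
        for (int frame = 0; frame < 600; frame++) {   /* ~10 s demo */
            struct timespec wake = at(vsync, frame_ns - budget_ns);
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wake, NULL);

            read_input();          /* sampled just before the deadline */
            record_and_submit();
            present();             /* paced/blocked on vsync in a real engine */

            vsync = at(vsync, frame_ns);
        }
        return 0;
    }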


I assume the realtime patches brought some improvements you'll get even without compiling a hard realtime kernel. There's a whole bunch of CONFIG_PREEMPT_* options.
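
From memory, the mutually exclusive preemption models (plus the boot-time switch) are roughly:

    CONFIG_PREEMPT_NONE       - no forced kernel preemption (throughput/server)
    CONFIG_PREEMPT_VOLUNTARY  - preempt only at explicit voluntary points
    CONFIG_PREEMPT            - "full": most kernel code is preemptible
    CONFIG_PREEMPT_RT         - the RT model: threaded IRQs, sleeping spinlocks
    CONFIG_PREEMPT_DYNAMIC    - choose none/voluntary/full at boot via preempt=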


Yeah, I've been running my Fedora workstation with preempt=full for years now.



