Rollback networking is essentially event sourcing. Game states are immutable, and new game states are derived from adding inputs (events).
You keep the last dozen or so game states around in memory, and if you receive an input from the past, you rewind to the last game state prior to that input, add the input to your input stream, and fast-forward back to the present.
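To make that concrete, here is a minimal C sketch (all names invented), assuming the game state is plain-old-data with no pointers in it, so a snapshot is just a memcpy() and the simulation is deterministic:

    #include <string.h>

    #define MAX_ROLLBACK 12   /* keep roughly the last dozen frames around */

    typedef struct { int placeholder; /* all mutable game data, no pointers */ } GameState;
    typedef struct { int frame; int player; unsigned buttons; } Input;

    static GameState history[MAX_ROLLBACK];    /* snapshot taken before each frame */
    static Input     inputs[MAX_ROLLBACK][2];  /* input stream, per frame and player */
    static GameState current;
    static int       current_frame;

    /* Deterministic, single-player-style game logic. */
    static void simulate(GameState *s, const Input in[2]) { (void)s; (void)in; /* ... */ }

    static void advance_frame(void)
    {
        memcpy(&history[current_frame % MAX_ROLLBACK], &current, sizeof current);
        simulate(&current, inputs[current_frame % MAX_ROLLBACK]);
        current_frame++;
    }

    /* An input arrives for a frame we've already simulated (assumed to be no more
       than MAX_ROLLBACK frames old): rewind to that frame's snapshot and replay. */
    static void on_late_input(Input in)
    {
        inputs[in.frame % MAX_ROLLBACK][in.player] = in;
        memcpy(&current, &history[in.frame % MAX_ROLLBACK], sizeof current);
        for (int f = in.frame; f < current_frame; f++) {
            memcpy(&history[f % MAX_ROLLBACK], &current, sizeof current);
            simulate(&current, inputs[f % MAX_ROLLBACK]);
        }
    }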
It has the same base advantages and drawbacks as RTS networking - the core logic is written as though the game is single player, and complexity can be scaled arbitrarily without bloating bandwidth requirements.
But in addition, you get the benefit of zero input latency (play a multiplayer RTS and order a unit around - it won't move for 200ms or so), and the drawback of an absolute clusterfuck of time-rewind debugging madness if any of your supposedly immutable data ever gets inadvertently mutated.
The reason you do rollback with something like this is that it gives you zero latency, and you can retrofit it onto an emulator without changing any game code, just by using memcpy() on the game state.
Source: I've developed about a dozen titles using rollback networking.
I find this hard to conceptualize/unite with the player's view of the game - so if an input arrives out of order, the engine can essentially just reapply the new, adjusted stream of events to correct itself? From a data modelling perspective that seems fine.
However, in those situations, what does the player see in game? IIRC rollback was popularised in fighting games like Street Fighter - so does the player see one "universe", only for that branch to suddenly rewind and replay into an alternate universe where some tiny action did or did not happen?
That's exactly what happens. If you are writing the game yourself, you can do interpolation to fix things up gradually.
You can also delay significant events such as death until the rollback threshold has passed, so you don't run into knife-edge situations where, e.g., it looks like you died and your character starts to ragdoll, but then you snap back when it turns out you killed the enemy instead.
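A rough way to sketch that (hypothetical names, reusing MAX_ROLLBACK from the sketch above): the simulation records the death immediately, but the presentation layer only commits to it once the frame it happened on can no longer be rolled back.

    typedef struct { int frame; int entity; } PendingDeath;

    /* Hypothetical: ragdoll, sound, kill-feed entry, etc. */
    void play_death_effects(int entity);

    /* Frames at or before this can never change again. */
    static int last_confirmed_frame(int current_frame) { return current_frame - MAX_ROLLBACK; }

    /* Called once per frame; removal from the queue is omitted for brevity. */
    static void flush_pending_deaths(const PendingDeath *queue, int count, int current_frame)
    {
        for (int i = 0; i < count; i++)
            if (queue[i].frame <= last_confirmed_frame(current_frame))
                play_death_effects(queue[i].entity);
    }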
The key to it not being too disruptive is keeping the maximum rollback threshold fairly low. If you add inputs and your ping is greater than the threshold, they get delayed to a later frame, and your inputs start to feel sluggish (the server would enforce the delay, but you'd also add it client side).
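The delay rule itself is simple enough - a made-up helper, with frame times in milliseconds:

    /* If the one-way trip takes more frames than we're willing to roll back,
       the surplus becomes local input delay. */
    static int input_delay_frames(int ping_ms, int frame_ms)
    {
        int one_way_frames = (ping_ms / 2 + frame_ms - 1) / frame_ms;  /* round up */
        return one_way_frames > MAX_ROLLBACK ? one_way_frames - MAX_ROLLBACK : 0;
    }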
Thank you, these types of comments are why I frequent HN! Really insightful - the first time I came across rollback I had one of those "I love CS/SWE" moments. So I'm grateful that you're so willing to indulge my curiosity!
Out of interest, are there any toy projects out there you can point to that would let someone explore these concepts without first-hand game dev experience?
Hmm, I haven't come across any, although you can probably dive in and build a prototype system without too much trouble.
My recommendation would probably be to build it without netcode to start (two local clients connected over a virtual pipe), using a system where you can easily serialize the game state - C with memcpy(), JavaScript reading/writing JSON, Clojure or similar. I use C# with compile-time generated code to store data in slots - it's not fun.
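For the virtual pipe part, something as dumb as this is enough (made-up names, reusing the Input struct from the earlier sketch): each local client pushes its inputs into a buffer, and the other client only "receives" them a few frames later, which forces rollbacks without any real networking.

    #define PIPE_DELAY_FRAMES 3   /* pretend one-way latency */

    typedef struct { Input buf[256]; int head, tail; } VirtualPipe;

    static void pipe_send(VirtualPipe *p, Input in)
    {
        p->buf[p->head % 256] = in;
        p->head++;
    }

    /* Returns 1 and fills *out once the oldest queued input is old enough to have "arrived". */
    static int pipe_recv(VirtualPipe *p, Input *out, int current_frame)
    {
        if (p->tail == p->head) return 0;
        Input in = p->buf[p->tail % 256];
        if (current_frame < in.frame + PIPE_DELAY_FRAMES) return 0;
        p->tail++;
        *out = in;
        return 1;
    }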
While not rollback, the original AOE networking write-up is probably the best introduction to deterministic multiplayer I've come across. There's the GGPO framework that you can get off the shelf, but it's pretty heavyweight.
There are some real head-scratching moments when debugging rollback, but in general, for games that aren't too performance-intensive, it shines. I actually developed an entire strategy game prototype over a period of three weeks in single player before bothering to test whether it worked in multiplayer. It worked first try. Four days later, it was live in public beta (starjack.io if you're interested, which peaked at around 400 concurrent players).
No idea what "some of the input" means, or why you thought "Low Resolution Input" was disingenuous.
It uses color, depth, and subpixel motion vectors from 1-4 previous frames - all things that modern game engines can easily calculate.
You didn't even need to read the paper to get this info, it's literally in a picture on the blog post.
Right - so a single low-res image should not be paired with the high-res one and labelled as input and output, because that implies the algorithm turned the one into the other, which it did not do.
This isn't about upsampling low-res bitmaps. It is a technique for upsampling the output of a game engine.
The low-res image is itself output, generated from a lot of other data the game engine produces. That same data, which is already being generated anyway, can also be fed into this to improve the post-processing. Finding ways to productively reuse existing, already-generated data is the hallmark of any graphically top-tier game.
I read a detailed write-up on the graphics pipeline of GTA V on the Xbox 360. It blew my mind how many different ways they reused every single bit that ever hit RAM, which explains how they pulled off those graphics on a system with half as much RAM as an Apple Watch.
In contrast to DLSS 1, the output of the NN is not color values but sampling locations and weights, which are used to look up the color values from the previous low-resolution frames.
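Roughly - and this is my own sketch of the idea, not the paper's code - for each output pixel the network emits a handful of sample positions into the low-resolution history plus a weight for each, and the final color is just the weighted sum of those lookups.

    #define K 4   /* samples per output pixel; arbitrary choice for this sketch */

    typedef struct { float x, y; } Vec2;
    typedef struct { float r, g, b; } Color;

    /* Nearest-neighbour stand-in for a proper (bilinear) fetch from one low-res frame. */
    static Color sample_lowres(const Color *frame, int w, int h, Vec2 pos)
    {
        int x = (int)pos.x; if (x < 0) x = 0; if (x >= w) x = w - 1;
        int y = (int)pos.y; if (y < 0) y = 0; if (y >= h) y = h - 1;
        return frame[y * w + x];
    }

    /* history[i] is one of the previous low-res frames; offsets/weights come from the NN. */
    static Color resolve_pixel(const Color *history[K], int w, int h, Vec2 base,
                               const Vec2 offsets[K], const float weights[K])
    {
        Color out = {0.0f, 0.0f, 0.0f};
        for (int i = 0; i < K; i++) {
            Vec2 p = { base.x + offsets[i].x, base.y + offsets[i].y };
            Color c = sample_lowres(history[i], w, h, p);
            out.r += weights[i] * c.r;
            out.g += weights[i] * c.g;
            out.b += weights[i] * c.b;
        }
        return out;
    }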
There is a big difference between latency and throughput. FPS is throughput. If you assume the entire system is only ever producing the current frame, then the two numbers are directly correlated, but most systems - especially game engines and hardware - always have multiple things in flight in parallel.
The H.264 encoder on my CPU introduces >16.7ms of latency into a video stream, but it can encode hundreds of frames per second of SD video all day. Adding ~1 more frame of latency may be worth a quadrupling in image quality/resolution in most circumstances.
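To put made-up but illustrative numbers on it: an encoder that pipelines across a 3-frame lookahead on a 60 fps stream hands each frame back roughly 50ms (3 × 16.7ms) after it went in, even if a freshly encoded frame pops out every 2ms - that is 500 fps of throughput coexisting with 50ms of latency.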
I remember watching that talk and thinking it was interesting, but also that the guy was a little harsh on Sony. Sure, the engineers at Sony may have made some weird choices in places, but I think they likely had their reasons for doing things the way they did. And at the end of the day, Sony delivered a console that works well for its intended purpose.