
One of the big promises of HTMX is that the client doesn't have to understand the structure of the returned data, since it's pre-compiled to the presentation layer. It feels like this violates that quite heavily, since now the calling page needs to know the IDs and semantics of the different elements the server will return.

This isn't really a criticism of Datastar, though: I think the popularity of OOB in HTMX indicates that the pure form of this is too idealistic for a lot of real-world cases. But it would be nice if we could come up with a design that gives the best of both worlds.
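For concreteness, here's roughly what the out-of-band (OOB) coupling looks like from the server side. This is a hypothetical Go handler (the /cart/add route, element IDs, and markup are all made up for illustration): the hx-swap-oob fragment only lands if the calling page already contains an element with a matching id, which is exactly the knowledge the client isn't supposed to need.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // Hypothetical HTMX endpoint. The first fragment replaces whatever the
    // caller's hx-target points at; the hx-swap-oob fragment is swapped by id,
    // so the calling page must already contain an element with id="cart-count".
    func addToCart(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/html")
        fmt.Fprint(w, `<div id="cart-detail">Item added.</div>
    <span id="cart-count" hx-swap-oob="true">3</span>`)
    }

    func main() {
        http.HandleFunc("/cart/add", addToCart)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }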



The calling page knows nothing: you do an action on the client, and the server might return an updated view of the entire page. That's it.

You send down the whole page on every change. The client just renders. It's immediate mode like in video games.
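A minimal sketch of that loop in Go, assuming a renderPage function and an update signal that aren't in the original (and using plain SSE framing rather than Datastar's actual event format): every change triggers a full re-render pushed down one long-lived connection.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // renderPage is a stand-in for whatever builds the entire page as HTML.
    func renderPage() string { return `<main>...the whole page...</main>` }

    // pageUpdates is signalled whenever server-side state changes.
    var pageUpdates = make(chan struct{}, 1)

    func stream(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/event-stream")
        w.Header().Set("Cache-Control", "no-cache")
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }
        for {
            // Real HTML spans multiple lines; each line would need its own
            // "data: " prefix per the SSE spec. Kept to one line here.
            fmt.Fprintf(w, "data: %s\n\n", renderPage())
            flusher.Flush()
            select {
            case <-pageUpdates: // state changed: loop and push a fresh frame
            case <-r.Context().Done(): // client went away
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/stream", stream)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }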


That doesn't seem to be the ‘standard’ way to use Datastar, at least as described in this article?

If one were to rerender the entire page every time, what's the advantage of any of these frameworks over just redirecting to another page (as form submissions do by default)?


It's the high-performance way to use Datastar, and personally I think it's the best DX.

1. It's much better in terms of compression and latency. With brotli/zstd you get compression over the entire duration of the connection, so you keep one connection open and push all updates down it; the triggering requests just return a 204 response. Because everything comes down the same connection, brotli/zstd can give you 1000-8000x compression ratios. In my demos, for example, one check is 13-20 bytes over the wire even though it's 140 kB of HTML uncompressed. Keeping the packet size around 1 kB or less is great for latency. A redirect also costs more round trips.

2. The server is in control. I can batch updates. The reason these demos easily survive HN is that the updates are batched every 100ms: at most one new view gets pushed to you every 100ms, regardless of the number of users interacting with your view. In the case of the GoL demo the render is actually shared between all users, so it only renders once per 100ms regardless of the number of concurrent users (see the batching sketch after this list).

3. The DX is nice and simple: good old view = f(state), like React, just over the network.
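A rough sketch of that batching (the broadcaster type and its bookkeeping are invented for illustration, not Datastar's internals): render at most once per 100ms tick and fan the same bytes out to every connected client.

    package main

    import (
        "sync"
        "time"
    )

    // broadcaster batches updates: no matter how many users mutate state within
    // a tick, connected clients see at most one new frame per 100ms, and the
    // render itself is shared by all of them (as in the GoL demo).
    type broadcaster struct {
        mu      sync.Mutex
        clients map[chan string]struct{}
        dirty   bool
    }

    // markDirty is called by whatever mutates state; it just flags that a new
    // frame is needed on the next tick.
    func (b *broadcaster) markDirty() {
        b.mu.Lock()
        b.dirty = true
        b.mu.Unlock()
    }

    func (b *broadcaster) run(render func() string) {
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop()
        for range ticker.C {
            b.mu.Lock()
            if !b.dirty {
                b.mu.Unlock()
                continue // nothing changed this tick, push nothing
            }
            b.dirty = false
            frame := render() // one render, shared by every client
            for ch := range b.clients {
                select {
                case ch <- frame:
                default: // slow client: drop this frame rather than block
                }
            }
            b.mu.Unlock()
        }
    }

Each client's SSE handler would then just read frames from its channel and write them out, like the stream handler sketched earlier.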


> Because everything comes down the same connection brotli/zstd can give you 1000-8000x compression ratios.

Isn't this also the case by default for HTTP/2 (or even just HTTP/1.1 `Connection: keep-alive`)?

> The server is in control. I can batch updates.

That's neat! So you keep a connection open at all times and just push an update down it when something changes?


So even though HTTP/2 multiplexes each request over a single TCP connection, each response is still compressed separately; the compression context isn't shared between them. Same with keep-alive.

The magic is that brotli/zstd are very good at streaming compression thanks to back-references: the client and the server effectively share a compression window for the duration of the HTTP connection. So rather than each message being compressed separately with a fresh context, each message is compressed with the context of all the messages sent before it.

What this means in practice is that if you are sending 140 kB of divs on each frame but only one div changed between frames, the next frame will be only ~13 bytes, because the compression algorithm basically says to the client "you know that message I sent you 100ms ago? Well, this one is almost identical apart from this one change." It's like a really performant byte-level diffing algorithm, except you as the programmer don't have to think about it. You just re-render the whole frame and let compression do the rest.
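A small way to see this effect, using Go's standard-library flate as a stand-in for brotli/zstd (the streaming principle is the same, though flate's window is only 32 kB where brotli/zstd can reference far more history, which is why this works even on 140 kB pages): one long-lived writer compresses each frame against everything already sent.

    package main

    import (
        "bytes"
        "compress/flate"
        "fmt"
        "strings"
    )

    func main() {
        // Build a ~20 kB "frame" of divs, then a second frame with one change.
        var sb strings.Builder
        for i := 0; i < 600; i++ {
            fmt.Fprintf(&sb, `<div id="cell-%d" class="off"></div>`, i)
        }
        frame1 := sb.String()
        frame2 := strings.Replace(frame1, `id="cell-42" class="off"`, `id="cell-42" class="on"`, 1)

        // One writer for the whole "connection": its window carries over, so
        // frame2 is encoded as references into frame1 rather than from scratch.
        var wire bytes.Buffer
        zw, _ := flate.NewWriter(&wire, flate.BestCompression)

        zw.Write([]byte(frame1))
        zw.Flush()
        afterFirst := wire.Len()

        zw.Write([]byte(frame2))
        zw.Flush()

        fmt.Printf("frame 1: %d bytes raw -> %d bytes on the wire\n", len(frame1), afterFirst)
        fmt.Printf("frame 2: %d bytes raw -> %d bytes on the wire\n", len(frame2), wire.Len()-afterFirst)
    }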

In these demos I push a frame to every connected client at most every 100ms when something changes. That means all the changes that happen within that window are effectively batched into a single frame. It also means the server stays in charge and can control the flow of data (including applying back pressure if it's under too much load, or if a client is struggling to render frames).
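One hedged way to express that per-client back pressure (an assumption about mechanism, not Datastar's actual implementation): give each client a one-slot mailbox holding only the latest frame, so a slow client just sees fewer intermediate frames and never stalls the server.

    package main

    // client holds a 1-slot mailbox of the newest frame. Assumes a single
    // producer (the broadcaster above), so offer never blocks.
    type client struct {
        frames chan string
    }

    func newClient() *client {
        return &client{frames: make(chan string, 1)}
    }

    // offer delivers the newest frame, replacing a stale undelivered one.
    func (c *client) offer(frame string) {
        select {
        case c.frames <- frame:
        default:
            // Mailbox full: the client hasn't consumed the last frame yet.
            // Drop it and store the newer one instead of queueing or blocking.
            select {
            case <-c.frames:
            default:
            }
            c.frames <- frame
        }
    }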



