Hacker News

For compression, has the world settled on zstd now?


I think it’s a pretty common choice when you want compression in a new format or protocol. It works better when you compress your data in chunks rather than as one large file, since chunking lets you maintain some kind of index for random access. Similarly, if you have many chunks then you can parallelise decompression (I’m not sure any kind of parallelism support should have been built in to the zstd format itself, though it is useful for command-line use).

A big problem for some people is that Java support is harder: the bindings aren’t pure Java, so they hurt portability, and e.g. making a Java web server compress its responses with zstd isn’t so easy.


Java can just use native libraries, there are plenty of Java projects that do that.

It's not like it's 1999 and there is still some Sun dogma against doing this.


Sure, I don't want to make a big deal about this but I have observed Java projects choosing to not support zstd for portability (or software packaging) reasons.


Well, convenience is also a factor in some cases. Much easier to schlep a "pure-Java" jar around.


Depends on the use case. For transparent filesystem compression I would still recommend lz4 over zstd, because speed matters more than compression ratio there.


Most definitely not settled, but it's a good default



