Documents written in the 1980s in LaTeX still compile and look great today. Good luck doing that with an old MS Word file, especially if it has equations in it.
> And I found a fascinating pattern: the AI gives artificially high scores to reports written with AI [...] it was giving very high marks to poorly reasoned, error-filled work simply because it was elegantly written. Too elegantly... Clearly written with ChatGPT.
This is an interesting phenomenon, but I would have liked to see some quantitative evidence from this N=24 sample, e.g., would a paper that would ordinarily get an 80% score instead receive 95% from the LLM?
I also wonder how accurate a professor's perception of style is. I tend to write in a formal style, even in online forums like this one, and I wonder if people assume I use LLMs as a result (I don't).
Zstd decompresses faster, perhaps 2x faster, but Brotli is fast enough, and often a little faster than gzip/deflate.
Brotli can compress more densely because of its context modeling: about 5% more without the static dictionary, and even more with it. Brotli also works better on very short data.
Brotli is a bit more streamable than Zstd, i.e., it buffers less data internally during transfer, so decompressed bytes become available sooner.
Zstd has better encoder implementations, but fundamentally the two formats place similar demands on the encoding algorithm. Compression could be equally fast; Zstd has simply seen more love and specialization on the encoding side. As a result, Zstd libs are about 2x heavier than Brotli's.