>Note that the best-case scenario is the elimination of the overheads above to 0, which is at most ~10% in these particular benchmarks. Thus, it's helpful to consider the proportion of GC overhead eliminated relative to that 10% (so, 7% reduction means 70% GC overhead reduction).
Wow, amazing to see that off-heap allocation can be that good.
Meanwhile Java and .NET have had off-heap and arenas for a while now.
Which goes to show how Go could be much better if it were designed with the learnings of other languages taken into account.
The adoption of runtime.KeepAlive() [0] and the related runtime.AddCleanup() as a replacement for finalizers are also learnings from other languages [1].
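For reference, a minimal sketch of how the two fit together in Go 1.24+ (the resource type and release function here are purely illustrative):

```go
package main

import (
	"fmt"
	"runtime"
)

// resource stands in for anything that needs explicit release,
// e.g. an off-heap buffer or a file descriptor (illustrative only).
type resource struct {
	id int
}

func release(id int) {
	fmt.Println("released resource", id)
}

func use(r *resource) {
	fmt.Println("using resource", r.id)
}

func main() {
	r := &resource{id: 42}

	// Go 1.24+: run release(r.id) some time after r becomes unreachable.
	// Unlike a finalizer, the cleanup does not receive the object itself,
	// so we pass only the data it needs.
	runtime.AddCleanup(r, release, r.id)

	use(r)

	// KeepAlive marks r as reachable up to this point, so the cleanup
	// cannot fire while use(r) is still working with it.
	runtime.KeepAlive(r)

	runtime.GC() // encourages, but does not guarantee, the cleanup to run before exit
}
```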
Recently used MemorySegment in Java; it is extremely good. Just yesterday I implemented the Map and List interfaces using MemorySegment as the backing store for batch operations, instead of using the OpenHFT stuff.
Tried -XX:TLABSize before but wasn't getting the desired performance.
Not sure about .NET though; I haven't used it since last decade.
Because JSON serialization is built into the browser.
I'm obviously a huge fan of binary serialization; I wrote Cap'n Proto and Protobuf v2 after all.
But when you're working with pure JS, it's hard to be much faster than the built-in JSON implementation. Even if you can beat it, you're only going to get there with a lot of code, and in a browser, code footprint often matters more than runtime speed.
> TLS servers now prefer the highest supported protocol version, even if it isn’t the client’s most preferred protocol version.
>Both TLS clients and servers are now stricter in following the specifications and in rejecting off-spec behavior. Connections with compliant peers should be unaffected.
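If the new server-side preference matters for a deployment, the range of versions the server will negotiate can still be bounded explicitly via crypto/tls. A minimal sketch (certificate paths are placeholders):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12, // refuse anything older
			MaxVersion: tls.VersionTLS13, // cap what the server will negotiate
		},
	}
	// cert.pem and key.pem are placeholder paths.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```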
Nah! I am not convinced that context engineering is better (in the long term) than prompt engineering. Context engineering is still complex and needs maintenance. It's much lower level than human-level language.
Given domain expertise of the problem statement, we can apply the same tactics from context engineering at a higher level in prompt engineering.
Early in the game when context windows were very small (8k, 16k, and then 32k), the team I was working with achieved fantastic results with very low incidence of hallucinations through deep "context engineering" (we didn't call it that but rather "indexing and retrieval").
We did a project for Alibaba and generated tens of thousands of pieces of output. They actually had human analysts review and grade each one for the first thousand. The errors they found? Always in the source material.
This whole industry is complex and needs constant maintenance. APIs break all the time -- and that's assuming they were even correct to begin with. New models are constantly released, each with their own new quirks. People are still figuring out how to build this tech -- and as quickly as they figure one thing out, the goal posts move again.
This entire field is basically being built on quicksand. And it will stay like this until the bubble bursts.
It's a really great effort, but I would stick with PostgreSQL with PL/pgSQL procedures and tons of extensions.
Lowering the transaction isolation level, async transactions on disk-backed tables, and UNLOGGED tables can help you go a long way (see the sketch below).
Also, in the next major version (v18) of PG, they are trying to implement io_uring support, which will further drastically improve random read/write performance.
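A minimal sketch of two of those knobs applied from Go, assuming database/sql with the lib/pq driver (the connection string and table are placeholders):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // or any other PostgreSQL driver
)

func main() {
	// Placeholder DSN; adjust for your environment.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Asynchronous commit: the session returns before the WAL is flushed to disk.
	// A crash can lose the last few transactions, but nothing is corrupted.
	if _, err := db.Exec(`SET synchronous_commit = off`); err != nil {
		log.Fatal(err)
	}

	// UNLOGGED tables skip the WAL entirely; they are truncated after a crash,
	// which is acceptable for caches, queues, or other rebuildable data.
	if _, err := db.Exec(`CREATE UNLOGGED TABLE IF NOT EXISTS events (
		id bigserial PRIMARY KEY,
		payload jsonb
	)`); err != nil {
		log.Fatal(err)
	}
}
```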
Although experimental as of now, the arena package is a natural fit here.
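A minimal sketch of what that looks like with the experimental arena package (requires building with GOEXPERIMENT=arenas; the API may change or be removed, and the point type is just for illustration):

```go
package main

import (
	"arena"
	"fmt"
)

// point is illustrative; any plain value type works.
type point struct {
	x, y float64
}

func main() {
	a := arena.NewArena()
	defer a.Free() // frees every allocation below in one shot, bypassing the GC

	// Allocate a value and a slice from the arena instead of the managed heap.
	p := arena.New[point](a)
	p.x, p.y = 1, 2

	pts := arena.MakeSlice[point](a, 0, 1024)
	pts = append(pts, *p) // stays in the arena while within capacity

	fmt.Println(len(pts), pts[0])
}
```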