This post mixes up “easy for compilers and assemblers to transform and easy for CPUs to execute” with “easy for LLMs to understand” and assumes that anything in the first category must also be in the second category since they’re both computers. In reality, the tools that help humans think are also useful for LLMs.
Yes; because - is on the keyboard and — isn't. (Don't tell me how to type —, I know how, but despite that it is the reason, which is what the parent comment asks about.)
It's not acceptable to post in this inflammatory style on HN. The guidelines make it clear we're here for curious conversation, not battle. It's also not acceptable on HN to scour through someone's past activity (whether on HN or elsewhere) in order to attack them, and that kind of belittling language is never OK here. Please take a moment to read the guidelines and make an effort to observe them in future.
It's strange to me that you chose not to cite other parts of the guidelines, such as:
> Please don't use Hacker News for political or ideological battle.
Are anti-immigrant sentiments acceptable if they are simply well-articulated?
And if not, then why do you feel it necessary to be more critical of the language I or lalaithion use for explicit dismissals of racists and xenophobes than the actual racism or xenophobia itself?
People continually try on this “gotcha” that we moderate more for tone than substance and that HN freely allows hateful rhetoric as long as it is smuggled in a Trojan Horse of “civility”.
This is of course a non-starter and nothing in the guidelines allows it. The first words in the “In Comments” section of the guidelines are “Be kind”, and there is nothing “kind” about xenophobia or other hateful ideas, by definition.
But as longtime forum moderators we can't be so naïve as to succumb to an attempt to characterize any discussion about immigration laws and their enforcement as “anti-immigration” and “hateful”. From what I can see, this discussion is not one of “pro” or “anti” immigration but about how laws should be interpreted and enforced, which is always something that should be able to be discussed in a spirit of curiosity. The guidelines cover this too:
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
Nailer is a user who has crossed the line before and we've appealed to him multiple times to observe the guidelines, and will certainly do so again if and when it is warranted. In this case I don't see how he is the one who has crossed the line, but even if he was, that doesn't excuse you from doing so.
If you see guidelines-breaking comments on HN, just flag them or, if they're especially egregious, email us (hn@ycombinator.com). If you escalate, then you become the one in the wrong, by making a bad situation even worse.
The US-UK Mutual Legal Assistance treaty imposes obligations on Ofcom which they have not met, 4chan claims:
“None of these actions constitutes valid service under the US-UK Mutual Legal Assistance Treaty, United States law or any other proper international legal process.”
MLAT applies only to a narrow set of legal procedures, essentially around criminal activity. I’m a lawyer, but this is very specialist stuff. I’m not expert enough to opine on whether MLAT applies here but, simply judging by the quality of their respective legal work on display, I’m minded to believe that Ofcom knows what they are doing. OTOH, 4chan’s rhetoric reeks of FUD.
>Lawyers representing controversial online forums 4chan and Kiwi Farms have filed a legal case against the UK Online Safety Act enforcer, Ofcom.
Drumming up public support is a no-go. Rather, I think the intent is to take the stance that if the UK wants to prevent underage citizens from accessing sites, then the UK can do just that, rather than expect random companies around the world to comply.
The way the UK has chosen to do that is to ask companies to find the way that works best for them, rather than impose a single government-owned firewall solution. Those that don't will face UK fines in UK jurisdictions, which they may or may not care about.
Protocol buffers suck, but so does everything else. Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards compatible changes.
Just with those two criteria you’re down to, like, six formats at most, of which Protocol Buffers is the most widely used.
And I know the article says no one uses the backwards compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate and then being able to add fields to the schema and then deploy the servers and clients in any order is way nicer than it is with some other formats that force you to babysit deployment order.
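To make that concrete, here's a minimal sketch of the principle in plain Python dicts (not protobuf itself, and the field names are invented): readers that ignore unknown fields and default missing ones let you ship schema changes without coordinating deploys.

    # Old and new binaries share the same tolerant read logic.
    OLD_SCHEMA = {"id", "name"}            # fields the old release knows about
    NEW_SCHEMA = {"id", "name", "email"}   # a field added in the new release

    def read(message: dict, known_fields: set) -> dict:
        # Keep only fields we understand; default anything that's missing.
        return {f: message.get(f) for f in known_fields}

    new_msg = {"id": 1, "name": "ada", "email": "ada@example.com"}
    old_msg = {"id": 1, "name": "ada"}

    print(read(new_msg, OLD_SCHEMA))  # old reader silently ignores "email"
    print(read(old_msg, NEW_SCHEMA))  # new reader sees email=None and handles it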
The reason why protos suck is because remote procedure calls suck, and protos expose that suckage instead of trying to hide it until you trip on it. I hope the people working on protos, and other alternatives, continue to improve them, but they’re not worse than not using them today.
> Typical offers a new solution ("asymmetric" fields) to the classic problem of how to safely add or remove fields in record types without breaking compatibility. The concept of asymmetric fields also solves the dual problem of how to preserve compatibility when adding or removing cases in sum types.
That's a nice idea... But I believe the design direction of protocol buffers was to make everything `optional`, because `required` tends to bite you later when you realize it should actually be optional.
My understanding is that asymmetric fields provide a migration path in case that happens, as stated in the docs:
> Unlike optional fields, an asymmetric field can safely be promoted to required and vice versa.
> [...]
> Suppose we now want to remove a required field. It may be unsafe to delete the field directly, since then clients might stop setting it before servers can handle its absence. But we can demote it to asymmetric, which forces servers to consider it optional and handle its potential absence, even though clients are still required to set it. Once that change has been rolled out (at least to servers), we can confidently delete the field (or demote it to optional), as the servers no longer rely on it.
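A rough way to picture the writer/reader asymmetry, sketched in plain Python rather than Typical's actual generated code (the field names are made up): the write path still enforces the field, while the read path already tolerates its absence.

    def write_message(user_id: int, email: str | None) -> dict:
        if email is None:
            raise ValueError("email is still required on the write side")
        return {"user_id": user_id, "email": email}

    def read_message(msg: dict):
        # Readers already treat the field as optional, so the write-side
        # requirement can later be dropped without breaking anyone.
        return msg["user_id"], msg.get("email")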
> My understanding is that asymmetric fields provide a migration path in case that happens, as stated in the docs:
If you can assume you can churn a generation of fresh data soonish, and never again read the old data. For RPC sure, but someone like Google has petabytes of stored protobufs, so they don't pretend they can upgrade all the writers.
This seems interesting. Still not sure if `required` is a good thing to have (for persistent data like logs, you cannot really guarantee some field's presence without schema versioning baked into the file itself), but for intermediate wire use cases, this will help.
I've never heard of Typical but the fact they didn't repeat protobuf's sin regarding varint encoding (or use leb128 encoding...) makes me very interested! Thank you for sharing, I'm going to have to give it a spin.
Varint encoding is something I've peeked at in various contexts. My personal bias is towards the prefix-style, as it feels faster to decode and the segregation of the meta-data from the payload data is nice.
But, the thing that tends to tip the scales is the fact that in almost all real world cases, small numbers dominate - as the github thread you linked relates in a comment.
The LEB128 fast-path is a single conditional with no data-dependencies:
if x & 0x80 == 0 { x }
Modern branch predictors will handle that branch really well, so you pay almost zero cost for the fast path, which also happens to be the dominant path.
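For reference, a rough LEB128-style decoder in Python (a sketch, not tuned for speed); the one-byte case above is the branch that dominates because small numbers dominate:

    def decode_uleb128(buf: bytes) -> tuple[int, int]:
        """Return (value, number of bytes consumed)."""
        if buf[0] & 0x80 == 0:           # fast path: no continuation bit
            return buf[0], 1
        value, shift = 0, 0
        for i, b in enumerate(buf):
            value |= (b & 0x7F) << shift
            if b & 0x80 == 0:
                return value, i + 1
            shift += 7
        raise ValueError("truncated varint")

    print(decode_uleb128(b"\x05"))              # (5, 1)
    print(decode_uleb128(b"\xe5\x8e\x26"))      # (624485, 3)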
Seems like a lot of effort to avoid adding a message version field. I’m not a web guy, so maybe I’m missing the point here, but I always embed a schema version field in my data.
The point is that it's hard to prevent asymmetry in message versions if you are working with many communicating systems. Let's say four services inter-communicate with some protocol; it is extremely annoying to impose a deployment order where the producer of a message type is the last to upgrade the message schema, as this causes unnecessary dependencies between the release trains of these services. At the same time, one cannot simply say: "I don't know this message version, I will disregard it" because in live systems this will mean the systems go out of sync, data is lost, stuff breaks, etc.
There are probably more issues I haven't mentioned, but long story short: in live, interconnected systems, it becomes important to have intelligent message versioning, i.e. a version number is not enough.
> Let's say four services inter-communicate with some protocol; it is extremely annoying to impose a deployment order where the producer of a message type is the last to upgrade the message schema
i don't know how you arrived at this conclusion
the protocol is the unifying substrate, it is the source of truth, the services are subservient to the protocol, it's not the other way around
also, it's not as if each service has just a single version; each instance of each service can be on a separate version as well!
what you're describing as "annoying" is really just "reality", you can't hand-wave away the problems that reality presents
I think I see what you’re getting at? My mental model is client and server, but you’re implying a more complex topology where no one service is uniquely a server or a client. You’d like to insert a new version at an arbitrary position in the graph without worrying about dependencies or the operational complexity of doing a phased deployment. The result is that you try to maintain a principled, constructive ambiguity around the message schema, hence asymmetrical fields? I guess I’m still unconvinced and I may have started the argument wrong, but I can see a reasonable person doing it that way.
Yes, that's a big part, but even bigger is just the alignment of teams.
Imagine team A is building feature XYZ.
Team B is building feature TUV.
One of those features in each team deals with messages; the others are unrelated.
At some point in time, both teams have to deploy.
If you have to sync them up just to get the protocol to work, that's extra complexity in the already complex work of the teams.
If you can ignore this, great!
It becomes even more complex with rolling updates though: not all deployments of a service will have the new code immediately, because you want multiple to be online to scale on demand. This creates an immediate, necessary ambiguity in the question "which version does this service accept?" because it's not about the service anymore, but about the deployments.
Ah, I see. Team A would like to deploy a new version of a service. It used to accept messages with schema S, but the new version accepts only S’ and not S. So the only thing you can do is define S’ so that it is ambiguous with S. Team B uses Team A’s service but doesn’t want to have to coordinate deployments with Team A.
I think the key source of my confusion was Team A not being able to continue supporting schema S once the new version is released. That certainly makes the problem harder.
> one cannot simply say: "I don't know this message version, I will disregard it" because in live systems this will mean the systems go out of sync, data is lost, stuff breaks, etc.
You already need to deal with lost messages and rejected messages, so just treat this case the same. If you have versions, surely you have code to deal with mismatches and e.g. fall back to the older version.
Idk, I generally think “magic numbers” are just extra effort. The main annoyance is adding if statements everywhere on the version number instead of checking whether the data field you need is present.
It also really depends on the scope of the issue. Protos really excel at “rolling” updates and continuous changes instead of fixed APIs. For example, MicroserviceA calls MicroserviceB, but the teams do deployments at different times of the week. Constantly rolling the version number for each change is annoying vs just checking for the new feature. Especially if you could have several active versions at a time.
It also frees you from actually propagating a single version number everywhere. If you own a bunch of API endpoints, you either need to put the version in the URL, which impacts every endpoint at once, or you need to put it in the request/response of every one.
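For what it's worth, the "check for the field, not the version" style looks roughly like this (plain Python, and the field names are invented):

    def handle_request(req: dict) -> str:
        # Version style would be: if req.get("version", 1) >= 7: ...
        # Presence style gates each feature on its own field instead.
        if "discount_code" in req:   # newer clients send this, older ones don't
            return f"order with discount {req['discount_code']}"
        return "order at full price"

    print(handle_request({"item": "book"}))
    print(handle_request({"item": "book", "discount_code": "WELCOME10"}))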
I think this is only a problem if you’re using a weak data interchange library that can’t use the schema version field to discriminate a union. Because you really shouldn’t have to write that if statement yourself.
We use protocol buffers on a game and we use the back compat stuff all the time.
We include a version number with each release of the game. If we change a proto we add new fields and deprecate old ones and increment the version. We use the version number to run a series of steps on each proto to upgrade old fields to new ones.
The code becomes hard to read. You might need to change int health to float health. In that case “health” properly describes the idea. We’d change this to int DEPRECATED_health and float health.
Folks can argue that’s ugly but I’ve not seen one instance of someone confused.
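The upgrade-steps idea looks roughly like this; this is a hedged sketch over plain dicts rather than our actual generated protobuf classes, using the health fields from the example above:

    def upgrade_v1_to_v2(p: dict) -> dict:
        p = dict(p)                                # don't mutate the caller's data
        p["health"] = float(p.pop("DEPRECATED_health"))
        p["version"] = 2
        return p

    UPGRADES = {1: upgrade_v1_to_v2}               # save version -> one-step upgrade

    def upgrade(player: dict, latest: int = 2) -> dict:
        while player["version"] < latest:
            player = UPGRADES[player["version"]](player)
        return player

    print(upgrade({"version": 1, "DEPRECATED_health": 80}))
    # {'version': 2, 'health': 80.0}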
This. Plus ASN.1 is pluggable as to encoding rules and has a large family of them:
- BER/DER/CER (TLV)
- OER and PER ("packed" -- no tags and no lengths wherever possible)
- XER (XML!)
- JER (JSON!)
- GSER (textual representation)
- you can add your own!
(One could add one based on XDR, which would look a lot like OER/PER in a way.)
ASN.1 also gives you a way to do things like formalize typed holes.
Not looking at ASN.1, not even its history and evolution, when creating PB was a crime.
The people who wrote PB clearly knew ASN.1. It was the most famous IDL at the time. Do you assume they just came one morning and decided to write PB without taking a look at what existed?
Anyway, as stated PB does more than ASN.1. It specifies both the description format and the encoding. PB is ready to be used out of the box. You have a compact IDL and a performant encoding format without having to think about anything. You have to remember that PB was designed for internal Google use as a tool to solve their problems, not as a generic solution.
ASN.1 is extremely unwieldy in comparison. It has accumulated a lot of cruft through the years. Plus, they don’t provide a default implementation.
> Strange that at the same time (2001) people were busy implementing everything in Java and XML, not ASN.1
Yes. Meanwhile Google was designing an IDL with a default binary serialisation format. And this is not the 2025-typical, overstaffed, fake-HR-levels-heavy big corp Google we are talking about. That’s Google in its heyday. I think you have answered your own comment.
> Do you assume they just came one morning and decided to write PB without taking a look at what existed?
Considering how bad an imitation of 1984 ASN.1 PB's IDL is, and how bad an imitation of 1984 DER PB is, yes, I assume that PB's creators did not in fact know ASN.1 well. They almost certainly knew of ASN.1, and they almost certainly did not know enough about it, because PB re-created all the worst mistakes of ASN.1 while adding zero new ideas or functionality. It's a terrible shame.
PB is not a bad imitation of 1984 ASN.1. ASN.1 is chock-full of useless representations clearly there to serve what a committee thought the needs of the telco industry should be.
I find it funny that you are making it look like a good and pleasant-to-use IDL. It’s a perfect example of design by committee at its worst.
PB is significantly more space efficient than DER by the way.
I agree that saying that no-one uses backwards compatible stuff is bizarre. Rolling deploys, being able to function with a mixed deployment is often worth the backwards compatibility overhead for many reasons.
In Java, you can accomplish some of this using Jackson JSON serialization of plain objects, where there are several ways in which changes can be made backwards-compatibly (e.g. in recent years, post-deserialization hooks can be used to handle more complex cases), which satisfies (a). For (b), there’s no automatic linter. However, in practice, I found that writing tests that deserialize the prior release’s serialized objects gets you pretty far along the line of regression protection for major changes. Also, it was pretty easy to write an automatic round-trip serialization tester to catch mistakes in the ser/deser chain. Finally, if you stay away from non-schemable ser/deser (such as a method that handles any property name), which can be enforced w/ a linter, you can output the JSON schema of your objects to committed source. Then any time the generated schema changes, you can look for corresponding test coverage in code reviews.
I know that’s not the same as an automatic linter, but it gets you pretty far in practice. It does not absolve you from cross-release/upgrade testing, because serialization backwards-compatibility does not catch all backwards-compatibility bugs.
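The round-trip and prior-release tests are simple to sketch; here's the general idea in Python rather than Java/Jackson (the Settings class and the stored v1 payload are invented for illustration):

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Settings:
        theme: str = "light"
        font_size: int = 12        # field added after the v1 release

    def from_json(s: str) -> Settings:
        return Settings(**{**asdict(Settings()), **json.loads(s)})

    def test_round_trip():
        s = Settings("dark", 14)
        assert from_json(json.dumps(asdict(s))) == s

    def test_reads_v1_payload():
        # payload captured from the previous release and committed with the tests
        assert from_json('{"theme": "dark"}') == Settings("dark", 12)

    test_round_trip(); test_reads_v1_payload()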
Additionally, Jackson has many techniques, such as unwrapping objects, which let you execute more complicated refactoring backwards-compatibly, such as extracting a set of fields into a sub-object.
I like that the same schema can be used to interact with your SPA web clients for your domain objects, giving you nice inspectable JSON. Things serialized to unprivileged clients can be filtered with views, such that sensitive fields are never serialized, for example.
You can generate TypeScript objects from this schema or generate clients for other languages (e.g. with Swagger). Granted it won’t port your custom migration deserialization hooks automatically, so you will either have to stay within a subset of backwards-compatible changes, or add custom code for each client.
You can also serialize your RPC comms to a binary format, such as Smile, which uses back-references for property names, should you need to reduce on-the-wire size.
It’s also nice to be able to define Jackson mix-ins to serialize classes from other libraries’ code or code that you can’t modify.
Protobufs are better but not best. Still, by far, the easiest thing to use and the safest is actual APIs. Like, in your application. Interfaces and stuff.
Obviously if your thing HAS to communicate over the network that's one thing, but a lot of applications don't. The distributed-system microservice stuff is a choice.
Guys, distributed systems are hard. The extremely low API visibility, combined with fragile network calls and unsafe, poorly specified API versioning, means your stuff is going to break, a lot.
Want a version-controlled API? Just write an interface in C# or PHP or whatever.
This sort of comment doesn't add anything to the discussion unless you are able to point out what you believe to be the best. It reads as an unnecessary and unsubstantiated put-down.
> Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards compatible changes.
The article covers this in the section "The Lie of Backwards- and Forwards-Compatibility." My experience working with protocol buffers matches what the author describes in this section.
This is always the thing to look for: "What are the alternatives?", and why aren't there better ones.
I don't understand most use cases of protobufs, including ones that informed their design. I use it for ESP-hosted, to communicate between two MCUs. It is the highest-friction serialization protocol I've seen, and is not very byte-efficient.
Maybe something like the specialized serialization libraries (bincode, postcard etc) would be easier? But I suspect I'm missing something about the abstraction that applies to networked systems, beyond serialization.
> And I know the article says no one uses the backwards compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate and then being able to add fields to the schema and then deploy the servers and clients in any order is way nicer than it is with some other formats that force you to babysit deployment order.
Yet the author has the audacity to call the authors of protobuf (originally Jeff Dean et al) "amateurs."
What about Cap’n Proto https://capnproto.org/ ? (Don't know much about these things myself, but it's a name that usually comes up in these discussions.)
Cap'n'proto is not very nice to work with in C++, and I'd discourage anyone from using it from other programming languages, the implementations are just not there yet. We use both cnp and protobufs at work, and I vastly prefer protobufs, even for C++. I only wish they stayed the hell away from abseil, though.
The developer experience of capnproto is pretty darn miserable. I replaced my Rust use of it with https://rkyv.org/ -- probably the biggest ergonomic improvement was a single validation after which the message is safe to look at, instead of errors on every code path. The biggest downside was loss of built-in per-message schema evolution; in my use case I can have one version number up front.
The thing is a huge pain to manage as a dependency, especially if you wander away from the official google-approved way of doing things. Protobuf went from a breeze to use to the single most common source of build issues in our cross-platform project the moment they added this dependency. It's so bad that many distros and package managers keep the pre-abseil version as a separate package, and many just prefer to get stuck with it rather than upgrade. Same with other google libraries that added abseil as a dependency, as far as I'm aware
> Just with those two criteria you’re down to, like, six formats at most, of which Protocol Buffers is the most widely used.
What I dislike the most about blog posts like this is that, although the blogger is very opinionated and critical of many things, the post dates back to 2018, protobuf is still dominant, and apparently during all these years the blogger failed to put together something that they felt was a better way to solve the problem. I mean, it's perfectly fine if they feel strongly about a topic. However, investing so much energy to criticize and even throw personal attacks at whoever contributed to the project feels pointless and an exercise in self-promotion at the expense of shit-talking. Either you put together something that you feel implements your vision and rights some wrongs, or don't go out of your way to put people down. Not cool.
The blog post leads with the personal assertion that protobuf is "ad-hoc and built by amateurs". Therefore I doubt that JSON, a data serialization language designed by trimming most of JavaScript out and meant to be parsed with eval(), would meet that opinionated high bar.
Also, JSON is a data interchange language, and has no support for types beyond the notoriously ill-defined primitives. In contrast, protobuf is a data serialization language which supports specifying types. This means that JSON, to even come close to meeting the requirements met by protobuf, would need to be paired with schema validation frameworks and custom configurable parsers, which it definitely does not provide out of the box.
You must be young. XML and XML Schemas existed before JSON or Protobuf, and people ditched them for a good reason and JSON took over.
Protobuf is just another version of the old RPC/Java Beans, etc... of a binary format. Yes, it is more efficient data wise than JSON, but it is a PITA to work on and debug with.
> You must be young. XML and XML Schemas existed before JSON or Protobuf, and people ditched them for a good reason and JSON took over.
I'm not sure you got the point. It's irrelevant how old JSON or XML (a non sequitur) are. The point is that one of the main features and selling points of protobuf is strong typing and model validation implemented at the parsing level. JSON does not support any of these, and you need to onboard more than one ad-hoc tool to have a shot at feature parity, which goes against the blogger's opinionated position on the topic.
TLV style binary formats are all you need. The “Type” in that acronym is a 32-bit number which you can use to version all of your stuff so that files are backwards compatible. Software that reads these should read all versions of a particular type and write only the latest version.
Code for TLV is easy to write and to read, which makes writing viewer programs easy. TLV data is fast for computers to write and to read.
Protobuf is overused because people are fucking scared to death to write binary data. They don’t trust themselves to do it, which is just nonsense to me. It’s easy. It’s reliable. It’s fast.
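A toy TLV reader/writer along those lines, in Python (32-bit type tag, 32-bit length, then the payload; readers can skip types they don't recognize):

    import struct

    def write_tlv(type_id: int, payload: bytes) -> bytes:
        return struct.pack("<II", type_id, len(payload)) + payload

    def read_tlvs(buf: bytes):
        off = 0
        while off < len(buf):
            type_id, length = struct.unpack_from("<II", buf, off)
            yield type_id, buf[off + 8 : off + 8 + length]
            off += 8 + length

    data = write_tlv(1, b"hello") + write_tlv(2, (42).to_bytes(4, "little"))
    for t, v in read_tlvs(data):
        print(t, v)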
A major value of protobuf is in its ecosystem of tools (codegen, lint, etc); it's not only an encoding. And you don't generally have to build or maintain any of it yourself, since it already exists and has significant industry investment.
I prefer a little builtin backwards (and forwards!) compatibility (by always enforcing a length for each object, to be zero-padded or truncated as needed), but yes "don't fear adding new types" is an important lesson.
Protobufs aren’t new. They’re really just RPC over HTTPS. I used DCE RPC in 1997, which had an IDL. I believe CORBA used an IDL as well, although I personally did not use it. There have been other attempts like EJB, etc., which are pretty much the same paradigm.
The biggest plus with protobuf is the social/financial side and not the technology side. It’s open source and free from proprietary hacks like previous solutions.
Apart from that, distributed systems, of which RPC is a subtopic, are hard in general. So the expectation would be that it sucks.
Backwards compatibility is just not an issue in self-describing structures like JSON, Java serialization, and (dating myself) Hessian. You can add fields and you can remove fields. That's enough to allow seamless migrations.
It's only positional protocols that have this problem.
You can remove JSON fields at the cost of breaking your clients at runtime that expect those fields. Of course the same can happen with any deserialization libraries, but protobufs at least make it more explicit - and you may also be more easily able to track down consumers using older versions.
For the missing case, whenever I use JSON, I always start with a sane default struct, then overwrite the defaults with the externally provided values. If a field is missing, it will be handled reasonably.
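Concretely, something like this (a sketch; the config fields are made up):

    import json

    DEFAULTS = {"timeout_s": 30, "retries": 3, "verbose": False}

    def parse_config(raw: str) -> dict:
        # Start from the complete defaults, then overlay whatever was sent.
        return {**DEFAULTS, **json.loads(raw)}

    print(parse_config('{"retries": 5}'))
    # {'timeout_s': 30, 'retries': 5, 'verbose': False}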
> LLM training & data storage: This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work.
This is disappointing, and no analysis is complete without attempting to account for training, including training runs that were never deployed. I’m worried these numbers would be significantly worse and that’s why we don’t have them.
No, because you don't incentivize the training of the next version of Llama, and the current version was not trained because you wanted to run that query.
2. I'm pretty sure I do more than 10 vertical cuts; there's no easy image in the link above and the video cuts before he does all the vertical cuts, but I think he's doing at least 20.
3. In real life, an onion starts flexing and bending as you cut. With a very sharp knife, I'm sure you do get a bunch of the small pieces which throw off the standard deviation for the "more cuts" method, but a bunch of the small pieces won't actually be cut, because a layer of the onion gets pushed out of the way instead of having a tiny piece cut off.
What if your program depends on library a1.0 and library b1.0, and library a1.0 depends on c2.1 and library b1.0 depends on c2.3? Which one do you install in your executable? Choosing one randomly might break the other library. Installing both _might_ work, unless you need to pass a struct defined in library c from a1.0 to b1.0, in which case a1.0 and b1.0 may expect different memory layouts (even if the public interface for the struct is the exact same between versions).
The reason we have dependency ranges and lockfiles is so that library a1.0 can declare "I need >2.1" and b1.0 can declare "I need >2.3" and when you depend on a1.0 and b1.0, we can do dependency resolution and lock in c2.3 as the dependency for the binary.
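A toy version of that resolution step in Python, treating each range as a minimum version and ignoring everything else a real resolver handles (upper bounds, yanked releases, conflicts):

    def resolve(constraints: dict[str, str]) -> str:
        """constraints like {"a1.0": ">=2.1", "b1.0": ">=2.3"} -> pinned version."""
        def key(v: str):
            return tuple(int(x) for x in v.split("."))
        minimums = [c.removeprefix(">=") for c in constraints.values()]
        return max(minimums, key=key)   # the highest floor satisfies everyone

    print(resolve({"a1.0": ">=2.1", "b1.0": ">=2.3"}))   # -> "2.3", written to the lockfile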
> If that version doesn’t work, you can try another one.
And what will this look like, if your app doesn't have library C mentioned in its dependencies, only libraries A and B? You are prohibited from answering "well, just specify all the transitive dependencies manually" because that's precisely what a lockfile is/does.
Maven's version resolution mechanism determines which version of a dependency to use when multiple versions are specified in a project's dependency tree. Here's how it works:
- Nearest Definition Wins: When multiple versions of the same dependency appear in the dependency tree, the version closest to your project in the tree will be used.
- First Declaration Wins: If two versions of the same dependency are at the same depth in the tree, the first one declared in the POM will be used.
Well, I guess this works if one appends newly-added dependencies at the end of the section in the pom.xml, instead of generating it alphabetically sorted just in time for the build.
It's not "all the transitive dependencies". It's only the transitive dependencies you need to explicitly specify a version for because the one that was specified by your direct dependency is not appropriate for X reason.
Alternative answer: both versions will be picked up.
It's not always the correct solution, but sometimes it is. If I have a dependency that uses libUtil 2.0 and another that uses libUtil 3.0 but neither exposes types from libUtil externally, or I don't use functions that expose libUtil types, I shouldn't have to care about the conflict.
This points to a software best-practice: "Don't leak types from your dependencies." If your package depends on A, never emit one of A's structs.
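In practice that means wrapping the dependency's type in one of your own, e.g. (a Python sketch; `thirdparty` and its Timestamp type are hypothetical):

    from dataclasses import dataclass
    # import thirdparty                  # the dependency we don't want to leak

    @dataclass
    class Event:
        name: str
        unix_seconds: int                # our own field, not thirdparty.Timestamp

    def load_event(raw: dict) -> Event:
        # ts = thirdparty.Timestamp.parse(raw["ts"])   # used internally only
        return Event(raw["name"], int(raw["ts"]))

    print(load_event({"name": "deploy", "ts": 1700000000}))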
Good luck finding a project of any complexity that manages to adhere to that kind of design sensibility religiously.
(I think the only language I've ever used that provided top-level support for recognizing that complexity was SML/NJ, and it's been so long that I don't remember exactly how it was done... Modules could take parameters so at the top level you could pass to each module what submodule it would be using, and only then could the module emit types originating from the submodule because the passing-in "app code" had visibility on the submodule to comprehend those types. It was... Exactly as un-ergonomic as you think. A real nightmare. "Turn your brain around backwards" kind of software architecting.)
I can think of plenty of situations where you really want to use the dependency's types though. For instance, the dependency provides some sort of data structure, and you have one library that produces said data structure and a separate library that consumes it.
What you're describing with SML functors is essentially dependency injection I think; it's a good thing to have in the toolbox but not a universal solution either. (I do like functors for dependency injection, much more than the inscrutable goo it tends to be in OOP languages anyways)
I can think of those situations too, and in practice this is done all the time (by everyone I know, including me).
In theory... none of us should be doing it. Emitting raw underlying structures from a dependency, coupled with ranged versioning, means part of your API is under-specified: "this function returns a value, the type of which is whatever this third party that we don't directly communicate with says the type is." That's hard to code against in the general case (but it works out often enough in the specific case that I think it's safe to do 95-ish percent of the time).
It works just fine in C land because modifying a struct in any way is an ABI breaking change, so in practice any struct type that is exported has to be automatically deemed frozen (except for major version upgrades where compat is explicitly not a goal).
Alternatively, it's a pointer to an opaque data structure. But then that fact (that it's a pointer) is frozen.
Either way, you can rely on dependencies not just pulling the rug from under you.
I like this answer. "It works just fine in C land because this is a completely impossible story in C land."
(I remember, ages ago, trying to wrap my head around Component Object Model. It took me a while to grasp it in the abstract because, I finally realized, it was trying to solve a problem I'd never needed to solve before: ABI compatibility across closed-source binaries with different compilation architectures.)
So you need to test if the version worked yourself (e.g. via automated tests)? Seems better to have the library author do this for you and define a range.
Yeah but GET doesn’t allow requests to have bodies (yeah, I know, technically you can but it’s not very useful), and this is a legitimate issue preventing its use in complex APIs.