Rather interesting solution to the problem: you can't test every possibility, so you pick one and get to rule out a bunch of others in the same region, provided you can determine some other quality of that (non-)solution.
I watched a pretty neat video[0] on the topic of ruperts / noperts a few weeks ago, which is a rather fun coincidence ahead of this advancement.
[0] https://www.youtube.com/watch?v=QH4MviUE0_s
Not that coincidental. tom7 is mentioned in the article itself, and in his video's heartbreaking conclusion he mentions the work presented here. tom7 was working on proving the same thing!
And he tried to disprove the general conjecture that every convex polyhedron has the Rupert property by proving that the snub cube [1] doesn't have it. The snub cube is an Archimedean solid and a much more "natural" shape than the Noperthedron, which was specifically constructed for the proof. (It might even be the "simplest" convex polyhedron without the property?)
So if he proves that the snub cube doesn't have the Rupert property, he could still be the first to prove that not all Archimedean solids have it.
Wouldn’t this problem be related to the problem of determining whether two shapes collide in 3D space? That's probably one of the most studied problems in geometry, since simulations and games must compute it as fast as possible for many shapes.
A test for this one is a bit simpler, I think, because you just have to find two 2D projections of the shape, from different orientations, such that one fits inside the other. You don't technically have to do any 3D comparisons beyond the projections.
It's pretty easy to brute force most shapes to prove the property true. The challenge is proving that a shape does not have the Rupert property, or that it does when it's a very specific and tight fit. You can't test an infinite number of possibilities.
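For the curious, here's a minimal C sketch of that 2D containment check, which is the core of a brute-force search: project both copies of the shape to shadows, take their convex hulls in counter-clockwise order, and test whether every vertex of one lies strictly inside the other. The names and the eps margin are my own invention, not anything from the paper.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { double x, y; } vec2;

/* z-component of the cross product (b - a) x (p - a):
   positive when p lies strictly to the left of the directed edge a->b. */
static double cross2(vec2 a, vec2 b, vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* True if every vertex of `inner` lies strictly inside the convex,
   counter-clockwise polygon `outer`; `eps` keeps roundoff from
   declaring a razor-thin fit a success. */
static bool shadow_fits(const vec2 *inner, size_t ni,
                        const vec2 *outer, size_t no, double eps) {
    for (size_t i = 0; i < no; i++) {
        vec2 a = outer[i], b = outer[(i + 1) % no];
        for (size_t j = 0; j < ni; j++)
            if (cross2(a, b, inner[j]) <= eps)
                return false;  /* on or outside this edge's half-plane */
    }
    return true;
}
```

A search then just samples pairs of orientations, projects, hulls, and calls shadow_fits. A true result is a certificate; exhausting your samples proves nothing, which is exactly the asymmetry described above.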
The web is open and is famously very competitive. We have three whole browser engines, and only two of them are implemented by for-profit corporations whose valuations have 13 digits. I mean, other ones exist, but the average modern developer claims it's your fault when something doesn't work because you use Firefox or Safari, and also demands the browser rewrap all the capabilities the operating system already provides for you because they can't be assed to do the work of meeting users where they are.
I'm not sure what the number of people in the world has to do with whether an open standard does or doesn't promote innovation. The user asked for a case where an open standard didn't do that and I provided one. Whether you think it's a great counterpoint is entirely irrelevant to me.
The presumption that started this thread is that open standards are always good for competition. I think browsers are a good counterexample: open standards led to three browser vendors, and we have less competition rather than more.
Without open standards, we would need to pick a single browser and build for it.
If we needed to support another browser we'd need to provide a new solution built to its specification.
Open standards have allowed the possibility of multiple browser vendors, without making the life of browser consumers (i.e. developers and organisations providing apps and sites) a living hell.
Without this, we'd be providing apps and sites for a proprietary system (e.g. Macromedia Flash back in ancient history).
Furthermore, once Flash had cornered its market, it had absolutely no competition at all: a complete monopoly on that segment.
It took Steve Jobs and Apple to destroy it, but that's a different story.
--
The reason there are only three engines isn't the fault of open standards.
There are many elements of our economic system that prevent competition; open standards are not one of them.
Browser engines are extremely difficult to start today because of the extensive, complicated, and ever growing list of specifications.
We had a web before open standards. It wasn't the best user experience and each browser was somewhat of a walled garden, but there was heavy competition in the space.
I imagine there's a subset of the population who believe that open standards are conceptually aligned with regulation, and that any form of regulation in a free market is wrong.
This subset of the population is misguided at best, and delusional at worst.
My original example wasn't actually the browser question: auto manufacturers showed much higher levels of competition before standards and shared components.
Though it is worth noting that there was heavy competition in the browser space prior to the specs we have today. Part of the reason we ended up with a heavily spec-driven web is precisely that the high level of competition was leading to claims of corporate espionage, and it was expected that the end user experience would be better with standards.
I absolutely agree the end user experience is better. I disagree that has anything to do with competition.
I'm curious if anybody has used this for their own systems and if the savings were substantial. Fedora used something seemingly equivalent (deltarpms) by default in dnf until last year[1] and the rationale for removing it seemed to be based at least in part on the idea that the savings were not substantial enough.
Well, like everything else: is bandwidth your primary problem, or is CPU? Whenever I run apt now, download time is nearly nil but installation time is forever. Increasing installation time (and complexity, and disk space and cache pressure on the mirrors) to save some download time is unlikely to be a good tradeoff. Of course, if you are stuck with severely metered 2G somewhere in the woods, you may very well think differently. :-)
Similarly, I've turned off pdiffs everywhere; it just isn't worth it.
It plays well with the Debian reproducible builds stuff to weed out as much non-essential variation as possible.
For certain packages, I'm guessing the byte savings could approach 100%. Programs are already encouraged to ship `foo` (potentially arch-dependent) and `foo-data`, but imagine updating "just one font" in a blob of fonts and not having to re-download _all_ the other fonts in the package.
For some interpreted-language packages, these deltas would be nearly as efficient as `git` diffs: `M somefile.js`, `A new-file.js`, and just modify the build timestamps on the rest...
The answer to your question should be relatively straightforward to get: just run it on a default major-version upgrade and see how many MB of files have the same `md5` between releases.
It could have really nice observability properties if the delta is transparent and you can see what is flowing by. In this regard, the space savings would be a nice side effect.
I've used this; I think it depends on the speed of the connection to your Debian mirror. It and the apt metadata diffs definitely helped when I had slower Internet.
IIRC Google does something similar for Chrome browser updates.
If the ignition and door locks in your vehicle were mistakenly designed in such a way that they could be trivially shimmed or operated by any key, it seems absurd to suggest the customer should pay you to replace these mechanisms with ones that are properly secured. This seems roughly analogous to that situation, at least to my understanding.
The story has a bad spin, yes. But it would be just as much of a controversy if they had required people to pay the cost themselves upon finding out the cars were shipped with defective brakes. It's a product error, not wear and tear or user error; they should eat the costs, but the cybersecurity framing is being used to attempt to push the cost onto the consumer.
> If the output was truncated due to this limit, then the return value is the number of characters (excluding the terminating null byte) which would have been written to the final string if enough space had been available.
The initial call with size 0 tells you the necessary length of the buffer for the string you want, but does not include the null byte.
For clarity, all snprintf calls "return the number of bytes that would be written to s had n been sufficiently large excluding the terminating null byte" [1].
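In practice that return value enables the common measure-then-allocate pattern. A minimal sketch (my own example, not taken from the linked man page):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *dir = "/tmp";
    int id = 42;
    /* First call with size 0 only measures; return excludes the NUL. */
    int n = snprintf(NULL, 0, "%s/%d", dir, id);
    if (n < 0)
        return 1;                         /* encoding error */
    char *path = malloc((size_t)n + 1);   /* +1 for the terminating NUL */
    if (path == NULL)
        return 1;
    snprintf(path, (size_t)n + 1, "%s/%d", dir, id);
    printf("%s\n", path);                 /* prints "/tmp/42" */
    free(path);
    return 0;
}
```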
I’d argue this is one of the cursed design choices of the standard library.
Way too easy to use the returned value as the “actual length” of the written string. Sure, that was never the intent, but still…
The standard library also makes appending a formatted string to an existing one surprisingly nontrivial…
What should be a 1-liner is about 5-10 lines of code (to include error handling) and is somewhat hard to read. The “cognitive load” for basic operations shouldn’t be high…
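To illustrate the point, here's roughly what that append ends up looking like. This is a sketch assuming a heap-allocated buffer; the helper name and error convention are made up for the example:

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Appends printf-style text to *buf (heap-allocated, current length *len).
   Returns 0 on success, -1 on formatting or allocation failure. */
static int append_fmt(char **buf, size_t *len, const char *fmt, ...) {
    va_list ap;

    va_start(ap, fmt);
    int need = vsnprintf(NULL, 0, fmt, ap);  /* measure, excluding NUL */
    va_end(ap);
    if (need < 0)
        return -1;

    char *p = realloc(*buf, *len + (size_t)need + 1);
    if (p == NULL)
        return -1;

    va_start(ap, fmt);
    vsnprintf(p + *len, (size_t)need + 1, fmt, ap);
    va_end(ap);

    *buf = p;
    *len += (size_t)need;
    return 0;
}
```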
That reminds me of an article where the author argued that one of the most disastrous design choices in all of programming was the NULL-terminated string. It's telling that no other language since C really does that.
No, there are no protocols intended to implement such a thing at this time. I'm not aware of anybody attempting to spec out such a protocol either, but I do think it's a really interesting idea.
There is about zero chance that Gnome would implement such a protocol, which makes the whole endeavor pointless. They can't even agree on server-side window decorations.
If you take it as an axiom that the licensing system for mental health professionals is there to protect patients from unqualified help posing as qualified help, then ensuring that only licensed professionals can legally practice and that they don't simply delegate their jobs to LLMs seems pretty reasonable.
Whether you want to question that axiom or whether that's what the phrasing of this legislation accomplishes is up to you to decide for yourself. Personally I think the phrasing is pretty straightforward in terms of accomplishing that goal.
Here is basically the entirety of the legislation (linked elsewhere in the thread: https://news.ycombinator.com/item?id=44893999). The whole thing with definitions and penalties is eight pages.
Section 15. Permitted use of artificial intelligence.
(a) As used in this Section, "permitted use of artificial
intelligence" means the use of artificial intelligence tools
or systems by a licensed professional to assist in providing
administrative support or supplementary support in therapy or
psychotherapy services where the licensed professional
maintains full responsibility for all interactions, outputs,
and data use associated with the system and satisfies the
requirements of subsection (b).
(b) No licensed professional shall be permitted to use
artificial intelligence to assist in providing supplementary
support in therapy or psychotherapy where the client's
therapeutic session is recorded or transcribed unless:
(1) the patient or the patient's legally authorized
representative is informed in writing of the following:
(A) that artificial intelligence will be used; and
(B) the specific purpose of the artificial
intelligence tool or system that will be used; and
(2) the patient or the patient's legally authorized
representative provides consent to the use of artificial
intelligence.
Section 20. Prohibition on unauthorized therapy services.
(a) An individual, corporation, or entity may not provide,
advertise, or otherwise offer therapy or psychotherapy
services, including through the use of Internet-based
artificial intelligence, to the public in this State unless
the therapy or psychotherapy services are conducted by an
individual who is a licensed professional.
(b) A licensed professional may use artificial
intelligence only to the extent the use meets the requirements
of Section 15. A licensed professional may not allow
artificial intelligence to do any of the following:
(1) make independent therapeutic decisions;
(2) directly interact with clients in any form of
therapeutic communication;
(3) generate therapeutic recommendations or treatment
plans without review and approval by the licensed
professional; or
(4) detect emotions or mental states.
I’m not sure why this is a “proposal” for other string-to-int parsers rather than a function the author wrote themselves. It seems rather trivial to implement on top of something like strtol (or whatever your language’s equivalent is).
You could say that for almost all of most language's standard libraries.
Imagine you had a standard-library string-to-integer parser that didn't know about minus signs. Sure, you could write your own function that wrapped the parser to allow for negative numbers, but wouldn't it be better if the standard library one did it?
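For concreteness, here's the kind of wrapper that's easy to hand-roll on top of strtol. I'm assuming the proposal in question is scientific-notation support for integers (going by the sibling comment); the function name and error convention are invented for illustration:

```c
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/* Parses "123", "-4e2", etc. into *out. Returns false on syntax error
   or overflow; the exponent must be non-negative so the result stays
   integral. */
static bool parse_int_sci(const char *s, long *out) {
    char *end;

    errno = 0;
    long value = strtol(s, &end, 10);
    if (end == s || errno == ERANGE)
        return false;

    if (*end == 'e' || *end == 'E') {
        const char *exp_start = end + 1;
        errno = 0;
        long exp = strtol(exp_start, &end, 10);
        if (end == exp_start || errno == ERANGE || exp < 0)
            return false;
        while (exp-- > 0) {
            if (value > LONG_MAX / 10 || value < LONG_MIN / 10)
                return false;          /* scaling would overflow */
            value *= 10;
        }
    }

    if (*end != '\0')
        return false;                  /* trailing junk */
    *out = value;
    return true;
}
```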
I take your general point with the caveat that no negatives leaves half of all values for a given integer type unyieldable whereas lack of scientific notation support does not.
I was operating under the unfounded assumption that the blog post existed instead of code that does the thing for your particular use case, rather than in addition to it, which isn't entirely fair given we've had no prior interactions and I haven't investigated your work at all.