
WebPKI is a maze of fiefdoms controlled by a small group of power-tripping little Napoleons.

Certificate trust really should be centralized at the OS level (like it used to be), not every browser shipping its own, incompatible trusted roots. It's arrogance at its worst and it helps nobody.


> (like it used to be)

When are you imagining this "used to be" true? This technology was invented about thirty years ago by Netscape, which no longer exists but in effect continues as Mozilla. They didn't write an operating system then and don't now, so it's hard to see how this was ever "centralized at the OS level".


It was true for at least Chrome until around 2020: Chrome used to ship without any trusted CA list of its own and defaulted to the OS for that.

Firefox has its own trusted list, but as far as I know it still honors administrator-installed OS CA certificates by default (the security.enterprise_roots.enabled preference), though importantly not OS-provided ones.


Ah, so you're referring to Chrome, which shipped Chrome 1.0 at the tail end of 2008, and was in effect exercising normal root programme behaviour by the time I was on the scene in 2015 (but presumably also before).

Here's an example of them doing just that:

https://security.googleblog.com/2017/09/chromes-plan-to-dist...

I think this is just a lack of perspective on your part.


So your presumptions (no, Chrome did not have its own root program in 2015 or before; they announced [1] one in 2020, as I wrote, and blocking individual CAs on a case-by-case basis does not quite make a full root program) and your unwillingness to spend one minute fact-checking them are a lack of perspective on my side?

My point was that Chrome (arguably not an insignificant browser) did use the OS's root CA lists for 12 years (or 7, if you really want to count individual CA bans as a program; arguably neither is an insignificant time span).

[1] https://groups.google.com/g/mozilla.dev.security.policy/c/3Q...


I think this is, at best, a lack of understanding of what's really going on, and at worst just obstinacy.

As Ryan explains, this was basically the same thing with new paint on it. It reminds me of the UK's "Supreme Court"; let me digress briefly to explain:

On paper, the UK historically didn't have an independent final court of appeal: significant appeals of the law would end up at the Lords of Appeal in Ordinary ("Law Lords" for short), who were actual Lords, in principle unelected legislators from the UK's upper chamber. Hundreds of years ago they really were just lords, not really judges at all. The US, in contrast, notionally has a final court of appeal completely independent from the rest of the US government; much better. Except in reality the Law Lords were completely independent, chosen from among the country's judges by an independent hiring process, while the US Supreme Court justices are just appointed hacks for the President's party, not especially competent or effective jurists but loyal to party values.

So in 2009 a fresh coat of paint gave the UK a "Supreme Court": the same factually independent jurists, who had previously met in some committee room in the Palace of Westminster, were sent to work in a newly designated building as an independent final court of appeal, and were no longer required to notionally be Lords (although in practice they are all still granted the title "Lord"). The thin appearance changed to match the ideals the Americans never met; the reality was exactly the same as before.

And that's what Ryan is writing about in that post. In theory there was now a Chrome root programme; in practice there already was one. In principle you now needed a sign-off from Ryan's team; in practice you already did. In theory you're now discussing inclusion on m.d.s.policy because you want Chrome trust programme approval; in practice that was already a big part of why you were there.

When Chrome 1.0 shipped, it was important to deliver compatibility to gain market share. Soon it didn't matter; they could and did choose to depart from that compatibility to distinguish Chrome as better than the alternatives.

7 years for one browser, compared to 30 years for all browsers, does, I'm afraid, seem a long way from "like it used to be" and much more like "how I wrongly thought it should work".


Why should it be centralized at the OS level?

HTTPS certificate trust is basically the last thing I think about when I choose an OS. (And for certain OSes I use, I actively don't trust their authors/owners.)


It is genuinely weird to think Microsoft should get a veto over your browser if that browser stops trusting a CA, right?


The only thing that is genuinely weird is having four different certificate stores on a system, each with different trusted roots, because the cabals of man-children that control the WebPKI can't set aside their petty disagreements and reach consensus on anything.

Which makes sense, because that would require them all to relinquish some power to their little corner of the Internet, which they are all unwilling to do.

This fuckery started with Google, dissatisfied with not having total control over the entire Internet, deciding they're going to rewrite the book for certificate trust in Chrome only (turns out after having captured the majority browser market share and having a de-facto monopoly, you can do whatever you want).

I don't blame Mozilla for having their own roots, because that is probably just incompetence on their part. It's more likely they traded figuring out how to interface with OS crypto APIs for upkeep on 30-year-old Netscape cruft. Anyone who has had to maintain large-scale deployments of Firefox understands this lament and knows what a pain in the ass it is.


that’s not what he meant, and you know it. he means use the OS store (the one the user has control over) instead of having each app do its own thing (where the user may or may not have control, and even if he does have it, now has to tweak settings in a dozen places instead of one). they try to pull the same mess with DNS (e.g. Mozilla’s DoH implementation)


I don't understand, because the user has control over the browser store too.

(As an erstwhile pentester, btw, fuck the OS certificate store; makes testing sites a colossal pain).


> I don't understand, because the user has control over the browser store too.

i already mentioned that ("may or may not"). either way, per-app CA management is an abomination from both security and administrative perspectives. from the security perspective, abandonware (i.e. months-old software, at the rate things change in this business) will become effectively "bricked" by out-of-date CAs and out-of-date revocation lists, forcing users to either migrate (more $$$), roll with broken TLS, or bypass it entirely (more likely); from the administrative perspective, IT admins and devops guys have to wrangle each application individually. it raises the hurdle from "keep your OS up-to-date" to "keep all of your applications up-to-date".

> As an erstwhile pentester

exactly. you're trying to get in. per-app config makes your life easier. as an erstwhile server-herder, i prefer the os store, which makes it easier for me to ensure everything is up-to-date, manage which 3rd-party CAs i trust & which i don't, and cut 3rd-parties out-of-the-loop entirely for in-house-only applications (protected by my own CA).


It's baffling to me that anyone would expect browsers to make root store decisions optimized for server-herders. You're not their userbase!


neither are pentesters


Right, I don't think the pentester use case here is at all dispositive; in fact, it's approximately as meaningful as the server-herders.


> (As an erstwhile pentester, btw, fuck the OS certificate store; makes testing sites a colossal pain)

Can you please explain? I'm just curious, not arguing.


It's a good question! When you're testing websites, you've generally got a browser set up to trust a fake root cert, so an intercepting proxy can man-in-the-middle TLS for you. In that situation, you want one of your browsers to have a different configuration than your daily driver.


Unfortunately, OS vendors like Microsoft are quite incompetent at running root stores: https://github.com/golang/go/issues/65085#issuecomment-25699...


> It would certainly send a strong message, and cause a lot of chaos, if the biggest CA on the Internet found itself being removed from trust stores

Their customers would be forced to give their business to Honest Achmed[1].

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=647959


Thank you, that was an awesome laugh this morning; that request was genius:

> 2. Sub CAs Operated by 3rd Parties

> Honest Achmed's uncles may invite some of their friends to issue certificates as well, in particular their cousins Refik and Abdi or "RA" as they're known. Honest Achmed's uncles assure us that their RA can be trusted, apart from that one time when they lent them the keys to the car, but that was a one-off that won't happen again.


Someone should start a real CA business, complete with all the proper audits and everything to get it into the browser trust stores… and call it “Honest Achmed’s Used Cars and Certificates” (have it buy some random used car dealership in the middle of nowhere so the name is not a lie)

Where’s a billionaire troll when you need one? (And a form of billionaire trolling that would upset a lot less people than Musk’s version of it.)

Would be even funnier if the billionaire’s name actually was Achmed


What has prevented this (a good, legit CA company) from happening before now?


Where's the value?

Anyone can get a cert from Let's Encrypt. No users of your website care how trustworthy your CA is. And CAs are too big to fail, so you barely need to worry about your CA being removed from the trust store. So you can only compete on price.

If we want to change this, we need a way for a certificate to be co-signed by multiple CAs, so that when the certificate is presented to the user, they can check whether any CA they trust has signed it. This way, revoking trust in a CA becomes easier, because people would have multiple signatures on their certificate. That means, all of a sudden, that the quality of a CA actually matters.

While it might seem this is already possible, it is not. Cross-signing is only a thing for intermediate certificates, and it does not work well. You can also have multiple certificates for the same key, but when starting a TLS session, you must present a single certificate. (This seems to have changed for TLS 1.3, so perhaps this is already possible?)
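To make that concrete: today an X.509 certificate carries exactly one issuer signature, so the following is purely a hypothetical sketch (in Rust, with made-up types) of what a client-side quorum rule over co-signatures could look like:

    // Hypothetical: real X.509 certificates carry a single issuer
    // signature, so this structure does not exist today.
    struct CoSignedCert {
        subject: String,
        // Names of CAs that (hypothetically) co-signed this certificate.
        ca_signatures: Vec<String>,
    }

    // Accept the cert if at least `quorum` of the signing CAs are
    // in the client's local trust store.
    fn is_trusted(cert: &CoSignedCert, trusted: &[&str], quorum: usize) -> bool {
        cert.ca_signatures
            .iter()
            .filter(|ca| trusted.contains(&ca.as_str()))
            .count()
            >= quorum
    }

    fn main() {
        let cert = CoSignedCert {
            subject: "example.com".into(),
            ca_signatures: vec!["CA-A".into(), "CA-B".into(), "CA-C".into()],
        };
        // Even after the client revokes trust in CA-B, the cert still
        // clears a 2-of-N quorum, so distrusting one CA breaks nothing.
        assert!(is_trusted(&cert, &["CA-A", "CA-C"], 2));
        println!("{} is trusted", cert.subject);
    }

Under a rule like this, dropping a misbehaving CA from the trust store stops hurting sites that bothered to collect more than one signature, which is exactly what would make CA quality matter.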


I think TLS 1.3 still requires the end-entity (server/client) cert first. All the other certs can now be in any order, and verification is supposed to figure out a valid path.

In theory, you could make an intermediate CA and get cross-signed certs for it from multiple CAs (hopefully with Name Constraints); your intermediate CA signs your server cert, and you include all the cross certs and the intermediate certs for those. The client then figures out which chains it can build and whether it likes any of them.

But experience has shown that verification may find a chain whose signatures line up but whose CA is expired or revoked in the local trust store, and reject the verification, even though another valid chain could have been built from the provided certificates.

And, because of the limited information in TLS handshakes from real-world clients, it's difficult (maybe impossible) to know which CAs a client would accept, so the server can't always present an appropriate chain.
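Here's a toy illustration of that failure mode (pure sketch, no real crypto, made-up names): a verifier that takes the first issuer match and never backtracks dies on an expired cross-sign even though the presented bundle also contains a chain it would accept:

    #[derive(Clone)]
    struct Cert {
        subject: &'static str,
        issuer: &'static str,
        expired: bool,
    }

    // Naive path building: follow the FIRST cert whose subject matches
    // the current issuer, and never backtrack to try alternatives.
    fn naive_verify(
        leaf: &Cert,
        bundle: &[Cert],
        roots: &[&str],
    ) -> Result<Vec<&'static str>, String> {
        let mut chain = vec![leaf.subject];
        let mut current = leaf.clone();
        loop {
            if roots.contains(&current.issuer) {
                return Ok(chain); // reached a trusted root
            }
            let next = bundle
                .iter()
                .find(|c| c.subject == current.issuer)
                .ok_or_else(|| format!("no issuer for {}", current.subject))?;
            if next.expired {
                // A smarter verifier would backtrack and try the other
                // cross-signed intermediate; plenty of deployed ones don't.
                return Err(format!("{} via expired cross-sign", next.subject));
            }
            chain.push(next.subject);
            current = next.clone();
        }
    }

    fn main() {
        let leaf = Cert { subject: "example.com", issuer: "My Intermediate", expired: false };
        let bundle = [
            // Cross-sign #1: from a root that has since expired.
            Cert { subject: "My Intermediate", issuer: "Old Root", expired: true },
            // Cross-sign #2: from a currently trusted root.
            Cert { subject: "My Intermediate", issuer: "New Root", expired: false },
        ];
        // Fails, despite a perfectly good path through "New Root".
        println!("{:?}", naive_verify(&leaf, &bundle, &["New Root"]));
    }

(This is roughly the shape of the 2021 Let's Encrypt / DST Root CA X3 expiry mess, where older OpenSSL 1.0.x clients picked the expired path and gave up.)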


The obvious solution is for TLS 1.4 to mandate that all certificate chains contain at least one expired certificate.


Time and money. Plus, right now, even if you bought the infra, hired the staff, and paid for and passed the audits, and then waited while Apple, Mozilla, Google, and Oracle (at least) included your roots... Microsoft aren't taking more right now, so you'd have to wait for some unknown time in the future when/if they start doing that again. You could purchase a root off an existing CA, subject to the trust stores approving it, and the boatload of cash you'd need to buy it (plus you'd still need the staff and infra to operate it).


> Microsoft aren't taking more right now. So you have to wait for some unknown time in the future when/if they start doing that again.

Interesting, I hadn’t heard that, is there anywhere I can read more about it?

> You could purchase a root off an existing CA,

As well as a sub-CA, I remember that in theory you can have two independent CAs with cross-signing… but do browsers actually support that?


"Nothing public to point out to" is probably accurate but it is noted publicly here: https://learn.microsoft.com/en-us/security/trusted-root/new-...


Nothing public to point to, sorry.

Sub-CAs: Not really. The operational risk to the parent CA is huge; you'd be hard-pressed to get any current public CA to sign an issuing CA to be operated externally. Cross-signing still works (though it is the stuff of nightmares in many cases), but again you have to have money and a CA willing to do it!


Entrust is/was doing it with ssl.com after they were detrusted.


No, SSLCorp are hosting and managing a CA with Entrust branding, same as Sectigo are doing. Entrust aren't doing issuance or verification; they're straight reselling from white-labeled issuing CAs.


Let's Encrypt has been around for a decade and is doing pretty well for itself. The manual validation required for EV and OV certificates precludes it from being a passion project, unless managing a call center and a squad of document reviewers is your dream job.


Let's Encrypt doesn't issue EV and OV. Honest Achmed’s Used Cars and Certificates wouldn't have to either.


It wouldn't be fun if Honest Achmed’s Used Cars and Certificates didn't also issue EV and OV certificates. It wouldn't be on brand to miss out on entire segments of the certificate market.


I'm actually curious why that wasn't approved


To my understanding, the point of Honest Achmed was to demonstrate that it was possible to write a facially reasonable and compliant CA inclusion request that was also intuitively unacceptable. It successfully demonstrated that the inclusion policies at the time needed to be made more rigorous, which they were.


Theo has addressed this directly. I cannot find the video at the moment (it is somewhere on YouTube), but his response essentially is: okay, so where is 'cat'? Where is 'grep'? Where is Korn Shell?

Everyone is busy jumping up and down and bitching about reinventing the wheel in Rust but no one has even taken the time to rewrite the simplest of Unix tools in Rust.

Not to mention OpenBSD has a rule that "base builds base", and the Rust compiler is a bloated monster that would fail that most basic test.

So where is the benefit?


>no one has even taken the time to rewrite the simplest of Unix tools in Rust.

"The uutils project reimplements ubiquitous command line utilities in Rust. Our goal is to modernize the utils, while retaining full compatibility with the existing utilities."

https://uutils.github.io/

https://github.com/uutils/coreutils
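For a sense of scale: the happy path of a cat is a dozen lines of Rust (a sketch, nothing more); the hard part of "full compatibility" is the long tail of flags (-n, -E, -u, ...), locale behavior, and edge cases, which is where a decade goes:

    use std::env;
    use std::fs::File;
    use std::io::{self, copy, BufReader};

    // Minimal cat: concatenate the named files (or stdin) to stdout.
    // No flags, no error-message formatting, no POSIX edge cases.
    fn main() -> io::Result<()> {
        let args: Vec<String> = env::args().skip(1).collect();
        let mut out = io::stdout().lock();
        if args.is_empty() {
            copy(&mut io::stdin().lock(), &mut out)?;
        } else {
            for path in &args {
                copy(&mut BufReader::new(File::open(path)?), &mut out)?;
            }
        }
        Ok(())
    }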


"We are planning to replace all essential Linux tools."

It would be nice if they committed to replacing more than just the Linux tools. There are numerous quirks/additions in the GNU utils that the BSDs don't want or need.


The worst part is when you come across something advertised as a replacement and it does something like 80% to 90% of what the original does with a WONTFIX for the rest. That can certainly be a valid choice in some cases, but for core tooling it's not realistic to expect widespread replacement to happen in that scenario.


lol? These have been rewritten several times by various people, it's almost a meme at this point to make "x utility but in Rust".


> so where is 'cat'?

https://github.com/sharkdp/bat (Haven't used this one, but it's pretty popular.)

> Where is 'grep'?

https://github.com/BurntSushi/ripgrep I use this one often. It's fast af at searching a directory of source code.

> Where is Korn Shell?

https://fishshell.com/blog/fish-4b/ Fish is now entirely in Rust, very popular, and to be frank basically a step above bash or ksh.


None of these is a 1:1 replacement.


Are they POSIX-compliant? (Hint: no)


Fair. I'm not an OS dev, so I don't really know what POSIX compliance with cat or whatever gets you.

All I know is that I'm increasingly replacing classic unix cli tools with rust ones that are just better and faster.


Here is the other part: for the BSDs, it's not as simple as "someone implements them, then we include them". You can package them in the ports trees, but they won't be a part of the base system. While BSD and Linux are similar in a lot of ways, they differ in a lot too. The BSDs are designed around the idea that each BSD makes a complete "base system": the same team writing the kernel code is the same team writing the userland, so each BSD's core utils are developed and maintained by its respective development team. It is not like Linux, which is really more like smashing together different projects from different teams, at least for what is considered the "base" system.

Then, as mentioned elsewhere, this isn't as big of a problem on OpenBSD, but it would be for NetBSD: the Rust toolchain doesn't support all of the architectures NetBSD supports. This is where BSD philosophy diverges. With NetBSD, if you've got a PDP-11 or a toaster with a chip in it, they are more than happy to make NetBSD run on it, and the NetBSD team doesn't necessarily require physical hardware; if there is an esoteric chip with QEMU support, they will happily try to support it. OpenBSD will maintain support for an architecture only so long as someone is willing to maintain it and owns the physical hardware (which is why it supports fewer architectures than NetBSD).

This is also why NetBSD is sort of "stuck on" gcc. I believe they would like to move to clang, but can't due to architecture support.

One more addition to the first paragraph: OpenBSD takes this to a whole other level compared to the other BSDs. OpenBSD maintains its own fork of X11, called Xenocara, and its own window manager, cwm. In theory, you can have a pretty basic and functional system from boot code to window manager with all of that code maintained by the same team, the OpenBSD developers. They even have their own version control system, called got.


Thanks for this in-depth response. It's made me realise that I'm very much in the Linux "smashing together different projects" camp. Probably much more so than other Linux users, seeing as I prefer super-minimalist distros like Alpine plus my favourite utils, rather than the standard GNU/Linux.


No problem. Like I said, it's easy to think that, since they are both unixy systems and a lot of things "rhyme", a solution that works for one works for the other, but they in fact have different design and development philosophies. And this is even true amongst the BSDs themselves: NetBSD, from boot code to the top of the stack, is developed independently from FreeBSD, OpenBSD, and DragonFly, and vice versa. They will take ideas and re-implement them in their own stacks, but they don't necessarily share code directly.


https://github.com/uutils/coreutils

The parent wasn't about Rust specifically, just something safer than C.


> uutils

Under development for longer than a decade and still unstable


“put up or shut up” is a valid response.

Someone is “putting up”; we just need someone to merge uutils with the OpenBSD kernel to see what it starts to look like.

Maybe that's the next part of the “put up or shut up” mantra, but we're getting closer.

The parent's irony is not lost, though. C and Perl are both quite dangerous in their own ways, with lots of implicit assumptions; it's ironic that a safety-focused operating system leans on those languages.


The website says "production ready" for their coreutils.

Maybe catching up to 40+ years of development takes a little bit of time?


> Maybe catching up to 40+ years of development takes a little bit of time?

Sure. But that's not OpenBSD's problem, is it?


Which is the point. 40 years of development is 40 years of development.


It will not be Rust, since this has not happened after so many years of Rust existing. It will be some other language.


OpenBSD is thoughtfully designed because it is one of the best examples of "design by dictator" (Theo, plus a small core team), as opposed to design by committee like every other OS out there. Look me in the eye and tell me 90% of the changes and unnecessary features in macOS aren't there because some team needs to justify its existence.


What features in macOS are you referring to?


I'm not OP, but renaming IOMasterPort to IOMainPort for the sake of renaming alone drove home what a bunch of backwards-incompatible clowns Apple are.


Worth mentioning that the lack of Bluetooth is only because they felt the existing BT stack was not up to their standards and ripped it out, rather than letting it rot the way most software does.


There are a grand total of zero valid reasons for not including Bluetooth in a desktop OS.


It's pretty easy to avoid Bluetooth, and it's a complex stack; having code-quality standards means sometimes you have to remove features because the quality isn't there and nobody has the time/interest/motivation to do the work to produce an implementation of the proper quality.

If you have a "must-have" Bluetooth device for your desktop environment, then yes, that makes OpenBSD unviable for you; but OpenBSD isn't viable for every use case.


> isn't viable for every use case

Yes, and desktop, especially laptop, is an example.


Sounds easy enough to buy one of those Bluetooth dongle things that can talk to your external mouse/keyboard and pretend to be a set of wired USB HID devices, to solve that small issue.


I’d prefer not to have something at all than to have a bad something.

Yeah, it was annoying when I tried to pair my mouse, but you know… a wired mouse isn’t that big of a deal.

One of the things that brings me the most displeasure about internet discourse on operating systems is this idea that they all have to do all the same things.

That’s homogeneity by another name; the point of different operating systems is different trade-offs.


Sure, and OpenBSD has traded away being a desktop OS for not tainting their code with the Bluetooth stack.


If we're going to discuss in bad faith, as you seem to be: should I remind you that your definition of "being a desktop OS" means running a stack that is primarily useful for phones and laptops, definitively not "desktop" devices?


I haven't used a Bluetooth device on a desktop or laptop in decades now. Not because I'm using OpenBSD, but because, while the promise is there, the reality of using Bluetooth has been so disappointing that it's not even worth trying for me anymore. Personally, I'm not opposed to wires, because wires usually mean low latency and no dropped connections; but even when using things like wireless mice, using them in proprietary modes was so much better than Bluetooth that after a couple of attempts, I stopped trying.

You've clearly had a different experience with Bluetooth, and that's good for you, and neither of our experiences is universal, but I think there are plenty of people willing to use a desktop OS without Bluetooth.

Heck, my new car only uses Bluetooth to do phone pairing, then switches to WiFi to talk to phones, because that's clearly better than Bluetooth.


Not having developers to work on it seems pretty valid. It's a matter of opinion, but I feel it's better to have no Bluetooth than a half-broken and unsupported implementation. Then again, you could also view it as: semi-functional Bluetooth is better than none, and it might attract developers wanting to fix it.


I can't recall having needed Bluetooth for anything but audio[1] on my laptops, so there is a huge YMMV.

[1] For which there is an easy workaround in the form of class-compliant USB audio cards that output to Bluetooth.


Then make it. Are you waiting for someone else to do the work?

