I've always been a bit wary of a local trusted CA, especially with the signing cert kept on the same dev box as the certificates it signs, which is how I've often seen it done. It feels like opening up a trust issue that could let an uncooperative entity play games with me… Maybe that is just paranoia from the practical jokes played back in CompSci at Uni!
Admittedly, an external attacker getting close enough to sign a cert with such a CA in order to trick me into something probably already has such high access that they don't need the CA to do that or worse, so perhaps it is unnecessary caution.
The way I handle this in the dev tooling I put together is to run a totally separate browser profile that (1) trusts the certificate and (2) can only connect to localhost. It also launches with a totally different colour scheme.
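Something along these lines — a sketch, not my exact script; the binary name, profile directory, colour values, and excluded font domains are all placeholders:

```
# Launch an isolated Chrome instance for local dev work:
#  --user-data-dir gives a fresh, separate profile (no saved passwords etc.)
#  --install-autogenerated-theme tints the UI with the given R,G,B values
#  --host-resolver-rules resolves every hostname to 127.0.0.1, except the
#    listed font domains
google-chrome \
  --user-data-dir="$HOME/.config/chrome-localhost" \
  --install-autogenerated-theme='255,51,51' \
  --host-resolver-rules='MAP * 127.0.0.1, EXCLUDE fonts.googleapis.com, EXCLUDE fonts.gstatic.com'
```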
This will launch a separate copy of Chrome with a fresh, separate profile. It will have a different colour (the RGB values set in there), so it's visually distinct when it's running and I don't get the windows mixed up. Any request made from the browser will be rewritten to connect to 127.0.0.1 (except for a couple of Google font domains).
The danger if someone got their hands on my local key/cert is basically nil: they would only be able to MITM connections from this one specific browser window to localhost, and that browser is incapable of connecting to anything besides localhost. I can never accidentally open my banking site in there. It's also a fresh profile, so there are no saved passwords, credit cards, or anything else.
(As an added benefit, I don't really need to worry about reconfiguring URLs for projects. If I open "testing.mysite.com" in that browser, it forces the connection to localhost, so I can just run my services at our test URLs and steal configs as-is from the testing environment. Taking it further, I have a controller set up in k3s/Rancher Desktop that rewrites the service on every annotated ingress to point at its own service, which runs nginx; the controller then configures nginx to proxy requests on to either the local service or the actual upstream testing service, depending on whether the local container is running. It also configures CoreDNS to point the upstream URLs at the same proxy. The end result is that, from the browser or anything running in k3s, you can hit our testing URLs and reach your local container if it's running, or fall back to the testing environment if not.)
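Very roughly, and with names invented purely for illustration (the annotation key, ingress name, and service address below are hypothetical; the CoreDNS `rewrite` directive is real plugin syntax):

```
# Opt an ingress into the proxy controller (annotation key is hypothetical):
kubectl annotate ingress testing-mysite local-proxy/intercept=true

# Conceptually, the controller then adds a CoreDNS rewrite so in-cluster
# traffic to the testing URL also lands on its nginx proxy, e.g.:
#   rewrite name testing.mysite.com local-proxy.default.svc.cluster.local
```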
If you want to try the browser thing, you can generate the fingerprint for a certificate with:
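Something like this works (`cert.pem` is a placeholder; it prints the base64-encoded SHA-256 hash of the cert's public key, which is the format Chrome's `--ignore-certificate-errors-spki-list` flag expects):

```
# Extract the public key, DER-encode it, hash it, base64-encode it:
openssl x509 -in cert.pem -noout -pubkey \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64
```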
I figure you could create the CA, have your browser trust it, create and sign your localhost cert, and then nuke the CA private key so no other certs can ever be signed.
It'd be annoying if you need to make a new localhost certificate, but totally manageable.
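Something like this, sketched with openssl — filenames, subject names, and lifetimes are placeholders, and step 2 (trusting ca.crt) depends on your OS and browser:

```
# 1. Create a throwaway CA (key + self-signed cert).
openssl req -x509 -newkey rsa:2048 -sha256 -days 825 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Local Dev CA"

# 2. Import ca.crt into your browser's / OS's trust store.

# 3. Create a localhost key + CSR and sign it with the CA,
#    including the SAN that modern browsers require.
openssl req -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.csr -subj "/CN=localhost"
openssl x509 -req -in localhost.csr -sha256 -days 825 \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out localhost.crt \
  -extfile <(printf "subjectAltName=DNS:localhost,IP:127.0.0.1")

# 4. Nuke the CA key so nothing else can ever be signed with it.
shred -u ca.key   # or just: rm ca.key
```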
I think this is primarily FUD spread by SSL cert companies.
If you have appropriate permissions on the private keys, it would require the same level of access to read the private key as it would for the attacker to create their own CA and install it on your PC.
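("Appropriate permissions" here just means owner-only access on the key file, e.g.:)

```
chmod 600 ca.key   # only the owning user can read or write the key
```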
My general rule of thumb is to use private certificates unless a) users interact with it directly, cuz they won't install my cert, or b) financial or other highly sensitive data flows through it. I'm not convinced that commercial CAs are more secure, but for the price of an SSL cert, it's worth it to have it not be my fault if something happens.
Then point your server at the output files. If you want, you can also modify `/etc/hosts` to point a "production name" to localhost (something I actually don't do and never wrote a script for). Far fewer moving parts than the OP. (Parts of Simpatico use subtle.crypto and so require https to run even locally.)
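(If you do want the hosts entry, it's a one-liner; the hostname is a placeholder:)

```
# Point a "production name" at localhost:
echo '127.0.0.1 www.example-product.test' | sudo tee -a /etc/hosts
```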