> We did have three bugs that would have been prevented by the borrow checker, but these were caught by our fuzzers and online verification. We run a fuzzing fleet of 1,000 dedicated CPU cores 24/7.
Remember, people: 10,000 CPU hours of fuzzing can save you 5ms of borrow checking!
(I’m joking, I’m joking, Zig and Rust are both great languages, fuzzing does more than just borrow checking, and I do think TigerBeetle’s choices make sense, I just couldn’t help noticing the irony of those two sentences.)
It's not that ironic, though: the number of bugs that were squashed by fuzzers & asserts but would have dodged the borrow checker is much, much larger.
This is what makes the TigerBeetle context somewhat special: in many scenarios, the security provided by memory safety is good enough, and any residual correctness bugs/panics are not a big deal. For us, we need to go the extra N miles to catch the rest of the bugs as well, and DST (deterministic simulation testing) is a much finer net for those fish (given static allocation & a single-threaded design).
I don't think needing to go "the extra N miles" is that special. Even if security is the only correctness concern - and in lots of cases it isn't, and (some) bugs are a very big deal - memory safety covers only a small portion of the top weaknesses [1].
Mathematically speaking, any simple (i.e. non-dependent) type system catches 0% of possible bugs :) That's not to say it can't be very useful, but it doesn't spare you much testing or other assurance work.
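To make that concrete, here is a minimal Rust sketch (hypothetical, not TigerBeetle code): the function is type-correct and memory-safe, so the borrow checker and the type system have nothing to object to, yet it violates a basic invariant that a fuzzer driving an assert (or DST) would trip over almost immediately.

```rust
#[derive(Debug)]
struct Account {
    balance: u64,
}

/// Move `amount` from `from` to `to`.
fn transfer(from: &mut Account, to: &mut Account, amount: u64) {
    let total_before = from.balance + to.balance;

    from.balance -= amount;   // no check for insufficient funds (underflow panics only in debug builds)
    to.balance += amount + 1; // logic bug: credits one unit too many

    // The kind of invariant fuzzing + asserts catches: transfers must conserve money.
    assert_eq!(
        from.balance + to.balance,
        total_before,
        "money was created or destroyed"
    );
}

fn main() {
    let mut alice = Account { balance: 100 };
    let mut bob = Account { balance: 0 };
    transfer(&mut alice, &mut bob, 10); // panics: the assert exposes the logic bug
}
```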
Your post reminded me how I could tell my online friend was pissed just because she typed "okay." or "K." instead of "okay". We could sense each other's emotional state from texting. It was one of those friendships you form over text through the internet. I wouldn't recommend forming these too deeply, since some in-person nuance is lost; we could never transition to real-life friends despite living close by. But we could tell what mood the other was in just from typing. It was wild.
Sure, a different database schema may have helped, but there are going to be bugs either way. In my view a more productive approach is to think about how to limit the blast radius when things inevitably do go wrong.
I love it when people do that, because they always say "I will push the fix to git later". They never do, and when we deploy a version from git, things break. Good times.
I started packing things into docker containers because of that. Makes it a bit more of a hassle to change things in production.
Depends on the org. At the big ones I've worked for, regular devs, even seniors, don't have anything like the level of access needed to pull a stunt like that.
At the largest place I did have prod creds for everything, because sometimes they are necessary and I had the seniority (sometimes you do need them in an "oh crap" scenario).
They were all set up on a second account on my work Mac, which had a "Danger, Will Robinson" wallpaper, because I know myself: it's far, far too easy to mentally fat-finger when you have two sets of creds.
Had a coworker have to drive across the country once to hit a power button (many years ago).
Because my suggestion that they have a spare ADSL connection for out-of-band access was deemed an unnecessary expense... until he broke the firewall, knocked a bunch of folks offline across a huge physical site, and locked himself out of everything.
If your remote is set to a git@github.com remote, it won't work. They're just pointing out that you could use git to set origin (or whatever your remote is named) to a different SSH-capable server, and push/pull through that.
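For example (the hostname and repo path below are placeholders, not anything from the thread):

```sh
# Repoint the existing remote at your own SSH-capable server:
git remote set-url origin git@git.example.com:team/project.git
git push origin main

# Or keep GitHub as "origin" and add a second remote as a fallback:
git remote add fallback git@git.example.com:team/project.git
git push fallback main
```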
On the flip side, you then have to maintain instances of everything.
For most of what I run these days, I'd rather just have someone else run and administer my database. Same with my load balancers. And my Kubernetes cluster. I don't really care if there is an outage every 2 years.
How about, don't use Kubernetes? The lack of control over where the workload runs is a problem caused by Kubernetes. If you deploy an application as e.g. systemd services, you can pick the optimal host for the workload, and it will not suddenly jump around.
> The lack of control over where the workload runs is a problem caused by Kubernetes.
Fine-grained control over workload scheduling is one of the K8s core features?
Affinity, anti-affinity, priority classes, node selectors, scheduling gates - all of which affect scheduling for different use cases, and all under the operator's control.
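A minimal sketch of what that looks like in practice, e.g. pinning a Pod to one availability zone with required node affinity (the Pod name, image, and zone are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-app        # placeholder name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - eu-west-1a # placeholder zone
  containers:
    - name: app
      image: nginx:1.25        # placeholder image
```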
The nice thing about this solution is that it's not limited to RDS. I used RDS as an example because many are familiar with it and know that it can change AZ during maintenance events.
Any hostname for a service in AWS that can relocate to another AZ (for whatever reason) can use this.
Agree, Kubernetes isn't for everyone. This solution came from a specific issue with a client who had intermittent performance problems when a Pod was placed in the "incorrect" AZ. So this solution was created to place Pods in the optimal zone when they are created.
Sure, but there are scenarios and architectures where you do want the workload to jump around, just within a subset of hosts matching certain criteria. Kubernetes does solve that problem.
> UK's Online Safety Act 2023 would require us to do a prohibitively complicated risk assessment for our service. We're talking reading through thousands of pages of legal guidelines.
> We're a volunteer operation and would likely be held responsible as individuals. There is talk of fines up to 18 million GBP which would ruin any single one of us, should they get creative about how to actually enforce this.
> Our impression is that this law is deliberately vague, deliberately drastic in its enforcement provisions, and specifically aimed against websites of all sizes, including hobby projects. In other words, this seems to us to be largely indistinguishable from an attempt to basically break the internet for all UK citizens.
> If we could afford to just hope for the best, we'd love to.
The way I understand this is that it's not feasible for them to assess how the legislation impacts them, so they would rather stay safe than risk having their lives destroyed.
It's such a ridiculous law and this outcome is entirely predictable, but bring this up with the proponents of it and they stick their heads in the sand, to the point where I think they are perfectly happy with the UK not having a working Internet.
The reaction is entirely reasonable. The only way they can reasonably ensure they protect themselves is to ensure nobody in the UK can access their site.
I'm not deep into this subject, so what's obvious to you isn't so obvious to me. Would you mind explaining a bit more on what's so obvious and why it's particularly unhinged?