Very cool! I implemented some shared-memory atomic snapshots in Rust [0] and also did my best to take automated testing very seriously. I started out using loom [1], the library mentioned in the article, but later switched to shuttle [2].
Shuttle takes a randomized approach, instead of an exhaustive one like loom's. However, its scheduler still gives probabilistic guarantees about finding bugs. I found that shuttle was faster and scaled to more complicated test scenarios.
Similar to the article, shuttle also lets you save the random seed if a particular schedule causes the test suite to fail. Being able to quickly reproduce failing tests is really important and enables you to write explicit test cases for bugs that were previously caught and fixed [3].
I see this as typical of organizations that value creating a more inclusive environment for their developers.
Also, for what it's worth, Jenkins changed their terminology to use "agent" instead of slave in 2020 [0]. These efforts might seem futile in isolation, but they add up over time.
I've been working on implementations of classic algorithms in distributed computing, and used Turmoil for testing correctness in message-passing / HTTP systems [0].
Overall, my experience has been positive. When it works, it's great. A pattern I've been following is to have a single fixture that returns a simulation of the system under a standard configuration, for example N replicas of an atomic register, so that each test looks like:
1. Modify the simulation with something like `turmoil::hold("client", "replica-1")`.
2. Submit a request to the server.
3. Make an assertion about either the response, or the state of the simulation once the request has been made. For example, if only some replicas are faulty, the request should succeed, but if too many replicas are faulty, the request / simulation should time out.
One of the things I have found difficult is that when a test fails, it can be hard to tell if my code is wrong, or if I am using Turmoil incorrectly. I've had to do some deep-dives into the source in order to fully understand what happens, as the behavior sometimes doesn't line up with my understanding of the documentation.
That's great to hear that you've been using turmoil for this type of work. I'm one of the authors and we'd love to hear about your experience and what we can do to improve things. Either a GitHub issue or reaching out on Discord works great.
We've discussed improving the tracing experience, and even adding visualizations, but it hasn't been prioritized yet.
I've only seen the first four episodes, but they are fantastic. It is incredible how from ages 7 to 14 to 21 to 28 some individuals seem to barely change at all, whereas others appear to be a completely different person in each episode.
I'm not really up to date, but a couple recent papers that I found interesting were:
In theoretical distributed computing, The Space Complexity of Consensus From Swap [0] solves a problem that had been open for a couple of decades, and won Best Paper at PODC 2022.
In quantum complexity theory, MIP^* = RE [1] was a really big deal when it was published in 2020. It got a (relative) ton of press coverage, and there are lots of articles and blog posts available that give a high-level overview of the result and the techniques used. I like this one [2] from Quanta Magazine.
What does the diversity of the groups have to do with anything? Trans people are (in general) oppressed in today's society, and the perpetrators of that oppression are (in general) cis people. It should follow that trans folk finding a community of their own is admirable, but a community of cis-only folk should have to justify their existence as something other than a tool to maintain oppression.
I have seen the most positive change from being more compassionate towards myself.
It is so easy to put pressure, shame, and negativity onto yourself, to a degree that you never would for anyone else. It has been very helpful at times to ask "What would I say to a good friend who was in the same situation as me?". Almost always, it is a complete 180 from the awful things I would tell myself. Knowing that I have the ability to be just as compassionate to myself as I strive to be towards others has been huge.
I'm old, so people ask me for advice from time to time. Most often the first thing I hear myself saying is, "Try to give yourself a break". Usually the people who come to me are being needlessly hard on themselves. Anyway, I think you're on to something positive.
Disjoint-Sets have a very cool implementation whose amortized time complexity is extremely slow growing. It is not quite constant, but even for a disjoint-set with as many elements as there are particles in the universe, the amortized cost of an operation will be less than or equal to 4.
The inverse Ackermann function is one of the slowest-growing functions that you’re ever likely to encounter in the wild. To say that it’s “constant for all practical purposes” is 100% true but doesn’t do justice to how amazingly slow this function is to grow.
[0] https://github.com/kaymanb/todc/tree/main/todc-mem [1] https://github.com/tokio-rs/loom [2] https://github.com/awslabs/shuttle [3] https://github.com/kaymanb/todc/blob/0e2874a70ec8beed8fae773...