I watch this show, but one of the most annoying things about it is that the traitors are incentivised to murder the smartest, most intuitive players first, leaving people they can manipulate easily. Maybe you could argue the smartest move is to play dumb.
This is at its worst in the second Australian season, which is an incredibly frustrating watch.
That was one of the most frustrating seasons of any television show I’ve ever watched, right up until the finale—which completely redeemed it for me! What an ending.
> Maybe you could argue the smartest move is to play dumb.
Does playing smart advertise you as smart on a popular TV show, while minimizing the tedious reality-TV drama you have to go through? The expected winnings aren't all that much. And most (desirable) employers would rate "smart" as a more desirable trait than "gullible" or "underhanded".
I recently came across (but haven't yet used) Typia, which appears to let you do validation with regular TypeScript syntax: https://github.com/samchon/typia
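I haven't used it either, but going by the README, validation looks roughly like this. The `User` interface and the input are made up for illustration, and Typia needs its compile-time transformer wired into the build for `typia.is<T>()` to work:

```typescript
import typia from "typia";

// Hypothetical shape, purely for illustration.
interface User {
  id: string;
  email: string;
  age: number;
}

const input: unknown = JSON.parse('{"id":"u1","email":"a@b.co","age":30}');

// typia.is<T> is compiled into a runtime validator for T,
// so plain TypeScript types double as the validation schema.
if (typia.is<User>(input)) {
  // input is narrowed to User here.
  console.log(input.email);
} else {
  console.error("invalid payload");
}
```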
Although the advantages are real, I can't say I have had much opportunity to implement schemas like this. The extra complexity is usually what gets in the way, and it can add difficulty to migrations.
I think it would be useful in certain scenarios, for specific parts of an application, usually where the history is relevant to the user. Using it more generally would be helped by tooling (which, as far as I know, doesn't exist yet) for common patterns and data migrations.
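For what it's worth, here's a minimal sketch of one common history-preserving shape, not necessarily the article's scheme, with made-up table and column names and node-postgres: instead of updating in place, close the current version row and insert a new one, all in one transaction.

```typescript
import { Client } from "pg";

// Minimal sketch of a versioned-rows pattern (names are made up):
// old versions are never deleted, so history stays queryable.
async function changeEmail(db: Client, userId: number, email: string) {
  await db.query("BEGIN");
  try {
    // Close out the currently-valid version.
    await db.query(
      `UPDATE user_versions SET valid_to = now()
        WHERE user_id = $1 AND valid_to IS NULL`,
      [userId]
    );
    // Insert the new version.
    await db.query(
      `INSERT INTO user_versions (user_id, email, valid_from, valid_to)
       VALUES ($1, $2, now(), NULL)`,
      [userId, email]
    );
    await db.query("COMMIT");
  } catch (err) {
    await db.query("ROLLBACK");
    throw err;
  }
}
```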
You can use Datomic, for instance (mentioned already in your article IIRC!?), or SirixDB[1], which I'm working on in my spare time.
The idea is an indexed, append-only log structure that uses a functional tree (sharing unchanged nodes between revisions), plus a novel algorithm that balances incremental and full dumps of database pages using a sliding window.
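To illustrate the structural-sharing idea, here's a generic persistent-tree sketch (not SirixDB's actual code): an update copies only the path from the root to the changed node, and every unchanged subtree is shared with the previous revision.

```typescript
type TreeNode = {
  key: number;
  value: string;
  left?: TreeNode;
  right?: TreeNode;
};

// Inserting never mutates; it returns a new root that shares
// all untouched subtrees with the old revision.
function insert(root: TreeNode | undefined, key: number, value: string): TreeNode {
  if (!root) return { key, value };
  if (key < root.key) return { ...root, left: insert(root.left, key, value) };
  if (key > root.key) return { ...root, right: insert(root.right, key, value) };
  return { ...root, value }; // replace value at an existing key
}

// Each insert yields a new revision; older revisions stay readable.
const rev1 = insert(insert(insert(undefined, 10, "a"), 5, "b"), 15, "c");
const rev2 = insert(rev1, 3, "d"); // only the 10 -> 5 path is copied
console.log(rev2.right === rev1.right); // true: unchanged subtree is shared
```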
I agree this is more of an application-level concern than a database thing. If maintaining history is a user-facing requirement, you will naturally land on a scheme like this.
We also have help from other quarters nowadays.
Databases often provide a time travel feature where we can query AS OF a certain date.
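For example, with a system-versioned table it might look like the made-up query below. The `FOR SYSTEM_TIME` syntax is MariaDB's take on SQL:2011 (SQL Server is similar; other engines spell it differently), and the table is invented for illustration:

```typescript
import mariadb from "mariadb";

const pool = mariadb.createPool({ host: "localhost", database: "shop" });

// Returns the row as it existed at that timestamp, not as it is now.
async function priceOnNewYearsDay(sku: string) {
  return pool.query(
    `SELECT price FROM products
       FOR SYSTEM_TIME AS OF TIMESTAMP '2024-01-01 00:00:00'
      WHERE sku = ?`,
    [sku]
  );
}
```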
Some people went down the whole event sourcing / CQRS / Kafka route where there is an immutable audit log of updates.
Data warehousing has moved on such that we can implement "slowly changing dimensions" there.
All in all, complicating our application logic, migrations, and GDPR compliance in order to maintain history in line-of-business applications might not be worthwhile.
I'm curious whether the author of that blog ever completed their work with temporal Postgres. I know about [1], but unfortunately I most often work on hosted Postgres, where the extension isn't an option.
Another misconception is that they somehow scale horizontally "better". They do let you scale components independently of each other, but this isn't as useful as a lot of people seem to think it is.
If you're already using Postgres, you avoid the operational complexity of introducing another database. Less operational complexity means better availability.
You can modify jobs and the rest of your database atomically. For example, you can create a row and enqueue a job to process it in the same transaction.
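A sketch of what that can look like with node-postgres; the `documents` and `jobs` tables are made up for illustration:

```typescript
import { Client } from "pg";

// Create a row and its processing job atomically:
// either both rows become visible or neither does.
async function createDocumentWithJob(db: Client, body: string) {
  await db.query("BEGIN");
  try {
    const res = await db.query(
      "INSERT INTO documents (body) VALUES ($1) RETURNING id",
      [body]
    );
    await db.query(
      "INSERT INTO jobs (kind, payload) VALUES ('process_document', $1)",
      [JSON.stringify({ documentId: res.rows[0].id })]
    );
    await db.query("COMMIT");
  } catch (err) {
    await db.query("ROLLBACK");
    throw err;
  }
}
```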
But one of the golden rules of databases is not to use them as queues or integration points.
Granted, I didn't even read the main article, because it seems like such a casual headline.
Edit post-read: yeah, using it as a CI jobs database. He lists the alternatives, but seriously, Kafka? Kafka is for linearly scaling pub/sub. This guy has a couple of CI jobs that run infrequently.
Sure, this works if the entire thing is throwaway, for a non-critical pub/sub system.
"It's possible to scale Postgres to storing a billion 1KB rows entirely in memory - This means you could quickly run queries against the full name of everyone on the planet on commodity hardware and with little fine-tuning."
Yeah, just because it can doesn't mean it's suited for this purpose.
Don't do this for any integration at even medium scale.
Avoiding extra operational complexity is really important, but for pub/sub we are using Redis. While this does add complexity, it adds very little, because Redis is incredibly easy to install and maintain.
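For reference, a minimal node-redis (v4) pub/sub sketch; the channel name and payload are made up:

```typescript
import { createClient } from "redis";

const publisher = createClient();
// Pub/sub needs a dedicated connection for the subscriber.
const subscriber = publisher.duplicate();

await publisher.connect();
await subscriber.connect();

// Fires for every message published to the channel.
await subscriber.subscribe("jobs", (message) => {
  console.log("received:", message);
});

await publisher.publish("jobs", JSON.stringify({ id: 1, kind: "ci" }));
```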
Obviously you're in a better position to evaluate the trade-offs for your application than I am, so I'm not saying your decision is wrong, but this can potentially decrease availability if your application depends on both PostgreSQL AND Redis to be available to function.
I'm a type enthusiast, but dynamic languages make a lot of sense for small programs that either give you an immediate answer or fail to run (i.e. not long running, few branches).
In those cases, the difference between a compile-time check and a run-time check is much smaller.
I'll write up my experience in a blog post.