Doesn't make me very excited, since I strongly feel standard cron implementations should've been deprecated a long time ago anyway. Take dkron, for example: forget k8s and the web UI and all that nonsense, its YAML configs are simply way clearer, more readable and more powerful than the usual crontab syntax. Why can't I have the same with a plain, simple, non-distributed cron?!
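To make the comparison concrete, here's roughly what the same job looks like in each syntax. This is a sketch from memory; the dkron field names and schedule format may not match the current spec exactly:

```yaml
# Classic crontab: position encodes everything, nothing is self-describing.
#   30 2 * * 1  /usr/local/bin/backup.sh

# Roughly the same job as a dkron-style YAML definition: every field is named.
name: nightly-backup
schedule: "0 30 2 * * MON"   # dkron uses a 6-field spec with a seconds column
timezone: "UTC"
executor: shell
executor_config:
  command: /usr/local/bin/backup.sh
```

The point isn't the extra verbosity; it's that a reader who has never seen the format can still tell what runs, when, and where.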
Also, just as a sidenote I'm not willing to seriously discuss: I seriously doubt I'd personally ever use random ranges in production. I understand what problem they're supposed to solve, but generally I just really don't want anything random in my systems. If a job conflicts with some other cronjob or whatever, I'd like it to break deterministically, preferably every single time, so it's easier to spot, track down and fix. If it causes load spikes, I'd like those spikes to be regular, so that I can see them and manually tweak run times to even things out. If any problems arise, I'd prefer them to arise right after somebody changed something, and not just magically some Saturday evening a couple of months later.
The only situation I can think of right away where this is acceptable is if I have a lot of nodes with the same cron config, and the randomness is my attempt to spread out workers of the same type that I know would otherwise all start at the same time. But then, why the fuck do I have such a degenerate architecture in the first place?! Maybe I should think about replacing it with something a little more sustainable, like, uh, a centralized scheduler? No, I mean, it's definitely a solution (a quick and easy one, at that), but even then it seems like a solution to a problem that shouldn't have existed in the first place.
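And even in that degenerate fleet-of-identical-nodes case, you can get the spreading without the randomness: derive each node's start minute from a stable hash of its hostname. The schedule is spread across nodes, but identical on every run of the same node, so failures stay reproducible. A minimal sketch (the `stagger_minute` helper is hypothetical, not an existing tool):

```python
import hashlib

def stagger_minute(hostname: str, period: int = 60) -> int:
    """Map a hostname to a fixed minute offset within `period`.

    Unlike a random range, the same hostname always gets the same
    offset, so any scheduling conflict it causes is deterministic.
    """
    digest = hashlib.sha256(hostname.encode()).digest()
    return int.from_bytes(digest[:4], "big") % period

# Each node computes its own offset once (e.g. when rendering its
# crontab from a template) and uses it as the minute field:
print(stagger_minute("worker-01"))  # stable value in 0..59
print(stagger_minute("worker-02"))  # likely a different, but equally stable, value
```

A config-management template could call this when generating each node's crontab, which keeps the "spread the herd" property while every individual node stays perfectly predictable.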