In sum, the author claims that without human death, nothing people do has a time limit, so people wouldn't have any incentive to do anything.
But this is false - even if we were sovereign observers only, the universe is constantly changing and evolving: species go extinct, the seasons are never the same. And we are not just observers, we are also actors - we have opportunities to create today which will not be available in the future. You cannot create the Internet today; it already happened. You cannot spend arbitrary time traveling to and fro across the galaxy to talk to friends; the molten iron geyser you wanted to see at Betelgeuse will no longer be running by the time you get there. Perhaps time motivates us, but our death is not the only thing which limits time.
Already addressed: "While the family spells its last name with a B, the New York Times stylebook spells Hapsburg with a P, which brought no shortage of scolding emails upon publication."
These idiosyncrasies are amusing, but they do have a habit of dominating the conversation. E.g. the New Yorker (I think) uses a diaeresis to mark distinct syllables, resulting in spellings like “coöperate”.
Or that chap here who insists on using 5-digit years.
But some go unnoticed. My personal favourite is that I prefer to fully close clauses within quotations:
> The President said, “I will never bomb the moon!”.
Like the way the French and English write "Cologne" for the city Germans call "Köln". The NYT is just adhering to a different standard, not making up anything new.
They did use a different language with different pronunciation rules, though. I'm not convinced that sticking with the original spelling is obviously better. Bibi's first name certainly doesn't accurately transliterate to Benjamin, and that's probably fine.
But it's not any more anglicized than Habsburg would be. I mean, English does have a letter B, doesn't it? And as far as I know, the difference between B and P is the same as in German.
Not where I live (which is near Habsburg). I would pronounce it the same as the B in -burg. Where are you from? (If you care to answer - no problem if not.)
That's interesting, I'm not originally German but went to school in Berlin since I was 7 and always heard it pronounced as "Haaps-burk".
I'd think this is just because of how final obstruent devoicing works in German, to me it just seems like the obvious natural way to pronounce the word.
My understanding is that it's an expected value based on coverage in each of the ensemble scenarios, not quite as simplified as "how many scenarios had rain in this forecast cell".
At least for the US NWS: if 30 of 100 scenarios result in 50% shower coverage, and 70 of 100 result in 0%, this is reported as a 15% chance of rain. Which is exactly the same as 15 scenarios with 100% coverage and 85 with 0% coverage, or 100 with 15% coverage.
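Spelled out, the expected-coverage arithmetic for that first case is just:

```latex
E[\text{coverage}] = \tfrac{30}{100} \times 0.50 + \tfrac{70}{100} \times 0.00 = 0.15
```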
Understanding this, and digging further into the forecast, gives a better sense of whether you're likely to encounter widespread rainfall or spotty rainfall in your local area.
Why? Well, I really admire Jonas Hietala's documentation of his journey to find a layout that fit his aesthetic: https://www.jonashietala.se/blog/2023/11/02/i_designed_my_ow... . My layout isn't as extreme, it's still qwerty-ish, but I've been heavily inspired by his thorough analysis.
Wireless keyboards (like the one linked) typically use ZMK instead to my knowledge. It's similar to QMK—so much of the knowledge still applies—but it isn't 1:1.
Super minimal finger travel. I have a 34-key layout personally, and while I give up the F-keys, everything else is not very difficult to access and I really love how little my fingers move.
Having tried a few, I think the Kinesis contoured keyboards are a sweet spot. Plenty of keys, but finger travel still feels minimal. I keep coming back to my old Kinesis Advantages or similar custom builds.
Yea, I've got a Corne and a Kinesis Advantage2 and I mostly use the Advantage2.
If it had been available earlier (and a bit cheaper) I'd almost certainly have bought the 360, and I'd probably be happier with it - I think I need a bit more width and a bit more tenting... but the Advantage2 + the Corne has completely convinced me that curved bowls are massively better than flat, and I have almost no interest in spending time on flat unless strictly necessary. Ortho and stagger when split are rather clearly the correct choice, but flat just feels wrong and uncomfortable in a way that the Advantage2 never did (though it absolutely felt weird for a while when learning it).
It's OSS and self-hostable. It's got a great UI and is the most joyous technology I've ever had the pleasure of using. https://zulip.com/self-hosting/
> When you self-host Zulip, you get the same software as our Zulip Cloud customers.
> Unlike the competition, you don't pay for SAML authentication, LDAP sync, or advanced roles and permissions. There is no “open core” catch — just freely available world-class software.
The optional pricing plans for self-hosted mention that you are buying email and chat support for SAML and other features, but I don't see where they're charging for access to SAML on self-hosted Zulip.
I mean, we already have democratized geoengineering: Make Sunsets (https://makesunsets.com/) lets you or anyone else fund deploying high-altitude clouds for geocooling.
I tried k3s, but even on an immutable system, dealing with charts and all the other kubernetes stuff adds a new layer of mutability - and hence maintenance, updates, and manual management steps - that only really makes sense on a cluster, not a single server.
If you're planning to eventually move to a cluster or you're trying to learn k8s, maybe, but if you're just hosting a single node project it's a massive effort, just because that's not what k8s is for.
I use k3s. With more than one master node, it's still a resource hog, and when one master node goes down, all of them tend to follow. 2GB of RAM is not enough, especially if you also use longhorn for distributed storage. A single master node is fine and I haven't had it crash on me yet. In terms of scale, I'm able to use raspberry pis and such as agents, so I only have to rent a single €4/month vps.
I'm laughing because I clicked your link thinking I agreed and had posted similar things and it's my comment.
Still on k3s, still love it.
My cluster is currently hosting 94 pods across 55 deployments. It averages 500m CPU (half a core), spiking to 3 cores under moderate load, and uses 25GB of RAM. The biggest RAM hog is Jellyfin (which appears to have a slow leak, and gets restarted when it hits 16GB, although it's currently streaming to 5 family members).
The cluster is exclusively recycled old hardware (4 machines), mostly old gaming machines. The most recent is 5 years old, the oldest is nearing 15 years old.
The nodes are bare Arch Linux installs - which are wonderfully slim, easy to configure, and light on resources.
It burns 450 watts on average, which is higher than I'd like, but mostly because I run jellyfin and whisper/willow (self-hosted home automation via voice control) as GPU-accelerated loads - so I'm running an old nvidia 1060 and 2080.
Everything is plain old yaml, I explicitly avoid absolutely anything more complicated (including things like helm and kustomize - with very few exceptions) and it's... wonderful.
It's by far the least amount of "dev-ops" I've had to do for self hosting. Things work, it's simple, and spinning up a new service is a new folder and 3 new yaml files (0-namespace.yaml, 1-deployment.yaml, 2-ingress.yaml) which are just copied and edited each time.
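For a sense of what those look like, here's a minimal sketch of the pattern (names, image, and hostname are placeholders; in practice the Service object rides along in the deployment file):

```yaml
# 0-namespace.yaml - one namespace per service
apiVersion: v1
kind: Namespace
metadata:
  name: myservice                  # placeholder service name
---
# 1-deployment.yaml - the workload plus its Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels: {app: myservice}
  template:
    metadata:
      labels: {app: myservice}
    spec:
      containers:
        - name: myservice
          image: example/myservice:latest   # placeholder image
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: myservice
spec:
  selector: {app: myservice}
  ports: [{port: 80, targetPort: 8080}]
---
# 2-ingress.yaml - route a hostname to the service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice
  namespace: myservice
spec:
  rules:
    - host: myservice.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: myservice, port: {number: 80}}
```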
Any three machines can go down and the cluster stays up (metalLB is really, really cool - ARP/NDP announcements mean any machine can announce as the primary load balancer and take the configured IP). Sometimes services take a minute to reallocate (and jellyfin gets priority over willow if I lose a gpu, and can also deploy with cpu-only transcoding as a fallback), and I haven't tried to be clever getting 100% uptime because I mostly don't care. If I'm down for 3 minutes, it's not the end of the world. I have a couple of commercial services in there, but it's free hosting for family businesses, they can also afford to be down an hour or two a year.
Overall - I'm not going back. It's great. Strongly, STRONGLY recommend k3s over microk8s. Definitely don't want to go back to single machine wrangling. The learning curve is steeper for this... but man do I spend very little time thinking about it at this point.
I've streamed video from it as far away as literally the other side of the world (GA, USA -> Taiwan). Amazon/Google/Microsoft have everyone convinced you can't host things yourself. Even for tiny projects people default to VPSes in a cloud. It's a ripoff. Put an old laptop in your basement - a faster machine for free. At GCP prices... I have $30k/year worth of cloud compute in my basement, because GCP is a god damned rip off. My costs are $32/month in power, plus a network connection I already have to have, and it's replaced hundreds of dollars per month in subscription costs.
For personal use-cases... basement cloud is where it's at.
To put that into perspective, that's more than my entire household uses, including my server with an old GPU in it.
Our water heating is electric, yet we still don't use 450 W × 1 year ≈ 4 MWh of electricity. In winter we just about reach that as a daily average (as a household) because we need resistive heating to supplement the gas system. A constant 450 W is a huge amount of energy for flipping some toggles at home with voice control and streaming video files.
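For reference, a constant 450 W over a year works out to:

```latex
450\,\text{W} \times 8760\,\text{h} \approx 3942\,\text{kWh} \approx 4\,\text{MWh}
```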
Remember that modern heating and hot water systems have a COP > 1, meaning they provide more heat than the input power. Air-source heat pumps can have a COP of 2-4, and ground-source 4-5, meaning you can get around 1800W of heat out of that 450W of power. That's ignoring places like Iceland, where geothermal heat is effectively free. Ditto for water heating: 2-4.5 COP.
Modern construction techniques, including super-insulated walls, tight building envelopes, and heat exchangers, can dramatically reduce heating and cooling loads.
Just saying it's not as outrageous as it might seem.
And yet it's far more economical for me than paying for streaming services. A single $30/m bill vs nearly $100/m saved after ditching all the streaming services. And that's not counting the other saas products it replaced... just streaming.
Additionally - it's actually not that hard to put this entire load on solar.
4×350 W panels, one small inverter/MPPT charger combo, and a 12V/24V battery or two will do you just fine in the under-$1k range. Higher up-front cost - but if power is super expensive it's a one-time expense that will last a decade or two, and you get to feel all nice and eco-conscious at the same time.
Or you can just not run the GPUs, in which case my usage falls back to ~100W. You can drive it lower still - but it's just not worth my time. It's only barely worth thinking about at 450W for me.
I'm not saying it should be cheaper to run this elsewhere; I'm saying that this is a super high power draw for the utility it provides.
My own server doesn't run voice recognition, so I can't speak to that (I can only opine that it can't be worth a constant 430W draw to get rid of hardware switches and buttons). But my server also does streaming video and replaces SaaS services, similar to what you mention, at around 20W.
Found the European :) With power as cheap as it is in the US, some of us just haven't had to worry about this as much as we maybe should. My rack is currently pulling 800W and is mostly idle. I have a couple projects in the works to bring this down, but I really like mucking around with old enterprise gear and that stuff is very power hungry.
Perhaps. Many people in America also claim to care about the environmental impact of a number of things. I think many more people care performatively than transformatively. Personally, I don't worry too much about it. It feels like a lost cause and my personal impact is likely negligible in the end.
Then offsetting that cost to a cloud provider isn't any better.
450W just isn't that much power as far as "environmental costs" go. It's also super trivial to put on solar (actually my current project - although I had to scale the solar system way up to make ROI make sense because power is cheap in my region). But seriously, panels are cheap, LFP batteries are cheap, inverters/mppts are cheap. Even in my region with the cheap power, moving my house to solar has returns in the <15 years range.
If you provide for yourself (e.g. run your IT farm on solar), by all means, make use of it and enjoy it. Or if the consumption serves others by doing wind forecasts for battery operators, or hosting geographic data that rescue workers use in remote places, or whatnot: of course, continue to do these things. In general, though, most people's home IT will fulfil mostly their own needs (controlling the lights from a GPU-based voice assistant). The USA and western Europe have similarly rich lifestyles, but one has more than twice the impact on other people's environment for some reason (as measured by CO2-equivalents per capita). We can choose for ourselves what role we want to play, but we should at least be aware that our choices make a difference.
In America, taxes account for about a fifth of the price of a unit of gas. In Europe, it varies around half.
The remaining difference in cost is boosted by the cost of ethanol, which is much cheaper in the US due to abundance of feedstock and heavy subsidies on ethanol production.
The petrol and diesel themselves account for a relatively small fraction of the pump price on both continents. The "normal" prices in Europe aren't reflective of the cost of the fossil fuel itself. In point of fact, countries in Europe often have lower tax rates on diesel, despite it being generally worse for the environment.
Americans drive larger vehicles because our politicians stupidly decided mandating fuel economy standards was better than a carbon tax. The standards are much laxer for larger vehicles. As a result, our vehicles are huge.
Also, Americans have to drive much further distances than Europeans, both in and between cities. Thus gas prices that would be cheap to you are expensive to them.
Things are the way they are because of basic geography, population density, and an automotive industry that captured regulatory and zoning interests. You really can't blame the average American for this; they're merely responding to perverse incentives.
How is this in any way relevant to what I said? You're just making excuses, but that doesn't change the fact that Americans don't give a fuck about the climate, and they objectively pollute far more than those in normal countries.
If you can't see how what I said was relevant, perhaps you should work on your reading comprehension. At least half of Americans do care about the climate and the other half would gladly buy small trucks (for example) if those were available.
It's lazy to dunk on America as a whole, go look at the list of countries that have met their climate commitments and you'll see it's a pretty small list. Germany reopening coal production was not on my bingo card.
I run a similar number of services on a very different setup. Administratively it's not idempotent, but Proxmox is a delight to work with. I have 4 nodes, with a 24-core 14900K CPU as the workhorse. It runs a Windows server with an RDP terminal (so multiple users can access Windows through RDP from literally any device), Jellyfin, several Linux VMs, and a pi-hole cluster (3 replicas), just to name a few services. I have vGPU passthrough working (granted, this bit is a little clunky).
It is not as fancy/reliable/reproducible as k3s, but with a bunch of manual backups and a ZFS (or BTRFS) storage cluster (managed by a virtualized TrueNAS instance), you can get away with it. Anytime a disk fails, just replace and resilver it and you’re good. You could configure certain VMs for HA (high availability) where they will be replicated to other nodes that can take over in the event of a failure.
Also I’ve got tailscale and pi-hole running as LXC containers. Tailscale makes the entire setup accessible remotely.
It's a different paradigm that also just works once it's set up properly.
I have a question, if you don't mind answering. If I understand correctly, MetalLB in Layer 2 mode essentially fills the same role as something like Keepalived would, but without VRRP.
So, can you use it to give your whole cluster _one_ external IP that makes it accessible from the outside, regardless of whether any node is down?
Imo this part is what can be confusing to beginners in self hosted setups. It would be easy and convenient if they could just point DNS records of their domain to a single IP for the cluster and do all the rest from within K3s.
Yes. I have configured metalLB with a range of IP addresses on my local LAN outside the range distributed by my DHCP server.
Ex - DHCP owns 10.0.0.2-10.0.0.200, metalLB is assigned 10.0.0.201-10.0.0.250.
When a service requests a loadbalancer, metallb assigns it to a node, then uses ARP to announce to my LAN that the loadbalancer's IP now lives at that node's mac address. Internal traffic intended for that IP will now resolve to the node's mac address at the link layer, and get routed appropriately.
If that node goes down, metalLB will spin up again on a remaining node, and announce again with that node's mac address instead, and traffic will cut over.
It's not instant, so you're going to drop traffic for a couple seconds, but it's very quick, all things considered.
It also means that from the point of view of my networking - I can assign a single IP address as my "service" and not care at all which node is running it. Ex - if I want to expose a service publicly, I can port forward from my router to the configured metalLB loadbalancer IP, and things just work - regardless of which nodes are actually up.
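In config terms, that's roughly the following - a sketch assuming the metallb.io/v1beta1 CRDs, with placeholder names:

```yaml
# Reserve a LAN range outside DHCP for LoadBalancer services...
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool              # placeholder pool name
  namespace: metallb-system
spec:
  addresses:
    - 10.0.0.201-10.0.0.250
---
# ...and announce those IPs in layer 2 (ARP/NDP) mode
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
```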
---
Note - this whole thing works with external IPs as well, assuming you want to pay for them from your provider, or with IPv6 addresses. But I'm cheap, and I don't pay for them because it would require a much more expensive business line than I currently use. Functionally - I mostly just forward 80/443 to an internal IP and call it done.
We used to pay AU$30 for the entire house, which included everything except cooking - but it did include a 10-year-old 1RU rackmount server. Electricity isn't particularly cheap here.
How do you deal with persistent volumes for configuration, state, etc? That’s the bit that has kept me away from k3s (I’m running Proxmox and LXC for low overhead but easy state management and backups).
Yeah, but you have to have some actual storage for it, and that may not be feasible across all nodes in the right amounts.
Also, replicated volumes are great for configuration, but "big" volume data typically lives on a NAS or similar, and you do need to get stuff off the replicated volumes for backup, so things like replicated block storage do need to expose a normal filesystem interface as well (tacking on an SMB container to a volume just to be able to back it up is just weird).
Sure - none of that changes that longhorn.io is great.
I run both an external NAS as an NFS service and longhorn. I'd probably just use longhorn at this point, if I were doing it over again. My nodes have plenty of sata capacity, and any new storage is going into them for longhorn at this point.
I back up to an external provider (backblaze/wasabi/s3/etc). I'm usually paying less than a dollar a month for backups, but I'm also fairly judicious in what I back up.
Yes - it's a little weird to spin up a container to read the disk of a longhorn volume at first, but most times you can just use the longhorn dashboard to manage volume snapshots and backup scheduling as needed. Ex - if you're not actually trying to pull content off the disk, you don't ever need to do it.
If you are trying to pull content off the volume, I keep a tiny ssh/scp container & deployment hanging around, and I just add the target volume real fast, spin it up, read the content I need (or more often scp it to my desktop/laptop) and then remove it.
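Something like this throwaway pod works (the PVC name and image are placeholders):

```yaml
# Throwaway pod that mounts an existing Longhorn-backed PVC so you can
# copy files off it; delete the pod when you're done
apiVersion: v1
kind: Pod
metadata:
  name: volume-peek
spec:
  containers:
    - name: shell
      image: alpine:3.20             # any small image with a shell works
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: target-pvc        # hypothetical: the volume you want to read
```

Then `kubectl cp volume-peek:/mnt/data ./data`, or exec in and scp out.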
I do things somewhat similarly but still rely on Helm/Kustomize/ArgoCD, as it's what I know best. I don't have documentation to offer, but I do have all of it publicly at https://gitlab.com/lama-corp/infra/infrastructure
It's probably a bit more involved than the OP's setup, as I operate my own AS, but hopefully you'll find some interesting things in there.
"Basement Cloud" sounds like either a dank cannabis strain, or an alternative British rock emo grunge post-hardcore song. As in "My basement cloud runs k420s, dude."
Or microk8s. I'm curious what it is about k8s that is sucking up all these resources. Surely the control plane is mostly idle when you aren't doing things with it?
There are 3 components to "the control plane", and realistically only one of them is what you'd mean by idle. The Node-local kubelet (which reports in the state of affairs and asks if there is any work) is a constantly active thing, as one would expect from such a polling setup. etcd, or its replacement, is constantly(?) firing off watch notifications or reconciliation notifications based on the inputs from the aforementioned kubelet updates. Only the actual kube-apiserver is conceptually idle, as I'm not aware of any compute that it, itself, does except in response to requests made of it.
Put another way: in my experience running clusters, $(ps auwx) or its $(top) friend always shows etcd or sqlite generating all of the "WHAT are you doing?!" load, and those also represent the actual risk to running kubernetes, since the apiserver is mostly stateless[1]
1: but holy cow watch out for mTLS because cert expiry will ruin your day across all of the components
I've noticed that etcd seems to do an awful lot of disk writes, even on an "idle" cluster. Nothing is changing. What is it actually doing with all those writes?
Almost certainly it's the propagation of the kubelet checkins rippling through etcd's accounting system[1]. Every time these discussions come up I'm always left wondering "I wonder if Valkey would behave the same?" or Consul (back when it was sanely licensed). But I am now convinced after 31 releases that the pluggable KV ship has sailed and they're just not interested. I, similarly, am not yet curious enough to pull a k0s and fork it just to find out
1: related, if you haven't ever tried to run a cluster bigger than about 450 Nodes that's actually the whole reason kube-apiserver --etcd-servers-overrides exists because the torrent of Node status updates will knock over the primary etcd so one has to offload /events into its own etcd
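For the curious, that flag takes group/resource#servers pairs; a sketch of the static pod command fragment, with hypothetical etcd endpoints:

```yaml
# kube-apiserver flags: route Event objects to a dedicated etcd so the
# torrent of node-status/event churn can't knock over the primary datastore
command:
  - kube-apiserver
  - --etcd-servers=https://etcd-main:2379                       # hypothetical primary etcd
  - --etcd-servers-overrides=/events#https://etcd-events:2379   # hypothetical events etcd
```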
I deployed CNPG (https://cloudnative-pg.io/) on my basement k3s cluster, and was very impressed with how easily I could host a PG instance for a service outside the cluster, as well as with its good practices for hosting DB clusters inside the cluster.
Oh, and it handles replication, failover, backups, and a litany of other useful features to make running a stateful database, like postgres, work reliably in a cluster.
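For flavor, a minimal CNPG cluster manifest is about this small (name and size are placeholders):

```yaml
# Minimal CloudNativePG cluster: 3 postgres instances, with replication
# and failover handled by the operator
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-main        # placeholder cluster name
spec:
  instances: 3         # one primary, two replicas
  storage:
    size: 10Gi
```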
Note also that, on Linux, RSS is not guaranteed to be accurate: "For making accounting scalable, RSS related information are handled in an asynchronous manner and the value may not be very precise." [1]