Hetzner Introduces ARM64 Cloud Servers (hetzner.com)
507 points by j4nek on April 12, 2023 | hide | past | favorite | 332 comments


I implemented a Virtual Private Cloud (VPC) on AWS with Elastic Container Service (ECS) using Terraform so that I could run Docker, and it ended up being about 2 orders of magnitude more expensive than Hetzner after all of the services were configured. For example, something as simple as AWS NAT Gateway costs $30 per month, and it can be challenging to get everything right if you don't use one. I haven't tried Heroku, but expect similar prices and nearly as high of a configuration burden.
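
To give a feel for the moving parts, here is roughly what a NAT gateway involves even with the raw AWS CLI (all resource IDs below are made up): an Elastic IP, the gateway itself in a public subnet, and a default route in the private route table. Miss any one of these and private instances silently lose egress.

  # Sketch only; subnet/route-table/allocation IDs are placeholders.
  aws ec2 allocate-address --domain vpc                # returns an eipalloc-... ID
  aws ec2 create-nat-gateway \
      --subnet-id subnet-0abc1234 \
      --allocation-id eipalloc-0abc1234                # returns a nat-... ID
  aws ec2 create-route \
      --route-table-id rtb-0abc1234 \
      --destination-cidr-block 0.0.0.0/0 \
      --nat-gateway-id nat-0abc1234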

So what's the point of all that? Seriously, one server with 16 cores and the memory bandwidth and storage speed of SSD drives should be able to serve many thousands of simultaneous requests and millions of users per month. I can't help but feel that the cloud infrastructure and microservice movement of the 2010s was.. a scam.

I just need a place to run Docker, similar to what we had with Linode for running a shell 20 years ago. And don't tell me how to set it all up. Just give me a turnkey Terraform (or Ansible-inspired declarative configuration management tool) setup that has stuff like load balancing and some degree of more advanced features like denial of service protection out of the box. What we used to think of as managed hosting, but open source, and with sane defaults for running standard suites like Laravel, Rails, etc.

I have to assume that I'm just terribly out of it and something like this already exists. Otherwise I can't understand why someone doesn't just offer this and provide a way for us to pay them the millions of dollars we lose to spinning our wheels on cloud infrastructure with 1% utilization.


> So what's the point of all that? Seriously, one server with 16 cores and the memory bandwidth and storage speed of SSD drives should be able to serve many thousands of simultaneous requests and millions of users per month. I can't help but feel that the cloud infrastructure and microservice movement of the 2010s was.. a scam.

You're just missing the whole point of why you need n>1 instances. You're focusing on what might be the least compelling reason for the majority of applications, which is scalability.

Reliability and fault tolerance are major selling points, and you can't simply expect to have adequate performance in more than one region if you're providing your services out of a single box in a single data center. The laws of physics ensure your quality of service will suck.

Operations also compel you to have multiple instances, ranging from blue/green deployments to one-box deployment strategies. And are you going to allow your business to be down for significant amounts of time whenever you need to update your OS? How would you explain all that downtime to your boss/shareholders? Are the tens of dollars you save on infrastructure a good tradeoff?

Security is also a concern. You're better off isolating your backend from frontend services, and software-defined networks aren't a silver bullet.

And in the end, are the tradeoffs of not designing your web services the right way really worth it?


A simple single node is often more reliable than a complex fault-tolerant setup. Fewer moving parts mean fewer broken pieces.

Similar with security. A simple monolith is more secure than isolated front- and backend.


Agreed entirely for most. I have seen so many insanely complicated and eye-wateringly expensive architectures for what are simple web apps on the big clouds. Often the team doesn't really understand the intricacies of all these services and how they interrelate, which often causes a lot of stability problems.

Even if they don't cause stability problems, there is usually a huge tax on developer productivity with these architectures, which often outweighs any improved stability over the long term.

Let's keep in mind that modern hardware and software is very stable, generally.


Agreed - I worked for a company with 30 people dedicated entirely to managing the mess they'd created in AWS - and their AWS account manager somehow managed to keep convincing them to use more and newer AWS services to dig them in even further.


People talk about resume-driven development, which is definitely a factor, but there's also fear-driven development - you have a hunch the architecture could be a lot simpler and cheaper, but you don't want to make any criticism in case the others in your team think you are technically ignorant, inexperienced, out of touch etc.


>Let's keep in mind that modern hardware and software is very stable, generally

Not at any significant scale. DIMMs will fail, power will go down, disks will need replacement.

It is all about risks, after all. If you are OK with a couple of hours of downtime when one of the memory sticks stops working, good for you. But generally any large enough business won't tolerate such risks.

I'm not saying that the cloud is the answer, but I don't see any future for a single-instance solution. And if you design your system like this, you are taking much more risk than necessary.


> A simple single node is often more reliable than a complex fault-tolerant setup.

...except when your single piece breaks and your whole service goes down. Then what?


You get it back up and running quickly because it's so simple and easy to deploy a new server and load the backups on to it.
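
As a rough sketch of what "quickly" can look like with a cloud box, restic backups and a Compose stack (server type, repo and paths below are placeholders):

  hcloud server create --name web1 --type cx41 --image debian-11
  restic -r sftp:backup@backup.example.com:/srv/repo restore latest --target /srv/app
  cd /srv/app && docker compose up -d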


Cluster the node?


My pet theory is that VCs also hold a lot of Amazon stock or perhaps more reasonably: are friendly with people who own a lot of Amazon stock.


It's less insidious than that. Engineers like using AWS at startups because it benefits their future careers at larger companies. Startups also get $100k-$500k in free AWS credits. So there's no short term downside.

Ultimately it's the CTO's/technical founder's fault that this goes on. Avoid cloud services like the plague unless you really actually need them. They will slow you down and eat your wallet in the long run.


Cloud is about elasticity. At ${work} we spin up a ton of EC2 instances in anticipation of huge traffic spikes that happen predictably several times a day, in different regions, and spin them down afterwards.

It's also about simplicity. Many services come pre-configured on AWS, with clustering, failover, backups, etc. already present.

When you are small, you can sysadmin your server fine, and you don't yet need the cloud.

When you are colossal, you want everything custom, and the cloud would cost you a huge amount, so you go for your own servers (and possibly datacenters).

But for the midrange, the expense of hiring several more SREs to handle dedicated servers is usually higher than the entire AWS bill, and the cloud looks very reasonable.
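
For predictable daily spikes that usually boils down to scheduled scaling rather than anything clever; roughly (group name, times and capacities are made up):

  aws autoscaling put-scheduled-update-group-action \
      --auto-scaling-group-name web-asg \
      --scheduled-action-name peak-up \
      --recurrence "0 17 * * *" --desired-capacity 40
  aws autoscaling put-scheduled-update-group-action \
      --auto-scaling-group-name web-asg \
      --scheduled-action-name peak-down \
      --recurrence "0 21 * * *" --desired-capacity 4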


I can’t help but think that a 10x user spike causing a 10x CPU spike is mostly caused by poorly performing web apps written in dynamic scripting languages and the ethos “Just add more servers, servers are cheaper than developers.”

Turns out, developers are cheaper than DevOps engineers and AWS budgets. So if a linear spike in servers necessitates a superlinear cost in operational complexity, maybe writing highly performant web app backends is actually cheaper?


The cost is also never just the scaling of servers. Scaling introduces a different class of bugs / issues:

- database connection limits

- caches

- state changes

- handling graceful shutdown (see the sketch after this list)

- speed of provisioning new instances

... the list goes on ...
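
To take just the graceful-shutdown item: the usual fix is making sure the process actually receives SIGTERM and gets time to drain, e.g. a minimal container entrypoint along these lines (the app binary name is a placeholder):

  #!/bin/sh
  ./my-app &                      # placeholder for the real server process
  child=$!
  trap 'kill -TERM "$child"' TERM INT
  wait "$child"                   # returns early when the trap fires
  wait "$child"                   # then wait for the app to finish draining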


Counterpoint: for some years we had clients who happily lived on cheap as fuck plans but requested more at the anticipated calendar events (NY and the like).

Most of the time we just removed GHz limits, sometimes adding vCPUs in advance, and they fared well.

NB: an IaaS provider with guaranteed resources


Yes, if your elasticity requirements are not within hours but within weeks or months, adding and removing capacity can be done by traditional means, even by renting physical servers and putting them into your on-premises racks.


You missed the point - for some the 'elasticity' is just being able to consume 12-20GHz instead of ~1-2GHz 350 days of the year. Sure, you can deploy some advanced hyper-scaling arch to do so... or just move to a fatter instance.


How much would it cost to handle your peak load with dedicated servers and no scaling?


With the peak load being about 10x the baseline load, the dedicated servers should cost 10% of the cloud version.

We'd likely have to have some reserve just in case our marketing and product do so well that the next peak of the load rises higher in absolute terms.


Hybrid approaches are also really good, but I'm gonna be honest: if capital is not an issue and the growth is to be expected, I wouldn't steer into dedicated server territory. It's just such a headache if it's not the core business.


Dedicated servers are way less of a headache than cloud in my experience. Unless you are talking about buying and building your own hardware, which is a completely separate thing and not generally recommended. You can lease dedicated servers these days as easily as launching a cloud compute instance but at way less than half the price. I pay around $265/mo for 32 cores and 128 GB of RAM with local, dual, mirrored 1 TB SSD drives. Looks like a reserved instance with similar performance on AWS would be somewhere around $700/mo.


Last time I ran the math, it made sense for us to provision bare metal for 70% of our baseload, but our regions skew EU/US


> I can't help but feel that the cloud infrastructure and microservice movement of the 2010s was.. a scam.

I think the general consensus is that the root cause of why cloud infrastructure and microservice architectures matter is more People, less Technology. It's hard to hire highly experienced people, especially at the hundreds-to-thousands-of-engineers scale that many companies operate at.

That, in isolation, is reasonable. The actually uncomfortable question that no one wants to address is: was that a scam? Put another way, how much of that was a zero-interest-rate phenomenon, and how much of the "state of the art, best way to engineer things" was influenced by that, with root causes in US monetary policy?

Between AI and high interest rates: if you're not ready to spend the 2020s questioning every single thing you think you know about software engineering; you won't be long for the industry.


My thought on this is we've all been trained to follow the Google approach: thousands of machines, horizontally scalable, to handle any load. But this model was developed in 2004 when they wanted to hold the entire web in memory and could afford the enormous hardware outlay.

Computers are so powerful now this may be the wrong instinct. You can get mind-boggling amounts of processing for, say, $30,000 in hardware, and even with redundancy it would be much simpler to manage.


> I think the general consensus is that the root cause of why cloud infrastructure and microservice architectures matter is more People, less Technology.

Indeed the main selling point of microservices is People, more specifically how to set hard boundaries on responsibilities and accountability while ensuring independent decision-making.

Another important selling point of microservices is flexibility. It's trivial to put together an entirely new service from scratch, written in whatever technology crosses your mind. With microservices you don't even need access to a repo.


> set hard boundaries on responsibilities and accountability while ensuring independent decision-making

This isn't really true, as most of the time the services interact heavily and need to stay aligned with each other to work properly and not break.

And everything you're speaking of can be achieved by properly modularizing the code of a monolithic application. And microservices, if you want them, can be run on a single machine; you don't need "the cloud" or multiple computers in general to run a bunch of REST apps if your single instance is powerful enough.

And for most web apps it is with the currently available hardware.


> This isn't really true, as most of the time the services interact heavily and need to stay aligned with each other to work properly and not break.

I'm not sure you got the point. The need to provide interfaces and establish contracts and pledge SLAs is irrelevant and a non-issue. Naturally, your goal is to have a perfect uptime and never break interfaces, just like your goal when working on a monolith is to not add broken code that crashes the service.

The whole point of microservices is that you are entirely free to make any decision regarding the things you own, and your only constraint is the interfaces you provide and consume, and SLAs. From those interfaces inward, you are totally free to make any decision you think is best without having to bother about anyone else's opinions and technical choices and tastes and permissions. You can run one service behind your interface or hundreds. As long as your interfaces are up, you can do whatever you like. That greatly simplifies your problem domain.

> And everything you're speaking of can be achieved by properly modularizing the code of a monolithic application.

No, no you can't. Think about it for a second: can you pick your software stack? Can you pick what dependencies you adopt? Can you rewrite your code in any other language? Etc etc etc. Your hands are tied left and right.


An even more important consideration is being able to release features (and even more importantly roll back) in isolation when the team is ready. The more teams contributing to a monolith, the harder this becomes. I am sure someone here has seen it done with a 10 million line monolith built by 100 geographically distributed teams, but I suspect that is exceedingly rare.

Microservices aren’t a silver bullet for this, of course. They can be part of a winning recipe though.


While picking any stack you want may sound wonderful to the individual developer, it's less so for the company as a whole that has to maintain different languages and frameworks, when that developer moves on and now they have to find or hire someone else to keep that service running.


> While picking any stack you want may sound wonderful to the individual developer, it's less so for the company as a whole that has to maintain different languages and frameworks

This is an orthogonal issue.

You can pick any stack, but you don't have to. There can still be general company-wide rules about what is acceptable and what is not. It's just that this is informed and enforced by actual needs (not being too diverse in the technologies deployed), which can change over time independently of the contents of any code repository.

Plus you can still make strategic choices here and there that don't comply with the general rules, and have, say, 95% of teams working under stricter limits on what's allowed, and a couple of teams with special needs where you were free to decide otherwise.


> While picking any stack you want may sound wonderful to the individual developer, it's less so for the company as a whole (...)

Nonsense. Each team owns its services, and they know best what works and doesn't work. It makes no sense to claim organizational inertia as a valid reason to prevent you from picking the best tool for the job.

To add to that, microservices allow you to gradually upgrade services without having to be blocked by the idea of having to go through major rewrites. You just peel off a concern, get it up and running, and direct traffic to it. Done.


Isn’t this entire site run off a laptop in someone’s shed?

And yet it’s got basically perfect uptime. Whereas the impenetrable cloud goes down every week and costs billions a year.


> And yet it’s got basically perfect uptime.

Some monitors indicate that hackernews has a 99.84% uptime in the past 30 days. Hardly "perfect".

https://hacker-news.statuspal.io/


And that still doesn't include the very short term soft-failures where HN tells you "something happened, try again" - those happen every week or so.


This site is also quite an easy service to run - it's all text, with very few customizations that affect the frontpage that everyone sees.


"I haven't tried Heroku, but expect similar prices and nearly as high of a configuration burden."

Are you kidding me? Prices, sure, but comparing the configuration burden of the AWS or GCP alphabet soup to a dead-simple Heroku setup is ridiculous. You set up something like hirefire.io for autoscaling, you connect your Github repo and that's it. You're done. Compared to weeks' worth of headaches setting up Terraform Cloud and CI actions and massaging your state file and spending days tearing your hair out over insane, obtuse NAT configuration parameters? It's literally night and day.


Ya I think you're right. I regret not trying Heroku first.

I'm glad to have learned AWS, although I don't know that I'll build another cloud server. I think the biggest problem I found is that services tend to require each other. So I ended up needing an IAM role for everything, a security group for everything, a network setting for everything.. you get the idea. A la carte ends up being a fantasy. Because of that interdependency, I don't see how it would be possible to maintain a cloud server without Terraform. And if that's the case, then the services just become modules in a larger monolith. Which, to me, looks like cloud servers are focusing on the wrong level of abstraction. Which is why preconfigured servers like Heroku exist, it sounds like.

An open source implementation of Heroku running on Terraform on AWS/GCP could be compelling. Also a server that emulates AWS/GCP services so that an existing Terraform setup could be ported straight to something like Hetzner or Linode with little modification. With a migration path to start dropping services, perhaps by running under a single identity with web server permissions (instead of superuser). And no security groups or network settings, just key-based authentication like how the web works. More like Tailscale, so remote services would appear local with little or no configuration.

Also this is a little off-topic, but I'd get rid of regions and have the hosting provider maintain edge servers internally using something like RAFT combined with an old-school cache like Varnish to emulate a CDN. The customer should be free from worrying about all static files, and only have to pay a little extra for ingress for the upload portion of HTTP requests for POST/PUT/PATCH and request headers and payloads.

Oh and the database should be self-scaling, as long as the customer uses deterministic columns, so no NOW() or RAND(), although it should still handle CURRENT_TIMESTAMP for created_at and updated_at columns so that Laravel/Rails work out of the box. So at least as good as rqlite, if we're dreaming!

Edit: I don't mean to be so hard on AWS here. I think the concept of sharing datacenter resources is absolutely brilliant. I just wish they offered a ~$30/mo server preconfigured with whatever's needed to run a basic WordPress site, for example.


The price disparity is getting worse. Cloud price reductions haven’t kept pace with the amazing pace of hardware performance improvements, especially in I/O and RAM quantity.

For $400 you can buy an SSD with 1.5 million sustained IOPS.

It is no big deal to build a 1TiB RAM box these days.


If there were no microservices, a lot of companies would run their monoliths on a few VMs, with a small fraction of the cost ;)


If it wasn't for a good portion of tech leads' FAANG aspirational goals, I bet a lot more companies would be on monoliths. It's simply staggering the productivity and cost differences (up to of course a certain point) being on monoliths vs microservices. The discussions on authorisation alone.....


> If it wasn't for a good portion of tech leads' FAANG aspirational goals, I bet a lot more companies would be on monoliths.

I think you inadvertently pointed out one of the advantages of microservices: the ability to bootstrap ideas without having to be hindered by artificial barriers to entry. If you have an idea and want to show it off, you just implement it and make it available to the public. Call it "aspirational goals", but I don't see any advantage in barring people from providing value without having to wrangle bureaucracies, red tape, and office politics.


If you ignore the costs from stuff like VMware, vCenter, Windows Server, etc... sure. But it's weird to forget that managing VMs was a huge expense even before the shift to the cloud. No one was hosting a few dozen or a few hundred VMs without very expensive management software.


I’m not saying that you shouldn’t use the cloud for deploying your software. But instead of deploying hundreds of small services, you may just deploy a handful of bigger ones, and still be good. And you may also not need 30 different AWS services; you may be good with a database, maybe a message broker and a container runtime.

If you keep the infrastructure simple, it’s also much easier to switch cloud provider and to set up testing systems.


I don’t know that this is terribly off base.

The cloud almost always ends up costing a lot more at scale and increasingly at the beginning.

In some part this is because the cloud is someone else’s computers that you have to help pay for.

The current incarnation of cloud was designed in part to work around scaling issues with Linux networking, which have long since been solved. What a $5 Linode/VPS can do today vs 10 years ago is much different from a linux standpoint alone before additional horsepower.

At the same time tooling to host (docker, ansible, terraform) is making traditional devops heavier at times.


> cloud infrastructure and microservice movement of the 2010s was.. a scam.

Unfathomably based.



I think one disadvantage of putting everything (nginx, web server, database, monitoring tools, etc.) in one machine is that suddenly your machine is exposing a myriad of ports to the internet and one mistake on your side (e.g., a misconfigured auth module) is all that's needed to compromise your entire service.

Having some sort of vpc where you can safely put your db server that only listens to requests from servers within the same vpc (e.g., web server) sounds like good practice to me.


> I think one disadvantage of putting everything (nginx, web server, database, monitoring tools, etc.) in one machine is that suddenly your machine is exposing a myriad of ports to the internet and one mistake on your side (e.g., a misconfigured auth module) is all that's needed to compromise your entire service.

All the Linux distributions I have gotten to know use sensible defaults so that critical services don't bind to a public-facing interface / bind only to localhost, e.g. MariaDB and MySQL on Debian.

Besides that, Hetzner's "Robot" interface allows you to configure which ports/IP addresses are allowed access to your Hetzner server.
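
A quick way to check what a box actually exposes, assuming a Debian-style MariaDB/MySQL layout:

  ss -tlnp                          # list listening TCP sockets and their owners
  grep -r bind-address /etc/mysql/  # Debian's default is 127.0.0.1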


Hm, why? You can run services without exposing their ports to the whole world. Or use unix sockets..
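
For example (sketch; image and port are arbitrary), published ports can be bound to loopback only, so just a local reverse proxy can reach them:

  # Only processes on this host can reach the database port:
  docker run -d --name db -p 127.0.0.1:5432:5432 postgres:15
  # Or skip TCP entirely: app servers like gunicorn can listen on a unix
  # socket that the reverse proxy connects to, leaving no port open at all.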


Hetzner is great but it's not a good choice for companies which really care.

Your data is not encrypted at rest.

Their security is not bad, but someone can easily plug in whatever they like.

Try this at Google. Google not only has much stricter physical access control but also FULL control over the hardware. They know the firmware of the mainboard. The servers don't even decrypt and start if you move them out of their datacenter.

Google doesn't just have ingress and egress, they have undersea cables.

There is so much difference between Hetzner and Google that it's not the same thing.


> Hetzner is great but it's not a good choice for companies which really care. Your data is not encrypted at rest.

It's up to you to set up encryption at rest, e.g. set up LUKS on a Hetzner machine.
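
Rough sketch for a secondary data volume (device name is a placeholder; this wipes it). For the root device you'd additionally want something like dropbear-initramfs so you can unlock it remotely over SSH after a reboot rather than via the rescue console.

  cryptsetup luksFormat /dev/sdb        # prompts for a passphrase, destroys data
  cryptsetup open /dev/sdb data
  mkfs.ext4 /dev/mapper/data
  mount /dev/mapper/data /srv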

> Their security is not bad, but someone can easily plug in whatever they like.

That's not how it is. Check e.g. this six-year-old security video from Hetzner.


I know the video quite well and I also have stuff on hetzner.

But it's just on a whole different level from Google.

I have a small startup at hetzner.

But my main job has everything at the big cloud providers because of audit logs, high SLAs, global network, etc. And no, the Hetzner video already shows that Hetzner can't fulfil all required security certifications.

They have rows and rows of desktop PCs there.

By 'really care' I mean: if you process millions or billions, you just don't go to Hetzner.

And doing encryption at rest with what decryption method? Manual unlock through a remote console?

At Google you literally have encryption at rest by default, with key rotation built in, and you can use your own keys.

Dimensions of difference


You are really comparing Google, which sells cloud products, against a company which has a variety of products?

If you only compare the cloud products, how do you know that they aren't encrypting their drives?

And BTW, they own undersea cables as well ;) https://www.hetzner.com/presse-berichte/2016/01/156818

Please educate yourself before you make such wild claims.


They don't own undersea cables.

They invested in one and it's not stated how much.

And yes, I'm comparing Google with Hetzner as an answer to the OP, to tell him that the cloud is more than just cheap compute.

I'm trying to explain why people pay a premium for cloud services and what the difference is.

And I use and like hetzner a lot. I'm happy to use hetzner for plenty of things just not for everything.


Never figured out how to use Hetzner. Wanted to try since they get such a good reputation on here, but they banned my account when their system presented no way for me to verify my identity.

They required either PayPal or a passport. I have no PayPal account, and their 3rd party verification system only allows passport from your country of residence (signup requires providing a contact address and they pre fill using this address; you can’t change the passport country). I am a British citizen living in Japan, and therefore hold a British passport; there was no way for me to provide a Japanese passport. I asked what I should do to comply, and they banned my account 6 hours later.

I can’t be the only one to experience this, can I?


I’m currently setting up infrastructure for a startup and it’s been very interesting how the threat model of data loss and disaster recovery is no longer hardware failure: it’s account lock out.

I’ve got streaming replication of my core data going from one cloud company to another, so that if one has some antifraud system go rogue on me I still have access.

As somebody who used to spend a lot of time thinking about drives breaking it’s an interesting shift.
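
For example, if the core data lives in PostgreSQL (an assumption; host and user below are placeholders), seeding the standby at the second provider is roughly:

  pg_basebackup -h primary.provider-a.example -U replicator \
      -D /var/lib/postgresql/15/main -R -P
  # -R writes the standby config; once started, the replica keeps pulling WAL
  # from the primary, so losing one provider's account doesn't take the data.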


That's a very perceptive comment. It happens again and again and it's much harder to control for that, as compared to say making sure you are running in different amazon availability zones or something. If you wanted to destroy someone's service, probably getting them banned like you describe could even be easier than a DOS now. I worry about the day that google kills my gmail account for some random and never to be explained reason.


Well, this site is a big reason I got the insight to focus on account redundancy over disk redundancy. Lots of posts over the years of people locked out from all the big clouds in a panic trying to see if an employee will see their cry for help on HN.

I do NOT want to be making that post!


Exactly. My main infrastructure is on Hetzner, but I have live replication via WireGuard to another hoster in Austria. With fewer resources there, but for availability "in the case of".
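
Roughly what that link can look like (keys, addresses and hostnames below are placeholders):

  # /etc/wireguard/wg0.conf on the primary (the standby mirrors it):
  # [Interface]
  # PrivateKey = <primary-private-key>
  # Address    = 10.10.0.1/24
  # ListenPort = 51820
  # [Peer]
  # PublicKey  = <standby-public-key>
  # Endpoint   = standby.other-hoster.example:51820
  # AllowedIPs = 10.10.0.2/32
  wg-quick up wg0    # replication then talks to 10.10.0.2 over the tunnel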


I’m also thinking about buying a second hand server and racking it in a colocation joint just so I can physically get the disk. The client data I have is super important, and there is some level of comfort you only get from bare metal.


I would go further and have a small server/NAS running locally in an office. The best backup is one you sit on top of.


I literally have a Linux machine that I’m ready to rig up were it not for Comcast being my internet provider. Maddening to live in “Silicon Valley” and be dealing with dog shit data caps and speeds.


This does not pay off in Germany due to the high energy costs in private households or office spaces.


With 0.40 EUR/kWh, running a beefy 100 Watt NAS setup will cost 0.96 EUR per day.


Yes, around 30 Euro per month. Now add the hardware costs.. And the internet connection costs (partly).. Anti-theft options.. In my opinion it is cheaper and more reliable to rent another dedicated server at another hoster for this. If you are sitting in Germany.


For a "small server/NAS"?


> They required either PayPal or a passport

I had similar struggles with some non-IT service providers in Germany. They couldn't fathom why I have non-German nationality, a German address and a driving license from a third country. Passport, German address and driving license all have different addresses on them (all three being EU addresses). It is apparently a huge red flag in the EU in the 21st century. Incredible.


You have an address on your passport? I can see how that could cause issues if the address isn't valid. If I'd have to determine if your order was fraudulent or not, I'd err on the side of caution rather than risk having you abuse my server infrastructure (in the case of Hetzner).


A lot of people confuse the passport (Pass) with the national identity card (Personalausweis). There's no place of residence stated on the German passport, but it is part of the Personalausweis. Since German citizens by law are required to own a Personalausweis (there's no mandate to carry it around, just having it somewhere in a drawer suffices), practically all businesses in Germany do rely on the Personalausweis to validate identities. And of course if the place of residence stated on the Personalausweis doesn't match the actual place of residence this is going to trigger some red flags.


This is a very German thing. I’ve never seen ID cards from other countries that include an address of residence. Also the obligation to own an ID card is a very foreign concept in most other countries.

I had that issue with German companies before, especially those who rarely deal with international customers. For example, in Austria only a few people own an ID card (most people have a passport, but not everyone). So the German companies were very surprised when I gave them a copy of the Austrian ID card, which also doesn’t have an address on it.


> the obligation to own an ID card is a very foreign concept in most other countries.

Is it? From a quick search it looks like a very large number of countries have compulsory ID cards:

https://en.wikipedia.org/wiki/List_of_national_identity_card...


> I’ve never seen ID cards from other countries that include an address of residence.

In the USA the driver's licenses are ID cards and have addresses. Or if you don't have a license you can get a regular ID card, which has an address. The laws vary from state to state about how long you can go after you move before it gets updated. I'm not aware of a legal requirement to have one though, unless you're driving.


I thought the only way in the US to prove your address of residence is collecting your last few phone bills and bank statements. It was a huge LOL moment, when I first heard about that.


Yeah, it's really odd and circular. You can often present a lease agreement if you're renting an apartment or house. If you own, you can usually present a mortgage statement. If you own and don't have a mortgage (because you purchased with cash), you can use bank statement and/or phone statement. But what's funny is, how do you get the bank statement and/or phone statement? You usually have to show a lease agreement, mortgage statement, or some other proof you own the place you live. It's fairly difficult for someone coming from another country to get those things when they first arrive because all the documents they have are from another country and the entry-level employees that have to take them would have no idea what to do with them. My foreign friends and neighbors have stories about having to use a secured credit card (basically a debit card but the machines think it's a credit card) for 6-12 months before they could get a regular credit card. It's very confusing (and arbitrary and broken) if you're not on the easy path.


Similar problems exist in many countries; there are usually no problems for the majority who live there, so the pressure to fix it is low.


This is true in the UK - driving licenses have your address on them, but for some reason they're not usually accepted as proof of address. I assume this is because you only need to renew the license (i.e. get a new copy of the physical piece of plastic with an updated photo and expiry date) every ten years, so in theory your driving license could be evidence that you lived somewhere nine years ago but not evidence that you still live there today.

Providing a recent utility bill or bank statement is a much more common ask.


It's definitely not the only way, but it is an option for some people who don't have other types of ID.


Spain also has the address in the DNI (national document of identity), and like the Germans, Spain's citizens are also required to have such a document. Many public and private services will ask for your DNI if they need to validate your identity.


We have address printed on ID cards here in Italy, just as an extra data point in addition to the other people who have replied already.


The US does not have such cards; that is why they have issues with voter verification. Most countries have free IDs for citizens.


Yes we do have government provided ID cards. Your driver’s license (or if you don’t drive there’s a non-driver version that’s just a card). There is no mandate to get one or carry it (except for driving).

We have issues with voter registration because it’s been politicized. Poor people are disproportionately less likely to have ID cards (because it costs money, takes time, and most people get it to drive which is expensive), less likely to have it up to date. It’s disenfranchising to mandate IDs. Whether or not it’s a problem falls down party lines and your favorite statistician’s analysis. But that’s why it’s “an issue”.


South American countries with per capita incomes that sometimes are an order of magnitude smaller than the US managed to have national IDs for free to their entire population, and have required voter IDs for decades. This is a mind boggling American superstition, it is not reasonable to suppose that the poors in America are so poor that they can’t have access to something the poor in Peru or Paraguay have.


The issue is a little more nuanced than that.

Part of the issue is that you don't need an ID in America for most of your daily life. Most people get it to drive - if you don't have a car (which is actually expensive) then you may not get an ID at all. Yea most people will get an ID, but it's not something people need.

When it comes to the cost of the ID, part of the cost is taking time out of work to sit in a crummy office and fill out paperwork. They require certain proof-of-identity paperwork that can be hard to get for certain walks of life. It's quite an edge case in society to not be able to produce a small amount of paperwork to self-identify, of course.

It's a small cohort that truly doesn't have the resources to get an ID, but there is almost no observed downside to not requiring IDs to vote - American elections are and have been perfectly legitimate (or until 2020, depending on who you ask...). Why would we put up extra barriers to vote when we could just... not?


We have government-issued ID cards, but we don’t have free government-issued ID cards as far as I know, which is one of the problems with these voter ID laws.


In Austria it is fine to vote without an ID, if the members of the voting committee know you. On the countryside they usually do know their people. It’s mostly minor politicians (from all parties) that are in the voting committee, and they sit there all day to verify the election is not tampered with. And those politicians usually know everybody from their neighborhood.

You need to be on their list though, but you get added automatically by the city to the list. When you move you need to register your new address, so they know that you live there.


In the US the states decide these rules, the federal government just sets limits. So, in my state, I just tell them my address, if I’m registered to vote they’ll have my name in the list, no need to show an ID or anything. They have a list of registered voters, so I guess if multiple people tried to give my name at my address, it would get flagged when the second person showed up. I actually don’t know what the procedure is. I think it almost never happens, at least, I’ve never heard of it happening to anyone.

Individual voter fraud is a silly way to influence an election, you have to get away with it thousands of times to make a dent.


It ranges from $10 to $90 depending on the state (average is ~$30) and most states also have a discounted rate for people under the poverty line.


It is unacceptable to charge people any amount of money for a voting requirement.


True, but in Europe usually everybody has an ID at some point, because you need it for traveling. And an expired ID also works to prove your identity.

But in theory you could also bring two or more „identity witnesses“ that either have an ID or are known to the people at the voting station, that can vouch for your identity.


Seems to work fine in Canada. We also pay for our IDs, at least here in quebec.


Practically it works fine in many areas of the US as well—just, good areas that don’t want to put up hurdles for voters in the first place. But give an extremist a means to disenfranchise anybody and they’ll go for it.

Morally, even if it works out OK, it ought to bug you if you don’t have a cost-free way to vote.


I don't disagree at all to be honest. I don't think voter fraud is an actual concern (if bare mininum measures are taken, like voter registration or even just "checking in" individuals with their adresses), but I think it's weird that it is considered to be such a huge issue in the US when it is actually the norm in most of the world.

Especially since it's usually white people using POCs as almost "noble savages" who can't figure out how to vote, get an ID, or have a drivers license.


I agree and think that some of the “it is just impossible for minorities to get IDs” stuff is, like, uncomfortably low expectations. I think this is sort of condescending and not really helping our case. If you took the average person from a minority community, I'm happy to believe that they are equally able to get an ID as a person from a majority community.

But in every community there is a range of willingness to deal with bureaucratic annoyances to vote. Adding more hurdles bumps some people from the voter to non-voter bucket. The reason it is a big issue in the US is that we have a well documented history of adding those hurdles selectively in order to suppress votes from particular communities. This is part of a really dark chapter in our history so people have a visceral reaction to it. I mean, since you are Canadian—I guess people would be a little skeptical if someone tried to start a conversation like “Well lots of countries have boarding schools so here’s my plan for education in some underserved communities…,” right?


Apparently, it isn't obligatory to have an ID card here in Germany. My friend researched this last time his ID was about to expire and decided to go with passport only. The only problem is when you have to verify your address, since that's usually done via ID.


True. But it’s still obligatory to have an ID. In a lot of countries this is not the case.


Strange for me - the CZ ID cards work the same way as described.


True. I just checked the ID cards for a few EU countries, and most of them don't seem to include an address of residence (I checked Poland, Netherlands, Belgium, Finland and Switzerland). France includes an address too.

https://www.consilium.europa.eu/prado/en/search-by-document-...


Spain does that


Is the ID requirement a leftover from the Nazi era? I don't suppose Germany ever issued a blanket repeal of all the laws passed during that era.


The law has been changed numerous times since then. However, the requirement to possess an ID was introduced with the start of WW II. So it's not completely wrong to call it a Nazi invention.


German law requires you to own a Personalausweis or Reisepass (passport). If you have a passport, you are not required to also have an ID card/Personalausweis.


To add to this, it is actually forbidden to have a different main residency than the one on your Personalausweis if you live in Germany. If you move, you have to change it within a few weeks.


This is dumb. Should I go to my home country and update my personal ID/passport every time I change apartment in Germany?


If you have a residency permit, that gets updated. If you don't have one (EU citizen), then having done the registration is enough. (I think the GP's wording isn't particularly precise: what matters is the registration - and if you have a German ID document, then that gets updated at that point.)


> Since German citizens by law are required to own a Personalausweis

This is not true. You must have an Identitätsnachweis, but if you have a valid passport you do not need a Personalausweis.


There is a common misconception about the law. Germans are only required to have either a passport or a NIC.


Yes, there's an address on all three - passport, national ID and driving license. The passport and national ID are from a southern EU country, the driving license is from a northern EU country, and I live in Germany. I have no intention of changing any of the addresses on any of my IDs or driving license until legally required (10+ years from now).


German passports have the city/region you live in on them. The ID cards have your full address. So if the address on your ID card doesn't match your passport, something is definitely fishy, as you would have to get them updated when you move.


German (and most other?) passports feature the name of the issuing authority and place of birth, no other location, and remain valid regardless of your current registered address.

Side note: It seems like many African ID cards list the profession and I've had to explain to several suspicious border guards why my passport doesn't.

EDIT: I've always wondered about this and apparently the body that sets passport specifications is ICAO, the International Civil Aviation Organization, as mandated by the UN. Specifically through ICAO document 9303.[1]

[1]https://www.icao.int/publications/documents/9303_p4_cons_en....


The German passport has a field for your residence, which is mandatory and needs to be updated when you move. It is generally just the broad location you live in, usually your city.

You can see the field in this official example passport (field 11): https://www.gesetze-im-internet.de/normengrafiken/bgbl1_2017...


Ah, you're right. I guess I never looked at page 2. Never updated it either.


Technically you would need to do that I think (it is free at your local Bürgeramt). I just did it together with the registration.


Their compliance process is quite abysmal.

This is actually exactly what happened with us. After creating a new account with the intention of exploring the ARM64 services, our account was unexpectedly suspended. I contacted them to ask about the specific concerns regarding my customer information and the reason for deactivating my account. Unfortunately, we did not receive any response to our inquiries.


> Never figured out how to use Hetzner. Wanted to try since they get such a good reputation on here, but they banned my account when their system presented no way for me to verify my identity.

Is this something new? I (Norwegian) have been using Hetzner for 10+ years, and never had a problem, and never had to attest my identity. Currently I have four servers running there. The last one was set up approx. a year ago, IIRC.


>I (Norwegian)

That may be the difference. Some nationalities get KYC'd easier than others and they seem to take it very seriously


+1 anecdatum


There is a reason :)


Norwegian living in the UK, and have used Hetzner for years, both via a personal account and more recently via a corporate account (UK company, Norwegian citizen), and it's not been an issue. For one of the other corporate accounts I used it for, the managing director at the time (UK company, UK citizen) did have to provide ID, though. Not clear exactly which criteria have been in place when and for whom.


It's definitely not new, they required an id 20 years ago. (From EU citizens, so it's not about the country either).


For what it's worth, I have a VM on their cloud offering and I've never had to provide them with an ID.


So how do you explain that I never have had to show an ID?


Just because Hetzner has had processes that can involve ID checks if deemed necessary for a long time doesn't mean that they check ID for everyone.


I understand that. I'm trying to figure out why I - and several others - are "special."


A large statistical model, likely run by some third-party attestation service, said so. It's mostly used to refuse service where governments require that (say, no service for North Korea), or to people known to repeatedly do fraud, etc. But false positives occur; sometimes an inspection by a human helps (send an ID).


I'm Norwegian and have used hetzner for years - i had to send a copy of my passport to get started.


They probably suffer from a lot of sign ups with fake IDs and with criminal intent. So I get that they are rather strict.

Another thing to consider: cloud providers are not very interested in individuals as customers. They usually want companies as customers that also buy more than a $3 vserver. A solution for this problem could be a sign-up fee ($50 or $100), to pay for an extended manual vetting of customers, that is then added to the account balance.


> Another thing to consider: cloud providers are not very interested in individuals as customers

A key theme in the "cloud vs data center" story is that most public cloud providers (AWS, etc...) were really easy to sign up, requiring a CC and nothing else.

Meanwhile, hardware vendors wouldn't even talk to you as an individual / small business.


Tried to sign up for gcloud a couple of months ago using my >12 year old google account. Long story short, while I technically managed to sign up, I never managed to get my GPU quotas increased to anything above 0, support is non-existent and contacting sales (which seems to be the 'official' way, but is only really intended for business customers?) never got a response...

Meanwhile, my small server (not Hetzner) has been running for many years without any issues, never had to send them anything after I signed up...


> Never figured out how to use Hetzner. Wanted to try since they get such a good reputation on here, but they banned my account when their system presented no way for me to verify my identity.

I ran into the same thing as well; maybe my real name sounds a bit funny to their system? It was very discouraging to move forward after being instantly banned. I reached out to ask them how I could verify, and the only way was sending them an unencrypted mail with a copy of my passport. Upon request they suggested I could simply send them a fax.

Note, this was some years ago and I've never given Hetzner another try. As far as I can tell from professional experience, you will have a lot of back and forth with Hetzner support, which becomes quite bad the moment your team is international, because they'll always manage to sneak in some sort of German text. It really feels antiquated having to go through support for basic server hardware debugging. Eventually they'll often resort to replacing your instance.


This has come up a few times on HN, if you search comments for "Hetzner fraud". The solution is to use a different provider if you can't use Hetzner.


I've seen the other side of this. Our SaaS (a data API) has a number of "customers" who attempt to use us (or our competitors; we're not special here) to power some data displays on their phishing-scam websites, to make them seem more legitimate.

We ban these people — they're violating our ToU by engaging in illegal activities. But they come back. With different names, different IPs, different browsers, different credit cards. They have complete identities to burn. (We spot the correlations anyway, along other dimensions I won't disclose here, and so can keep them out pretty effectively.)

And guess what? Very often, their requests are coming from Hetzner IP blocks.

I don't think the scammers have a direct business relationship with Hetzner, mind you. I think Hetzner tries just as hard as we do to stop these people from making use of their services. But I believe that these Hetzner boxes are either set up as exit nodes of one or more common VPN providers; or they're being registered for other purposes by other parties, and then resold on the secondary market on dark-web forums.

If I were a VPS provider, and I didn't want to support illegal activity, I'd probably just give up on providing service to individuals altogether, only taking corporate customers; and even then, requiring a DUNS number or something as an additional proof-of-work for that corporation, so that people can't just keep spinning up corporations in places where that's essentially free.

Hetzner hasn't gone that far; but it makes sense to me that if a user account is flagged as needing extended verification, and the ops person responsible for verifying the account takes a look at the user-lifecycle activity logs for the user, and sees that this user has: their IP coming from multiple places during registration vs login, their browser locale and timezone bouncing around between requests and set for settings uncommon to the country their IP is originating from, etc. — that the answer would be "ban" rather than "ask the user why the heck that's happening."

One time out of ten, the user is a real person doing something weird. The other nine times out of ten, the user is a scammer and is going to make up some story about being a real person doing something weird. Every scammer has their very own pool of man-hours, and if you're in the critical path for their scam, they can burn a number of those man-hours being really insistent that they're authentic. Until you let them in, and see that they immediately start up the same dumb phishing-scam bot script that all the other scammers purchased.


We crawl through data like this professionally and from what we see, Hetzner isn't actually that bad at combating fraud. They are not GCP or AWS but there are other hosters of similar scale that have significantly worse response times and leave up clearly compromised machines for a lot longer.


I'm very curious to know who are the worst offenders! (If you can/want to share the details, obviously :) )


I cannot really comment on this because its A) a multi-dimensional problem (Hosters like Oracle have slightly longer mean removal time than Hetzner but less of their IPs end up in our aggregated blocklist, so does that make them worse or better idk) and B) we're trying to coax at least some of these hosters into using our service to support their fraud team so its probably best not to call out potential customers ;)


It’s likely they’re just using hacked sites. I’ve seen a WordPress site used as a Viagra botnet. The owner of the business thought it was good for them because they would get more traffic so they had given the other party root access. :sigh: the shit you see as a contractor…

But I’d be willing to bet you’re seeing hacked servers, not necessarily Hetzner’s fault. Hell, they didn’t even have ipv6 firewalls until recently (like the last six months).


I have pretty good reason to believe that scammers are using purchased Hetzner credentials — which is that some scammers are just right out there in the open, talking about how they do what they do: https://teletype.in/@slivmens/LjPaei8pMTT

Translated quote:

> To do this, we go here: [link to carding forum] and create a topic in the section "verified Hetzner accounts."

> Offer price — no more than 400 rubles is needed. The priority is people from Ukraine, as they have benefits. GEO of the person who verifies the account - any, excluding Russia due to sanctions.

> Another important detail: the seller must register a fresh GMail account, use that account to create an account on Hetzner, and verify it themselves.

> After verification, we wait 3 days before the creation of the new server — otherwise the likelihood of the account being blocked for abuse increases.

> After purchasing the account credentials, we change the password, both on the Gmail account, and on the Hetzner account.


Are you implying that someone (possibly temporarily) living in another country than where they're from means the sensible course of action is to instantly ban them?


I provided them with my passport and all the other documents they requested and they still banned me with no recourse, so yeah, their signup process is the worst I have ever experienced.


They seem pretty good at writing back to people. Have you contacted them? It's normal for new accounts to get suspended pending KYC checks. But they also do the checks pretty fast.


The OP indicated this in their original comment: "I asked what I should do to comply, and they banned my account 6 hours later."


That’s pretty suss. Just send your passport, and if they have any issues with it, they’ll ask. If they ask you for your passport and you ask “how do I comply”, I’d probably suspend the account too, since you seem unable or unwilling to comply.


They asked him for a Japanese passport and he does not have one. Hence the "how do I comply".


Just send the one you have? I don’t see the problem.


And I also don't see the problem with asking a clarifying question when he does not have what they asked for.


It’s obviously a typo or something similar. The customer support person likely has no idea what you’re talking about, they want a passport with your name on it, they don’t care where it is from.


Sadly not, and I'm one of the happy Hetzner customers

They are remarkably well known for having draconian anti-fraud


I think they really care about two things: VAT and cryptocurrency. Passports are required to prove whether you should or should not pay the 19% VAT, and crypto (namely, Sia) destroys their servers.


Whoa whoa whoa I can opt out of paying VAT? How do I do this!


You prove you don't reside in the EU, and they don't bill you the VAT.

(But I believe the KYC rules aren't only because of this.)


You got my hopes up but it seems I'm stuck paying UK VAT even though I don't think it's necessary any more.

I've run into this with a few different German businesses, I don't know if I'm right or they are, but it seems like they don't _have_ to charge me VAT. Which is underlined by the huge "import VAT" bill I'm dumped with when I have physical goods delivered to me in the UK.


Are you using the computer for business? If so, you should be claiming VAT credits that neutralize VAT (since it's intended as a consumption tax). If they don't charge you VAT on behalf of the UK, the UK should be charging you the equivalent consumption tax by some other means to keep the consumption tax base neutral.


They do have to charge you VAT unless you are a VAT registered limited company


Have a company whose business purpose justifies spending company money on servers. Make a profit so you have company money to spend. Spend it on servers. There you go.


That's exactly why they have draconian KYC measures in place)


> I have no PayPal account, and their 3rd party verification system only allows passport from your country of residence (signup requires providing a contact address and they pre fill using this address; you can’t change the passport country)

Sounds like a good thing to do?

My current hoster in Germany made a surprise call and asked me what the name of the hotel near the address I provided was. This after I submitted the order and before they accepted.


I sent them a photo of my passport encrypted with their pgp key. This was a long time ago, so the process might have changed, but it seemed entirely manual.


You get a good range of options at a low cost with Hetzner but need to identify yourself. Many hosting companies have different levels and types of verification these days. If you want to avoid any KYC, checkout https://kycnot.me/services#VPS.


That's why I don't use Hetzner. I don't really want to hand over my passport to any company. How come DigitalOcean doesn't require that?


I use a credit card and was never asked for a passport. That's odd. I've been using them for more than 5 years.


Use a VPN connection to the UK when you verify... BOOM solved


Using a VPN is one of the easiest ways to get rejected. Most automated systems flag traffic coming from them as highly suspicious due to the sheer amount of crap coming through them.


You can run your own from a VPS and an SSH SOCKS proxy.


Just send them a photo of your British passport


There's a perception that the ARM Altra per-thread performance is really bad.

But according to my Minecraft benchmarks on Oracle's instances, it's better than old Skylake cloud instances... by how much is hard to say, since other tenants generate so much variation, but there are proper reviews of the whole SoC floating around out there.


I've benchmarked ARM systems vs other processors.

On my workload, using Intel as a baseline, AMD underperforms relative to standard benchmarks and ARM overperforms relative to standard benchmarks. I have seen good performance from Ampere, particularly for the price.

However, the ARM architecture complicates my workflow significantly. I've found that it's not worth using right now due to compatibility issues; it's easier to just stick with x86 for most things. Dealing with missing required packages just kills the value.

My system isn't generally compute limited anyway. IOPS and egress dominate rather than MIPS.


Try Alpine Linux; you'll be able to find almost everything, coupled with Nix just in case. I've yet to find any package missing.


Introducing new distros to solve the problem is moving in the wrong direction. It's so much easier to deal with one distro in the cloud. Just more reasons not to go multi-uArch.


To the downvoter...

I strive for simple infrastructure.

ARM doesn't support the x86 Docker ecosystem. That's a significant strike, but not totally unworkable for me.

My primary database isn't prepackaged with ARM packages, nor does the originating org build one. I could build it myself, which I have done plenty of times over the years. I did try it, but the dependencies aren't up to date on ARM. So now I'm building dependencies and the software package. Building software from scratch can be a rabbit hole that eats untold amounts of time. After a one-day effort I gave up and abandoned it. I'll revisit again if Hetzner brings Ampere to the US and I can verify the instances have much higher IOPS ceilings.

Enter another distro and I have to rebuild all of my provisioning and management scripts for all my services with all the testing entailed in the new dependency chains. I'd rather try to build my db from source again, frankly.


ARM cloud hosts typically run distros with good ARM support. Oracle Linux is actually quite good, kinda like a pre optimized RHEL.

Which begs the question: what is Hetzner pushing on these? SUSE maybe?


By default: Ubuntu, Debian, Fedora, CentOS, Rocky and Alma.

You can mount your own ISOs though.


Alpine brings its own issues with its different libc.

Maybe things have been fixed now, but about a year ago I was having issues with Ruby bugs that were due to differences between musl and glibc.


I'm curious what distro you're using where there are missing packages. I've never run across missing distro packages on arm64 Debian (which I regularly use at work).


Ubuntu server itself is pretty arm friendly. But there's a LOT of software that doesn't come with the distro's packaging that one might want to run in the cloud. Docker images and vendor hosted package repos. Everyone does this for x86. Very few do it for ARM. And if you want to try to build-from-source, the dependencies of a lot of pieces are often very out of date on ARM.

There is a kind of after market ARM community that occasionally builds these items. But they are generally pretty out of date.

I don't have an issue with the base distro and what it tries to do, but there are so many random pot holes and road blocks that just don't show up for x86.

I have ARM servers running in production now. But I have workloads that aren't reasonably portable to the uArch and I don't force them there.


Have you faced issues with packages being too old on Debian Stable though?

"Missing packages" may be packages missing important features or bugfixes.


I don't think I've ever run into an issue where arm64 packages are older than amd64 packages. Sometimes Debian Stable package versions are old, in which case I use backports, use the company's internal packages (which are often compiled for arm64), or compile it myself. It's no more of an issue on arm64 than on amd64.


I certainly have...

Arch and SUSE are ARM friendly though.


Isn't running Arch in prod considered brave still? Or has it become stable and predictable in the last few years I did not watch it?


I've done a bit of Minecraft benchmarking on my Oracle instance. It generates the same number of chunks per second as my 3600X CPU - although Minecraft is always a weird benchmark in that it benefits from low memory latency and large caches, as it is ineffective at using caches well and is memory bound.


It's even more lopsided when modded to the teeth.


We're getting pretty good performance on AWS Graviton3 for our data processing, their m7g is pretty good. Looks like they're banking on ARM taking off, with lots of new options in 2023.


Old Skylake cloud instances are "really bad" in the current day compared to Rome/Milan/Genoa, Sapphire Rapids and Graviton 3, for example.

We've moved on from lakes!


Cooper Lake (2020) is still a Skylake. They aren't that old!


The one that was mostly canned?

See: https://www.anandtech.com/show/15631/intels-cooper-lake-plan...

It's limited everywhere.


> The company is set to only make the hardware available for priority scale-out customers who have already designed quad-socket and eight-socket platforms around the hardware.

I work at one of those customers; we use loads of them.

Nevermind Ice Lake / Alder Lake (2021) / Raptor Lake (2022) -- Intel hasn't abandoned the "Lake" naming scheme yet.


> Nevermind Ice Lake / Alder Lake (2021) / Raptor Lake (2022)

i.e. Skylake+++++


No, starting with Ice Lake these are on the first actually-new core micro architecture after Skylake (it’s the “Cove” family, starting with Sunny Cove in Ice Lake). They have Lake in the name but these are actually a new micro architecture, finally, not Skylake family.


Facebook?


Also, by "Skylake" I mean whatever the last popular server CPU with the Skylake-derived core was... IIRC Cooper Lake was Facebook only, Cascade Lake was HPC, and I can't even remember beyond that, but they've had the same 28C server die forever that hosts everywhere use.


To be fair, Ampere Altra is old now too. Kinda a contemporary of Ice Lake and the tail end of Skylake.

Hetzner offering them right now is curious. Maybe it's a pipecleaner for the next Ampere gen?


The new Altra developer kit is also very cheap. I wonder if Ampere is dumping overstocked inventory. Given their track record I don't expect the One to be available any time soon.


Does Ampere have a newer offering yet? Probably not? So Hetzner are offering the "current gen".

Meanwhile AMD and Intel both do have a later offering so they are "old".


Minecraft is inherently single-threaded. The secret to running big servers is to run them on the fastest single-core processor money can buy.


This wisdom is a bit outdated, especially on heavily modded servers.

Minecraft is limited by a single thread, but some stuff does run on other threads. In addition the JDK will do compilation and GC on other threads now, which is especially apparent if you run GraalVM like you should.

Minecraft won't scale to 16 cores. But if you buy a 2C/4T instance, you are going to get less performance than you would get on a wider one.


> Minecraft is limited by a single thread, but some stuff does run on other threads

Hmm, not so outdated. If you're talking about JVM GC then sure, but pretty much everything is built around a single thread ticking the mod. I know some mods can do chunk pregen on another thread, but those are rarely used.


I've never paid much attention to Hetzner, but this interested me. Upon clicking the call to action in the announcement, they blocked me with "Request on Hold - Suspicious Activity Detected".

I have no interest in subjecting myself to this treatment.


Do you get this indignant over every error message you see?


I don’t think you were looking for a serious answer, but I’ve thought this over and there’s a broader point to be made here.

What I think you’re suggesting is that my trouble is with the machine—some version of frustrated human fighting with inanimate objects. In general this is a framing I oppose; automated machines are created by people, and people carry the responsibility for their creations. “The computer did it” is never an excuse.

So in a way, yes; I tend to view “error messages” as you put it as an extension of their creators, who very well may be worthy of indignation.


it’s almost like VPS hosting services have to be careful due to rampant fraud or something


And it's ALWAYS been that way. A very long time ago I implemented a VPS hosting service. How long ago? It was implemented using User Mode Linux; I want to say it was 2003. We had most of our success in a specific geographic region, and also within the Python community. We originally had automated sign-up, but found that if we got a signup from someone whose name we didn't recognize, it was basically guaranteed to be fraud.

In the early days it was slightly amusing to get calls from older folks trying to understand what a virtual server was.


There's a level of friction that is too much for some.

I'm attempting to soften that friction when I launch my PaaS

Bots and misuse are a concern. I'm thinking of crippling the free service to allow full functionality while limiting its use to really tiny files.


It's literally just an alternative to Cloudflare's attack protection.


Okay, and responding to their ad is an “attack”?


It might be; it's there to filter out would-be attackers. Normal users just get hit as well.



This probably won't last; the servers aren't fully loaded with VMs yet.


I regularly get 3-5 Gbit/s on their regular cloud servers to consumer ISPs. They definitely don't overbook their VMs regarding bandwidth.
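If anyone wants to sanity-check bandwidth claims like this themselves, a minimal sketch (hostname is a placeholder; you'd run the iperf3 server on the cloud box first):

    iperf3 -s                                 # on the Hetzner server
    iperf3 -c my-server.example.com -P 4      # from home: 4 parallel upload streams
    iperf3 -c my-server.example.com -P 4 -R   # reverse direction (server sends)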


For comparison, here's an AWS a1.large instance (2cpus, 4GB ram) YABS benchmark I just ran

https://gist.githubusercontent.com/12932/8ba27254846072a43b0...


Yikes. Why is the disk read so bad? 6 MB/s. Six?


It's gated by the default IOPS limits; you'll note the larger block sizes have considerably better throughput.

You can increase the IOPS, and thus the throughput for 4k operations, but at a cost, of course:

https://gist.githubusercontent.com/Q726kbXuN/7f03a9c11cab514...
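To make the IOPS-versus-throughput relationship concrete, a hedged fio sketch (illustrative numbers, not from the gists above: at a 3,000 IOPS cap, 4k random reads top out around 12 MB/s, while 1M blocks run into the volume's throughput limit instead):

    fio --name=small --filename=testfile --size=1G --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based
    fio --name=large --filename=testfile --size=1G --direct=1 \
        --rw=randread --bs=1M --iodepth=32 --runtime=30 --time_based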


4k block size, it's IOPS limited. No surprise there.



Huh, odd. I opened the URLs twice and got temporarily banned by hastebin. Thank you for the benchmark though. Very nice numbers.


Yeah also got some odd behaviour from the site.

Host wasn't my choice...just sharing the links


They've also been removed, can you please reup them?


Great news. I'd like to see how a vCPU compares between arm64 and amd64 but not sure how to best go about it. I've created VMs for both with 4 vCPU and will run the phoronix benchmark suite. After that I'll probably try 16 arm vs 8 amd vCPU because that's a closer match in terms of price. Any suggestions welcome but I also don't want to spend too many hours on it. Will post results.

Ran the kernel build benchmark (result is seconds, lower is better):

   AMD64:
        272.916
        273.128
        270.477
   ARM64:
        1011.799
        1004.713
        1015.261
So the ARM CAX21 instance for 6.49 EUR/month took roughly 3.7x as long as the AMD CPX31 instance which costs 13.60 EUR/month, a roughly 2.1x price difference. Here the ARM instance did not shine on a kernel-compiles-per-euro metric.

Also ran sysbench cpu --time=60 --threads=4 run

   AMD64: events per second: 14681.70
   ARM64: events per second: 13455.11
In this test both are very close.
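For anyone wanting to reproduce these numbers, a rough sketch; I'm assuming the kernel build came via the Phoronix Test Suite's build-linux-kernel profile, since the exact invocation isn't stated above:

    # Phoronix Test Suite kernel compile (assumed source of the seconds above)
    phoronix-test-suite benchmark build-linux-kernel

    # sysbench CPU test exactly as quoted
    sysbench cpu --time=60 --threads=4 run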


Additional benchmarks:

sysbench memory --time=60 --threads=4 run

   AMD64: 5859951.00 per second
   ARM64: 6052749.14 per second
Here ARM had a lead.

Next up I timed compiling nodejs. time make -j4. I ran the test two times and took the faster result for each.

   AMD64: 
      real    28m46.385s
      user    107m48.971s
      sys     5m12.994s
   ARM64:
      real    39m18.443s
      user    146m25.801s
      sys     7m53.271s
Here ARM seems to be roughly 36% slower. This is actually pretty good considering the price difference.

Rescaled the ARM VM to 8 vCPUs:

      real    22m20.624s
      user    162m30.176s
      sys     8m50.104s
So now the ARM offering is noticeably faster than the AMD instance but still cheaper: 12.49 EUR vs 13.60 EUR.

What I learn from these benchmarks is that you might get some really good value out of these ARM instances if your usecase is not impacted all too much in terms of performance.


The main problem I'd think is that by being "virtual" you never really quite know if you're seeing how it would run in "real life".

Personally, if I were to do it, and I had some sort of load-balanced application that uses multiple backends/frontends/whatever, I'd balance across the two and compare over time.

And to be 'fair' you'd want to compare similar pricing.

Or if you had a consistent group maybe try a Minecraft server and move it between the two every week and see what people "feel"?


>I'd like to see how a vCPU compares between arm64 and amd64

That depends on how many vCPUs they are running on each physical core for both x86 and ARM.


Yes and we might never know or maybe it's not even static.

This will never be 100% scientific or correct since there are many factors that come into play, stuff like noisy neighbour for example.

What I'm trying to get is a ballpark idea of how their arm vCPU compares to the amd equivalent.


Did a quick comparison between intel 2 vCPU and Arm 2 vCPU, these are shared CPUs though so YMMV:

sysbench cpu --threads=2 run

Intel events per second: 1864.20

Arm events per second: 6687.05

Coremark

Intel iterations/sec: 20077.633516

Arm iterations/sec: 23625.767837


If you have access to them, a much more informative datapoint would be `openssl speed rsa` or `lzbench`. Sysbench is just a stunt; it doesn't indicate much at all.


Sysbench is a better synthetic test in my opinion because it does a lot of varied grinding instead of trying to evaluate the entire CPU using just one very narrow and specific task. Even on the same Linux distribution there can be a lot of differences between x86 and Arm builds because OpenSSL cannot be entirely orthogonal in its use of assembly optimizations and CPU crypto acceleration, even for the RSA benchmark.

To offer a somewhat more varied example I've tested the POV-Ray benchmarker on Debian 11 on Oracle's Ampere Altra servers ("A1.Flex") versus an identical setup/build running on a 2 GHz EPYC 7281-based x86-64 VPS, and on that single-threaded test the Arm VPS handily outpaced the EPYC with almost 2x the performance.


`sysbench cpu` basically measures whether a single primitive inside sysbench is correctly optimized for your platform which is almost meaningless. Its run-to-run variation is enormous because there is a lot riding on whether particular data structure is optimally placed and aligned, which sysbench makes no effort to control. On a typical hyperthreaded x86 machine you will get 100% variance or worse depending on whether sysbench's 2 threads are placed on the same core or on different cores, so you must control that with `taskset` if you want the result to mean anything.

On my local machine with 4 threads I get ~10k events per second on cores 0+2+4+6, but on cores 8-11 I get ~13k. Does this mean that Gracemont Atom is 30% faster than Golden Cove Core? No, it is only measuring the fact that the efficiency cores happen to share an L2 cache.
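A minimal sketch of the pinning being described; the core numbers just mirror the example layout above and will differ on other machines:

    # Pin sysbench threads so placement doesn't dominate the result
    taskset -c 0,2,4,6 sysbench cpu --threads=4 run   # one thread per P-core
    taskset -c 8-11 sysbench cpu --threads=4 run      # four E-cores sharing one L2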


Here's an lzbench run:

dd if=/dev/urandom of=1GB.bin bs=64M count=16 iflag=fullblock

Intel:

    Compressor       Compress.    Decompress.   Compr. size   Ratio    Filename
    memcpy           5062 MB/s    5013 MB/s     1073741824    100.00   1GB.bin
    zstd 1.5.2 -2    2083 MB/s    4743 MB/s     1073766410    100.00   1GB.bin
    zstd 1.5.2 -5     210 MB/s    4775 MB/s     1073766410    100.00   1GB.bin
    zstd 1.5.2 -9    85.5 MB/s    4774 MB/s     1073766410    100.00   1GB.bin

Arm:

    Compressor       Compress.    Decompress.   Compr. size   Ratio    Filename
    memcpy           10876 MB/s   10950 MB/s    1073741824    100.00   1GB.bin
    zstd 1.5.2 -2    3175 MB/s    11168 MB/s    1073766410    100.00   1GB.bin
    zstd 1.5.2 -5     192 MB/s    10967 MB/s    1073766410    100.00   1GB.bin
    zstd 1.5.2 -9     146 MB/s    10909 MB/s    1073766410    100.00   1GB.bin


Looks like the ARM one has twice the memory bandwidth, which would help with a lot of workloads.


It's more along the lines of having roughly the same total bandwidth but being able to exploit all of it from a single core, whereas on Intel you need to exercise all or at least several cores to drive the memory to the limits.


Nice. Showing off Neoverse's superior single-core load/store abilities.


Isn't that just measuring hardware support for RSA? Are many systems bottlenecked on RSA perf?


New Hetzner ARM:

                     sign        verify      sign/s     verify/s
    rsa  512 bits    0.000070s   0.000006s   14268.2    159885.6
    rsa 1024 bits    0.000405s   0.000021s    2468.4     47757.7
    rsa 2048 bits    0.002847s   0.000078s     351.3     12830.1

Hetzner x86:

                     sign        verify      sign/s     verify/s
    rsa  512 bits    0.000067s   0.000004s   14893.9    240902.8
    rsa 1024 bits    0.000127s   0.000009s    7845.9    114300.9
    rsa 2048 bits    0.000874s   0.000027s    1144.6     36800.6


If it's of any value to anyone, perhaps as an indication of hardware/environment implementation, I get identical numbers (<1% difference) on Oracle's Ampere Altra servers with OpenSSL 1.1.1n on Debian 11.


Due to a peculiar design issue, Neoverse N1 underperforms when using RSA. It doesn't affect other use cases and is fixed on newer Arm cores.


It would be very helpful to have this benchmark for the same price-point instances as well, e.g. it seems like the 2 vCPU + 4 GB ARM instance is 4.52 EUR while the 1 vCPU + 2 GB x86 instance is 4.51 EUR, so it would make sense to compare at that level too.


Could somebody walk me through why ARM based servers are so inexpensive compared to their x86 counterparts? Is performance much lower? 16 cores & 32gb of RAM for less than 25 euros seems like a bargain.


Firstly, ARM processors are simpler to produce, with simpler circuitry (RISC!), although the number of transistors tends to be similar between the two platforms.

Research costs are also very expensive for x86, with 2 vendors (Intel and AMD) competing savagely in many different markets such as high-end gaming, servers, ultrabooks and low-cost PCs. However, ARM (the company) focuses on licensing the designs and instruction sets, so its research costs are split among many licensees while ARM focuses on what it does best: designing chips.

That in turn translates into many vendors who can focus on improving manufacturing efficiency, procurement, logistics and sales. x86 vendors, meanwhile, have to keep up a wide range of manufacturing lines.

Finally, the laws of supply and demand play a huge part. The ARM market is 2 orders of magnitude (!!) larger than x86 in terms of raw number of processors shipped: ~370M x86 chips vs ~32B ARM chips in 2022.


> simpler circuitry

> the number of transistors tend to be similar between the two platforms

sorry, come again?


x86 has more pathways (interconnects): it implements a "mesh" design, which is a low-latency, high-power configuration of pathways. ARM uses a neat hierarchical design that favors low power consumption.

So, given a x86 and an ARM chip with the same number of transistors, the ARM one will still be a simpler, low-power chip with probably a higher number of cores.


ah thanks! I hadn't considered the core count difference.


I would guess simpler CPU core architecture means more cores per die. Presumably it also means lower performance, but perhaps not by as much as you think, since typically the most performant CPUs aren't the most cost-effective.

Quick Googling shows:

  - Intel W9-3495X has 56 cores for $5889
  - AMD EPYC 64 has 64 cores for $4299
  - Ampere Altra M128-30 has 128 cores for $5800
The Altra uses about the same power as AMD, and much less than Intel, despite offering twice as many cores. So it stands to reason that you just get twice as many cores for the same money, even if you factor in power, cooling and maintenance costs.

Hetzner offers a 16-core AMD (v)CPU machine at 225% of the price, so that roughly works out.


>16 cores & 32gb of RAM for less than 25 euros seems like a bargain.

16 cores doesn't say much. It depends how many DMIPS those cores will yield.


Power delivery and cooling are not cheap at datacenter levels.


Power is especially expensive in Europe right now. The war in Ukraine caused price increases between 300% and 1000%


Is that still the case? The prices seem to have peaked around August and are now back to their pre-war levels. Before that, prices had already been steadily rising since September 2021 though.

e.g. average price in August was over 400% higher than this March

https://www.nordpoolgroup.com/en/Market-data1/Dayahead/Area-...


I think wholesale prices for electricity in Germany are still around 3x compared to before the pandemic, and around 4-5x compared to 2020.

The big peak (up to 10x) is over, but the energy prices already increased a lot in autumn of 2021, when Russia first reduced the gas deliveries.


With the added environment benefits!


ARM's N-series cores are targeted at a somewhat lower performance point than top line x86 cores. That saves a bunch of transistors since there's a lot of diminishing returns. There's also design compromises not being made in order to hit clock speeds that server parts never try to hit anyways. Not having to decode complicated x86 instructions but simple RISC ones is also a small advantage, somewhere on the order of 5% power savings.

Also, x86 backwards compatibility and SMT don't cost much in terms of power or transistors, but both have large costs in terms of verification work.


Demand and supply. Even if they did offer equal performance, most of your Docker images and other binaries may need to be rebuilt. The inertia is hard to beat.

AWS has been trying really hard with Graviton 3.


Base Docker images, such as Ubuntu, have support for multiple architectures.

Building your own based on them isn’t really difficult at all; docker buildx works remarkably well and build tools such as maven, sbt, etc. seem to have rather decent support for building multiarch images.

Even better if you have decent build automation, implement cross building for amd64 and arm64 once and just make sure your FROM images support those.
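As a rough sketch of what that looks like (the image name is a placeholder, and the non-native half of the build runs under QEMU emulation unless you attach a native arm64 builder):

    # one-time: create and select a builder capable of multi-platform builds
    docker buildx create --use --name multiarch

    # build one manifest covering both architectures and push it
    docker buildx build --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:latest --push .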


A lot of things are usually not difficult in the absolute sense.

It's not just the building of it but the testing/verification of what's built.

One major problem with a lot of use cases is that there isn't enough test coverage, and hence confidence in the change. Some people / companies find it easier to just pay the difference.


Possibly off topic,

I actually registered the org oci-base (https://github.com/oci-base), when Docker Inc. announced the registry fees.

In part as I've been annoyed with multiarch and docker/registry in the past.

Then, to top it off, the official registry has been taking ages to support signing/attestation.

Plan is for it to be contributor driven, and RFC style requirements following best practices.

Not had much time to work on it lately, kids, Easter, etc.


in other words less competition, more market power tax


Here's hoping it's not just an early adopter promo


Hetzner has always had low prices. Should be good.


That's exactly what it is.


Maybe competition? x86 has only 2 competitors left. ARM has a more diverse ecosystem of competitors, and the environments have been different.

ARM designs had to be adjusted/developed for cell phones, which often cost 10% of a normal computer (the main use of x86). I can buy a cell phone with a display, battery, memory and an ARM CPU for $100 total (no idea how much the ARM chip alone costs). The phone will have semi-acceptable performance.

Even the cheapest x86 CPU with semi-acceptable performance probably costs more than $100, and the cheapest x86 CPU of any kind more than $50.


Ampere is the only game in town for buying a server ARM chip. It isn't like anyone is rack-mounting Samsung Galaxies, so the variety of manufacturers working in other spaces doesn't really matter.

Nvidia claims to be working on getting into the server game later this year, so maybe ARM will get to x86 levels of competition.


But the makers of the CPUs are still server focused. Then again it's a lot easier to become one with ARM.



Low demand


Wow, those are some tempting specs. 4 euros for a 2-core CPU, 4GB of RAM, 40GB storage, and 20TB of traffic. I always had issues with those $5 droplets with the RAM being abysmally low. The closest comparable equivalent is $18-$24 on DO.


> The new server plans start at the unbeatable prices of just € 3.79 a month, which includes an IPv4 address, and there is no setup fee or minimum contract period. If users’ servers only exist for part of the month, they will only pay for the hours when the servers existed. This hourly billing option gives customers even more flexibility and will help keep costs low

Unless I'm blind, the price clearly says 4.52 EUR/mo with IPv4 at https://www.hetzner.com/cloud for a CAX11 instance, the smallest listed there.

Edit: nevermind, I didn't deselect Germany's 19% VAT. It is indeed 3.79 euros before sales tax.


4.52 is the price with German VAT included, without it, it's 3.79.


2*vCPU, 4GB RAM, 40GB disk and 20TB traffic for €4.52/month is a steal. €3.92/month if you go ipv6 only.


Finally! That's where ARM Altra CPUs shine. Try your luck getting them for free in Oracle's free tier, just make sure to have backups!


Oracle's free tier is cool. I've let my free instance expire and so far haven't noticed them doing anything weird with my credit card info.

Also haven't been sued yet. But let's see.


If your tenancy is unpaid your "always free"-classed instances can be terminated without notice in order to provide capacity for a paid tenancy when necessary. You can upgrade to paid tenancy and simply keep using "always free" resources for a $0.00 monthly bill.


I'm still on an unpaid tenancy, and the Oracle docs say they reclaim servers which are under- or overutilized.

I use my Oracle free servers (and one paid VPS on Hetzner) as automatic cloud fallbacks if my homelab is inaccessible over the internet for some reason. Which means I fall into the underutilized category, as most of the time the workload is under 2%. So I "fake" work to prevent reclamation by Oracle, and it's worked well so far.

```
# Oracle Cloud reclaims idle resources
# https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier...
- name: Create cronjob for minimum CPU utilization
  ansible.builtin.cron:
    name: fake work to prevent reclamation
    special_time: hourly
    job: timeout 30s sha1sum /dev/zero
```


Oracle did not accept my credit card. The same one AWS has been charging successfully for years.

But I don't really want to do business with Oracle anyway, maybe it's better so.


Can confirm, they are very picky about the credit cards they accept.


Surprises me a little because I feel like I often fall victim to such things and had no issue at all. I used a fairly fresh N26 card - N26 is quite infamous for being easy for criminals to sign up for and being quite shady itself.


Geekbench 5 results for rough comparison, best of 3+ runs:

---

Intel 1-core VPS single: 719

(2 GB RAM)

---

Intel 2-core VPS single: 729

Intel 2-core VPS multi: 1402

(4 GB RAM)

---

AMD 2-core VPS single: 1141

AMD 2-core VPS multi: 2214

(2 GB RAM)

---

Ampere 2-core VPS single: 872

Ampere 2-core VPS multi: 1716

(4 GB RAM)

---

Ampere 16-core VPS single: 880

Ampere 16-core VPS multi: 12143

(32 GB RAM)

---

And for some fun, my iPad Pro from 2018, single: 1135

If Android CPUs are 4 years behind Apple, then Arm on servers is maybe 8 years.


At least till the big boys arrive: https://www.nvidia.com/en-us/data-center/grace-cpu/


They also quietly removed their Apple M1 server option - anyone know why? Not reliable enough?


I hope Hetzner starts offering GPU instances like Vultr.


Hetzner at some point used to offer dedicated servers with GPUs like an Nvidia 1080, but stopped offering these because there was a lot of fraud and abuse involving crypto miners.

With all the AI hype I'd assume they'll try to offer GPU instances again with precautions put in place.


They are still available in the server auction section


Yes, but what a reader unfamiliar with the server auction at Hetzner should know is that there is very limited supply (and sometimes none), plus it's outdated hardware. At this very moment I see 9 servers with a 1080, priced between 105 and 118 euros per month, available.


One of the chat support people told me that they were going to do that pretty soon. Haven't seen anything though.


When was this? This one really interests me.


A couple of months ago. They seemed to suggest a launch was imminent, as in "wait around a couple of weeks, rent GPUs just like everything else". But I haven't seen it since, so kinda odd.


Perfect build servers for native AARCH64 compiling (though you can use distcc), really wish AWS had spot pricing for ARM..


They do? Unless you are referring to Fargate. EC2 Graviton does have spot pricing?


We are decommissioning all our dedicated servers at Hetzner. This is not because we didn't like that service but because we are concentrating our efforts in AWS.

It's been a wonderful ride. It's cheap and reliable.

I wish our team was big enough to afford having to deal with dedicated hardware.


Hetzner also has cloud servers and an API to provision them. There is even a terraform module for hetzner cloud. It works very well. But they don’t provide any managed services, like databases or blob storage.
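For the curious, a sketch of what provisioning looks like via their hcloud CLI, which wraps the same API the Terraform provider uses; the server type, image and location names are examples, so check `hcloud server-type list` and friends before relying on them:

    hcloud context create my-project        # prompts for an API token
    hcloud server create --name web-1 --type cax11 \
      --image ubuntu-22.04 --location fsn1 --ssh-key my-key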


Yup, reason why we stay in AWS


Scaleway and OVH are two other European companies who provide things like managed databases and blob storage, in addition to very competitively priced dedicated hardware and services.

One thing I personally like with OVH is that they have fixed prices for things like bandwidth, not pay-per-GB like most others.


If that’s the only reason, we offer managed databases of many flavors on hetzner. https://www.ayedo.de


That’s awesome, good to know. Do you provide a self service portal and an API for your services, or is it a manual process? Because on the website it always refers me to get in contact with sales. And you know how much developers hate human interaction :D


Hope they'll roll out the dedicated-cores plans for these.


With the energy prices in Europe being what they are (and systemically higher than in North America and especially in Asia), will Hetzner be able to provide low prices that they currently have?


they increased prices a bit some months ago.


Appears to be ARM v8.2, which is important to some.


inelastic rented metal billed by the month is for great good. glad to see more being released.

elastic aws cloud billed by the second is for great good. glad to see this continuing to mature.

neither of these is great when they are idle 90% of the time. one of them is really not great.

why are there so much idle yet paid for resources? it’s complicated, and not for technical reasons.

empty housing is similar.


Do they have servers in the US? Who would be the Hetzner equivalent in the US when it comes to low-cost dedicated hosting?


Yes, they have datacenters in Virginia and Oregon - seems to just be AMD servers though (no Intel or ARM).

Been using a couple VPSes in Oregon, pretty happy with them.


Yes, they now have two data centers in the US: https://www.hetzner.com/unternehmen/rechenzentrum/

* Oregon

* Virginia


Amazing news. Your move Digital Ocean.


Eagerly awaiting these in Ashburn! Almost double the CPU/RAM in most cases for the same price.


I don't understand the hype around ARM64 in the server space. Is this just due to all the marketing from Apple around their proprietary M processors leaking into the enterprise market?

The latest Xeon and EPYC offerings are very performance competitive; I doubt we need to overhaul an entire processor ISA paradigm for continued improvements.


It's probably pretty rare (though maybe increasing) to have a workload that is ARM-only, so it really comes down to price/performance. Amazon claims 40% better price/performance on Graviton2 vs. comparable x86. While it varies by workload, for scenarios where this plays out and the architecture isn't important (languages increasingly make multi-arch deployment easy), this (and S3 intelligent tiering) is the closest thing to free money I can think of.


The hype is because Intel and AMD have an enforced duopoly of x86, because only they own the licenses to x86-64 and you'll take it from their cold dead hands.

So, if you want to compete, you'll have to go ARM. Amazon graviton and Apples M series have only made it more appealing.

The main reason to switch is cost.


They own the licenses to SSE3/SSE4/AVX. For now.

Currently anyone can make an x86-64 processor.


The ARM server hype was 10 years ago. Every Linux distro wanted to be first, but then they all waited for the year of the ARM server.

Nice to see that things seem to start moving a bit now. Energy consumption and cooling are major concerns for data centers. ARM is just so much better when it comes to energy consumption. So the potential should be there.


> Is this just due to all the marketing from Apple around their proprietary M processors leaking into the enterprise market?

That doesn't seem likely, as the release of AWS Graviton based instances predates the Apple M1 release by about two years


It's really just a matter of time at this point. X86 is a bloated nightmare that is a pain to deal with and has a rich history of vulnerabilities where the patches typically affect performance.


With Arm Graviton on AWS, for our specific environments and workloads, we get better performance for less money. Same goes for the hosts, who pay a smaller upfront investment per core and less in electricity per core. Bigger bang for the buck when the glove fits.


Same here, generally 10-40% better performance, depending on workload, at 15%+ lower costs, on AWS.

We have seen performance improvements in PHP, Java, and various CPU-bound tasks such as video transcoding.

This goes for either EC2 m7g vs m6i and also for Fargate x64 (from my experience, performance seems comparable to m5 instances) vs ARM (Graviton2).


I use an M1 Mac, and I'd prefer to develop and deploy on the same CPU architecture. For M1 users the x86 is an alien cross-compilation target.


For me at least, if I knew I was getting same performance for less energy I’d actually be willing to pay a bit more. Data centers are big polluters still. I think we likely can get to nearly 100% renewables in the non-distant future, and lower energy per compute unit is part of that.


> For me at least, if I knew I was getting same performance for less energy I’d actually be willing to pay a bit more.

Well if they're spending less to power my ARM machine, then they should damn well be charging me less to host it.

That's the best, and only, long term path towards more efficient compute.


Interesting parallel is I have an electric car, which I pay more for and has limitations. But for me I think the societal benefit is great enough I’ll eat the cost. So if it gets data centers to invest in energy savings because they pocket the difference I’m ok with it.


> But for me I think the societal benefit is great enough I’ll eat the cost.

I don't mind eating the cost to save the planet if there is a cost.

What I am objecting to is businesses making a larger profit off people who want to save the planet.

They're literally taking your goodwill and turning it into profit. If they aren't interested in saving the planet, then why are you giving them money, instead of giving it to their competitors?

I guess the question is, why not pay less to save the planet? Your extra money doesn't go towards saving the planet, it goes towards making you feel like you're saving the planet.


Does anyone know if there is ssh/root access and what OS the servers are running?


These are VPSs. You can SSH in, use the root user, and when setting up you can choose what OS to use; they give a list, or IIRC they let you upload your own ISO.


Any recommendations for an uncomplicated server OS which will just run Docker, SSH and not much more?

I'm currently running Ubuntu but am open to alternatives. My local Raspberry Pi uses a fun, but little maintained OS called Hypriot which is just this: Start Docker and not much more.


Just stick Debian on it. It'll be familiar enough from your Ubuntu experience, and if all you're going to do is slap Docker on it and run everything in containers you can just use Debian Stable, not care that things in the repo might not be bang up to date, stick unattended-upgrades on it, and never really worry about it until dist-upgrade time.
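If it helps, a rough sketch of that setup on a fresh Debian Stable box (docker.io is the Debian-packaged Docker; many people prefer Docker's own apt repository instead):

    apt-get update
    apt-get install -y docker.io unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades   # enable periodic security upgrades
    systemctl enable --now docker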


Seconding this.


RHEL, Debian.

If you're serious about running _everything_ in containers, then maybe RHEL Edge? RHEL + microshift? Fedora Silverblue? openSUSE MicroOS?


You probably want podman, not Docker, on RedHat (and SuSe).


I'm quite a large fan of Debian; it's my go-to choice for anything server, personally. It's pretty good for what you want here IMHO: it's stable, so you won't really have to worry about packages screwing you over, and like another comment said, you can stick unattended-upgrades on it :)


Dietpi is what I switched to.

Maintained, light on logging unless you ask for it (saves your SD card from wear and tear), and focused on low host-os resource usage while still being Debian-based.


Ubuntu is fine for that use case.


Thanks.


I love Hetzner compute offerings (not just price, but the CPUs they offer).

But their hardware design could improve. See picture below.

https://twitter.com/PetrCZE01/status/1637122488025923585


Why? What is the advantage of running arm over x86 on servers?


What I want to know is why migrate to ARM when RISC-V is around the corner?


Eventually RISC-V will put them out of business but until that time comes, everyone is free to try to convince people otherwise.


Somebody downvoted this. I really would like to understand why they did it. What is wrong with this question?


Those aren't the prices I'm seeing when I try to create a new server. I'm getting:

    CAX11 - 3.95eur/m
    CAX21 - 7.19/m
    CAX31 - 14.39/m
    CAX41 - 28.79/m
1 ipv6 costs me 0.6/m


I think you meant to say IPv4. IPv6 is free.


presumably that's prices with the appropriate-to-your-location VAT applied?


These are the prices without IPv4 added, aren't they?



