
> The (IBM) mainframe today is a distributed system (many LPARs/VMs and software making use of it)

Not really. While you can partition the machine, you can also have one very large partition and much smaller ones for isolated environments. It also has multiple redundancy paths for pretty much everything, so you can treat it as a machine where hardware never fails. It's a lot more flexible than a rack of 2U servers or a blade chassis. It is designed to run at 100% capacity with failover spares built in. This is all transparent to the software: you don't need to know that a CPU core failed or some memory died, as that is handled by the machine itself. You'll only notice that a couple of transactions failed and were retried. You are right that mainframe operations are very different from Linux server operations, and that a good mainframe operator knows a lot about how to write performant software.
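
To make the "failed and were retried" part concrete, here is a minimal sketch in generic Python (not IBM's transaction manager; TransientError and debit_account are made-up placeholders) of what a re-drive loop looks like from the application's side:

    import time

    class TransientError(Exception):
        """Stand-in for a transaction aborted by a hardware failover (hypothetical)."""

    def run_with_retry(txn, attempts=3, backoff=0.1):
        """Re-drive a unit of work a few times; callers only see success or a hard failure."""
        for attempt in range(1, attempts + 1):
            try:
                return txn()
            except TransientError:
                if attempt == attempts:
                    raise
                time.sleep(backoff * attempt)  # brief pause, then re-drive the same work

    # usage (debit_account is hypothetical): run_with_retry(lambda: debit_account("1234", 100))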



And incidentally, all the documentation recommends not extending your LPARs beyond what is available on a single CPC "node" (see [0], page 2-23, for a nice (and honest...) block diagram). If you extend your LPAR across all CPCs, I doubt that many of the HA and hot-swap features continue to work (also, there are bugs...). E.g., you won't hot-swap memory when it's all utilized:

> Removing a CPC drawer often results in removing active memory. With the flexible memory option, removing the affected memory and reallocating its use elsewhere in the system is possible.

So while you can have a single-system image on a relatively large multi-node setup, I doubt many people are doing that (at the place I know, no LPARs have terabytes of memory...). Also, in that price range you can easily get single-system images for Linux too: https://www.servethehome.com/inventec-96-dimm-cxl-expansion-...

If you don't need the single-system image, VMware and Xen advertise essentially the same features on a blade chassis, minus the redundant hardware per blade, which is not really necessary when you can just migrate the whole VM...
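
As a rough sketch of that "just migrate the whole VM" step, here it is with the libvirt Python bindings (KVM rather than VMware/Xen specifically; the hostnames and guest name are made up):

    import libvirt

    # Connect to source and destination hypervisors (hostnames are hypothetical).
    src = libvirt.open("qemu+ssh://blade01/system")
    dst = libvirt.open("qemu+ssh://blade02/system")

    dom = src.lookupByName("payments-vm")  # hypothetical guest name

    # Live-migrate the running guest to the other blade; it stays up during the copy.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()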

Also, if you define the whole chassis as having 120% capacity, running it at 100% capacity becomes trivial too. And this is exactly what IBM does, keeping spare CPUs and memory around in all correctly spec'ed setups: https://en.wikipedia.org/wiki/Redundant_array_of_independent...
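
Back-of-the-envelope, with made-up numbers: if 120 units are physically installed but only 100 are rated for use, then "100% busy" as the customer sees it still leaves real headroom for spares:

    physical_units = 120  # actually installed (hypothetical figure)
    rated_units = 100     # what the customer is allowed to use

    # "100% utilization" of the rated capacity is only ~83% of the physical machine.
    print(f"physical utilization at full rated load: {rated_units / physical_units:.0%}")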

You are right though that the hardware was and is pretty cool, and that this kind of building for reliability has largely died out. Also, up until ARM/EPYC arrived, mainframes had above-average maximum capacity per machine, but that advantage is gone too. Together with the market segment likely not buying for performance, I doubt many people today are running workloads which "require" a mainframe...

[0] https://www.redbooks.ibm.com/redbooks/pdfs/sg248951.pdf


> building for reliability has largely died out.

A real shame, but offloading reliability to software engineers makes the hardware cheaper, something IBM mainframes aren't known for.

> doubt many people today are running workloads which "require" a mainframe...

It seems to me mainframes are built with profoundly different requirements than the ordinary hyperscaler server, with far more emphasis on connectivity and specialized I/O processors than on raw CPU power. The CPUs are really fast, but it's the I/O capacity that really sets them apart from a top-of-the-line Dell or HPE box.

If IBM really wanted to make the case for companies to host their Linux workloads on LinuxONE hardware, they'd make Linux on s390x significantly cheaper than x86 on their own cloud. I am sure they could, but they don't seem willing to do so.



