Hacker News: chr15p's comments

Google tells me "IBM gross profit for the twelve months ending June 30, 2025 was $36.866B"

It's not that someone writes that sort of stuff, it's that people read it and think "yeah! give me some of that!" that makes me worry for humanity.


> To save everybody's time: this is about straight white men making an OS for them and thus terrible for everybody else.

That's not what the talk is about; that's one of the examples of brokenness he's talking about, as is the kill example, and yes, they are deliberately troll-y examples.

The talk is about the very last statement "sometimes you have to drop your tools and make new ones" whether that's "everything is a file", "do one thing and do it well" (the ps example), or the way communities are structured. You don't have to agree with him on any of them but to dismiss them as "whitestraightmansplaining" is to duck as the point flies over your head.


> The talk is about the very last statement "sometimes you have to drop your tools and make new ones"

Go ahead and make them. I see no army of white straight men with unix swastikas stopping everybody else from making their kernel/OS exactly the way they want it.

The talk is not about building new stuff. The talk is about attacking some people based on their race, gender and sexual orientation. The talk is about gaining power over a tool and a community, ousting the ones who built it because "it's wrong and we'll fix it". The talk is political with a thin veneer of tech.

The talk is what it is, the points are the ones he makes; there are no emperor's clothes that only you can see but that fly over my stupid old head. If the talk were about building, there would be a presentation of some code skeleton or at least some design schemas.


> Mir, Unity, now Snap. Ubuntu has a track record of wanting to go it alone.

This. Also bzr. They seem to want to control their projects completely, and so even when they have good tech they lose out to more open, community-developed equivalents that build wide engagement and momentum.

I honestly don't understand it; you would have thought they would have learned by now that they don't have the engineering resources to do everything by themselves.

Compare that to Red Hat, who always try (and sometimes even succeed!) at developing projects with the community and are far more successful at getting their projects adopted. (I know people don't like them, but you can't deny they are effective at it.)


> I honestly don't understand it, you would have thought they would have learned by now

The simple answer is that the company culture really, really wants to be "the Apple of Linux", with all that it entails. Whereas Red Hat wants to be the Linux of Linux: they've learnt how the open-source game really works and they play it every day.


Bzr is a great counter example.

Bzr "lost" because git had GitHub, whereas Launchpad was one too many things and slow to optimize for modern sensibilities.

(And Linux used a different VCS before git, so that didn't matter in adoption)

Imagine a world without GitHub, and I don't think git would be our go-to VCS. Though maybe the winner wouldn't have been Bazaar either; there are things like Mercurial too.


Git already had momentum before GitHub became popular. And yes, it is absolutely Linux and other high-profile projects that ensured its success in the OSS world. Claiming that Linux having used a different VCS before git means that Linux's use of git doesn't matter is really odd when git was developed for the Linux project.


It sure had momentum. As did a bunch of other distributed VCSes. If you were a party to voting what VCS to switch to for some of those high-profile projects, I'd very much like to hear about it.

In GNOME, a decision was delayed and bzr and git were pretty evenly matched.

Linux had previously used BitKeeper, but that didn't make BitKeeper "win out", just as it didn't for Git. Sure, git wouldn't have existed if there wasn't a need for it in Linux.

I am only pointing out that it was GitHub that helped popularize arguably the worst UX among DVCSes: I don't hear people say "your free software contributions portfolio" — they'll just say "your GitHub profile".


bzr lost because it was poorly-architected, infuriatingly slow, and kept changing its repository storage format trying (and failing) to narrow the gap with Mercurial and Git performance. Or, at least that's why I gave up on it, well before GitHub really took off and crushed the remaining competition.

For my own sanity I began avoiding Canonical's software years ago, but to me they always built stuff with shiny UI that demoed well but with performance seemingly a distant afterthought. Their software aspirations always seemed much larger than the engineering resources/chops/time they were willing to invest.


Sure, that's a fair point as well (though bzr came out of GNU arch, which didn't originate at Canonical, and it was finally redesigned into something good at Canonical — not a knock on arch either, it was simply early).

The question then becomes: why not Mercurial, which still had a better UX than git itself?

My point is that git won because of GitHub, despite lots of suckiness that remains to this day (it obviously has good things as well).


Another way to look at this situation is that Canonical comes up with innovative solutions that are reasonably well engineered out of the box, but they are rejected just because they are from Canonical.

I'm struggling to find a way to characterize the difference between Red Hat/IBM's and Canonical's approaches to the community. The most succinct I can come up with is that Canonical releases projects and assumes it alone is responsible for their creation; Red Hat releases rough ideas and code. There also seems to be a heavy political/disinformation campaign going on tearing down any solutions by Canonical.

In either case, none of us can resolve the conflict. It's a pissing contest between Canonical and IBM/Red Hat. I will keep choosing solutions that let me get my job done and get paid, which is all that matters.


At an old job, we used probably hundreds of hardware and software vendors. I never had to deal with any of them directly, but I often spoke with those who did. There were complaints about all of them I'm sure, but the only ones that inspired bitch sessions over a drink were Oracle and Canonical. I'm told that both were just thoroughly unpleasant to deal with.


I don't think it is a pissing contest between them; they can both happily exist in the same world. It's just interesting to see the difference in approach and try to figure out why one seems more successful than the other.

I think you're right that Canonical creates and releases projects and assumes it is in charge of them, but I disagree about Red Hat (honestly not sure what you mean by "rough ideas and code"). I think they tend to see what's already out there and throw their weight behind that; only if there isn't anything do they create their own, and even then they are more open about how the project runs. That difference means Red Hat gets more momentum behind its projects, and that is what counts. (Of course RH can throw more engineers at stuff as well, and that also helps a lot.)

It's not some sort of conspiracy; nothing Canonical has ever done has had the same amount of hate as systemd has. It's just a difference in approach.


What I mean by rough ideas and code is simple: is a project something complete you can take and just use, or is it a bag of parts? xcp-ng is a take-it-and-use-it project. KVM is a bag of parts.

My experience with Red Hat is that it's frequently IKEA-level assembly required. Canonical projects tend to be read the docs and just use it, although there are some exceptions. For example, a couple of years ago cloud-init was not documented well enough for my taste. Took a second look just now and found new documentation that may revise my opinion.


> they are rejected just because they are from canonical.

Or rather because they're proprietary, often closed-source, like Snap server.


Exactly. Canonical's Snap Store service is closed source and the Snap client is designed to only interface with Canonical's proprietary service. It's not "disinformation" to point out that Snap is a locked-down product controlled by Canonical, while most other packaging solutions for Linux are fully free and open source on both the client and server side. Canonical's one-sided approach to interacting with the Linux community will only encourage Linux users to reject Ubuntu and adopt distros with more sensible defaults.


And even if they are open source, the development is not (initial development often closed completely, later development requiring CLAs) and the projects only care about Canonical's use of them to the point where even building them on other distros is often far from trivial.


To me there are a few benefits of doing exams (at least the Red Hat ones that are not cheap, but well respected):

1) It looks good on the resume, which can help you get past the initial sift by people who don't understand what your experience actually means.

2) They give you the chance to fill in the gaps in what you think you know. My experience of doing my RHCE after 10 years of professional sysadmining: of the 14 chapters in the book, I already knew maybe 10 and had never touched the other 4 because they never came up in my job. The prospect of a looming exam gave me a deadline and the motivation to actually sit down and learn them, which paid off later in other jobs that did use them.

3) To test whether you are as good as you think you are :)

If those don't speak to you then they're probably not super important to do; luckily we mostly work in an industry where experience trumps exams.


Which book are you referring to out of interest?


The coursebook I got as part of the training course; this was 5+ years ago, so we got a physical book. I've no idea what they do these days, but there's a list of exam objectives on the Red Hat website and that basically covers what you need to know.


Is there still an entire section on vsftpd?


No, I don't think that was even a thing in the RHEL7 version I last did, and the whole exam/course has changed a lot since then.

I'm sure there are lots of places that still use vsftpd though (I have a vague memory that it supported Kerberos at least), so it might still be useful for some people.


To be honest, if the worst thing you can say about a company is that they changed the distribution model of the thing they were giving you for free from point releases to rolling updates, then they could be a lot worse.


That effectively makes it useless. The whole reason to use CentOS is because it's binary-compatible with RHEL; it's merely RHEL without the expensive licensing fees and support. So it's really useful for developing software where your customer is the US government or someone else standardized on RHEL.

Luckily, according to Wikipedia there are two new distros that have popped up to fill this need: Rocky Linux and AlmaLinux.


If that is how Red Hat saw CentOS, why the hell did they buy it?


Because CentOS had been languishing, with releases and security fixes taking longer and longer, and the thought at the time was that since CentOS was seen as the entry point to Red Hat Enterprise Linux, it might be leaving a bad impression with potential future customers.

The thinking behind CentOS Stream is different. The idea was not to kill off a free competitor (those were always going to exist, and projects like Rocky and Alma forming was inevitable, and this was obvious). The idea was to create a real community where previously there was not much of one. CentOS was the Android-style "throw it over the wall" model of open source. About the most you could do as an outsider to contribute was file tickets on Bugzilla and package for EPEL. Whereas CentOS Stream provides a place for people to contribute to future versions of RHEL, and therefore, RHEL clones like Rocky and Alma.

So Rocky Linux devs and users, Alma Linux devs and users, CentOS devs and users, Facebook employees (they use CentOS Stream internally), Oracle Linux devs, and whoever else can make and review contributions, which is a more symbiotic relationship than existed before.


But to selectively quote from slightly further down TFA:

> Krishna’s strategy has been focused on bolstering the company’s offerings in hybrid cloud — providing services to customers that run their own data centers in some combination with public cloud

i.e. making it easier to move between data centers and the cloud, and back again. So if "companies are taking a look at how much they are spending on cloud", then it sounds like IBM is skating towards where the puck is going.


For Oct-Dec 2022 (according to https://www.ibm.com/investor/att/pdf/IBM-4Q22-Earnings-Chart...)

Mainframes: $4.5 billion

Consulting: $4.8 billion (even after selling off Kyndryl)

Software: $7.3 billion

Software includes all of Red Hat, a pretty big security business, and "transaction processing" which isn't explained but I guess is running payroll and the like for other companies.

So lots of un-sexy stuff that never makes HN but keeps the world running.


And that is $4.5 billion in mainframes in just one quarter...


It's an interesting thing they have going on. I worked at a large retailer that depended on an IBM mainframe running DB2. That thing was crazy expensive, and their migration to the cloud mainframe offering was even more expensive for the raw compute, but made sense once maintenance and everything was added.

Something like 90%+ of the top 100 banks depend on mainframes and same with the top 100 retailers. Few customers that pay a lot is a solid model.


One of the more surprising metrics I've heard from Andy Jassy is that despite trillion dollar companies generating huge amounts of cash from cloud, 95% of enterprise workloads are still not on cloud.

I think it's more than just a few customers.


Realistically, the cloud is just too expensive compared to most other solutions. AWS, Azure, and Google Compute are all easily 10x the cost of hiring a good team of sysadmins. When you require a lot of computing power, it just doesn't scale financially.


>>migration to their cloud mainframe offering was even more expensive for the raw compute

Mainframes were always cheaper. The only reason things like Hadoop and MapReduce even picked up for heavy batch jobs was that in the last decade lots of cheaper consumer hardware was available, because companies had over-invested in building data centers. The excess capacity was sent to Hadoop work. If you have continuous ongoing investments in mainframe tech, or wish to start something new, you are better off using mainframes.


Pretty much anything you purchase at a store goes through IBM.


IBM sold their retail technology division to Toshiba a few years ago.

Most of their cash register controllers can run on x86, so the backend isn't dependent on their hardware.

Their cash registers can run the 4690 OS or Windows/Linux/whatever, but, most choose to run the 4690 Operating System due to its security features.

I worked at IBM in their retail marketing side in the early 2000's, so, it's possible that everything I've typed here is 20+ years out of date.


> Their cash registers can run the 4690 OS or Windows/Linux/whatever, but, most choose to run the 4690 Operating System due to its security features.

> I worked at IBM in their retail marketing side in the early 2000's, so, it's possible that everything I've typed here is 20+ years out of date.

4690 is pretty close to dead. Maybe not entirely dead, but closer to that status than it has ever been before. From what I understand, even before IBM sold it to Toshiba, the plan already was to discontinue 4690 and replace it with Linux. At the time of the sale, IBM was in the process of replacing it with "IRES" (IBM Retail Environment for SUSE); Toshiba has instead chosen as the replacement its own Linux distribution, "TCx Sky", which is a customised version of Wind River Linux.

The final release of the 4690 OS was V6R5, released 7 years ago. The latest security update was CSD 2010, released in January 2020. There are probably still some people using it even today, but it is very much a legacy system, with zero plans for any further enhancements; it isn't even clear if there will be any further security updates in the future, or if the January 2020 update was the last one. (I guess it may depend on how many people are still using it, what further security bugs might be discovered, how much those people are willing to pay to fix them, etc.)


Banks love IBM mainframes/midranges. They might not be blistering fast, but they're a joy to work with if you have the budget.


I find them really interesting because they are a whole other world, the "road not taken". They are very different from Unix/POSIX/Linux/*BSD/macOS/etc, even more so than Windows is. Windows is not a Unix, but it has borrowed a lot of ideas from Unix, and is arguably growing more Unix-like as time goes by, borrowing more and more ideas from Unix-land. Even MS-DOS is a lot closer to Unix than most mainframe operating systems are (indeed, if you read the source code and design docs for MS-DOS 2, they repeatedly mention the influence upon it of Microsoft's Unix, Xenix).

Consider some ideas which exist in z/OS but not in Unix or Windows:

- record-oriented file system: the filesystem is aware of the record-boundaries of files, whether they contain fixed length or variable length records, etc – and enforces them

- indexed files: key-value type files are built in to the filesystem, not some add-on library

- the catalog: a database mapping file paths (dataset names) to the volumes they are stored on: so you can move a file to another disk volume (or even to tape) without changing its filesystem path. You can kind of approximate this using mount points, symbolic links, etc – but it is a lot messier

- partitioned data sets (PDS/PDSE): like ZIP/TAR files, but again, built-in to the filesystem

- block-oriented terminals: you send/receive data to terminals as a screenful at a time, not a character at a time

- DDnames: your program inherits (the moral equivalent of) file descriptors, but instead of just having numbers, they have names (instead of fd 0, you have a DD called "SYSIN")

You can argue about whether the above features are sensible – the designers of Unix were well-aware of them, but decided they were examples of unnecessary complexity – but sensible or not, they are interesting.
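To make the first of those concrete, here is a rough Python sketch (my own toy illustration, not a real z/OS API; the filename and record layout are invented) of what fixed-length record I/O looks like when emulated on top of a Unix-style byte-stream file. On z/OS the filesystem itself knows and enforces the record format (e.g. RECFM=F, LRECL=80), so none of this padding-and-chunking bookkeeping lives in application code:

```python
LRECL = 80  # fixed record length, analogous to RECFM=F LRECL=80 on z/OS

def write_records(path, records):
    # Pad each record to exactly LRECL bytes (or reject it).
    # On z/OS the filesystem enforces this; here we do it by hand.
    with open(path, "wb") as f:
        for rec in records:
            data = rec.encode("ascii")
            if len(data) > LRECL:
                raise ValueError("record longer than LRECL")
            f.write(data.ljust(LRECL))  # space-padded to the record length

def read_records(path):
    # Record boundaries are implicit in the fixed length; a byte-stream
    # filesystem has no idea where one record ends and the next begins.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(LRECL)
            if not chunk:
                break
            yield chunk.decode("ascii").rstrip()

write_records("payroll.dat", ["EMP001 SMITH", "EMP002 JONES"])
print(list(read_records("payroll.dat")))  # ['EMP001 SMITH', 'EMP002 JONES']
```

The point of the contrast: every Unix program that wants records has to agree on a convention like this (or newlines, or length prefixes), whereas on z/OS the record format is a property of the dataset itself.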


> I find them really interesting because they are a whole other world, the "road not taken".

This is so interesting, having witnessed the commoditization of PCs, the cloud, and now virtual desktops on the cloud. We're basically reinventing the multiuser mainframe after discovering it actually had good ideas.

The problem is, it's being reinvented with a monstrous degree of complexity and fragility. It's retaking the road not taken, but carrying the other road along as well.


And language environments, which keep old programs running on modern hardware without recompilation.

Something that neither UNIX nor Windows has fully embraced to the extent of mainframes/micros.

All the deployments of Java and .NET workloads end up with some kind of compromise to classical UNIX/Windows development.


> And language environments, which keep old programs running on modern hardware without recompilation.

I think we've had this discussion before. The Language Environment (LE) on z/OS doesn't do that. The Integrated Language Environment (ILE) on OS/400 [0] does do that. But while one part of ILE (the CEE runtime and its API) is common code between z/OS and OS/400 (and also VM, VSE, TPF and OS/2), that bit doesn't do any of the "keep old programs running without recompilation" stuff. Whereas the parts of ILE which do do that (the OMI-to-NMI translator, and the NMI-to-POWER translator) are 100% OS/400 specific, and have no equivalent on any other IBM platform (z/OS included). You seem to be getting led astray by the fact that "Language Environment" means a lot more in OS/400 than it does in MVS; it is a mistake to assume that just because the latter uses the same phrase, it uses it to mean anywhere near the same thing. LE on MVS is just one (relatively small) part of the capabilities provided by ILE on OS/400.

(Also, this "platform independence" doesn't really have anything to do with "Language Environment", since it pre-exists ILE entirely: EPM and OPM before it had the same ability. You are associating "Language Environment" with something which was never actually the selling point of ILE, since it was something OPM and EPM already could do, at least in principle. The real selling point of ILE is that it could support recursive/stack-based HLLs such as C and Pascal more efficiently than EPM, and support mixing code from multiple languages much more easily than OPM and EPM allowed. Plus, a lot of its implementation was shared with z/OS and AIX, which was a bonus for IBM: not just the CEE runtime, but also the compiler backend. The latter is not considered part of LE on z/OS, but rather part of the compiler products.)

[0] Yes, IBM i, I know – but I hate that name, it is confusing. IBM should have just stuck with OS/400, it was far more memorable and distinctive.


Can you expand on why they are joy to work with?


Tried to edit my original comment because two people asked the same thing, but it didn't update, so I'll summarize here:

- No devops, you have real sysadmins whose priority is security and system administration, not development.

- No relearning a reinvented wheel every six months. The technology changes incrementally. If you get out of the field for a few years, read the "what's new" sections of the new manuals to catch up.

- Documentation. IBM docs are great, remind me of NASA docs. They have everything you need, you can go days without googling a problem.

- Backwards compatibility. Decades-old sources can be compiled, decades-old binaries can be run. Architecture changes are handled transparently.

- Simplicity. The interfaces might seem primitive compared to PC OSs, but there's also less complexity, and less attack surface.

- Uptime. Almost everything, including CPUs, is hot-swappable. This doesn't add to the joy, but it's remarkable.


Very interesting. Having worked at AWS in the past, the company had very similar features/principles to what you have mentioned, except for simplicity.

I think it's things like these which make me think AWS will remain the top cloud provider for the next few decades.


You're spot on about simplicity, and it's the reason AWS may not remain the top cloud provider for much longer. There is no effort to implement any sort of consistency across their famed two-pizza teams; that wouldn't be "Day one". So each service has its own IAM quirks, making basic security at scale a nightmare, and in some cases impossible.

Azure policies are comparatively a breeze, making it the smart and increasingly obvious choice for any regulated (read: large) corporation.


You don't need to wrestle AWS, Docker, Kubernetes, Terraform, Spark, Aurora, etc. to run a batch job. The kinds of tasks that take integrating multiple different providers and software packages in a cloud architecture are OS-level features on a mainframe. It's less flexible and more expensive, but an order of magnitude less accidental complexity.


How is that different from a bare metal Linux server?


It isn’t. It is significantly worse.

IBM is a dinosaur company that exists because of embedded connections in government, banking, and defense. They charge 100x the going rate for ancient tech that’s insecure and an absolute nightmare to develop for.

The sooner everyone drops anything associated with IBM or Oracle, the better for everyone. Absolute cancers.


You can run Kubernetes on mainframes as well as Linux, so developing for mainframes has become easier these last 20 or so years.


> The sooner everyone drops anything associated with IBM or Oracle, the better for everyone. Absolute cancers.

If the only alternative is "AWS, Docker, Kubernetes, Terraform, Spark, Aurora, etc…" I might have to think long and hard about it...


Bad news for you, the mainframe business in 2022 actually increased sales, contrary to the regular PC market.


Bad news for everyone


A free order of magnitude in performance?


Joy to work with... for whom?


IBM, presumably? /s


"Transaction processing" means CICS. It's a mainframe transaction processor, ensures integrity in banking, airlines, etc.


Or IMS (the other, older mainframe transaction and database system), but yes, I immediately had the same thought about the "software" category. I bet a substantial amount of that is software that integrates with the mainframe in some way.


A piece of my soul died just trying to imagine the depths of boredom this would take you.


Poor people; it must be soul-crushing to have a well-paid, 9-5 job with technologies that don't have an eternal carrot on a stick preventing them from achieving mastery of their trade.

Where is the excitement of working during the weekend to make sure the dozens of random libraries npm downloaded as dependencies of the one you need to display a neon outline in the list of products are compliant?


If you pay me 250k cash I'll have no issue maintaining whatever obsolete tech you wish.

Work doesn't need to be fun.


One of the best places I ever worked. Really good engineers, and a really calm workplace where you are given the time and space to do good work. No dysfunctional 2-week sprints or hours of "agile ceremony" madness, etc. Of course this was 20 years ago, and maybe it depended on the team you were on. I don't know if it is the same now. The only time I had a problem there was when my manager came in and said I had to take down my anti-Microsoft sign. Idk why - maybe they were trying to do some deal with them.


Worked there in the 90s. Probably the most professional engineering environment I've ever seen. And compared to what you might expect of the time and place, quite egalitarian. My boss and half the team were women, and there were several people from unusual backgrounds that would be considered economically disadvantaged these days. Everyone had an office. No politics. Just GSD.


Yes - same - one of the most diverse places I worked at, and that was 20 years ago. Although my current gig is actually pretty diverse.


Nah, it's the same now. IBM was my first tech job straight from construction/concreting. The work environment was a dream, an absolute pleasure. Got opportunities to pump out training and work on many parts of the stack, way further up than I expected. I was a remote-area hardware tech. This was about half a decade ago. Best job I've ever had.


Well, now you're selling it pretty well.


There is some serious engineering in what they do: CPU and memory architecture, OS design and implementation, high-speed networking, and homomorphic encryption are all areas where I've collabed with IBM. I assure you, the average engineer quality at IBM I've worked with has been incredibly high, and the people themselves very _very_ creative.


Edit/reply: clearly the top comment did not portray the full picture. Indeed, based on recent comments, it does seem like a good place to work at.


> A piece of my soul died just trying to imagine the depths of boredom this would take you.

IBM is a leader in quantum computing. If you find that boring, that's on you.


To make written English you need to take 1 part Old (Anglo-Saxon) Germanic, 1 part Old Norse, and 1 part Old French, stir together for a couple of centuries, then, just as it's going through a major pronunciation shift, have its spelling formalised by a random bunch of academics with an unhealthy obsession with Latin and Greek. Finally, throw away a bunch of letters because the Germans and Italians who made block type letters for printing presses had never heard of them, and bodge your spelling back together using whatever you can just about get away with. (Bring back Thorn!)

Then sit back and get very upset with anyone who tries to remove the 'u' from colour.


I remember my Germanic linguistics prof saying that, regarding English, the statistics of words used in English sentences are roughly 75% as you mentioned above (Germanic), but the dictionary entries are like 75% Latin and Greek variations.

The statistical conjecture of the Germanic linguist was that (roughly) 25% of the English dictionary is used for 75% of the words in actual usage.


That was a discussion of a previous blog post by the same author (Matthew Garrett). It's on the same subject, but this one is much longer and goes into much more detail.


It doesn’t add much other than a lack of open public docs of a new feature that’s not been widely adopted yet. I’m not panicking.


It's not impossible, but it needs someone to figure out how to monetize desktop Linux so they can then put the marketing effort in and pay a few companies to port big-name apps such as Photoshop, which would give it some momentum and mindshare. A single profitable desktop would then give other companies a good target to port their own desktop software to, which would encourage more users and get to a virtuous circle.

But without that (and no one has figured out how to do it so far) I agree, I can't see it happening.

