You can see this in venerable software which has lived through the times of "designing for the user" and is still being developed in the times of "designing for the business".
Take Photoshop, for example, first released in 1990, last updated yesterday.
Use it and you can see the two ages like rings in a tree. At the core of Photoshop is a consistent, powerful, tightly-coded, thoughtfully-designed set of tools for creating and manipulating images. Once you learn the conventions, it feels like the computer is on your side, you're free, you're force-multiplied, your thoughts are manifest. It's really special and you can see there's a good reason this program achieved total dominance in its field.
And you can also see, right beside and on top of and surrounding that, a more recent accretion disc of features with a more modern sensibility. Dialogs that render in web-views and take seconds to open. "Sign in". Literal advertisements in the UI, styled to look like tooltips. You know the thing that pops up to tell you about the pen tool? There's an identically-styled one that pops up to tell you about Adobe Whatever, only $19.99/mo. And then of course there's Creative Cloud itself.
This is evident in Mac OS X, too, another piece of software that spans both eras. You've still got a lot of the stuff from the 2000s, with 2000s goals like being consistent and fast and nice to use. A lot of that is still there, perhaps because Apple's current crop of engineers can't really touch it without breaking it (not that it always stops them, but some of them know their limits). And right next to and amongst that, you've got ads in System Settings, you've got Apple News, you've got Apple Books that breaks every UI convention it can find.
There are many such cases. Windows, too. And MS Word.
One day, all these products will be gone, and people will only know MBA-ware. They won't know it can be any other way.
Personally, I think the web is 200% to blame. It has wiped out generations of careful UX design (remember Tog on Interface?) and very, very responsive and usable native (and mostly standard-looking) UI toolkits in favor of bland, whitespace-laden scrolling nightmares that are instantly delivered to millions of people through a browser and thus create a low-skills, high-maintenance accretion disc of stuff that isn't really focused on the needs of users--at least not "power" ones.
I know that front-end devs won't like this, but modern web development is the epitome of quantity (both in terms of reach and of the insane number of approaches used to compensate for the browser's constraints) over quality, and I suppose that will stay the same forever now that any modern machine can run the complexity equivalent of multiple operating systems while showing cat pictures.
Front-end dev is optimising for a different set of constraints than HIG-era UIs.
Primarily that constraint is "looks good in a presentation for an MBA with a 30-second attention span". Secondarily "new and hook-y enough to pull people in".
That said...
HIG UIs are good at what they're good at. But there is an element of a similar phenomenon to how walled-gardens (Facebook most of all, but also Google, Slack, Discord..) took over from open, standards-based protocols and clients. Their speed of integration and iteration gave them an evolutionary edge that the open-client world couldn't keep up with.
Similarly if you look at e.g. navigation apps or recipe apps. Can an HIG UI do a fairly good job? Sure, and in ways that are predictable, accessible, intuitive and maybe even scriptable. But a scrolly, amorphous web-style UI will be able to do the job quicker and with more distinctive branding/style and less visual clutter.
Basically I don't think a standardised child-of-HIG formalised UI/UX grammar could have kept up with the pace of change of the last 10-15 years. Probably the nearest we have is Material Design?
> walled-gardens (Facebook most of all, but also Google, Slack, Discord..) took over from open, standards-based protocols and clients. Their speed of integration and iteration gave them an evolutionary edge that the open-client world couldn't keep up with
Seems to me to be a combination of things, none of which indicate that the new products are implicitly better than the old. The old products could’ve incorporated the best elements of the new. But there are a few problems with that:
- legacy codebases are harder to change, it's easier to just replace them, at least until the new system becomes legacy. Slack and Discord are now at the "helpful onboarding tooltip" stage
- the tooling evolved: languages, debuggers, IDEs, design tools, collaboration tools and computers themselves all evolved in the time since those HIG UIs were originally released. That partially explains how rapidly the replacements could be built. And, true, there was time for the UX to sink in and to think about what would be nice to add, like reactjis in chat
- incentive structures: VCs throw tons of money at a competing idea in hopes that it pays off big by becoming the new standard. They can’t do that with either open source or an existing enterprise company
I'm not arguing that the new products are better, just that they were evolutionarily successful.
I think the issue was less one of legacy codebases, and more that getting consensus on protocols and so on is _always_ slow and difficult, and that expands exponentially with complexity. And as the user-base expands, the median user's patience for that stuff drops. "What the hell is an SMTP Server and why should I care what my setting is" kind of stuff.
Meanwhile the walled gardens can deliver a plug-and-play experience across authentication, identity, messaging, content, you name it.
And all of this against a background of OS platforms (the original owners of HIGs) becoming less relevant versus Web 2.0 property owners, and of strict content/presentation separation on the Web never really catching on (or rather, of JS single-pagers which violate those rules being cheaper and sexier). Plus a shift to mobile which has, despite Apple's efforts, never strictly enforced standards - to the extent that there's no real demand from users that apps should adhere to a particular set of rules.
I also think the web is 200% to blame, but for a different reason: ad-tech in general and Google+Apple in particular taught users that software should cost $0. Once that happened they didn't go back, and it torpedoed the ISV market for paid programs. You used to go to CompUSA and buy software on a CD for $300; that can't happen now. Which would be fine, except adware filled the revenue gap, which by necessity brought a new set of design considerations. Free-as-in-beer software fucked us over.
It even happens in the FOSS world. Open Source theorists tell us all the time that "free" only means "free-as-in-freedom". That we can share the code and sell the builds.
But whenever someone actually wants to charge users money for their own FOSS apps, even if it's only a few bucks to pay for hosting and _some_ of the work, outraged users quickly fork the project to offer free builds. And those forks never, ever contribute back to the project. All they do is `git pull && git merge && git push`.
Maybe the Google+Apple move was a strategy against piracy. Or maybe it was a move against the FOSS movement. And maybe the obsession with zero-dollars software was a mistake. Piracy advocates thought they were being revolutionaries, and in the end we ended up with an even worse world.
We need to get back to Native Software aka "Apps". As I have said before, we have these powerful machines with cheap storage. Why do I need to connect to a remote host through a bloated web interface just to read a document? It would be better, faster, and smaller locally.
When I was learning embedded design, the general rule of thumb was to aim for 10ms polling for user input, because human tolerance for delay is around 100ms so 10ms appears instantaneous. Then I see products like Nest come out, with big endcap displays at home improvement stores, and I'm like how do people not just immediately write this off as janky trash.
Then again, maybe the extra lag (and jitter!) gets a pass because it's part of these products positioning themselves in the niche of "ask the controlling overlord if you may do something" rather than "dependable tool that is an extension of your own will".
One example in MS Word is the ribbon. It is a relatively recent invention, and when it was introduced, _at least_ they went to the effort of using telemetry to see which features were actually used often and which ones were not, and designed the ribbon accordingly.
Nowadays every new "feature" introduced in MS Word is just randomly appended to the right end of the main ribbon. As it is now, you open a default install of MS Word and at least 1/3 of the ribbon is stuff that is being pushed at users, not necessarily stuff that users want to use.
At least I can keep customizing it to remove the new crap they add, but how long until this customization ability is removed for "UI consistency"?
> At the core of Photoshop is a consistent, powerful, tightly-coded, thoughtfully-designed set of tools for creating and manipulating images. Once you learn the conventions,[...]
Photoshop has a terrible set of conventions. I'd take Macromedia Fireworks any day of the week instead. But Adobe bought Macromedia and gradually killed Fireworks over 8 years due to "overlap in functionality" between it and 3 other Adobe products.
That move pretty much enabled the enshi*tification of Photoshop, which wrapped its terrible core in terrible layers of web views and ads.
I still run Fireworks under WINE in Fedora. The only real pain point is that the fonts are stupefyingly small in comparison to other apps due to the way modern GUIs and resolutions have grown (this is fixable, but not everywhere).
I think you're looking through slightly rose-tinted glasses. I remember back in the day how much people complained about every feature being shoved into the menu bars of Word and Photoshop, eventually growing the menus too long, with a bunch of features no one cared about obscuring the actually useful ones.
Yeah. This (and Fitts's Law) is actually why the ribbon UI came about. 80% of the most common stuff is right there, and you can customize or search for the rest.
Yeah, I actually saw a presentation at Microsoft's Mix (the web-oriented conference they ran for a while) about the ribbon. I never loved the ribbon but it was really an attempt to deal with all the features that maybe 1% (at most) of users ever utilized but that those (many different) 1%s REALLY cared about.
One of the reasons I like Google Workspace. I'm mostly not a power user these days, so the simpler option set really works for me, even if, very rarely, I run into something I can't quite do the way I'd prefer.
Even before the ribbon they had identified and were trying to deal with the problem. I think either Office 2000 or XP had the 'traditional' drop-down menus but would hide less-used items until you clicked a little ≫ expander at the bottom. The downside was that this would change the layout of the menu a bit as the less-used/most-used items shifted, which works against learning where items are located.
Even if you were the 99% user, the ribbon solved the problem with toolbars.
Word had like 20 different toolbars. If you wanted to work with tables, you had to open the Table toolbar. If you wanted to work with WordArt, you opened the WordArt toolbar. You could be the 99% user and still end up with 10 toolbars open even if you only needed one button on a certain toolbar. On a small monitor, half your screen consisted of your document and the other half toolbars.
I can’t remember if you could customize toolbars but not a lot of people would spend the time to do that. I sure never did.
Not only that, but in Photoshop and Word, dialogs for features released in 1995 had different design conventions from dialogs for features released in 1999. Not the UI controls and stuff like that (everything was still using Win32), but the design language itself: how to display things to the user, how to align buttons, and so on.
If anything, these kinds of layout patterns are better today than they were back then. What the OP is complaining about was ALWAYS a problem for any long-lived software. But given how much _older_ some software is today, it is no wonder it is way more noticeable.
Win95 had a bunch of Win3.1 settings dialogs, just like Win11 still has a bunch of Windows 98 settings dialogs (the network adapter TCP/IP configuration one comes to mind).
Photoshop used a custom UI toolkit which automatically generated the dialogs' layout based on a description of their contents (string field, boolean, color picker, ...).
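That declarative approach is easy to picture. Here's a minimal sketch of the idea in Python/tkinter, nothing to do with Adobe's actual toolkit; the field list and widget mapping are invented: you describe the fields, and a generic routine lays out the dialog.

```python
# Minimal sketch: generate a dialog from a plain description of its fields.
# The field list and the string/boolean/number kinds are made up for illustration.
import tkinter as tk

FIELDS = [
    ("Name", "string"),
    ("Anti-alias", "boolean"),
    ("Radius", "number"),
]

def build_dialog(root, fields):
    """One row of label + widget per field description."""
    for row, (label, kind) in enumerate(fields):
        tk.Label(root, text=label).grid(row=row, column=0, sticky="w")
        if kind == "boolean":
            tk.Checkbutton(root).grid(row=row, column=1, sticky="w")
        else:  # strings and numbers both get an entry box in this sketch
            tk.Entry(root).grid(row=row, column=1, sticky="we")
    tk.Button(root, text="OK", command=root.destroy).grid(
        row=len(fields), column=1, sticky="e")

root = tk.Tk()
build_dialog(root, FIELDS)
root.mainloop()
```

The appeal is that every dialog described this way comes out with the same alignment and spacing for free.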
> One day, all these products will be gone, and people will only know MBA-ware. They won't know it can be any other way.
Just like vinyl made a comeback, 'real' software will come back as well, maybe running in an emulator in a browser. Yes, there will be copyright problems, yes, there will be hurdles, but if and when MBA-ware becomes the norm, 'real' software will persevere. Free software for sure, commercial software most likely, even if legally on shaky foundations. The tighter they grip, the more users will slip from their clutches.
According to a study in 2023, half of vinyl buyers in the US don't have a record player. It seems like for a lot of people it's less a music medium and more a combination of supporting their favorite artists, home decor, or collecting cards. My guess is that a majority of all vinyl buyers listen to a lot of digital music as well.
It went from 'nearly dead, scrap the factories' to 'we need more manufacturing capacity to supply the increasing demand'. Not a bubble; I don't do vinyl, but I know several people who do. The same will happen to 'real software' once the current iteration of productivity tools can no longer be distinguished from advertisements for such, once you need to really dig down to do simple things but have "AI enhanced autoclippy" under every "button". It is just the way things go no matter the field: 'craft' beer, vinyl, sourdough, vintage whatever. In some cases it is actually rational, in others (vinyl) it is mostly driven by emotional factors. The 'craft software' revival would be an example of a rational move.
> At the core of Photoshop is a consistent, powerful, tightly-coded, thoughtfully-designed set of tools for creating and manipulating images. Once you learn the conventions, it feels like the computer is on your side, you're free, you're force-multiplied, your thoughts are manifest.
It's funny that today there still isn't free image editing software comparable to the Photoshop of 2000. Krita is close, but still cumbersome to use.
Indeed. The trouble with subscriptions is that you don't need to make the new version actually good enough to convince people to upgrade, you just need to make it not bad enough for people to abandon the subscription entirely.
I think the same thing happened to Windows and Office.
Having said that, there's probably another elephant in the room, namely the current generation that grew up with phones and tablets and didn't really learn to use traditional computers fluently.
I would love to jettison Photoshop/Illustrator and just use Affinity. Illustrator has recently gone from taking 30 seconds to over 2 minutes to launch on my M1. It's an atrocity. But Adobe software is so entrenched in printing that, even though print media is only 10% or less of what I do these days, it would just be an endless headache to deal with file conversions to and from other designers, print shops and publishers. And anyway I expect Affinity will go the same way soon, now that they're owned by Canva.
I'm not particularly surprised: who wants to try to hold up GIMP as an exemplar of a good interface? Maybe by arguing "it's not as bad as it was, and now it hardly sucks at all".
I'm an old Photoshop user who has GIMP now. I'd like to do a breakdown of everything that's wrong with its interface behavior, but analysing exactly how it does behave would be a major mission. There's something - several things - wrong with how it selects, moves, deselects, selects layers, and zooms, compared to what I expect for the workflow I try to have. Possibly this is just a matter of needing to learn new conventions, but possibly I have learned the GIMP conventions and they're just clunky.
Interesting, though, since this is organic, grass roots, free software interface crappiness, not the coercive corporate kind.
I have used Photoshop and GIMP a lot. I do not see how GIMP's interface is that clunky.
It seems clear to me that most people get used to some interface and anything different is wrong. I saw that a lot with Windows users changing to Mac or Linux interfaces. Anything not 100% identical is "interface crappiness". Apple interfaces have always run circles around Microsoft's, which copied things from Apple without understanding the fundamentals or hiring proper designers.
My problem with GIMP was always that it was not as powerful for the professional, like the support for color spaces with lots of bits per channel. That looks solved now, although I haven't had time to test it personally.
On the other hand, you could program GIMP much more easily with Script-Fu (a Lisp dialect) and with fewer restrictions, as with most open source software.
I did change from Mac to Windows shortly after the millennium, and it was fine. Window buttons were in different places, but that was trivial. Windows would minimize properly instead of the "windowshade" thing Mac had at the time (folding up into the titlebar), because Windows had a taskbar to minimize to, and that was better. I disliked the prevalence of installers (generally you could just drag an executable to another Mac and it would work) and I disliked the creepy labyrinth of user-hostile system files and system folders, which has only gotten worse. But that isn't really interface. I thought the start menu was stupid, but I could just ignore it, and I still ignore it today (Explorer all the way).
On the whole I didn't think there was much difference, apart from a general vibe of crassness on Windows, which had no clear cause. But that's switching from Classic Mac OS: you're probably talking about the new OSX BSD linuxy one. Besides, it would have been Win2K I switched to, which was one of the best iterations.
I used Linux for a while too, but that was XFCE, so again kind of samey. Mainly I remember constantly having to go through sudo whenever I wanted it to do anything; that was the distinctive interface difference.
Color spaces, a closed book to me. I can't stand color spaces, they were always some sort of unwieldy mess of interest to other people. People who print things, maybe, I don't know. From my perspective, they caused a lot of pretentious confusion when people were trying to make images on computers for viewing on computers and for some reason didn't do the natural thing and think in three bytes of RGB.
Scripting, I hadn't even thought about, and I'll give you that one. Photoshop had a mechanism for recording actions and playing them back, and it produced a linear list of actions, with no control flow. I definitely wanted a better scripting mechanism.
A while back I was making red-cyan anaglyphs [1], for which you want a slider that lets you move the left and right channels relative to each other, so you can put objects close to the plane of the screen/paper to minimize all sorts of problems such as vergence-accommodation conflict.
I wrote a little tkinter program to do it because, not least, tkinter is in the Python standard library.
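The core of it is just a channel shift and a recombine. A rough sketch of that idea with Pillow (not the actual tkinter program; the file names and the offset value are made up):

```python
# Rough sketch of a red-cyan anaglyph with an adjustable horizontal shift.
# Assumes two same-sized stereo images; names and shift value are made up.
from PIL import Image, ImageChops

left = Image.open("left.png").convert("RGB")
right = Image.open("right.png").convert("RGB")

shift = 12  # pixels; in the real tool this would come from a slider

# Shift the left image horizontally to move the zero-parallax plane.
# (ImageChops.offset wraps around the edges, which is fine for a rough preview.)
shifted_left = ImageChops.offset(left, shift, 0)

# Red channel from the shifted left eye, green and blue from the right eye.
r, _, _ = shifted_left.split()
_, g, b = right.split()
Image.merge("RGB", (r, g, b)).save("anaglyph.png")
```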
I found my stereograms looked OK in tkinter, but when I exported the images to the web there were ghosts. It turned out that modern GUI frameworks do color management in the sense that they output (r, g, b) triples in the color space of your monitor, which is what you get when you take a screenshot on Windows. So if you have a wide-gamut monitor you can do experiments where you have a (0, 192, 0) color in an image specified in sRGB and then find it was (16, 186, 15) in a screenshot. [3] Drove me crazy until I figured it out.
What was funny was that tkinter was so ancient that it didn't do any color correction, it just blasted out (0, 192, 0) when you asked for (0, 192, 0).
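You can reproduce the effect outside of any GUI framework with Pillow's ImageCms, assuming you have your display's ICC profile exported to a file (the path below is made up):

```python
# Convert an sRGB pixel into a (hypothetical) wide-gamut display profile,
# roughly what a color-managed GUI does before pixels reach the screen.
from PIL import Image, ImageCms

srgb = ImageCms.createProfile("sRGB")
display = ImageCms.getOpenProfile("my-wide-gamut-display.icc")  # made-up path

im = Image.new("RGB", (1, 1), (0, 192, 0))      # the sRGB triple
converted = ImageCms.profileToProfile(im, srgb, display)
print(converted.getpixel((0, 0)))               # e.g. something like (16, 186, 15)
```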
It also turns out to be a problem in printing, where the CMYK printer presents itself as having an RGB color space in which the green is very saturated but never gets very bright (and sRGB green also has red in it), but if you attach the printer's color profile to an image you can force a saturated green. A lot of mobile devices are going in the Display P3 direction, so you can get better results publishing files in that format.
That and a few other print projects (so easy to screw up a $150 fabric printing job) have gotten me to care a lot about color management.
Yup. Issues like that have trained me to fear and avoid anything that says "CMYK" or "gamut" or similar, not because they're beyond comprehension, but because for many use cases the whole thing is an unnecessary pantomime. And it's often silently switched on by default, waiting to screw things up for you. Blender does similar things - in order to get what you intended to be pure black to produce 0x000000, and what you intended to be pure red to produce 0xff0000, you have to dig down somewhere - "post processing," I think, because they suffer from some fantasy of being Pixar - and switch something to "raw", and as I remember that isn't even the whole story and there are some other tweaks to make too, just to make black output black and red output red.
And something I found amusing was that almost all of them will only take files in sRGB format, even though they actually support a gamut which covers some colors better than sRGB and other colors worse. My monitor supports Adobe RGB and covers both spaces pretty well.
I think, however, that they have more quality problems if people send files that have different color profiles so they just take sRGB.
I had some print jobs go terribly bad; turns out yellow flowers are out of gamut for my camera, for sRGB, and for CMYK. If you try to print something that has out-of-gamut colors, the printer will do something to put them into gamut, and you might not like it. I learned to turn on the gamut warning in Photoshop and bring the colors into the CMYK gamut before I print, even if I am sending in an sRGB file. It didn't bother me so much when I was printing 'cards' with my Epson ET-8550, but once I had orders come back ruined, I figured it all out.
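If you don't have Photoshop handy, you can approximate a gamut warning yourself by round-tripping an image through a CMYK profile and seeing which pixels move. A sketch with Pillow, assuming you have some press profile as an .icc file (the path and the threshold are made up):

```python
# Rough DIY "gamut warning": round-trip sRGB -> CMYK -> sRGB and flag
# pixels that changed a lot. Profile path and threshold are hypothetical.
from PIL import Image, ImageCms, ImageChops

srgb = ImageCms.createProfile("sRGB")
cmyk = ImageCms.getOpenProfile("press-profile.icc")  # made-up CMYK profile path

im = Image.open("flowers.jpg").convert("RGB")

# Colors outside the CMYK gamut get clamped on the way through.
as_cmyk = ImageCms.profileToProfile(im, srgb, cmyk, outputMode="CMYK")
round_trip = ImageCms.profileToProfile(as_cmyk, cmyk, srgb, outputMode="RGB")

# Pixels that moved a lot in the round trip are the out-of-gamut ones.
diff = ImageChops.difference(im, round_trip).convert("L")
warning = diff.point(lambda d: 255 if d > 16 else 0)  # threshold is arbitrary
warning.save("gamut-warning.png")
```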
Indeed, especially because Krita is terrible for a lot of what I think of as "image editing"; I'd much rather use Gimp for work like that. Though I do quite like the recent AI stuff available to Krita at the moment; e.g. there's a plugin that gives you an AI-powered "object select", so e.g. you click on a person's shirt and it selects the shirt, or add other parts of the person (or draw a box) and get the person themselves, separate from the background. Or click on a bird, or speaker, or whatever. And you can use the ai-diffusion stuff to remove it, easier than the old heal-tool techniques. (The selection is of course not perfect, but a great complement to the other selection tools that more or less overlap with Gimp's, though I prefer Gimp's knobs and behaviors after the selection. And I'm sure Photoshop has similar AI stuff by now, but I remember over the years it's seemed like a lot of stuff crops up in open source first, e.g. I think for quite a while "heal" / "smart patch" was just a script-fu plugin..)
I just appreciate that there are many options, and they can talk to each other pretty well. If one becomes unusable, I have others, and sometimes there are newer ones. I did a stage banner project last December where I had Gimp, Krita, and Inkscape all open at the same time. (With a quick use of an old version of Illustrator in Wine to export into an Illustrator template matching the dimension outlines and particular color space the printing company needed...)
Photoshop tried to be the everything tool, and it probably is and will continue to be the best kitchen sink (and if I knew it better and had a license, it probably could have sufficed by itself for my project), but for any specific thing there's going to be something else that's better for at least that thing (maybe even one of Adobe's other products, like Illustrator). Krita isn't competing with Photoshop so much as with Photoshop's usefulness in drawing and making art, and in that space are also Clip Studio Paint or Procreate on iPads, both quite popular with hobbyist and professional artists. Gimp isn't competing so much on the art creation side (or even making simple animations like Krita lets you do more easily) as it is on the editing and manipulation side. And when editing camera raws, you'd use Lightroom/Darktable/RawTherapee. Inkscape is vector graphics, a whole other use case and competitive landscape.
(Speaking of old/dead software, I remember using Xara Xtreme LX for a while, it was really slick...)
My understanding is the old Photoshop was pretty bad. I went through a phase of being a student of file formats, and the PSD format is practically a case study in how not to do it. (By my metrics, PDF was excellent for its time; they've been able to cram so much crazy stuff into PDF because the foundation is good.)
When I first used it on a Mac circa '95 or '96, it felt like a legacy product when I was using it for web work, because the color management features intended for print meant it would always screw the colors up when you output for the web [1] unless you disabled color management, whereas the GIMP 'just worked' because it didn't color manage. [2]
To play devil's advocate, the old Photoshop was a huge lump-sum purchase, which meant you'd buy one version and then go six years without updating it. The new one is more accessible to people. Also, I find the A.I. features in Photoshop pretty useful; it is easier than ever to zap things out of images: it took just seconds to disappear a cable in [3], for which the older tools didn't work so well. For [4] it removed a splotch, and later I had it add a row of bricks to the bottom to improve the visual balance of the image.
Note that people sure complained about Office in the '95 era; see [5].
[1] sRGB was new!
[2] Funny I do a lot of work for print now, some of which pushes the boundaries of color management, such as making red-cyan anaglyph stereograms