
That's impressive, but early macOS was pretty awful UX-wise; I think the UI thread was everything.

I remember clicking and waiting.



I remember that, yes, expensive operations could take a while, but the interface was much faster than my M1 Max Studio's for the sole reason that you did not have to wait for animations.

And not just because animations were sparse: they also never blocked input. If you could see where a new element would appear, you could click there DURING the animation and start e.g. typing, and no input would be lost. That meant apps you used every day and became accustomed to would just zip past at light speed, because there was no do-wait, do-wait pipeline.


The animations were there, but they were frame-based, with the number of frames carefully calculated to show the UI state changes that were relevant. For example, when you opened a folder, there would be an animation showing a window rect animating from the folder icon into the window shape, but it would be very subtle - I remember it being 1 or 2 intermediate frames at most. It was enough to show how you get from "there" to "here", but not dizzyingly egregious the way it became in Aqua.

Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie, or other former NeXT seniors post-acquisition) noticed that with dynamic loading, hard drive speed, and the ubiquitous dynamic dispatch of ObjC, OS X would just be extremely, extremely slow. So they probably conjured up a scheme to show people fancy animations, wooing everyone with visual effects to conceal that a bit. Loony-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.

There were also quite a few tricks carried over from the dithering/blitting optimizations on the early Macs. For example, if you can blit a dotted rect for a window being dragged - instead of buffering the entire window, everything underneath, and the shadow mask, and then doing the shadow compositing and the window compositing on every redraw - you can save a ton of cycles.

You could very well get do-wait, do-wait loops when custom text compositing or layout was involved and not thoroughly optimized - like in early versions of InDesign, for instance - but that was the exception rather than the rule.


> Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie, or other former NeXT seniors post-acquisition) noticed that with dynamic loading, hard drive speed, and the ubiquitous dynamic dispatch of ObjC, OS X would just be extremely, extremely slow. So they probably conjured up a scheme to show people fancy animations, wooing everyone with visual effects to conceal that a bit. Loony-town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.

I've done exactly this myself to conceal ugly, inconsistent lags; I don't think it's that uncommon an idea.


I think that ObjC's dynamic dispatch is reasonably fast. I remember reading, a long time ago (2018-ish?), about being able to do millions of dynamic dispatch calls per second (so less than 1 µs per call), but I can't figure out how to find it again. The best I could come up with is [1], which benchmarks it as 2.8 times faster than a Python call and something like 20% slower than Swift's static calling. In the Aqua time frame, I think it would not have been slow enough to need animations to cover for it.

[1] https://forums.swift.org/t/performance-of-swift/26911


My most durable memory is all the reboots due to programs crashing. Didn't help that a null pointer deref required a system reboot - or that teenage me was behind the keyboard on that front.


More the fault of RAM measured in MB and quite slow HDDs, to be honest.


> I think the UI thread was everything.

How would you have done it?


Preemption is a very nice OS feature, it turns out (particularly once multi-core rolled around). Still, I recall OS 8 and 9 being generally snappier than Windows 98 (and a lot snappier than early builds of OS X).


How does preemption work on a processor that barely has interrupts and has no way to recover state after a page fault, in an OS that has to fit into a couple dozen kilobytes of ROM?


There were plenty of preemptive multitasking systems for the original 68000, and regardless, page fault recovery was fixed from the 68010 onwards.

And it certainly wasn't a problem on the PowerPC, which TFA is about.

Also, I'm not sure how you can say the 68000 "barely has interrupts"; I don't even know what you're on about.

MacOS was broken because Jobs forced it to be that way after he was kicked off the Lisa team - which had a preemptive multitasking operating system on the 68000.


Preemptive multitasking is unrelated to page faults. And the 68k handled page faults just fine starting from the 68010.

Space constraints were certainly limiting on the earlier models, but later ones were plenty capable. Apple itself shipped a fully multitasking, memory protected OS for various 68k Mac models.

By the late 80s, the only reason the Macintosh system was still single-tasking with no protected memory was compatibility with existing apps, and massive technical debt.


Later Mac ROMs were 512KB, same as the later Amiga Kickstarts (3.x). That was a lot of space for the late '80s and early '90s. Interrupts were supported (8 levels, if I recall). And 68000 machines didn't support virtual memory until the 68010 and later, so no issues with page faults.

I still remember the day teenage me got an Amiga 500 with a whopping 512K of RAM, and witnessed the power of multitasking, way back in 1988.


The Amiga had preemptive multithreading with multiple task priorities on the original MC68000. Preemption is distinct from memory protection or paging.



