"When compared to Intel-based systems, it delivers up to 86x faster AI performance"
I'm imagining the engineers responsible for running the tests fine-tuning the test suite for days and days so they could get that number into the press release, lol. There's no way that's a coincidence, and someone definitely advocated for that line being the way it is.
To be fair: They have the internal metrics on how many people are still on Intel-based Macs, and it's very possible that this influences the types of comparisons they choose to make. There are still so many Intel Macs out there.
The targeted snark isn't the issue. The issue is that even well-informed techies ignore Apple's metrics until they can read the fine print. And the average Intel Mac owner probably doesn't even know what "x86" means. The target audience is almost nobody.
I will, yes. If macOS supported Vulkan, then those Intel Macs would have GPU acceleration too, and thus it would be a fair fight comparing it to MPS. Apple's tech stack is so miserly and poor that they never supported the common GPGPU libraries that literally every single OEM is and was shipping.
Apple's tech is appalling. Are you saying they exercise good judgement on behalf of their users?
I see this as them following their own differentiation and integration, which I'd say is good for their users. (Personally I don't care about Vulkan support, for example.)
So saying their tech is "appalling" is a matter of opinion, and I'd argue it's something a small minority of their users care about. But I don't know.
GP is saying their primary expertise is advertising. It's hard to watch any Apple announcement and not notice how utterly hyperbolic they are when touting their own achievements.
Ya sure, you can say that every company must do that, but Apple is exceptional at it. Once you start noticing the unlabeled performance charts, the missing baselines, the comparisons with ages-old models, the disingenuous "86x" metrics, the whole show becomes cringeworthy.
Your comment implies that it’s obviously not this spec that they compare against. Could you spell it out for the ignorant like me? What about that config makes it definitely not the thing that is 86x slower?
I don't see anything in the GP that implies that. It's simply a CPU that was released before an entire AI economic bubble was a twinkle in Jensen Huang's eye. Of course it has piss-poor AI performance vs something with hardware dedicated to accelerating that workflow.
It's not that the comparison is incorrect, just that it's a silly and unenlightening statement, bordering on completely devoid of meaning if it weren't for the x86 pun.
In my free time I (along with some co-conspirators) run raves/music events for @echochamberbne. We've got a decent amount of lighting hardware thanks to my poor spending decisions, and a projector (which I also have for the same reason) running live visuals is top of the pile for impact relative to effort spent.
Today I went down a rabbit hole looking at VJ tools like TouchDesigner and Resolume, then got led to a long-expired link to a beta test of this app; on looking up the creator to find the source, I realized the tool was officially released yesterday.
I thought you were being handwave-y about the signal being returned because that's an obscenely small amount of power, but you're right. The hardware is probably 50 years old at this stage, out in the cold of (interstellar) space, and we're still able to talk to it even though the signal, after its long journey back, is ~0.000000000000000001 W.
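For a sense of scale, here's a rough conversion of that figure into dBm (just a sketch; the 1e-18 W number is the one quoted above):

```python
import math

# Convert received power in watts to dBm (decibels relative to 1 milliwatt).
p_watts = 1e-18
p_dbm = 10 * math.log10(p_watts / 1e-3)
print(f"{p_dbm:.0f} dBm")  # about -150 dBm
```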
Essentially averaging. To put it simply, noise is random while the signal is not. So if you average two copies, the noise reduces by sqrt(2) but the signal remains. Keep doing that and the signal appears out of the noise.
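A toy demo of that sqrt(N) behaviour (made-up signal and noise levels, just to show the trend):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 1000))  # the repeating, deterministic part
noise_std = 5.0                                   # noise far stronger than the signal

for n in (1, 4, 16, 64, 256):
    # Average n independent noisy copies of the same signal.
    avg = np.mean([signal + rng.normal(0, noise_std, signal.size) for _ in range(n)], axis=0)
    residual = np.std(avg - signal)               # noise left over after averaging
    print(f"{n:4d} copies: residual noise ~ {residual:.2f} (sqrt(n) prediction {noise_std / np.sqrt(n):.2f})")
```

The signal term is identical in every copy, so it survives the averaging; the noise terms are independent, so they partially cancel.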
GPS operates at a negative SNR. It also uses code division multiple access, which lets multiple transmitters operate on the same frequency without their signals interfering.
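A toy illustration of the spreading/despreading idea (my own made-up example, nothing like real GPS codes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows of a Hadamard matrix are mutually orthogonal +/-1 spreading codes.
H = np.array([[1]])
for _ in range(6):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))   # 64x64 after six doublings

code_a, code_b = H[1], H[2]        # one 64-chip code per transmitter
bit_a, bit_b = +1, -1              # the data bit each transmitter sends

# Both signals share the channel at the same time, buried in noise that is
# stronger than either chip-level signal (negative per-chip SNR).
channel = bit_a * code_a + bit_b * code_b + rng.normal(0, 2.0, 64)

# Despreading: correlate with each code. The matching transmitter's chips add
# coherently (gain of 64); the other code and the noise largely cancel out.
print(np.sign(code_a @ channel))   # recovers bit_a
print(np.sign(code_b @ channel))   # recovers bit_b
```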
Imagine you're watching a trail of animal footprints in the snow, but the snow has covered parts of the trail (noise), so some prints are unclear. You're trying to figure out exactly which path the animal took. The Viterbi algorithm is like a detective method for doing just that, but instead of animal tracks, it's used for decoding messages or signals that got partially scrambled during transmission.
This is fascinating. It sounds like Viterbi builds probabilities from known data; is that right? Is it essentially looking at the faint signal now and comparing it to data from when the signal was stronger and extrapolating?
That's how I understand it. I think it uses knowledge of the state changes, what they should look like, and selects the most likely one from a table of options. Based on what it knows about the signal, it guesses whether it's more likely to be a or b.
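Here's a toy sketch of what "selects the most likely one" means in code (made-up two-state model and probabilities, not the actual deep-space decoder):

```python
import numpy as np

states = ["a", "b"]
start = np.log([0.5, 0.5])
trans = np.log([[0.9, 0.1],        # states tend to persist between symbols
                [0.1, 0.9]])
emit = np.log([[0.8, 0.2],         # P(observed symbol | true underlying state)
               [0.2, 0.8]])

def viterbi(obs):
    # score[i] = log-probability of the best path ending in state i so far
    score = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + trans            # extend every path by one transition
        back.append(np.argmax(cand, axis=0))     # remember the best predecessor
        score = np.max(cand, axis=0) + emit[:, o]
    # Trace the remembered predecessors backwards to recover the best path.
    path = [int(np.argmax(score))]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return [states[i] for i in reversed(path)]

# Observations 0/1 mean "looks like a"/"looks like b"; a few are corrupted by noise.
print(viterbi([0, 0, 1, 0, 0, 1, 1, 1, 0, 1]))
```

Isolated corrupted observations get overruled, because two unlikely state flips in a row cost more than one unlikely emission.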
I use it with ham radio, where the software I use sends signals well under the noise floor (-23 dB) that can still get across call signs, signal reports, and maybe a thank you. The naked ear would not hear a thing at the low end of received signal strength.
Note of caution to the hoarders: I had a copy of Flashpoint 11 on one of my drives and recently tried to move it to another disk... after two days of robocopy doing its thing, less than half of the files had been moved.
Obviously 1.4 TB is enormous, so long copy operations should be expected, but the sheer number of files makes it hilariously difficult to manage if you ever want to move it after extraction.
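If anyone else attempts the same move: robocopy's multithreaded mode plus suppressing the per-file output is worth trying for huge file counts. Something along these lines (paths are placeholders, tune the thread count to your disks):

```
robocopy D:\Flashpoint E:\Flashpoint /E /MT:32 /R:1 /W:1 /NFL /NDL /NP /LOG:fp-copy.log
```

/E copies the whole tree, /MT:32 runs 32 copy threads, /R:1 /W:1 keeps it from stalling on bad files, and /NFL /NDL /NP cut the per-file console output while /LOG sends what's left to a file.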
https://www.apple.com/newsroom/2025/10/apple-unveils-new-14-...