Hacker News | murderfs's comments

This is likely more of a Windows filesystem benchmark than anything else: there are fundamental restrictions on how fast file access can be on Windows due to filesystem filter drivers. I would bet that if you tried again with Linux (or even in WSL2, as long as you stay in the WSL filesystem image), you'd see significantly improved results.
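If you want to sanity-check the filter-driver overhead yourself, a quick-and-dirty micro-benchmark along the lines of the sketch below (Python; the paths and file count are just placeholders) usually makes the gap obvious when you run it once on NTFS and once inside the WSL2 ext4 image:

    # Times lots of tiny file operations, since per-file overhead
    # (filter drivers, Defender, etc.) dominates this kind of workload.
    import os, tempfile, time

    def bench_small_files(root, n=5000):
        os.makedirs(root, exist_ok=True)
        start = time.perf_counter()
        for i in range(n):
            path = os.path.join(root, f"f{i}.txt")
            with open(path, "w") as f:
                f.write("x")
            os.stat(path)
            os.remove(path)
        return time.perf_counter() - start

    if __name__ == "__main__":
        # Run once under C:\ and once under the WSL filesystem, then compare.
        print(bench_small_files(tempfile.mkdtemp()))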

Which still wouldn’t beat the Apple Silicon chips. Apple rules the roost.


You click on "laptop" and somehow that's a gotcha; I click on "single thread" and the M5 is at the very top. What's that about?

What you're seeing is basically the lithography node used to make the CPU. Since Apple books more fab capacity than anyone else, they get their chips out 5-6 months ahead of the rest of the market; within a few months you'll see chips with similar per-core performance.

And what about the M3 Ultra, which sits at number 3 and came out ten months ago? Why wasn't it beaten five months ago? Might I add that the M3 Ultra is on an older node than the M5. And what about the A19 Pro, which is better at single-core than every desktop chip in the world, and happens to be inside a phone!

Apple has the best silicon team in the world. They choose perf per watt over pure perf, which means they don't win on multi-core, but they're simply the best in the world at the metric that is the most complicated, the most difficult, and nearly impossible to game: single-core perf.


Its single-thread benchmark score is 0.6% better than the Intel Core Ultra 9 285K's, which has a lower TDP and was released 6 months earlier. Both use the same lithography node. If you compare chips by lithography node, Apple's silicon is on par with everyone else's...

Apple's M-series chips are fantastic, but I do agree with you that it's mostly a combination of newer process and lots of cache.

Even when they were new, they competed with AMD's high end desktop chips. Many years later, they're still excellent in the laptop power range - but not in the desktop power range, where chips with a lot of cache match it in single core performance and obliterate it in multicore.

https://www.cpu-monkey.com/en/compare_cpu-apple_m4-vs-amd_ry...


Alternatively, in the same socket and without the 3D stacked cache: https://www.cpu-monkey.com/en/compare_cpu-apple_m4-vs-amd_ry... with double the cores.

And in laptop form, compared with an M4 Max: https://www.cpu-monkey.com/en/compare_cpu-apple_m4_max_14_cp...


> Apple's M-series chips are fantastic, but I do agree with you that it's mostly a combination of newer process and lots of cache.

Why does it matter how they achieved their thunderous performance? Why must it be diminished to just a boatload of cache? Does it matter from which implementation detail you got the best single-core performance in the world? If it's just way more cache, why isn't Intel just cranking up the cache?


Intel IS cranking up the cache. Unfortunately, Intel chose to allocate significant resources to improving their fabs instead of immediately going to TSMC and pumping out a competitive chip, and in the years when they were misspending their resources, their competitors were gobbling up market share. Their new stuff that's competitive with Apple is all built by TSMC.

It's worth noting that Intel is not a stranger to building CPUs with lots of cache - they just segmented it into their server chips and not their consumer ones.

It matters because it is useful to understand why a given chip is faster or slower than its competitors. Apple didn't achieve this with their architecture/ISA or with some snazzy new hardware (with some notable exceptions like their x86 memory emulator), they did it by noticing how important cache was becoming to consumer workloads.


Apple was not tasked with producing the very best supercomputer with the ARM architecture.

That was Fujitsu. They each have their own specialties.

https://en.wikipedia.org/wiki/Fugaku_(supercomputer)


But at the end of the day, the fact is the best gear is made by Apple.

Maybe. But then you have to use macOS, which is far from the best OS.

macOS is the worst OS, except when compared to all of the other ones.

I'm sure there are some OS worse than macOS but that's not a high bar to clear.

Windows and Linux come to mind. I'm sure there's probably others.

It depends on which point in time you're talking about and what you consider to be the best gear.

> The maintenance costs are higher because the lifetime of satellites is pretty low

Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean...


Another significant factor is that radiation makes things worse.

Ionizing radiation disrupts the crystalline structure of the semiconductor and makes performance worse over time.

High-energy protons randomly flip bits, can cause latch-up and single-event gate rupture, can destroy hardware outright, etc.


If anything, considering this plus the limited satellite lifetime, it almost looks like a ploy to deal with the current issue of warehouses full of GPUs and the questions about overbuild even with just the currently installed GPUs (which are a fraction of the total that Nvidia has promised to deliver within a year or two).

Just shoot it all into space, where it's inaccessible and will burn out within 5 years, forcing a continuous replacement scheme and steady contracts with Nvidia and the like to deliver the next generation at exactly the same scale, forever.


And just like that, you've added yet another never-done-before (and definitely never-done-at-scale) problem to the mix.

These are all things which add weight, complexity and cost.

Propellant transfer to an orbital Starship hasn't even been done yet, and that's completely vital to its intended missions.


> Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean

Hell, you're going to lose some fraction of chips to entropy every year. What if you could process those into reaction mass?


I believe a modern GPU would burn out almost immediately. Chips for space use ancient process nodes with chunky components so that they're more resilient to radiation. Deploying a 3nm process into space seems unlikely to work unless you surround it with a foot of lead.

Or cooling water/oil?

> Or cooling water/oil?

Oh. You surround it with propellant. In a propellant depot.


Hah, kill three birds with one stone? The satellites double up as propellant depots for other space missions, that just happen to have GPUs inside? And maybe use droplet radiators to expel the low grade heat from the propellant. I wonder if that can be made safe at all. They use propellant to cool the engine skins so... maybe?

You're describing cryogenic fuels there and dumping heat into them. Dumping heat (sparks, electricity) into liquid oxygen would not necessarily be the best of ideas.

Dumping heat into liquid hydrogen wouldn't be explosive, but it would exacerbate the boil-off problem that is already one of the "this isn't going to work well" issues that need to be solved for space fuel depots.

https://en.wikipedia.org/wiki/Orbital_propellant_depot

> Large upper-stage rocket engines generally use a cryogenic fuel like liquid hydrogen and liquid oxygen (LOX) as an oxidizer because of the large specific impulse possible, but must carefully consider a problem called "boil off", or the evaporation of the cryogenic propellant. The boil off from only a few days of delay may not allow sufficient fuel for higher orbit injection, potentially resulting in a mission abort.

They've already got the problem that the fuel boils off in a matter of days. This is not a long-term solution for a place to dump waste heat. Furthermore, the propellant needs to stay at cryogenic temperatures for it to be usable by the spacecraft that the fuel depot is going to refuel.

> In a 2010 NASA study, an additional flight of an Ares V heavy launch vehicle was required to stage a US government Mars reference mission due to 70 tons of boiloff, assuming 0.1% boiloff/day for hydrolox propellant. The study identified the need to decrease the design boiloff rate by an order of magnitude or more.

0.1% boiloff/day is now considered an order of magnitude too large. That's not a place to shunt waste heat.
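Back-of-the-envelope, using only the 0.1%/day figure from the quote (the 180-day horizon is my own made-up number, just to show the compounding):

    rate_per_day = 0.001   # 0.1%/day boil-off, from the quote above
    days = 180             # illustrative mission timeline, my assumption
    remaining = (1 - rate_per_day) ** days
    print(f"{remaining:.1%} of the propellant left after {days} days")  # ~83.5%

So roughly a sixth of the propellant is gone before anyone uses it, and that's before you start deliberately dumping waste heat into it.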


Thanks, great answer.

This brings a whole new dimension to that joke about how our software used to leak memory, then file descriptors, then ec2 instances, and soon we'll be leaking entire data centers. So essentially you're saying - let's convert this into a feature.

It's certainly one way to do arena-based garbage collection.
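(For anyone who hasn't run into the term: an arena is the "allocate freely, then free the whole region at once" pattern. A toy sketch, with the deorbit as the moral equivalent of dropping the arena:)

    # Toy arena: objects are only ever reclaimed all at once,
    # never individually.
    class Arena:
        def __init__(self):
            self._objects = []

        def alloc(self, obj):
            self._objects.append(obj)
            return obj

        def drop(self):
            self._objects.clear()  # the "deorbit" step

    arena = Arena()
    arena.alloc({"gpu": 1})
    arena.alloc({"gpu": 2})
    arena.drop()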

Reminds me of the proposal to deorbit end of life satellites by puncturing their lithium batteries :)

The physics of consuming bits of old chip in an inefficient plasma thruster probably work, as do the crawling robots and crushers needed for orbital disassembly, but we're a few years away yet. And whilst on-orbit chip replacement is much more mass-efficient than replacing the whole spacecraft, radiators and all, it's also a nontrivial undertaking.


Or maybe they want to just use them hard and deorbit them after three years?

"Planning" is a strong word..

They're IATA airport codes (except for San Francisco, which should be SFO).

Sure, it could blow up its economy and have the U.S. just switch to the existing domestic alternative, which also appears to be superior (tirzepatide).


> Generally, text compresses extremely well. Images and video do not.

Is that actually true? I'm not sure it's fair to compare lossless compression ratios of text (abstract, noiseless) to images and video that innately have random sampling noise. If you look at humanly indistinguishable compression, I'd expect that you'd see far better compression ratios for lossy image and video compression than lossless text.
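As a rough illustration of the noise point (not a fair codec comparison, and the inputs are contrived): a lossless compressor collapses structured text but does essentially nothing to noise-like data, which is exactly why images and video get lossy codecs instead.

    import os, zlib

    text = b"the quick brown fox jumps over the lazy dog. " * 2000
    noise = os.urandom(len(text))  # stand-in for raw sensor noise

    for name, data in [("text", text), ("noise", noise)]:
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name}: {ratio:.3f} of original size")
    # the (very repetitive) text collapses to a tiny fraction;
    # the noise barely shrinks at all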


The comparison makes sense in what I am charitably assuming is the case the GP is referring to: we know how to build a tight embedding space from a text corpus and get outputs out of it that are tolerably similar to the inputs for the purposes they're put to. That is lossy compression, just not in the sense anyone talking about conventional lossless text compression algorithms would use the words. I'm not sure we can say the same of image embeddings.


I don't think I've ever seen documentation from tech writers that was worth reading: if a tech writer can read code and understand it, why are they making half or less of what they would as an engineer? The post complains about AI making things up in subtle ways, but I've seen exactly the same thing happen with tech writers hired to document code: they documented what they thought should happen instead of what actually happened.


You sound unlucky in your tech writer encounters!

There are plenty of people who can read code who don't work as devs. You could ask the same about testers, ops, sysadmins, technical support, some of the more technical product managers etc. These roles all have value, and there are people who enjoy them.

Worth noting that the blog post isn't just about documenting code. There's a LOT more to tech writing than just that niche. I still remember the guy whose job was writing user manuals for large ship controls, as a particularly interesting example of where the profession can take you.


A tech writer isn't a class of person. "Tech writer" is a role or assignment. You can be an engineer working as a tech writer.

Also, the primary task of a tech writer isn't to document code. They're supposed to write tutorials, user guides, how to guides, explanations, manuals, books, etc.


> they documented what they thought should happen instead of what actually happened.

The other way around. For example, the Python C documentation is full of errors and omissions where engineers described what they thought should happen. There is a documentation project that describes what actually happens (look in the index for "Documentation Lacunae"): https://pythonextensionpatterns.readthedocs.io/en/latest/ind...


Not everyone wants to write code.


Yeah, but almost everyone wants money. You can see this by looking at which projects have the best documentation: they're all things like the man-pages project, where the contributors aren't doing it as a job; they do it even though they could be working in a more profitable profession instead.


While I do appreciate man pages, I don't think they are something I would consider to be "the best documentation". Many of the authors of them are engineers, by the way.


> Been doing it the same way for centuries so, care to elaborate on what's wrong with how they farm?

You're talking about the same Japan that's had rice shortages for like two years now, right?


The rice shortages were not because of poor farming practices; this is basic knowledge now: https://eastasiaforum.org/2024/10/18/japans-rice-crisis-show.... Yes, the weather was bad for rice farming in 2023, but it could be bad anywhere in the world. Australia does plenty of industrial-scale farming, and there are years where certain crops are decimated.

Anyway the government dipped into the stockpiles and all is good now.


It's as if you didn't read the article - this is just the summary they give:

> Japan faced a rice shortage in the summer of 2024, exposing flaws in its food security policy. Despite declining consumption, small shocks caused market disruption. The government refused to release stockpiles, prioritising producer interests over consumer needs. This reflects political considerations, with upcoming elections influencing policy decisions. The crisis highlights the need for a more balanced approach to food security, emphasising both physical stockpiles and effective public communication. Japan must reassess its agricultural policies to ensure long-term food security and market stability.

The actual meat of the article goes into further, damning detail.

As I wrote, Japan is not a model to follow.


Rate limiting doesn't solve anything; you can just parallelize your queries across IP addresses.
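(For concreteness, the kind of limiter I mean is a per-IP token bucket, roughly like the sketch below; the names and numbers are made up. Keying purely on the client IP is exactly why spreading requests across many addresses sidesteps it.)

    import time
    from collections import defaultdict

    class PerIPLimiter:
        def __init__(self, rate_per_sec=5, burst=10):
            self.rate = rate_per_sec
            self.burst = burst
            # each IP gets its own bucket: (tokens, last refill time)
            self.buckets = defaultdict(lambda: (burst, time.monotonic()))

        def allow(self, ip):
            tokens, last = self.buckets[ip]
            now = time.monotonic()
            tokens = min(self.burst, tokens + (now - last) * self.rate)
            if tokens < 1:
                self.buckets[ip] = (tokens, now)
                return False
            self.buckets[ip] = (tokens - 1, now)
            return True

    limiter = PerIPLimiter()
    # 100 IPs each staying under their own limit still get 100x the throughput.
    print(sum(limiter.allow(f"10.0.0.{i}") for i in range(100)))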


The whole "defense in depth" principle disagrees. Having a layered defense can not only buy defenders time, but downgrades attacks from 100% data exfiltration to <10%


Increasing the barrier to entry from "trivial" to "less trivial" is always a good start.


Yup. This is some of the stuff that gets missed when people think about security.

Ultimately, you're just buying time, generating tamper evidence in the moment, and putting a price-tag on what it takes to break in. There's no "perfectly secure", only "good enough" to the tune of "too much trouble to bother for X payout."


Or, like, are people going to wonder why we dropped the ball so hard, or are they going to be impressed by what the attackers pulled off?



Well, they'd be more functional as insurance, at least! The way insurance is supposed to work is that your insurance premium is proportional to the risk. You can't go uninsured and then after discovering that your house is on fire and about to burn down, sign up for an insurance plan and expect it to be covered.

We've blundered into a system that has the worst parts of socialized health care and private health insurance without any of the benefits.


How did she get into the supermarket in the first place?


Driving? Plenty of people have personal mobility issues but can drive just fine once they're in a car. Have you ever given away used home health aid items on craigslist? It can be a pretty sad scene.


Did she ram her car through the front door?


Sorry, I had misread the comment I was responding to. I had already covered what it was actually asking in my original comment - parking near a cart someone else abandoned, or failing that making her way to the closest one despite difficulty.


