
Can one man really make a C compiler in one week that can compile Linux, SQLite, etc.?

Maybe I'm underestimating the simplicity of the C language, but that doesn't sound very plausible to me.


yes, if you do not care to optimize. source: done it

I would love to see the commit log on this.

Implementing just enough to conform to a language is not as difficult as it seems. Making it fast is hard.

did this before i knew how to git, back in college. target was ARMv5

Great. Did your compiler support three different architectures (four, if you include x86 in addition to x86-64) and compile and pass the test suite for all of this software?

> Projects that compile and pass their test suites include PostgreSQL (all 237 regression tests), SQLite, QuickJS, zlib, Lua, libsodium, libpng, jq, libjpeg-turbo, mbedTLS, libuv, Redis, libffi, musl, TCC, and DOOM — all using the fully standalone assembler and linker with no external toolchain. Over 150 additional projects have also been built successfully, including FFmpeg (all 7331 FATE checkasm tests on x86-64 and AArch64), GNU coreutils, Busybox, CPython, QEMU, and LuaJIT.

Writing a C compiler is not that difficult, I agree. Writing a C compiler that can compile a significant amount of real software across multiple architectures? That's significantly harder.


Cooling a datacenter in space isn't really any harder than cooling a Starlink satellite in space; the ratio of solar panels to radiating area will have to be about the same. There is nothing uniquely heat-producing about GPUs; ultimately almost all energy collected by a satellite's solar panels ends up as heat in the satellite.

IMO the big problem is the lack of maintainability.


> Cooling a datacenter in space isn't really any harder than cooling a Starlink satellite in space

A watt is a watt, and cooling isn't any different just because some heat came from a GPU. But a GPU cluster will consume orders of magnitude more electricity, and will require a proportionally larger surface area to radiate heat compared to a Starlink satellite.

Best estimate I can find is that a single Starlink satellite uses ~5 kW of power and has a radiator of a few square meters.

Power usage for 1,000 B200s would be in the ballpark of 1,000 kW. That's around 1,000 square meters of radiators.
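
A rough Stefan-Boltzmann estimate backs that up. The sketch below assumes a double-sided flat radiator at ~300 K with emissivity ~0.9 and ignores absorbed sunlight and Earthshine; all numbers are illustrative:

    # Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
    # All figures here are assumptions for illustration, not a real design.
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)
    emissivity = 0.9         # assumed radiator coating emissivity
    temp_k = 300.0           # assumed radiator temperature
    sides = 2                # a flat panel radiates from both faces

    watts_per_m2 = sides * emissivity * SIGMA * temp_k ** 4   # ~830 W/m^2
    heat_load_w = 1_000_000.0                                 # ~1,000 kW for ~1,000 B200s
    area_m2 = heat_load_w / watts_per_m2
    print(f"{area_m2:.0f} m^2 of radiator")                   # ~1,200 m^2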

Then the heat needs to be dispersed evenly across the radiators, which means a lot of heat pipes.

Cooling GPUs in space will be anything but easy, and almost certainly won't be cost competitive with ground-based data centers.


Sure but you also need proportionally more solar panels. I'm just pointing out that if we accept that they can launch enough solar panels they will likely be able to launch enough radiators too.

Sure, but cooling a Starlink satellite in space is a lot more difficult than cooling one on Earth would be. And unlike Starlink, which absolutely must be in space in order to function, data centers work just fine on the ground.

This. There's no scenario where it's cheaper to put them in space.

I think there's a lot of room for an energy play that will ultimately obviate the enormously costly terrestrial energy supply chain.

You can just use the cheap solar panels that were gonna be launched into space (expensive), not launch them into space (not expensive), and plug them into some batteries (still cheaper than a rocket launch).

You forget the "in 2 years" part.

It will cost less to put it in low Earth orbit than it will to purchase land for it at any reasonable location.

I think that it's not just about the ratio. To me the difference is that Starlink satellites are fixed-scope, miniature satellites that perform a limited range of tasks. When you talk about GPUs, though, your goal is maximizing the amount of compute you send up. Which means you need to push as many of these GPUs up there as possible, to the extent where you'd need huge megastructures with solar panels and radiators that would probably start pushing the limits of what in-space construction can do. Sure, the ratio would be the same, but what about the scale?

And you also need it to make sense not just from a maintenance standpoint, but from a financial one. In what world would launching what's equivalent to huge facilities that work perfectly fine on the ground make sense? What's the point? If we had a space elevator and nearly free space deployment, then yeah maybe, but how does this plan square with our current reality?

Oh, and don't forget about getting some good shielding for all those precise, cutting-edge processors.


Why would you need to fit the GPUs all in one structure?

You can have a swarm of small, disposable satellites with laser links between them.


Because the latencies required for modern AI training are extremely restrictive. A light-nanosecond is famously a foot, and the critical distances have to be kept in that range.
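
To put numbers on that, here's a simple speed-of-light calculation; the satellite separations are hypothetical, just to illustrate the scale:

    # One-way light-travel delay for a few hypothetical satellite separations,
    # compared against the ~1 ns/foot rule of thumb above.
    C = 299_792_458.0  # speed of light in vacuum, m/s

    for separation_m in (0.3, 10, 100, 1_000, 10_000):
        delay_ns = separation_m / C * 1e9
        print(f"{separation_m:>8} m  ->  {delay_ns:10.1f} ns one-way")
    # 0.3 m is ~1 ns; a 10 km inter-satellite hop is already ~33,000 ns,
    # before any switching, serialization, or error-correction overhead.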

And a single cluster today would already require more solar & cooling capacity than all Starlink satellites combined.


Because that brings in the whole distributed computing mess. No matter how instantaneous the actual link is, you still have to deal with the problems of which satellites can see one another, how many simultaneous links can exist per satellite, the max throughput, the need for better error correction and all sorts of other things that will drastically slow the system down in the best case. Unlike something like Starlink, with GPUs you have to be ready for everyone needing to talk to everyone else at the same time while maintaining insane throughput.

If you want to send GPUs up one by one, get ready to also equip each satellite with a fixed mass of everything required to transmit and receive so much data, redundant structural/power/compute mass, individual shielding and much more. All the wasted mass you have to launch with individual satellites makes the already nonsensical pricing even worse. It just makes no sense when you can build a warehouse on the ground, fill it with shoulder-to-shoulder servers that communicate in a simple, sane and well-known way and can be repaired on the spot. What's the point?

Isn't this already a major problem for AI clusters?

I vaguely recall an article a while ago about the impact of GPU reliability: a big problem with training is that the entire cluster basically operates in lock-step, with each node needing the data its neighbors calculated during the previous step to proceed. The unfortunate side-effect is that any failure stops the entire hundred-thousand-node cluster from proceeding - as the cluster grows even the tiniest failure rate is going to absolutely ruin your uptime. I think they managed to somehow solve this, but I have absolutely no idea how they managed to do it.
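
A toy model of why that hurts, with made-up per-node failure numbers purely for illustration:

    # If every step requires all N nodes to be healthy, the chance a step
    # completes without a fault shrinks exponentially with cluster size.
    # MTBF and step time below are assumed illustrative values.
    import math

    nodes = 100_000
    mtbf_hours = 50_000.0        # assumed per-node mean time between failures
    step_seconds = 10.0          # assumed duration of one synchronous step

    p_node_ok = math.exp(-(step_seconds / 3600.0) / mtbf_hours)
    p_step_ok = p_node_ok ** nodes
    print(f"P(single step finishes cleanly) = {p_step_ok:.3f}")
    # ~0.994 here, i.e. roughly one interrupted step in every ~180, which is
    # why big training runs lean on frequent checkpointing and hot spares.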


Starlink already solved those problems; they do 200 Gbit/s via laser links between satellites.

And for data centers, the satellites wouldn't be as far apart as Starlink satellites are; they would be quite close instead.


No they didn't. 200 Gbit/s is 25 GB/s, so they could run about 1/36th of a single current-gen SXM5 socket. Not even any of the futuristic next-gen stuff; 25 GB/s is less than the bandwidth of a single x16 PCIe 4.0 slot. And that's already assuming the best-case scenario; in reality, trying to sync up GPUs like that would likely have loads of other issues. But even just the sheer amount of inter-GPU bandwidth you need is quite extreme. And this isn't some point-to-point routing like Starlink trying to get data from A to B; this is maintaining a network of interconnected systems that need to communicate chaotically and with uneven demand.
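
Spelling that arithmetic out, using the figures already quoted in this thread as rough assumptions:

    # Compare the quoted 200 Gbit/s laser link against the per-GPU fabric
    # bandwidth implied above (~900 GB/s for an SXM5-class part, assumed).
    laser_gbit_s = 200
    laser_gbyte_s = laser_gbit_s / 8            # 25 GB/s
    nvlink_gbyte_s = 900                        # figure implied in this thread

    print(laser_gbyte_s, "GB/s per laser link")
    print(f"~1/{nvlink_gbyte_s / laser_gbyte_s:.0f} of one GPU's fabric bandwidth")
    # -> 25.0 GB/s per laser link, ~1/36 of one GPU's fabric bandwidth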

Assuming you can stay out of the way of other satellites, I'd guess you'd think about density in a different way than when building on Earth. From a brief look at the ISS thermal system, it would seem the biggest challenge would be getting enough coolant and pumping equipment into orbit for a significant wattage of compute.

According to Gemini, Earth datacenters cost $7m per MW at the low end (without compute) and solar panel power plants cost $0.5-1.5m per MW, giving $7.5-8.5m per MW overall.

Starlink V2 mini satellites are around 10 kW and cost $1-1.5m to launch, for a cost of $100-150m per MW.

So if Gemini is right it seems a datacenter made of Starlinks costs 10-20x more and has a limited lifetime, i.e. it seems unprofitable right now.

In general it seems unlikely to be profitable until there is no more space for solar panels on Earth.
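
As a sanity check on that comparison, here's the same arithmetic in code, taking Gemini's figures and the launch-cost guesses above at face value:

    # Ground build-out vs. "datacenter made of Starlinks", $ per MW.
    # All inputs are the rough figures quoted in this comment, not vetted data.
    ground_dc_per_mw = 7.0e6                    # datacenter shell, low end
    ground_solar_per_mw = (0.5e6, 1.5e6)        # solar plant range
    ground_total = [ground_dc_per_mw + s for s in ground_solar_per_mw]

    sat_kw = 10.0                               # Starlink V2 mini power (assumed)
    sat_launch_cost = (1.0e6, 1.5e6)            # per-satellite launch cost (assumed)
    sats_per_mw = 1000.0 / sat_kw               # 100 satellites per MW
    space_total = [c * sats_per_mw for c in sat_launch_cost]

    print("ground:", ground_total)              # [$7.5m, $8.5m] per MW
    print("space: ", space_total)               # [$100m, $150m] per MW
    print("ratio: ", [s / g for s, g in zip(space_total, ground_total)])
    # roughly 13-18x, consistent with the 10-20x figure above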


All kinds of industries have been conserving more each decade since the energy crisis of the 1970s.

With recent developments, projected use is now skyrocketing in a way not seen since then.

Before that I thought it was calculated that if alternative energy could be sufficiently ramped up, there would be electricity too cheap to meter.

I would like to see that first.

Whoever has the attitude to successfully do "whatever it takes" to get it done would be the one I trust to do it in space after that.


His bet, then, is that the $1 million cost to get a Starlink V2 mini into orbit can be made cheaper by an order of magnitude or two.

But it is always going to be significantly more expensive than a terrestrial data center. Best-case scenario, it'll be identical to a regular data center, plus the whole "launching it into space" part. There's no getting around the fuel required to get out of the gravity well. And realistically you'll also be spending an additional fortune on things like station keeping, shielding, cooling, and communication.
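
For a sense of why launch never gets cheap in absolute terms, the Tsiolkovsky rocket equation gives the propellant fraction needed just to reach low Earth orbit; the numbers below are generic textbook-ish assumptions, not any specific launcher:

    # Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m_final).
    # Inputs are generic illustrative values.
    import math

    delta_v = 9_400.0            # m/s, typical delta-v budget to LEO incl. losses
    isp = 350.0                  # s, roughly kerolox-class specific impulse
    v_e = isp * 9.81             # effective exhaust velocity, m/s

    mass_ratio = math.exp(delta_v / v_e)         # m0 / m_final
    propellant_fraction = 1.0 - 1.0 / mass_ratio
    print(f"mass ratio ~{mass_ratio:.1f}, propellant ~{propellant_fraction:.0%} of liftoff mass")
    # ~94% of what leaves the pad is propellant, before staging or reuse margins.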

Seems pretty unnecessary given we've got reddit for that

>People's standards for when they're willing to cede control over their lives both as the passenger and the pedestrian in the situation to a machine are higher than a human.

Are they? It is now clear that Tesla FSD is much worse than a human driver, and yet there has been basically no attempt by anyone in government to stop them.


> basically no attempt by anyone in government to stop them.

No one in the _US_ government. Note that European governments and China haven't approved it in the first place.


FSD is already better than at least one class of drivers. If FSD is engaged and the driver passes out, FSD will pull over to the side of the road and stop. And before we leap to conclusions that it only helps in the case of drunk drivers who shouldn't be driving in the first place (which, they shouldn't be), random strokes and seizures happen to people all the time.

Do you have data to back this claim up, specifically with HW4 (most recent hardware) and FSD software releases?

>I can't imagine this browser being used outside of tinkering or curiosity toy - so the purpose of the research is just to see whether you can run absurd amount of agents simultaneously and produce something that somewhat works?

Yes but this is a very interesting question IMO


Surely this pushback would be directed at the CEO, not the engineer? The engineer is presumably the one telling the truth.

Either of them. If the CEO refuses to answer, you ask others. If you get a chance to talk with them, you ask them about it. Just ignoring the elephant in the room and hoping that the unclear details get forgotten helps no one except Cursor here.

Clearly we can't all agree on those or there would be no need for the restriction in the first place.

I don't even think you'd get majority support for a lot of it; try polling a population with nuclear weapons about whether they should unilaterally disarm.


>absurdly sterile and clear cut toy moral quandaries.

I don't think it's that clear cut; if you polled the population I'm sure you'd find a significant number of people who would pick 1.


I'm sure many Christians and Muslims believe that they have universal moral standards; however, no two individuals will actually agree on what those standards are, so I would dispute their universality.


> Releases by far the least useful research of all the major US and Chinese labs, minus vanity interp projects from their interns

From what I've seen, the Anthropic interp team is the most advanced in the industry. What makes you think otherwise?

