Hacker News | shdh's comments

Rust build times are not an enjoyable part of its DX


Get better at reviewing code


Guess I'm sticking with my 5950x and 128GB of DDR4 for a while longer, should have upgraded my system earlier this year like I was thinking


Is this sarcasm?


Hadn’t heard of Granian before, thinking about upgrading to 3.14 for my services and running them threaded now


They did experiment with carbon fiber if I recall correctly

Stainless steel is much more cost effective


Partially it was that stainless steel was cheaper. A bigger issue was that making large carbon fiber structures takes much longer than with steel, so it would really have eaten into their iteration time. And while the strength-to-weight savings from carbon fiber are a big deal at regular temperatures, the heating from Starship reentry erased that advantage.


And they abandoned it to try to eliminate the need for a heat shield. This plan did not pan out.

The whole point of a reusable launch system is the cost of the vehicle is amortized over many launches, so you can use expensive, high performance materials.


Can you elaborate how SpaceX is an extraction/wealth transfer powerhouse?


Heroku and Vercel have no intention of ever competing on price

They offer convenience


It’s not just convenience. This single box is a single point of failure.


Heroku is cool in that it helps you get running and autoscaled, but it would be much cheaper for anyone with traffic to just get a dedicated box


Cost decreases with time

Humans can work on a problem 8 hours a day; you can run inference 24/7


It decreases, but falling from $1 million per token to $0.9 million per token after a year, while still a decrease, is still not viable. Paying an AGI $100 billion to work 24/7 for a year is worse than hiring 10 people for $30k a year each to work shifts covering the same 24/7 work.
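In code, using the same illustrative numbers from above (not real prices):

```python
# Illustrative figures only: an AGI priced at $100B/year vs. a human shift team.
agi_cost_per_year = 100e9      # $100 billion to run the AGI 24/7 for a year
workers = 10
salary = 30_000                # $30k/year each, shifts covering 24/7

human_cost_per_year = workers * salary
print(human_cost_per_year)                       # 300000
print(agi_cost_per_year / human_cost_per_year)   # ~333333x more expensive
```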


Power use is less important than model capability

AGI requires either more scale or different systems, or both

They can always optimize for power consumption after AGI has been reached


Disagree. AI that displaces workers is worth spending anything up to that worker's salary on, and this can have a devastating impact on energy prices for everyone.

Worked example, but this is a massive oversimplification in several different ways all at once:

Global electricity supply was around 31,153 TWh in 2024. The world's economy is about $117e12/year. Any AI* that is economically useful enough to handle 33% of that, $38.6e12/year, is economically worthwhile to spend anything up to $38.6e12/year to keep running.

If you spend $38.6e12 (per year) to buy all of those 31,153 TWh of electricity (per year), the global average electricity market price is now $1.239/kWh, and a lot of people start to wonder what the point of automating everything was if nobody can afford to keep their heating/AC (delete as appropriate) switched on. Or even the fridge/freezer, for a lot of people.

* I don't care what definition you're using for AGI, this is just about "economically useful"
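The worked example as a quick sanity check (same rough figures as above):

```python
# Rough 2024 figures from the comment above; a sanity check of the $/kWh number.
world_gdp = 117e12         # $/year, approximate global economy
electricity_twh = 31_153   # TWh generated globally in 2024 (approx.)

ai_value = world_gdp * 0.33        # AI handling ~33% of the world economy
kwh = electricity_twh * 1e9        # 1 TWh = 1e9 kWh

price_per_kwh = ai_value / kwh     # implied average electricity price
print(round(price_per_kwh, 3))     # ~1.239 $/kWh
```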


> AGI is either more scale

So you plan to scale without increasing power usage. How's that?

> They can always optimize for power consumption after AGI has been reached

If you don't optimize power consumption you're going to increase surface area required to build it. There are hard physical limits having to do with signal propagation times.

You're ignoring the engineering entirely. The software is hardly the interesting part, and it's not even what's evolving.


> If you don't optimize power consumption you're going to increase surface area required to build it. There are hard physical limits having to do with signal propagation times.

While true, that probably stopped being an important constraint around the time we switched from thermionic valves to transistors as the fundamental unit of computation.

To be deliberately extreme: if we built cubic-kilometre scale compute hardware where each such structure only modelled a single cortical column from a human's brain, and then spread many of these out evenly around the full volume within Earth's geosynchronous orbital altitude until we had enough to represent a full human brain, the signal latency between them would still be on par with that of human synapses.

Synapses just aren't very fast.
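Back-of-envelope numbers for that claim (my own rough assumptions, not from the comment):

```python
# Light-speed lag across a geosynchronous-radius volume vs. slow neural
# conduction. All figures are rough assumptions for scale, not precise values.
c = 299_792            # km/s, speed of light in vacuum
geo_radius = 42_164    # km, geosynchronous orbital radius from Earth's center

light_lag = 2 * geo_radius / c   # seconds to cross the full diameter
print(round(light_lag, 3))       # ~0.281 s

# Unmyelinated cortical axons conduct at roughly 0.5 m/s; crossing a
# ~0.1 m brain at that speed takes ~0.2 s, the same order of magnitude.
brain_crossing = 0.1 / 0.5
print(brain_crossing)            # 0.2 s
```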


