Hacker News | sireat's comments

After RAM, SSDs, GPUs, and now HDDs, what else is left to sell out? Power supplies? Fans?

In a way this feels a bit absurd for these AI centers to hog HDDs.

As pointed out by others, neither training nor inference requires HDDs, and storing raw data should not require that many.

So my hypothesis is that it is a double whammy: overall declining consumer-side HDD demand, leaving data centers as the main source of demand, plus additional demand from the new AI centers.

I feel like the AI centers are just buying HDDs because why not throw an HDD in each server blade even if there is no need? The money is there to be spent, and it must be spent.

As someone who has been building computers since 1989, it feels like the end of casual hobby PC building.

I will end with an imperfect analogy to multiplayer gaming. It is quite common in multiplayer games for higher-level players to want some tradeskill they neglected to acquire earlier: maybe a new quest appears, or a new "must have" item requires that skill.

They (past me included) have too much game money and no desire to grind for tradeskill items slowly. So the "rich" will overpay by 2x, 10x, or even 100x the usual price.

That is the free market at work, right?

In the process, the whole low-level economy is destroyed by second-order effects, meaning a new player starting out can only be a farmer.

So if a student comes to me wishing to start building computers, what advice do I give them? Farm something?


> As someone who has been building computers since 1989, it feels like the end of casual hobby PC building.

We have a long way to go before the average PC costs even half as much as it did in 1989 (adjusted for inflation). And of course the performance for typical consumer use is orders of magnitude better than it was back then.


My parents love to tell me how, in either late 1997 or early 1998, they bought the first PC for our family: a Compaq with a Pentium 3, a 12 GB hard drive, 128 MB of RAM, and no graphics acceleration at all beyond whatever was integrated. It cost $2000 back then, so probably almost $4000 today. My high-end 4090 rig cost a bit more than that to build, for comparison, and that machine is better than 98% of machines out there today.

> So if a student comes to me wishing to start building computers, what advice do I give them? Farm something?

Buy used stuff? 99.9% of consumers have no need for anything near the cutting edge. I do far more than most people and get by just fine with a workstation I bought used in 2014. My newest laptop is from ~2018, and that was only because I wanted something with a 4K display that I could flip into a tablet.

Raspberry Pis, SoCs, microcontrollers: there are a million awesome things today. Do hobbyist students need to build datacenters!?


Most of the computers I buy are refurbished or used models, and I've never had a bad experience. Especially now, when computers are not getting much better and prices are increasing faster than performance.



Gas turbines

This is very cool, and supporting stalemate is nice. However, how much space would it take to implement the full ruleset?

As you write, not implemented: castling, en passant, promotion, repetition, and the 50-move rule. Those are all required for the game being played to count as modern chess.

I could see an argument for skipping repetition and the 50-move rule in tiny engines, but you do need castling, en passant, and promotion for pretty much any serious play.

https://en.wikipedia.org/wiki/Video_Chess fit in 4 KB and supported a fuller ruleset in 1980, did it not?

So I would ask: what is the smallest fully UCI-compliant (https://www.chessprogramming.org/UCI) engine currently available?

This would be a fun goal to beat: make something tiny that supports the full ruleset.
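For scale, the UCI plumbing itself costs very little. Here is a minimal sketch of the command loop (the engine name and the fixed reply move are placeholders, not a real engine; a real one would parse `position` and search):

```python
import sys

def uci_loop(inp=sys.stdin, out=sys.stdout):
    """Minimal UCI handshake: answers uci/isready/go, ignores the rest."""
    for line in inp:
        cmd = line.strip()
        if cmd == "uci":
            print("id name TinySketch", file=out)
            print("uciok", file=out)
        elif cmd == "isready":
            print("readyok", file=out)
        elif cmd.startswith("position"):
            pass  # a real engine would parse the FEN / move list here
        elif cmd.startswith("go"):
            print("bestmove e2e4", file=out)  # placeholder "search" result
        elif cmd == "quit":
            break
```

The full ruleset (castling, en passant, promotion, repetition, 50-move) all lives in the move generator behind `position`/`go`; the protocol layer stays this small.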

PS: my first chess computer in the early 1980s was this: https://www.ismenio.com/chess_fidelity_cc3.html - it also supported castling and en passant; not sure about the 50-move rule.


ToledoChess [0] has a few implementations of this in different languages. Some highlights:

2 KB of JavaScript with castling, en passant, promotion, search, and even a GUI

326 bytes of assembly, without the special rules

I don't think the author has a UCI-compliant one, but adding that should be easier than the GUI was. There are forks of the JS one that might do it.

[0] https://nanochess.org/chess6.html


What about SSH requires GUI?

I mean, I usually SSH to my Hetzner Ubuntu fun box from PowerShell or PuTTY, but sometimes I SSH from a Debian server without any GUI.


> I SSH to my Hetzner Ubuntu fun box

How did you provision your Hetzner Ubuntu fun box in the first place? That's the part that usually needs a GUI.


Interesting information, but these are not hard numbers.

Surely the figure of 141 bytes for a 100-char string is not generally correct, as it would only apply to ASCII 100-char strings.

It would be more useful to know the overhead for Unicode strings, presumably UTF-8 encoded. I would presume a 100-emoji string would take 441 bytes (just a hypothesis) and a 100-umlaut-char string would take 241 bytes.
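For one concrete data point (this measures CPython's in-memory representation, which may not match whatever the original article is counting), `sys.getsizeof` shows the same encoding-dependent scaling the hypothesis predicts: ASCII and Latin-1 chars cost 1 byte each internally, astral-plane emoji cost 4, while the UTF-8 payloads are 1, 2, and 4 bytes per char respectively:

```python
import sys

ascii_s  = "a" * 100   # 1 byte/char internally (compact ASCII)
umlaut_s = "ä" * 100   # still 1 byte/char internally (Latin-1 range)
emoji_s  = "😀" * 100   # 4 bytes/char internally (astral plane)

for s in (ascii_s, umlaut_s, emoji_s):
    print(repr(s[0]),
          "in-memory:", sys.getsizeof(s),
          "utf-8 payload:", len(s.encode("utf-8")))
```

The fixed header differs between these string kinds (and between Python versions), but the per-character cost is stable, so comparing a 1-char and a 100-char string of the same kind isolates it cleanly.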


Very simple: look at who has a stake in Groq currently:

https://www.cnbc.com/2025/12/24/nvidia-buying-ai-chip-startu...

"Davis, whose firm has invested more than half a billion dollars in Groq since the company was founded in 2016, said the deal came together quickly. Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco , Altimeter and 1789 Capital, where Donald Trump Jr. is a partner."

POP QUIZ - Which minority partner is the key here?


What is the current state-of-the-art workflow for working with legacy code across multiple languages?

This would be a 100 kLOC legacy project written in C++, Python, and jQuery-era JavaScript, circa 2010. The original devs have long since left. I would rather avoid the C++ as much as possible.

I've been a GitHub Copilot (in VS Code) user since June 2021 and still use it heavily, but the "more powerful IntelliSense" approach is limiting me on legacy projects.

Presumably I need to provide more context on larger projects.

I can get pretty far with just ChatGPT Plus and feeding it bits and pieces of the project. However, that seems like the wrong tool.

Codex seems better for building things, but I'm not sure about grokking existing things.

Would Cursor be more suitable for just dumping in the whole project (all languages, basically four different sub-projects) and then selectively activating what to include in queries?


I don't understand; the agent mode of Copilot will search for files and is pretty good at filling its own context, AFAIK. I never really feed any of our 100k+ line legacy codebase explicitly to the LLM.


How about picture generation on DALL-E, though?

It is so infuriating to get a content block on ChatGPT for pretty much any fairy tale that has had a Disney adaptation.

Try getting a 19th-century Grimm's Snow White illustration. You cannot, because the Disney crap supersedes it.

In fact, you cannot get a Snow White illustration of any kind on ChatGPT.

I cannot figure out any prompt that would draw from public-domain knowledge.

Same goes for a pirate fighting a flying boy: no good.

The new one this week was when I tried to draw a border around my daughter's picture of Poppy from Trolls (that's DreamWorks, but same problem).

The actual copyrighted Poppy appeared in the border halfway down the generation, and then of course the content block appeared.

What is hilarious, though, is that ChatGPT will profusely apologize and then provide extremely detailed instructions for setting up local Stable Diffusion as an alternative...


As I recall, Holmes did in fact do a lot of walking. He vacillated between periods of inactivity (cocaine, violin, shooting a V into the wall with a revolver) and intense activity (taking up disguises and doing various physical things, including walking all across London and elsewhere).

Just because your logical mind says something is good to do, and you know you should do it, you are not always going to obey your rider; the inertia of the elephant takes over.

So you need a trigger to snap out of it; for Holmes, it was a new case.


> and intense activity

AFAIR those had a specific purpose (chasing a perp, tracking down evidence, etc.). Most of his thinking he did sitting in a chair, smoking his pipe for hours on end (sometimes the whole night).


No, they are not. He plays the violin and shoots a gun inside his house for fun.


Indeed, regular Jupyter works so well in VS Code for solo work these days that there is no real need for a new entrant.

So what pain point are these new entrants trying to solve?

Sure, there is the issue of .ipynb basically being gnarly JSON ill-suited to git, but it is rare that I need to track down a particular commit. Even then, the JSON is not that hard to read.

Also, I'd like an easier way to copy cells across different Jupyter notebooks, but at the end of the day it is just Python and Markdown, not very hard to grok.
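To the "it's just JSON" point: pulling cells out of an .ipynb needs nothing beyond the stdlib. A sketch (the notebook content here is made up, and real files carry extra metadata fields that are elided):

```python
import json

# A minimal .ipynb payload; real notebooks have the same basic shape.
raw = json.dumps({
    "nbformat": 4,
    "cells": [
        {"cell_type": "markdown", "source": ["# Notes\n"]},
        {"cell_type": "code", "source": ["print('hi')\n"], "outputs": []},
    ],
})

nb = json.loads(raw)
# Extract just the code cells, e.g. to copy them into another notebook.
code = ["".join(c["source"]) for c in nb["cells"] if c["cell_type"] == "code"]
print(code)
```

Copying a cell between notebooks is then just appending one of these dicts to the other file's "cells" list and writing the JSON back out.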


OpenAI has ridiculous guardrails for illustrations of any public-domain subject that has been adapted by Disney or any other major corporation.

So by that benchmark Japanese companies have a case.

Try generating a 19th-century-style illustration of Snow White. You can't, at least not on the OpenAI platform.

Try generating a picture "of a flying boy fighting a pirate on a ship".

