Hacker News | prezk's comments

Maybe LLMs are like a next evolution of a rubber ducky: you can talk to it, and it's very helpful, just don't expect that IT will give you the final answer.


Linux will run on most platforms, so just pick a fast, lightweight laptop, select a conservative power profile for longer battery life and less heat, and don't run 32-thread machine learning jobs on it.

A 12-hour laptop battery life is a bit of a red herring: yes, you can get it on efficient ultrabooks and MacBooks, with light use like web browsing or office work, at low brightness and with minimal background apps. This is true on macOS, Windows and Linux. The first two may be better at handling low-power modes on hardware peripherals, but OTOH on Linux I have better control over background tasks.

I have an absolute trash travel laptop from last decade, running Fedora Linux, and it lasts for multiple days if I keep it mostly closed and just open it for whatever browsing/editing I need on the road.


And how many laptops running Linux are light, power efficient, fast, quiet with good battery life?

My 16 inch M3 MacBook Pro runs for 5 hours at 80% brightness doing development with my USB-powered (video and power over one USB cord) portable monitor. The Mac battery is powering the monitor.

https://a.co/d/gHqpcs3


Pretty much every laptop on the planet will run Linux. Maybe your optics are tinted because you seem to be a Mac person, and Linux support for newer Macs has known issues with low power modes.

I note how your 12+ hour claim dropped to 5 hours when you actually put the machine to real work. It's still impressive, of course, but 5 hours isn't out of reach for Ryzen laptops either.

BTW, I have a RISC-V platform with eight 1.6 GHz cores that uses under 5W at full load; on your 100Wh battery it would last for 20 hours. It's not a complete system, and performance lags behind Apple/Intel CPUs, but I think in a few years RISC-V may take a bite out of both.


It’s not “running Linux” that’s the issue. It’s running Linux and getting good battery life.

And a 1.6 GHz RISC-V CPU isn’t exactly “fast” in 2026 or even 2021.

You noted that it was 5 hours when powering a second monitor from its USB port. Not just carrying video over the USB port; the monitor is drawing its power from it too.

How long do you think your 5-hour laptop would last powering an external display - again, not just video out, but also supplying power?


"Pretty much every laptop on the planet will run Linux."

Well, as long as you buy a Mac laptop that's at least 3 years old, you'll be mostly good. Unfortunately, Apple isn't interested in helping Linux, so everything has to be painfully reverse engineered, and some things on the M1 are still broken.


I personally don't care about battery life; there are power outlets everywhere I'll be for more than several hours.

Still, no one is getting that kind of battery life outside Apple; that's just the way it is. If your existence revolves around battery life, there's no substitute.

But note, this thread is about replacing Windows, and Wintel does not do as well as Apple either. So this thread is off-topic.


You could run on 5V with a boost voltage converter to 12V. For extra credit, you could run the USB-PD off 5V, negotiate 12V and only then switch it to the load.


If I need 12V/1A, then I need 5V/2.4A even at 100% conversion efficiency. Without negotiating anything, a device shouldn't draw more than 5V/0.5A.

That's not to say that a boost converter doesn't have value, but it still leaves a gap where there could be confusion.

The confusion or complexity even multiplies if the device has additional USB-C for data transfer. In that case, you either have to mark one port as being the "power in" port, or you have to support power in and data out on all the ports, which gets complicated and expensive.

It would be a great move by the USB-IF to think this sort of thing through more carefully. Right now the USB-C connector is so overloaded in terms of power, display modes, Thunderbolt, speeds, etc. that it's very hard to predict whether two USB-C devices will connect, and at what power or speed and with what capabilities. For power, it would be helpful to require supplies to have a standardized status LED, so that e.g. green means the supply is providing the highest power the device allows (not the supply), yellow means there's been a compromise, and red suggests an error condition.


If you need 12V/1A, starting up and showing an error message at 12V/0.2A sounds quite feasible. Of course it depends on what's using up all your power. But at least microprocessors can usually be started at lower power levels (lower frequency) with a switch to high frequency once you've confirmed you have the power available. Display backlight can be dim until you have the power, and peripherals can be powered through a transistor so you can start delivering power after initial system checks.

But it's a bit more involved than just replacing a barrel jack with a USB-C port, and would require some design consideration early on.


Well the question is how many watts you need to display an error message. You made it sound like voltage was the main issue.

And yeah, you're supposed to negotiate before pulling 2.4 amps at 5V, but that's not usually a big deal in practice. Especially when you're actually supposed to stick to 100mA at first, but who does that.

A diagnostic LED sounds nice, but given that most cables don't even have their speed printed on them, good luck with anything more involved.

I will say that Thunderbolt support isn't often an issue beyond the basic speed rating, and should be even less of one since USB4. And power ratings are pretty simple: 60W or more. I really don't think the overloading with many different types of features is a big deal; I think the single feature of unknown speed is the big issue-causer.


Normally a wafer would have die-sized spaces for test structures used for optical, electrical, chemical and other tests. Think of the TV test card: https://en.wikipedia.org/wiki/Test_card


Apparently all Western Digital drives have a RISC-V controller, as do NVIDIA graphics cards: https://riscv.org/wp-content/uploads/2016/07/Tue1100_Nvidia_...

WCH makes a microcontroller that sells for around 10 cents; it's cheaper than a 7400 quad gate, so it's bound to end up in a ton of things. It occurred to me that they are like electric motors: unglamorous but ubiquitous (there are several dozen electric motors, mostly small ones, in the room I am sitting in right now).


Yeah. To get that 12.4c CH32V003 price (for an 8-pin package) you need to spend $6.21 on 50 of them. If you want to replace a 7400 quad 2-input NAND, you'll need the 16-pin package, which is 16.3c each in lots of 50 ($8.30 in total).

On Digikey the cheapest SOIC-14 7400s I can find are 20c each, but you have to buy 1480 of them to get that price. If you want just a few, they're $1.60 each, and if you want DIP-14, they're $2.

The propagation delay of a microcontroller implementing a quad NAND gate will of course be a lot higher than the 7400's 14ns. At a wild guess I'd say 200ns or more; could be 1us, but I don't think more than that. That's still fine for many uses.

For those who don't know, a CH32V003 is a 32-bit RISC-V CPU implementing the RV32EC instruction set (basic integer instructions, 16 registers, and 2-byte encodings of the most common operations alongside the standard 4-byte instructions, saving 25%-30% of program space). It has 2048 bytes of RAM and 16k of flash memory to hold your program. A program to emulate a 7400 would use none of the RAM and maybe 100 bytes of the flash (most of it init code, run once at power-on).


It is just two years younger than the University of Constantinople, established in AD 425 https://en.wikipedia.org/wiki/University_of_Constantinople

