Hacker News | hoss1474489's comments

The only fully functional stack currently available requires Python >= 3.8, which is the main limitation on where it will run. But there’s still a lot you can do with that!

There is a pretty compatible Rust implementation as well, which claims to target microcontrollers: https://github.com/BeechatNetworkSystemsLtd/Reticulum-rs

Did you use it? I've never seen it used outside of Beechat's own devices.

Beachball of death on “Starting Claude’s workspace” on the Cowork tab. Force quit and relaunch, and Claude reopens on the Cowork tab, again hanging with the beachball of death on “Starting Claude’s workspace”.

Deleting vm_bundles lets me open Claude Desktop and switch tabs. Then it hangs again, I delete vm_bundles again, and open it again. This time it opens on the Chat tab and I know not to click the Cowork tab...


I noticed a couple of hanging `diskutil` processes left over from the hung and killed Claude instances. Additionally, when I opened Disk Utility, it would just spin and never show the disks.

A restart fixed all of the problems including the hanging Cowork tab.


Same thing for me. It crashes. Submitted a crash report via the "Send to Apple" dialog; not sure if there is any way the team can retrieve those reports.


Restarting the machine got Cowork working for me.


some things will never change :)


Can you submit feedback and attach your logs when asked?


I haven’t found any place to do that.


Should be a feedback button (like a megaphone) next to your profile name in the bottom of the left sidebar.


I found a feedback link in a dismissible banner on the Cowork tab. Then the clock is running to fill it out and submit it before Claude crashes.


Lol


> there wont be a guarantee that they will never lose their jobs, they will continue to live on the wobbly and uncertain foundation

The people who lose their jobs prove this was always the case. No job comes with a guarantee, even ones that say or imply they do. Folks who believe their job is guaranteed to be there tomorrow are deceiving themselves.


I’ve always hated the Logitech Options bloatware. Why should the software drain my battery, phone home, and require screen-recording permission just to emulate a keystroke when I push a button on the mouse?

I enjoyed my trip to Micro Center today to finally ditch Logitech after those buttons stopped working. I put up with Options for over a decade because at least it did the one thing I needed.


I love the hardware; the software sucks. When this broke today, I just downloaded SteerMouse and updated my license. Never going back to the Logitech software.


In my frustration I didn’t even think to look if there was a third party solution.

Looks like that would do the one thing I need. And I’m finding the grass to be just as brown with Razer Synapse as it is with Logi Options.


Just install something like BetterTouchTool; you’ll get the functionality (and more) without the bloat.


To be fair, it lacks some functionality, like mapping a single modifier to a button.


Better than you’d think; worse than you’d hope.


I like this. More generally, I look for “reduces chaos.”

I’ve seen the pursuit of disambiguation used to deadlock a project. Sometimes that’s the right thing to do: the project sponsor doesn’t know what they want. But often the senior needs to document some assumptions and ship something rather than tying up the calendars of 15 people trying to nail down an exact spec. Knowing whether to step on the brake or the gas for the benefit of the team and company is a key senior trait.

This is a “yes, and” to the article: building without understanding the problem will usually increase chaos, though sometimes the least-effort way through is to build a prototype, and a senior would know when to do that and how to scope it.


GPUs in x16 slots are still important for LLM work, especially multi-GPU, where lots of data needs to move between cards during computation.


An x16 PCIe 6.0 setup has more bandwidth than any dual-channel DDR5 memory kit.
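A back-of-envelope check of that comparison, using assumed round numbers (real throughput is lower on both sides due to protocol overhead):

```python
# Rough bandwidth comparison; figures are assumptions, ignoring
# protocol/encoding overhead on both buses.

# PCIe 6.0: 64 GT/s per lane (PAM4), roughly 8 GB/s usable
# per lane in each direction.
pcie6_per_lane = 8.0                      # GB/s per direction
pcie6_x16 = 16 * pcie6_per_lane           # 128 GB/s per direction

# Dual-channel DDR5-6400: 6400 MT/s * 8 bytes/transfer * 2 channels.
ddr5_dual = 6400e6 * 8 * 2 / 1e9          # 102.4 GB/s

print(f"PCIe 6.0 x16:           {pcie6_x16:.1f} GB/s per direction")
print(f"dual-channel DDR5-6400: {ddr5_dual:.1f} GB/s")
```

By these numbers the x16 link edges out a typical dual-channel kit, though the fastest DDR5-8000 kits come close.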


Depends on what you're doing. I'm pretty sure the bandwidth needed for inference isn't much.


Depends on whether it's tensor parallel or pipeline parallel. PP doesn't pass much data between cards; TP does.
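A rough sketch of why, with illustrative numbers I'm assuming (hidden size, layer count, fp16): pipeline parallelism only ships activations across a few stage boundaries, while tensor parallelism all-reduces activations inside every layer.

```python
# Illustrative per-token traffic estimate; all figures are
# assumptions for the sketch, not measurements.
hidden = 8192        # assumed hidden dimension
bytes_per_val = 2    # fp16 activations

# Pipeline parallel: one activation tensor crosses each stage boundary.
pp_per_boundary = hidden * bytes_per_val

# Tensor parallel: roughly two all-reduces per transformer layer
# (after attention and after the MLP), each moving on the order of
# the activation size between GPUs.
tp_per_layer = 2 * hidden * bytes_per_val

layers = 80          # assumed layer count
print("PP per boundary:", pp_per_boundary, "bytes")
print("TP total:       ", layers * tp_per_layer, "bytes")
```

With a handful of pipeline boundaries versus per-layer all-reduces, TP moves orders of magnitude more data per token, which is why it benefits from full x16 links.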


Wow, 595 USD is insanely expensive for literally half a keyboard.


Effort is the algorithm. (Presentation on learning in the age of AI by the Veritasium guy) https://youtu.be/0xS68sl2D70


Explicit and obvious encoding in rules isn’t what makes something systemic.

