Tossrock's comments | Hacker News

They did ship that feature, it's called "&" / teleport from web. They also have an iOS app.

That's non-local. I am not interested in coding assistants that work on cloud-based workspaces. That's what motivated me to develop this feature for myself.

But... Claude Code is already cloud-based. It relies on the Anthropic API. Your data is all already being ingested by them. Seems like a weird boundary to draw, trusting the company's model with your data but not their convenience web UI. Being local-only (i.e. OpenCode and an open-weights model running on your own hardware) is consistent, at least.

It is not a moral stance. I just prefer to have the files for my personal projects in one place. Sure, I sync them to GitHub for backup, but I don't use GitHub for anything else in my personal projects. I am not going to use a workflow that relies on checking out my code to some VM where I have to set everything up so it has access to all the tools and dependencies that are already there on my machine. It's slower and clunkier. IMO you can't beat the convenience of working on your local files. During the brief period when my CC mirror worked, when I came back to my laptop all my changes were just already there: no commits, no pulls, no sync, nothing.

Ah okay, that makes sense. Sorry they pulled the plug on you!

Don't forget gyms and other physical-space subscriptions. It's right up there with razor-and-blades as a bog-standard business model. Imagine if you got a gym membership and then were surprised when they cancelled your account for reselling gym access to your friends.

It's not a system prompt, it's a tool used during the training process to guide RL. You can read about it in their constitutional AI paper.


Moreover the Claude (Opus 4.5) persona knows this document but believes it does not! It's a very interesting phenomenon. https://www.lesswrong.com/posts/vpNG99GhbBoLov9og


Windows has WSL for native Linux VMs these days (and has for the past ~decade).


I can rm -rf Windows files from WSL2. And so can LLMs.

Meanwhile a VM isolates by default.


You can turn off all the interop and mounting of the Windows FS with ease. I run Claude in yolo mode using this exact setup. Just roll out a new WSL env for each Claude I want yoloing and away it goes. I suppose we could try to theorize how this is still dangerous, but it's getting into extremely silly territory.
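A minimal sketch of what that can look like, using the documented /etc/wsl.conf keys inside the distro (the exact setup described above may differ):

    [automount]
    # don't mount the Windows drives (C:, D:, ...) under /mnt
    enabled = false

    [interop]
    # don't allow launching Windows executables from inside Linux
    enabled = false
    # don't append the Windows PATH to the Linux $PATH
    appendWindowsPath = false

The distro needs to be restarted (e.g. wsl --terminate <distro>) for the changes to take effect.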


That's great to know! And important to clarify because by default WSL has access to all disks.


> but when poking around CAD systems a few months back we realized there's no way to go from a text prompt input to a modeling output in any of the major CAD systems.

This is exactly what SGS-1 is, and it's better than this approach because it's actually a model trained to generate Breps, not just asking an LLM to write code to do it.


Do you have a per-player budget for cloud usage? What happens if people really like the game and play it so much it starts getting expensive to keep running? I guess at $0.79/Mtok, Llama 70B is pretty affordable, but a per-player opex seems hard to handle without a subscription model.
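As a rough, purely hypothetical illustration (the 200k tokens per session is my guess, not a figure from the game):

    200,000 tokens × $0.79 / 1,000,000 tokens ≈ $0.16 per session

Small per session, but it compounds for players who put in hundreds of hours on a one-time purchase.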


Our initial plan was to simply ask enough for the game that the price would cover the costs on average... but that means that we're basically encouraged to have people play the game as little as possible? We're looking into some kind of subscription now, it sounds weird but I do think it's a better incentive in this case. Plus we can actually ask for less upfront.


It's true that a lot of established ML techniques were first popularized to fight spam (e.g. Bayesian filtering), but it might also be the case that they're not applying the full might of, say, Gemini-3-Pro to every email received. I suspect Gemini-3-Pro would do an effectively perfect job of determining whether something is phishing, with negligible values in the false quadrants of the confusion matrix, but it's probably too expensive to use that way. Which is why things like this can still slip through.
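For reference, the "false quadrants" are the off-diagonal cells of the 2x2 confusion matrix for a phishing classifier:

                           predicted: phishing        predicted: legitimate
    actually phishing      true positive              false negative (slips through)
    actually legitimate    false positive (blocked)   true negative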


So where do I collect my prize for this 2015 comment? https://news.ycombinator.com/item?id=9882217


Never call a man happy until he is dead. Also I don’t think your argument generalizes well - there are plenty of private research investment bubbles that have popped and not reached their original peaks (e.g. VR).


It wasn't a generalized argument, though, it was a specific one, about AI.


Okay, but the only part that’s specific to AI (that the companies investing the money are capturing more value than they’re putting into it) is now false. Even the hyperscalers are not capturing nearly the value they’re investing, though they’re not using debt to finance it. OpenAI and Anthropic are of course blowing through cash like it’s going out of style, and if investor interest drops drastically they’ll likely need to look to get acquired.


Here is one sentence from the referenced prediction:

> I don't think there will be any more AI winters.

This isn't enough to qualify as a testable prediction, in the eyes of people who care about such things, because there is no good way to formulate a resolution criterion for a claim that extends indefinitely into the future. See [1] for a great introduction.

[1]: https://www.astralcodexten.com/p/prediction-market-faq


There's an awful lot of typos in the little Veo/Sora type clip. "Shar", "Evolue", "Transilate", etc.


That's a hydroelectric dam, not a water wheel. A water wheel captures mechanical energy directly, for e.g. grain milling, rather than converting it to electricity via a turbine.


Yeah, I'm talking about medieval or early modern water-wheel-powered mills for milling flour, not hydropower dams. :)

https://en.wikipedia.org/wiki/Watermill

