Hacker News | azuanrb's comments

On the contrary, that actually is pretty cool. The z.ai subscription is cheap enough that I'm thinking of running it 24/7 too. Curious if you've tried any other AI orchestration tools like Gas Town? What made you decide to build your own, and how is it working for you so far?

I didn't know about Gas Town! Super cool! I will try it once I have a chance. I started with a few dumb tmux-based scripts and eventually figured I'd make it into a proper package.

I think using GitHub with issues and PRs, and especially leveraging AI code reviewers like Greptile, is actually the way to go. I made an attempt here: https://github.com/mohsen1/claude-orchestrator-action but I think it needs a lot more attention to get it right. The ideas in Gas Town are great and I might steal some of them. Running Claude Code in GitHub Actions with GLM 4.7 works great.

Microsoft's new Agent SDK is also interesting. It unlocks multi-provider workflows, so users can burn through all of their subscriptions or quickly switch providers.

Also, I'd be super interested in collaborating with someone to build something together, if you're up for it!


Codex has auth.json. Claude uses credentials.json on Linux and the Keychain on macOS. Because of that, I prefer to just use a long-lived token for Claude.

I have my own Docker image for a similar purpose, covering multiple agent providers. Works great so far.


Not necessarily. I use the ~$20 Claude/ChatGPT plans, which give you access to the CLI tools, Claude Code and Codex. With the web interface, they might hallucinate because they can't verify anything. With the CLI, they can test their own code and keep iterating on it. That's one of the main differences.

Unfortunately, local models are not good enough yet. For serious work, you'll need Claude/Gemini/OpenAI models. Pretty huge difference.

I just learned that you can run `claude setup-token` to generate a long-lived token. Then you can set it via the `CLAUDE_CODE_OAUTH_TOKEN` environment variable and reuse it. Pretty useful when I'm running it in an isolated environment.

Yes! Just don't forget to `RUN echo '{"hasCompletedOnboarding": true}' > /home/user/.claude.json`, otherwise Claude will ask how to authenticate on startup, ignoring the OAuth token.
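
Putting those two tips together, roughly (a sketch only): `my-agent-image` and the /home/user path are placeholders for whatever your Dockerfile uses, and the token is assumed to already be exported in the host shell.

    # Generate a long-lived token once, interactively, on a trusted machine.
    claude setup-token

    # Pass it into the container and pre-mark onboarding as done so the CLI
    # doesn't prompt for auth on startup.
    docker run -it \
      -e CLAUDE_CODE_OAUTH_TOKEN="$CLAUDE_CODE_OAUTH_TOKEN" \
      my-agent-image \
      bash -c 'echo "{\"hasCompletedOnboarding\": true}" > /home/user/.claude.json && claude'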

One recent example: for some reason, Claude has recently preferred to write scripts in the root /tmp folder. I don't like this behavior at all. It's nothing destructive, but it should be out of scope by default. I've noticed they keep adding more safeguards, which is great, e.g. asking for permissions, but it seems to be case by case.

If you're not using .claude/instructions.md yet, I highly recommend it; for moments like this, you can tell it where to shove scripts. The tricky part with the instructions file is that Claude only reads it at the start of a new prompt, so any time you update it, or Claude "forgets" instructions, ask it to re-read the file. That usually does the trick for me.
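
For reference, the kind of line I mean (wording and path are just an illustration, adjust to your repo layout):

    Write any throwaway or debug scripts to ./scratch/ inside the repo, never to the system /tmp, and clean them up when you're done.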

Claude, I noticed you rm -rf'd my entire system. Your .instructions.md file specifically prohibits this. Please re-read your .instructions.md file and comply with it for all further work.

IMHO, a combination of trash-cli and a smarter shell wrapper that refuses to delete critical paths would do it; a rough sketch follows the link below.

https://github.com/andreafrancia/trash-cli
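
Something like this, as a bash sketch only; the protected list is illustrative, the flag handling is deliberately naive, and trash-put is the command trash-cli provides:

    # Shadow rm in interactive shells: refuse a few critical paths and
    # soft-delete everything else via trash-cli instead of unlinking.
    rm() {
      local protected=( / /etc /usr /var /home "$HOME" )
      local arg resolved p
      for arg in "$@"; do
        case "$arg" in -*) continue ;; esac            # skip flags like -rf
        resolved=$(realpath -- "$arg" 2>/dev/null || printf '%s' "$arg")
        for p in "${protected[@]}"; do
          [ "$resolved" = "$p" ] && { echo "rm: refusing to delete $arg" >&2; return 1; }
        done
      done
      trash-put "$@"
    }

Of course this only covers the interactive shell; anything that calls /bin/rm or unlink() directly goes straight around it.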


Note that there are reports that it can disable the sandbox, so personally I wouldn't trust this.

> Any tool that used it would get blocked.

Isn't that misleading on Anthropic's side? The gist shows that only certain tools are blocked, not all. They're selectively enforcing their ToS.


The gist shows that the first line of the system prompt must be "You are Claude Code, Anthropic's official CLI for Claude."

That's a reasonable attempt to enforce the ToS. For OpenCode, they take the additional step of also blocking a second line, "You are OpenCode."

There might be more thorough ways to effect a block (e.g. requiring signed system prompts), but Anthropic is clearly making its preferences known here.


They can enforce their ToS however they like. It's their product and platform.

But we're against that, right? Or do we want a world where other companies' ToS also forbid open source software use if you use their product? After all, "it's their product", so if they want to say that you aren't allowed to use open source software, "they can enforce their ToS however they like". Or is it only Anthropic where we are OK with them forbidding open source software use with their product?

What we want is a world where there are enough options out there that if one doesn't like the ToS or even the name of an option, then it's trivial to select another option. No need for anyone to constrain anyone else.


What do you mean by "not all"? They aren't obligated to block every tool or project trying to use the private API, all the way down to a lone coder making their own closed-source tool. That's just not feasible. Or do you have a way to do that?

> The gist shows that only certain tools are blocked, not all.

Are those other phrases actually used by any tools? I thought they were just putting phrases into the LLM arbitrarily. If misuse of the endpoint is detected at scale, they probably add more triggers for that abuse.

Expecting it to magically block different phrases is kind of silly.

> They're selectively enforcing their ToS.

Do you have anything to support that? Not a gist of someone putting arbitrary text into the API, but links to another large scale tool that gets away with using the private API?

Seems pretty obvious that they’re just adding triggers for known abusers as they come up.


Sharing my experience. I experimented with SolidQueue for my side project. My conclusion for production usage was:

- No reason to switch to SolidQueue or GoodJob if you have no issue with Sidekiq. Only do it if you want to remove the Redis infra; no other big benefit, imo.
- For new projects, I might be more biased towards GoodJob. It's more mature, has a great community, and has more features.
- One thing I don't like about SolidQueue is the lack of a solid UI. Compared to GoodJob or Sidekiq, it's pretty basic. When I tried it last time, the main page would hang due to unoptimized indexes. That only happens once your data reaches a certain threshold, and it might have been fixed by now.

Another consideration when using an RDBMS instead of Redis is that you may need to allocate a proper connection pool now. It depends on your database setup. It's nothing big, but it's one additional "cost" that you never really had to consider when you were using Redis.


Take webhooks, for example:

- Persist payload in db > Queue with id > Process via worker.

Pushing the payload directly to the queue can be tricky. Any queue system will usually have limits on payload size, for good reasons. Plus, if you've already committed it to the DB, you can guarantee the data is not lost and can be processed again however you want later. But if your queue is having issues, or the enqueue fails, you might lose it forever.
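
Just to show the shape of that flow on the command line: a sketch where the SQLite file, table, and Redis list names are all made up, and the raw request body is assumed to already be in payload.json. In a Rails app this would be an ActiveRecord model plus a Sidekiq/SolidQueue job doing the same thing.

    # 1. Persist the raw payload first. readfile() is a helper built into the
    #    sqlite3 shell, so the body goes in without any quoting games.
    sqlite3 app.db "CREATE TABLE IF NOT EXISTS webhook_payloads (
      id INTEGER PRIMARY KEY, body BLOB, received_at TEXT DEFAULT CURRENT_TIMESTAMP);"
    id=$(sqlite3 app.db "INSERT INTO webhook_payloads (body) VALUES (readfile('payload.json'));
      SELECT last_insert_rowid();")

    # 2. Enqueue only the id; the queue message stays tiny, well under any size limit.
    redis-cli LPUSH webhook_jobs "$id"

    # 3. A worker later pops the id and re-reads the full body:
    #    SELECT body FROM webhook_payloads WHERE id = ?;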


> Pushing the payload directly to the queue can be tricky. Any queue system will usually have limits on payload size, for good reasons.

Is that how microservice messages work? Do they push the whole payload so the other systems can consume it and take it from there?


A microservice architecture would probably use a message bus, because it would also need to broadcast the result.

yes and no; as the sibling comment mentions, sometimes a message bus is used (Kafka, for example), but Netflix is (was?) all-in on HTTP (low-latency gRPC, HTTP/3, wrapped in nice type-safe SDK packages)

but ideally you don't break the glass and reach for a microservices architecture if you don't need the scalability afforded by very deep decoupling

which means ideally you have separate databases (and DB schemas, and likely even different kinds of data stores), and through the magic of having minimally overlapping "bounded contexts" you don't need a lot of data to be sent over (the client SDK will pick what it needs, for example)

... of course serving a content recommendation request for a Netflix user (which results in a cascade of requests to various microservices, e.g. profile, rights management data, CDN availability, plus metadata for the results, image URLs, etc.) doesn't need durability, so no Kafka (or other message bus), but when the user changes their profile it might be something that gets "broadcast"

(and durable "replayable" queues help, because then services can be put into read-only mode to serve traffic while new instances are starting up, and they will catch up. and of course it's useful for debugging too, at least compared to HTTP logs, which usually don't have the body/payload logged.)

