I wonder how far I could go with a bare-bones agent prompted to take advantage of this, with Sonnet and the Bash tool only, so that it always tries to use the tool just to run `python -c …`
I can't use Google Meet on Firefox/Zen. I tried every setting combination I could find, but the video call quality is still not comparable to Chromium-based browsers, so at work I reluctantly switched to Vivaldi.
Have you tried switching your User Agent (with or without a helper extension) to Chrome/Chromium/Edge to see if that makes a difference? I have heard that some G sites that are clunky or broken seem to work better under Firefox when it identifies itself as Chrome.
I think it's widely speculated that Google sabotages how their own products work in Firefox. I don't know if there is actual evidence to support that though.
I'm in a Zoom shop now, but when I was in a Meet/Hangouts shop, I used Chrome for that, and Firefox for everything else. If you're on a Mac, the utility Choosy can send links to appropriate browsers based on patterns.
I kind of hate the implications of it, but if HN (or someone else) wanted to add value, they could show one-line sentiment analyses of the comments on HN articles so you can decide what's what without even clicking.
No. You are giving textual instructions to Claude in the hope that it correctly generates a shell command for you, versus giving it a tool definition with a clearly defined parameter schema. Your MCP server is, presumably, enforcing adherence to those parameters BEFORE anything hits your shell. You would be helping Claude in this case, as you're giving it a clearer set of constraints to operate under.
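To make the point concrete, here is a minimal sketch of what "a clearly defined schema for parameters" buys you. The tool name and schema are hypothetical (not from any real MCP server), and a real server would use a JSON Schema library rather than this hand-rolled check; the idea is just that arguments get validated before anything touches the shell.

```python
# Hypothetical tool definition in the MCP style: the server advertises a
# JSON Schema for the tool's parameters.
TOOL_DEF = {
    "name": "list_dir",                      # hypothetical tool name
    "inputSchema": {                         # MCP-style JSON Schema
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
        "additionalProperties": False,
    },
}

def validate(args) -> bool:
    """Hand-rolled check for this one schema (a real server would use a
    proper JSON Schema validator). Runs BEFORE the shell is ever invoked."""
    schema = TOOL_DEF["inputSchema"]
    if not isinstance(args, dict):
        return False
    if set(args) - set(schema["properties"]):       # no unknown params
        return False
    if any(k not in args for k in schema["required"]):
        return False
    return isinstance(args.get("path"), str)        # type check

print(validate({"path": "/tmp"}))        # True  -> safe to dispatch
print(validate({"path": 1, "x": "y"}))   # False -> rejected before the shell
```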
Well, with MCP you’re giving textual instructions to Claude in hopes that it correctly generates a tool call for you. It’s not like tool calls have access to some secret deterministic mode of the LLM; it’s still just text.
To an LLM there’s not much difference between the list of sample commands above and the list of tool commands it would get from an MCP server. JSON and GNU-style args are very similar in structure. And presumably the command is enforcing constraints even better than the MCP server would.
Not strictly true. The LLM provider should be running constrained token selection based on the JSON schema of the tool call. That alone makes a massive difference, as you're already discarding invalid tokens during the completion at a low level. Now, if they had a BNF grammar for each CLI tool and enforced token selection based on that, you'd be much better off than with unconstrained token selection.
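A toy sketch of what "constrained token selection" means (an illustration of the technique, not how any provider actually implements it): the "grammar" here is just a hard-coded set of valid complete tool calls, and at each decoding step any candidate token that cannot extend the partial output into a valid string is masked out before sampling.

```python
# Toy grammar: the only two tool calls we consider valid.
VALID_CALLS = ['{"cmd": "ls"}', '{"cmd": "pwd"}']

# Toy vocabulary of multi-character "tokens" the model could emit.
VOCAB = ['{"cmd": ', '"ls"}', '"pwd"}', '"rm -rf /"}', 'hello']

def allowed_tokens(prefix: str) -> list:
    """Tokens that keep `prefix` a valid prefix of at least one legal call.
    Everything else is discarded, i.e. its logit is masked to -inf."""
    return [t for t in VOCAB
            if any(v.startswith(prefix + t) for v in VALID_CALLS)]

# Greedy decoding loop: a real decoder samples from the model's logits,
# but only over the allowed (unmasked) tokens at each step.
out = ""
while out not in VALID_CALLS:
    choices = allowed_tokens(out)
    out += choices[0]   # stand-in for sampling by masked logits

print(out)  # always one of VALID_CALLS; '"rm -rf /"}' can never be chosen
```

Note that the dangerous token is never even sampleable once the prefix is fixed, which is the "massive difference" over hoping the model formats things correctly on its own.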
Yeah, that's why I said "not much" difference. I don't think it's much, because LLMs do quite well generating JSON without turning on constrained output mode, and I can't remember them ever messing up a bash command line unless the quoting got weird.
Either way, it's text instructions used to call a function (via a JSON object for MCP, or a shell command for scripts). What works better depends on how the model you're using was post-trained and where in the prompt that info gets injected.
I'm trying nix instead of Homebrew on my mac. It worked great until I decided to give rust a shot. I think my solution is to just do rust development on my Arch machine and stick with nix. That said, if I run into additional issues, I will probably just go back to Homebrew.
off the top of my head: various hacks to make apps available to Spotlight; packages/apps lagging behind their Homebrew equivalents, to the point where I use nix to orchestrate brew for too many things; starting envs and build switch are too slow for my taste despite caching etc.; nix the language is unfriendly and hard to debug; the stack traces are useless; etc.