
The CrowdStrike incident taught us that no one is going to review any dependency whatsoever.

Yep, that's what late-stage capitalism leaves you with: consolidation, abuse, helplessness, and, as a result, complacency and widespread incompetence.

I wonder how far I could get with a barebones agent prompted to take advantage of this, using Sonnet and the Bash tool only, so that it always tries to use the tool to run nothing but `python -c …`
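
Roughly what I have in mind, as a throwaway sketch: the Anthropic Python SDK with a single bash-style tool, where the model name, tool schema, and the `python -c` guard are placeholders of mine, not anything from the article.

```python
# Throwaway sketch: a barebones agent with one bash tool, restricted to python -c.
# Model name, tool schema, and system prompt are illustrative placeholders.
import subprocess
import anthropic

client = anthropic.Anthropic()

bash_tool = {
    "name": "bash",
    "description": "Run a shell command and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

SYSTEM = "You may only call the bash tool, and every command must be of the form: python -c '<code>'"

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder Sonnet model name
    max_tokens=1024,
    system=SYSTEM,
    tools=[bash_tool],
    messages=[{"role": "user", "content": "List the files in the current directory."}],
)

# Execute only the commands that actually respect the python -c constraint.
for block in response.content:
    if block.type == "tool_use" and block.name == "bash":
        cmd = block.input["command"]
        if cmd.strip().startswith("python -c"):
            print(subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout)
```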


I can't use Google Meet on Firefox/Zen. I tried every setting combination I could find, but the video call quality is still not comparable to Chromium-based browsers, so at work I reluctantly switched to Vivaldi.

If you figure this out please let me know!


Have you tried switching your User Agent (with or without a helper extension) to Chrome/Chromium/Edge to see if that makes a difference? I have heard that some Google sites that are clunky or broken in Firefox seem to work better when it identifies itself as Chrome.


I think it's widely speculated that Google sabotages how their own products work in Firefox. I don't know if there is actual evidence to support that though.


I'm in a Zoom shop now, but when I was in a Meet/Hangouts shop, I used Chrome for that, and Firefox for everything else. If you're on a Mac, the utility Choosy can send links to appropriate browsers based on patterns.


We’ve kinda solved the detection of issues; what we still lack is understanding what’s important.

I think an underappreciated use case for LLMs is to contextualize security issues.

Rather than asking Claude to detect problems, I think it’s more useful to let it figure out the context around vulnerabilities and help triage them.
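
A toy sketch of what I mean; the finding, the prompt wording, and the model name are all made-up placeholders, not a real pipeline:

```python
# Toy sketch: ask an LLM to triage a scanner finding using the surrounding file.
# Finding fields, prompt, and model name are assumptions for illustration only.
import anthropic

client = anthropic.Anthropic()

finding = {
    "rule": "sql-injection",
    "file": "app/reports.py",
    "snippet": 'cursor.execute(f"SELECT * FROM reports WHERE owner = {user_id}")',
}
context = open(finding["file"]).read()  # e.g. is user_id attacker-controlled here?

prompt = (
    "You are triaging a static-analysis finding. Given the finding and the file "
    "it came from, explain whether it is likely exploitable, what preconditions "
    "an attacker needs, and give a one-line severity call (fix-now / backlog / noise).\n\n"
    f"Finding: {finding}\n\nFile contents:\n{context}"
)

msg = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=800,
    messages=[{"role": "user", "content": prompt}],
)
print(msg.content[0].text)
```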

(for better or worse, I am knee-deep in this stuff)


This must be the best technical article I've read on HN in months!


meta: if you use AI to write articles, don’t have them written so that I’m forced to use AI to summarize them


Reading the HN comments instead of the article is the best summarizing hack


I kind of hate the implications of it, but if HN (or someone else) wanted to add value, they could show one-line sentiment analyses of the comments on HN articles so you can decide what's what without even clicking.
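
Something like this back-of-the-envelope sketch, pulling a thread from the Algolia HN API and asking an LLM for a one-line read; the story id, model name, and prompt are placeholders of mine:

```python
# Back-of-the-envelope sketch: one-line sentiment summary of an HN thread.
# Story id, model name, and prompt wording are placeholders.
import requests
import anthropic

item_id = 38000000  # placeholder story id
thread = requests.get(f"https://hn.algolia.com/api/v1/items/{item_id}").json()

comments = [c.get("text", "") for c in thread.get("children", []) if c.get("text")]

client = anthropic.Anthropic()
msg = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=100,
    messages=[{
        "role": "user",
        "content": "In one sentence, summarize the overall sentiment of these "
                   "HN comments:\n\n" + "\n---\n".join(comments[:30]),
    }],
)
print(msg.content[0].text)
```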


The reason reading comments is so useful is that it's not one summary but a variety of different, unique reactions (and reactions to reactions).

The model I want to train is ME, so a one sentence sentiment analysis offers 0 value to me while a lot of distinct human perspectives are a gold mine.

It's kinda like the difference between being able to look at a single picture of a landscape and being able to walk around it.


Ironically, there was a tool posted just the other day that would read HN articles and summarize them.


AI decompressors doing their best again. Seriously though, the article feels too long, IMHO.


Better yet, it should be compulsory for these to lead with a summarized version.


Not in Italy.


I don’t think I’ve ever been to Italy and not seen a Caesar salad on a menu!


Wouldn't this defeat the point? Claude Code already has access to the terminal; adding specific instructions in the context is enough.


No. You are giving textual instructions to Claude in the hope that it correctly generates a shell command for you, versus giving it a tool definition with a clearly defined schema for its parameters, where your MCP server is, presumably, enforcing adherence to those parameters BEFORE anything hits your shell. You would be helping Claude in this case, as you're giving it a clearer set of constraints to operate within.
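
Concretely, something like this toy sketch using the MCP Python SDK's FastMCP helper; the tool, whitelist, and schema are made up, but the point is that validation happens before any shell command is built:

```python
# Toy sketch of an MCP tool whose parameters are constrained before anything
# reaches the shell. Tool name, whitelist, and schema are illustrative.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-tools")

ALLOWED_DIRS = {"src", "tests", "docs"}

@mcp.tool()
def list_files(directory: str, pattern: str = "*") -> str:
    """List files matching a glob pattern inside a whitelisted directory."""
    if directory not in ALLOWED_DIRS:
        # Constraint enforced here, before any shell command is constructed.
        raise ValueError(f"directory must be one of {sorted(ALLOWED_DIRS)}")
    result = subprocess.run(
        ["find", directory, "-name", pattern],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()
```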


Well, with MCP you’re giving textual instructions to Claude in hopes that it correctly generates a tool call for you. It’s not like tool calls have access to some secret deterministic mode of the LLM; it’s still just text.

To an LLM there’s not much difference between the list of sample commands above and the list of tool commands it would get from an MCP server. JSON and GNU-style args are very similar in structure. And presumably the command is enforcing constraints even better than the MCP server would.


Not strictly true. The LLM provider should be running constrained token selection based on the JSON schema of the tool call. That alone makes a massive difference, as you're already discarding invalid tokens during the completion at a low level. Now, if they had a BNF grammar for each CLI tool and enforced token selection based on that, you'd be much better off than with unrestrained token selection.


Yeah, that's why I said "not much" difference. I don't think it's much, because LLMs do quite well generating JSON without turning on constrained output mode, and I can't remember them ever messing up a bash command line unless the quoting got weird.


Either way, it is text instructions used to call a function (via a JSON object for MCP or a shell command for scripts). What works better depends on how the model you’re using was post-trained and where in the prompt that info gets injected.


I do this too. Nix is incredible, until it isn’t, and then I regret using it so much.

I’ll probably use something dumber for the next machine, and keep Nix for servers and local VMs.


I'm trying Nix instead of Homebrew on my Mac. It worked great until I decided to give Rust a shot. I think my solution is to just do Rust development on my Arch machine and stick with Nix. That said, if I run into additional issues, I will probably just go back to Homebrew.

Where were your pain points?


Off the top of my head: various hacks to make apps available to Spotlight; packages/apps lagging behind their Homebrew equivalents, to the point where I use Nix to orchestrate brew for too many things; starting envs and rebuild/switch being too slow for my taste despite caching; Nix the language being unfriendly and hard to debug; useless stack traces; etc.


I like it! What does your stack look like?

