Hacker News | past | comments | ask | show | jobs | submit | MuffinFlavored's comments

any reason why you did

    const { rows } = await client.query(
      "select id, name, last_modified from tbl where id = $1",
      [42],
    );
instead of

    const { rows } = await client.query(
      "select id, name, last_modified from tbl where id = :id",
      { id: 42 },
    );

That is the way node-postgres works. pg-typesafe adds type safety but doesn’t change the underlying node-postgres methods.
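For what it's worth, named placeholders can be layered on top of node-postgres's positional ones with a small helper. This is a hypothetical sketch, not part of pg or pg-typesafe:

```javascript
// Hypothetical helper: rewrite ":name" placeholders into the "$1" positional
// style node-postgres actually understands. A naive regex like this would
// also mangle "::type" casts, so treat it as a sketch, not production code.
function named(sql, params) {
  const values = [];
  const text = sql.replace(/:([a-zA-Z_]\w*)/g, (_, key) => {
    values.push(params[key]);
    return "$" + values.length;
  });
  return { text, values };
}

// const { text, values } = named(
//   "select id, name, last_modified from tbl where id = :id",
//   { id: 42 },
// );
// const { rows } = await client.query(text, values);
```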

Is trading even a real thing? Is there really a job title "trader"? Do entities really think they can outperform DCA into SPY?

Yes. Traders are involved in all kinds of deals that aren't like index funds.

Who do you think makes the price of SPY change?

I guess you better go tell them they don't exist.

> Your machine runs a little slower, your bandwidth gets a little thinner, and someone halfway around the world is routing traffic through your home IP.

I wish in 2026 the default on new computers (Windows + Mac) was not only "inbound firewall on by default" but also outbound and users having to manually select what is allowed.

I know it is possible, it's just not the default and more of a "power user" thing at the moment. You have to know about it basically.


I use LuLu (https://objective-see.org/products/lulu.html) to block outgoing connections and manually select which connections/apps are allowed. It's free and works just fine.

As a power user I agree, but how do you avoid it being like the Vista UAC popups? Everyone expects software to auto-update these days, and it's easy enough to social-engineer someone into accepting.

Even if it were the default, there are so many services reaching out that non-technical users would get assaulted with requests from services they have no idea about. Eventually people would just click OK without reading anything, which puts you back at square one with annoying friction.

I do this outbound filtering but I don't use a computer running Windows or MacOS to do it

It doesn't make sense to expect the companies promoting Windows or MacOS to allow the user to potentially interfere with their "services" and surveillance business model

Windows and MacOS both "phone home" (unfiltered outgoing connections). If owners of computers running these corporate OSes were given an easy way to stop this, it stands to reason they would cut the connections back to the mothership. That means lost surveillance potential and lost revenue.

As of 2026, still nothing stops anyone from setting the gateway of their computer running a corporate OS to point to a computer running a non-corporate OS that can do the outbound filtering
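A minimal sketch of what that gateway box might run, assuming Linux with nftables; the table/chain names and the LAN subnet are made-up examples:

```shell
# Default-deny forwarded (outbound) traffic from the LAN, then opt in
# to what you actually want. All names and addresses here are assumptions.
nft add table inet egress
nft add chain inet egress forward '{ type filter hook forward priority 0; policy drop; }'
# Allow the LAN to do DNS and HTTPS, for example:
nft add rule inet egress forward ip saddr 192.168.1.0/24 udp dport 53 accept
nft add rule inet egress forward ip saddr 192.168.1.0/24 tcp dport 443 accept
```

Then point the corporate-OS machine's default gateway at this box, and anything not explicitly allowed never leaves the network.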


Fort Firewall for the win.

https://github.com/tnodir/fort


I always wondered this: is it true / does the math really come out to be that bad? 6x?

Is the writing on the wall for $100-$200/mo users that it's basically subsidized for now and $400/mo+ is coming sooner than we think?

Are they getting us all hooked and then going to raise it in the future, or will inference prices go down to offset?


The writing has been on the wall since day 1. They wouldn't be marketing a subscription being sold at a loss as hard as they are if the intention wasn't to lock you in and then increase the price later.

What I expect to happen is that they'll slowly decrease the usage limits on the existing subscriptions over time, and introduce new, more expensive subscription tiers with more usage. There's a reason AI subscriptions generally don't tell you exactly what the limits are: they're intended to be "flexible" to allow for this.


> Imagine if Siri could genuinely file your taxes

I do not like reading things like this. It makes me feel very disconnected from the AI community. I defensively do not believe there exist people who would let AI do their taxes.


> You can shrink the model to a fraction of its "full" size and get 92-95% same performance, with less VRAM use.

Are there a lot of options for "how far" you quantize? How much VRAM does it take to get the 92-95% you are speaking of?


> Are there a lot of options for "how far" you quantize?

So many: https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overvie...

> How much VRAM does it take to get the 92-95% you are speaking of?

For inference, it's heavily dependent on the size of the weights (plus context). Quantizing an f32 or f16 model to q4/mxfp4 won't necessarily use 92-95% less VRAM, but it's pretty close for smaller contexts.


Thank you. Could you give a tl;dr on "the full model needs ____ this much VRAM and if you do _____ the most common quantization method it will run in ____ this much VRAM" rough estimate please?


It’s a trivial calculation to make (+/- 10%).

Number of params == “variables” in memory

VRAM footprint ~= number of params * size of a param

A 4B model at 8 bits comes out to roughly 4 GB of VRAM, the same number as the params. At 4 bits, ~2 GB, and so on. Kimi is about 512 GB at 4 bits.
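The back-of-the-envelope above, as a one-liner (weights only; KV cache and activations add more on top, so treat it as a floor):

```javascript
// Weights-only VRAM floor: params (in billions) * bytes per param.
// 1e9 params at 1 byte/param is ~1 GB, so the numbers line up directly.
function vramGB(paramsBillion, bitsPerParam) {
  return paramsBillion * (bitsPerParam / 8);
}

// vramGB(4, 8)    -> 4 GB  (4B model at 8-bit)
// vramGB(4, 4)    -> 2 GB  (4B model at 4-bit)
// vramGB(1024, 4) -> 512 GB (Kimi-scale, ~1T params, as quoted above)
```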


Did you eventually move to a $20/mo Claude plan, $100/mo, $200/mo, or API-based? If API-based, how much are you averaging a month?


The $20 one, but it's hobby use for me, would probably need the $200 one if I was full time. Ran into the 5 hour limit in like 30 minutes the other day.

I've also been testing OpenClaw. It burned 8M tokens during my half hour of testing, which would have been like $50 with Opus on the API. (Which is why everyone was using it with the sub, until Anthropic apparently banned that.)

I was using GLM on Cerebras instead, so it was only $10 per half hour ;) Tried to get their Coding plan ("unlimited" for $50/mo) but sold out...

(My fallback is I got a whole year of GLM from ZAI for $20 for the year, it's just a bit too slow for interactive use.)


Try Codex. It's better (subjectively, but objectively they are in the same ballpark), and its $20 plan is way more generous. I can use gpt-5.2 on high (prefer overall smarter models to -codex coding ones) almost nonstop, sometimes a few in parallel before I hit any limits (if ever).


I now have 3 x $100 plans. Only then am I able to use it full time; otherwise I hit the limits. I am a heavy user, often working on 5 apps at the same time.


Shouldn't the 200 plan give you 4x?? Why 3 x 100 then?


Good point. Need to look into that one. Pricing is also changing constantly with Claude


and it has proven to not be a great hedge against inflation (lately, short term)

the digital gold narrative falls apart when non-digital gold outperforms it and nobody wants it for its digital properties (payments, blockchain, etc.)


Why does the news matter?

Sincerely, Somebody on "Hacker" News


Worth noting that the implied volatility extracted here is largely a function of how far OTM the strike is relative to current spot, not some market-specific view on $100. If NVDA were trading at $250 today, the options chain would reprice and you'd extract similar vol for whatever strike was ~45% below. The analysis answers "what's the probability of a near-halving from here" more than "what's special about $100." Still useful for the prediction contest, but the framing makes it sound like the market is specifically opining on that price level.
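The "probability of a near-halving" reading can be made concrete under Black-Scholes, where the risk-neutral probability of spot finishing below a strike is N(-d2). All inputs below (spot, strike, vol, horizon) are made-up illustrative numbers, not taken from the post:

```javascript
// Standard normal CDF via the Abramowitz-Stegun approximation (good to ~1e-7).
function normCdf(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989422804014327 * Math.exp(-x * x / 2);
  const p = d * t * (0.31938153 + t * (-0.356563782 +
    t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  return x >= 0 ? 1 - p : p;
}

// Risk-neutral P(S_T < K) = N(-d2) under Black-Scholes.
function probBelowStrike(spot, strike, vol, years, rate = 0) {
  const d2 = (Math.log(spot / strike) + (rate - (vol * vol) / 2) * years) /
    (vol * Math.sqrt(years));
  return normCdf(-d2);
}

// e.g. hypothetical: spot 180, strike 100 (~45% below), 60% IV, 1 year out
// probBelowStrike(180, 100, 0.6, 1)  // roughly a quarter
```

As the comment says, rerunning this with a different spot and a strike the same percentage below gives a similar answer: the distance in vol terms, not the $100 level itself, drives the number.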


this is gpt, right?


I had a conversation (prompts) with Claude about this article because I didn't feel I could as succinctly describe my point alone.


There are grammatical mistakes and abbreviations, big tells that it's NOT ChatGPT.

