> Your machine runs a little slower, your bandwidth gets a little thinner, and someone halfway around the world is routing traffic through your home IP.
I wish that in 2026 the default on new computers (Windows + Mac) was not only "inbound firewall on by default" but also outbound, with users having to manually select what is allowed.
I know it is possible, it's just not the default and more of a "power user" thing at the moment. You have to know about it basically.
As a power user I agree, but how do you avoid it being like the Vista UAC popups? Everyone expects software to auto update these days and it's easy enough to social engineer someone into accepting.
Even if it was a default, there are so many services reaching out that non-technical users would get assaulted with requests from services they have no idea about. Eventually people will just click OK without reading anything, which puts you back at square one with annoying friction.
I do this outbound filtering but I don't use a computer running Windows or MacOS to do it
It doesn't make sense to expect the companies promoting Windows or MacOS to allow the user to potentially interfere with their "services" and surveillance business model
Windows and MacOS both "phone home" (unfiltered outgoing connections). If computer owners running these corporate OS were given an easy way to stop this, then it stands to reason that owners would stop the connections back to the mothership. That means loss of surveillance potential and lost revenue
As of 2026, still nothing stops anyone from setting the gateway of their computer running a corporate OS to point to a computer running a non-corporate OS that can do the outbound filtering
The writing has been on the wall since day 1. They wouldn't be marketing a subscription being sold at a loss as hard as they are if the intention wasn't to lock you in and then increase the price later.
What I expect to happen is that they'll slowly decrease the usage limits on the existing subscriptions over time, and introduce new, more expensive subscription tiers with more usage. There's a reason why AI subscriptions generally don't tell you exactly what the limits are, they're intended to be "flexible" to allow for this.
I do not like reading things like this. It makes me feel very disconnected from the AI community. I definitely do not believe there exist people who would let AI do their taxes.
> How much VRAM does it take to get the 92-95% you are speaking of?
For inference, VRAM usage is heavily dependent on the size of the weights (plus context). Quantizing an f32 model to q4/mxfp4 cuts weight memory roughly 8x (~87% less), and an f16 model roughly 4x (~75% less), so it won't necessarily hit 92-95% savings, but for smaller contexts it's in that ballpark.
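The weights-dominated part of that estimate is easy to sketch. The 70B parameter count and the bits-per-weight figures below are illustrative assumptions, not numbers for any specific model, and the KV cache (which grows with context) is not modeled:

```python
# Back-of-the-envelope VRAM estimate for LLM inference.
# Weights dominate at small contexts; the KV cache adds more as context grows
# (not modeled here). Parameter count and bits-per-weight are assumptions.

def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate memory for the model weights alone, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

if __name__ == "__main__":
    # Hypothetical 70B-parameter model
    print(f"f16: ~{weight_vram_gb(70, 16):.0f} GB")   # 2 bytes per weight
    print(f"q4 : ~{weight_vram_gb(70, 4.5):.0f} GB")  # ~4.5 bits/weight incl. quantization overhead
```

So for this hypothetical 70B model you'd go from roughly 140 GB at f16 to roughly 40 GB at a typical q4 quant, before accounting for context.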
Thank you. Could you give a tl;dr on "the full model needs ____ this much VRAM and if you do _____ the most common quantization method it will run in ____ this much VRAM" rough estimate please?
The $20 one, but it's hobby use for me, would probably need the $200 one if I was full time. Ran into the 5 hour limit in like 30 minutes the other day.
I've also been testing OpenClaw. It burned 8M tokens during my half hour of testing, which would have been like $50 with Opus on the API. (Which is why everyone was using it with the sub, until Anthropic apparently banned that.)
I was using GLM on Cerebras instead, so it was only $10 per half hour ;) Tried to get their Coding plan ("unlimited" for $50/mo) but sold out...
(My fallback is I got a whole year of GLM from ZAI for $20 for the year, it's just a bit too slow for interactive use.)
Try Codex. It's better (subjectively, but objectively they are in the same ballpark), and its $20 plan is way more generous. I can use gpt-5.2 on high (prefer overall smarter models to -codex coding ones) almost nonstop, sometimes a few in parallel before I hit any limits (if ever).
I now have 3 x 100 plans. Only then am I able to use it full time. Otherwise I hit the limits. I am a heavy user, often working on 5 apps at the same time.
and it has proven not to be a great hedge against inflation (lately, at least short term)
the digital gold narrative falls apart when non-digital gold outperforms it and nobody wants it for its digital properties (payments, blockchain, etc.)
Worth noting that the implied volatility extracted here is largely a function of how far OTM the strike is relative to current spot, not some market-specific view on $100. If NVDA were trading at $250 today, the options chain would reprice and you'd extract similar vol for whatever strike was ~45% below. The analysis answers "what's the probability of a near-halving from here" more than "what's special about $100." Still useful for the prediction contest, but the framing makes it sound like the market is specifically opining on that price level.
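A minimal sketch of that point, assuming plain Black-Scholes and made-up inputs (the spot, vol, and expiry below are illustrative, not taken from any actual NVDA chain): the risk-neutral probability of finishing below a strike is N(-d2), and it depends on the strike's distance from spot, not the dollar level itself.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_below(spot: float, strike: float, vol: float, t_years: float, r: float = 0.0) -> float:
    """Risk-neutral P(S_T < K) under Black-Scholes: N(-d2)."""
    d2 = (math.log(spot / strike) + (r - 0.5 * vol ** 2) * t_years) / (vol * math.sqrt(t_years))
    return norm_cdf(-d2)

if __name__ == "__main__":
    # Same ~45% drop from spot at two different spot levels gives a very
    # similar probability, because moneyness (not the dollar strike) drives it.
    print(prob_below(spot=180.0, strike=100.0, vol=0.55, t_years=1.0))
    print(prob_below(spot=250.0, strike=137.5, vol=0.55, t_years=1.0))
```

Both calls land within a fraction of a percentage point of each other, which is the sense in which the chain is pricing "a near-halving from here" rather than the $100 level specifically.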