
Actually, most VPN providers explicitly label virtual locations as such; the well-known ones do, at least (e.g. Proton and NordVPN even explain them in their respective docs).


It depends on whether the VPN is lying to you. Proton, for example, makes them quite explicit in the software and even lists them for you here: https://protonvpn.com/support/how-smart-routing-works, and it seems NordVPN also has a page explaining that.


Yeah, Proton is quite explicit about that: https://protonvpn.com/support/how-smart-routing-works


I have the impression that LLMs are so complicated and entangled (compared to previous machine learning models) that they’re just too difficult to tune across the board.

What I mean is, it seems that tuning them for a few specific things makes them worse on a thousand other things nobody is paying attention to.


Anything very specific has the same problem, because LLMs can’t have equally good representations of every topic in their training data. It doesn’t have to be extremely niche, just specific enough for them to start fabricating.

The other day I had a question about how pointers work in Swift and tried discussing it with ChatGPT (I don’t remember exactly what, it was purely intellectual curiosity). It gave me a lot of explanations that seemed correct, but, being skeptical, I started pushing it for ways to confirm what it was saying and eventually realized it was all bullshit.
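
To give a flavor of the subtlety involved (a made-up example, not the exact thing I asked): a pointer you get from `withUnsafePointer` in Swift is only valid inside the closure, and an LLM can produce a confident, plausible-sounding explanation for why escaping it is fine when it isn't:

    var value = 42

    // Fine: the pointer is valid for the duration of the closure.
    withUnsafePointer(to: &value) { ptr in
        print(ptr.pointee) // 42
    }

    // Undefined behavior: the pointer must not outlive the closure,
    // even though this often appears to work in practice.
    let escaped = withUnsafePointer(to: &value) { $0 }
    print(escaped.pointee) // dereferences a pointer that is no longer valid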

This kind of thing makes me basically wary of using LLMs for anything that isn’t brainstorming, because anything that requires information that isn’t easily and plentifully found online will likely be incorrect, or have incorrect bits sprinkled all over the explanation.


It doesn’t really solve it, since a slight shift in the prompt can have totally unpredictable results anyway. And if your prompt is always exactly the same, you’d just cache the result and bypass the LLM.

What would really be useful is if a very similar prompt always gave a very similar result.


This doesn't work with the current architecture, because we have to introduce some element of stochastic noise into the generation, or else the models aren't "creatively" generative.

Your brain doesn't have this problem because the noise is already present. You, as an actual thinking being, are able to override the noise and say "no, this is false." An LLM doesn't have that capability.
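
A minimal sketch of where that noise enters (illustrative only, not any real model's decoding code): at each step the model produces scores (logits) for candidate tokens, and sampling with a temperature above zero deliberately randomizes the pick, so identical prompts can diverge token by token:

    import Foundation

    // Greedy decoding: always take the argmax. Fully deterministic.
    func greedy(_ logits: [Double]) -> Int {
        logits.indices.max { logits[$0] < logits[$1] }!
    }

    // Temperature sampling: weight tokens by exp(logit / T), then draw at random.
    // Higher T flattens the distribution, adding more "creative" variation.
    func sample(_ logits: [Double], temperature: Double) -> Int {
        let weights = logits.map { exp($0 / temperature) }
        var r = Double.random(in: 0..<weights.reduce(0, +))
        for (i, w) in weights.enumerated() {
            r -= w
            if r <= 0 { return i }
        }
        return weights.count - 1
    }

    let logits = [2.0, 1.5, 0.3] // toy next-token scores
    print(greedy(logits))                   // always 0
    print(sample(logits, temperature: 1.0)) // varies run to run

Greedy decoding (or temperature zero) would make the output deterministic, but then you lose exactly the variation people want from a generative model.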


Well, that’s because if you look at the structure of the brain, there’s a lot more going on than what happens inside an LLM.

It’s the same reason great ideas seem to come almost at random: something is happening in the background, underneath the skin.


That’s a way different problem, my guy.


At least it used to be true.


Even assuming it’s not malicious, the script can mess up your environment configuration.


I'm so thankful for NixOS for making it hard for me to give in to that temptation. You always think "oh, just this once", but with NixOS I either have to do it right or not bother.


NixOS gives you a place to configure things in a reproducible way, but it doesn’t require you to do it.


    $ ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
    Could not start dynamically linked executable: ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
    NixOS cannot run dynamically linked executables intended for generic linux environments out of the box.
    For more information, see: https://nix.dev/permalink/stub-ld

You have to go out of your way to make something like that run in an FHS env. By that point, you've had enough time to think, even with ADHD.


It sort of does, actually, at least if you don't have nix-ld enabled. A lot of programs simply won't start if they're not statically linked, so a lot of the time, if you download a third-party binary or try to install it via the `curl somesite.blah | sh` route, it actually will not work. Moreover, it likely won't be properly linked into your PATH unless you do it the right way.


So can a random deb, or npm package, or pip wheel. You’re either OK with executing unverified code or not; piping wget into bash doesn’t change that.


Maybe they can with postinstall scripts, but they usually don't.

For the most part, installing packaged software simply extracts an archive to the filesystem, and you can uninstall using the standard method (apt remove, uv tool remove, ...).

Scripts are way less standardized. In this case it's not an argument about security, but about convenience and not messing up your system.


Not sure that’s even possible, with ChatGPT embedding your chat history in the prompts to try to give more personalized answers.


I think it would be better to just use Docker/Podman at this point?


I was considering that route but was surprised that I couldn't find a version of Docker Desktop that'll run on Monterey anymore.

