
Grades aren't necessarily an indicator of whether a person comprehends the educational material. Someone can visibly under-perform on general tests, but when questioned in person or made to sit an exam, still recite the material off the top of their head, apply it correctly and even take it in a new direction. Those are underachievers; they know what they can do, but for one reason or another they simply refuse to show it (a pretty common cause is just finding the general coursework demeaning, or the teachers using the wrong teaching methods, so they don't put much effort into it[0]). Give them coursework above their level, and they'll suddenly get acceptable/correct results.

IQ can be used somewhat reliably to identify whether someone is an underachiever or legitimately struggling. That's what the tests are made and optimized for; they're designed to test how quickly a person can make the connection between two unrelated concepts. If they do it quickly enough, they're probably underachieving relative to what they can actually do, and it may be worth giving them more complicated material to see if they can handle it. (Conversely, if it turns out they're actually struggling, it may be worth dedicating more time to helping them.)

That's the main use of it. Anything else you attach to IQ is a case of correlation not being causation, and anyone who thinks it's worth more than that is being silly. High/low IQ correlates to very little besides a general trend in how quickly you can recognize patterns. And because scores are renormalized every couple of years and the tails are statistically thin, any score far past the 95th percentile is basically the same anyway; there's very little meaningful difference between 150, 180, 210 or whatever other high number you imagine.
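To put rough numbers on the tail argument (a quick sketch assuming the standard scaling of mean 100 and standard deviation 15; the specific cutoff scores are just illustrative):

    # Rough illustration, assuming the usual IQ scaling (mean 100, SD 15).
    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)
    print(round(iq.inv_cdf(0.95)))   # ~125: the 95th percentile score
    print(1 - iq.cdf(150))           # ~0.0004 of people score above 150
    print(1 - iq.cdf(180))           # ~5e-8: far too rare to measure reliably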


It keeps astounding me that people assign value to a score that was mainly intended to find outliers in the education system, as if it were anything besides that.

Or to quote the late physicist Stephen Hawking: "People who boast about their IQ are losers."


I've never understood IQ tests. Granted, I have only seen the beginning of some of them, where they show you those 3/4 figures and tell you to choose the "next one"... I have never had a clue what I'm supposed to look for.

Maybe I am just extremely stupid. But then I'm a walking example that you can be averagely successful even while being dumb as a rock, if you are stubborn enough, haha.


The whole "Heaven is real because this guy with high IQ said so" is extremely cringe. They treat him like a show pony lol.


The problem with a standard video element is that while it's mostly nice for the user, it tends to be pretty bad for the server operator. There's a ton of problems with browser video, beginning pretty much entirely with "what's the codec you're using". It sounds easy, but the unfortunate reality is that there are a billion different video codecs (and heavy use of Hyrum's law/spec abuse on the codecs), and a browser only supports a tiny subset of them. Hosting video already requires, as a baseline, transcoding the video to a different storage format; unlike a normal video file, you can't just feed it to VLC and get playback. You're dealing with the terrible browser ecosystem.

Then once you've found a codec, the other problem immediately rears its head: video compression is pretty bad if you want a widely supported codec, even if for no other reason than the fact that people use non-mainstream browsers that can be years out of date. So you are now dealing with massive amounts of storage space and bandwidth effectively being eaten up by duplicated files, and that isn't cheap either. To give an estimate: under most VPS providers that aren't hyperscalers, a plain text document can be served to a couple million users without you having to think about your bandwidth fees. Images are bigger, but not by enough to worry about. 20 minutes of 1080p video is about 500 MB under a well-made codec that doesn't mangle the video beyond belief. That video will reach at most 40,000 people before you burn through 20 terabytes of bandwidth (the Hetzner default amount), and in reality probably fewer, because some people will rewatch the thing. Hosting video is the point where your bandwidth bill overtakes your storage bill.
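The back-of-the-envelope version of that math, using the figures above (the 500 MB and 20 TB numbers are assumptions, not measurements):

    # Back-of-the-envelope bandwidth math using the figures above.
    video_size_gb = 0.5            # ~20 min of 1080p with a decent codec
    monthly_bandwidth_gb = 20_000  # 20 TB, e.g. Hetzner's default allowance

    print(monthly_bandwidth_gb / video_size_gb)   # ~40,000 full plays

    # Compare with a ~100 KB text page served from the same allowance:
    page_size_gb = 100 / 1_000_000
    print(monthly_bandwidth_gb / page_size_gb)    # ~200,000,000 page loads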

And that's before we get into other expected niceties like seeking through a video while it's playing. Modern video players (the "JS nonsense" ones) can both buffer a video and jump to any point in it, even outside the buffer. That's not a guarantee with the HTML video element; your browser is probably just going to keep quietly downloading the file while you're watching it (eating into server operator cost), and seeking ahead in the video will just freeze the output until the download reaches that point.
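Whether seeking works with a bare video element mostly comes down to the server honoring HTTP Range requests. A quick way to check a given file server (a sketch; the URL is a placeholder):

    # Check whether a server honors byte-range requests, which is what lets
    # a plain <video> element seek past what it has buffered.
    import urllib.request

    VIDEO_URL = "https://example.com/video.mp4"   # placeholder

    req = urllib.request.Request(VIDEO_URL, headers={"Range": "bytes=0-1023"})
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means seeking can work; 200 means the server
        # ignored the Range header and the whole file gets downloaded.
        print(resp.status, resp.headers.get("Content-Range"))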

It's easy to claim hosting video is simple, when in practice it's probably the single worst thing to host on the internet (well, that and running your own mailserver, but that's not only because of technical difficulties). Part of YouTube being bad is just hyper-capitalism, sure, but the more complicated techniques like HLS/DASH pretty much entirely exist because hosting video is so expensive and "preventing your bandwidth bill from exploding" is really important. That's also why there's no real competition to YouTube; the economics of hosting video only make sense if you have a Google amount of money and datacenters to throw at the problem, or don't care about your finances in the first place.


Chrome desktop has just landed native HLS support for the video element, enabled by default, within the last month. (There may be a few issues still to be worked out, and I don't know what the rollout status is, but certainly by year end it will just work.) Presumably most downstream Chromium derivatives will pick this support up soon.

My understanding is that Chrome for Android has supported it for some time by way of delegating to Android's native media support, which includes HLS.

Desktop and mobile Safari have had it enabled for a long time, and thus so has Chrome for iOS.

So this should eventually help things.


Any serious video distribution system would not use metered bandwidth. You're not using a VPS provider. You are colocating some servers in a datacenter and buying an unmetered 10 gigabit or 100 gigabit IP transit service.


A cache can help even for small stuff if there's something time-consuming to do on a small server.

Redis/Valkey is definitely overkill though. A slightly modified memcached config (only so it accepts larger items; server responses larger than 1 MB aren't always avoidable) is a far simpler solution that provides 99% of what you need in practice. Unlike Redis/Valkey, it's also explicitly a volatile cache that can't do persistence, meaning you are disincentivized from the bad design pattern where the cache becomes state your application assumes some level of consistency of (including its existence). If you aren't serving millions of users, a stateful cache is a pattern best avoided.
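For reference, a minimal cache-aside sketch against memcached (assumes the pymemcache client and a local memcached started with something like `memcached -I 4m` to raise the default 1 MB item-size limit; the key scheme and render function are made up):

    # Minimal cache-aside pattern with memcached via pymemcache.
    import json
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def get_rendered_page(page_id, render_fn, ttl=300):
        key = f"page:{page_id}"            # hypothetical key scheme
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
        result = render_fn(page_id)        # the slow work worth caching
        cache.set(key, json.dumps(result), expire=ttl)
        return result

If memcached is down or the entry was evicted, the code path still works; it just redoes the slow work, which is exactly the "volatile cache, not state" property being argued for.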

DB caches aren't very good, mostly because of speed; they have to read from the filesystem (and incur network overhead), while a cache reads from memory and can often just live on the same server as the rest of the service.


The problem is that a lot of CVEs don't represent "real" vulnerabilities, but merely theoretical ones that could hypothetically be combined into a real exploit.

Regex exploitation is the forever example to bring up here, as it's generally the main reason that "autofail the CI system the moment an auditing command fails" doesn't work on certain codebases. The reason this happens is that it's trivial to craft a string that wastes significant resources when a regex is matched against it, and the moment you have a function that accepts a user-supplied regex pattern, that's suddenly an exploit... which gets a CVE. A lot of projects then have CVEs filed against them because internal functions take regex patterns as arguments, even if they're in code the user is flat-out never going to be able to interact with (i.e. several dozen layers deep in framework soup there's a regex call somewhere, in a way the user can't reach unless a developer several layers up starts breaking the framework they're using in really weird ways on purpose).
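The classic demonstration, in case anyone hasn't seen catastrophic backtracking in action (the pattern and input here are purely illustrative):

    # Catastrophic backtracking: a tiny, non-matching input makes a
    # backtracking regex engine do exponential work.
    import re
    import time

    pattern = re.compile(r"^(a+)+$")
    evil = "a" * 26 + "!"        # each extra "a" roughly doubles the time

    start = time.perf_counter()
    pattern.match(evil)
    print(f"{time.perf_counter() - start:.2f}s for a 27-character string")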

The CVE system is just completely broken and barely serves as an indicator of anything. From what I can tell, the approval process favors acceptance over rejection, since the people reviewing the initial CVE filing aren't the same people who actively investigate whether the CVE is bogus, and the incentive for the CVE system is literally to encourage companies to give a shit about software security (at the same time, this fact is also often exploited to create beg bounties). CVEs have been filed against software for what amounts to "a computer allows a user to do things on it" even before AI slop made everything worse; the system was questionable in quality 7 years ago at the very least, and is even worse these days.

The only indicator it really gives is that a real security exploit can feel more legitimate if it gets a CVE assigned to it.


Aren't there regex libraries that aren't susceptible to that?


Pretty much. Most animals are smarter than you expect, but also more limited in what they can reason about.

It's why anyone who's ever taken care of a needy pet will inevitably reach the comparison that taking care of a pet is like taking care of a very young child; it's needy, it experiences emotions, but it can't quite figure out on its own how to adapt to an environment beyond what it grew up around and its own instincts. They experience some sort of qualia (a lot of animals are pretty family-minded), but good luck teaching a monkey to read. The closest we've gotten is teaching them that if they press the right button, they get food, but they take basically their entire lifespan to understand a couple hundred words, while humans easily surpass that.

IIRC some of the smartest animals in the world are actually rats. Their qualia are close enough to humans' that human psychology experiments are often easily reproduced in rats.


Nowadays pip also defaults to installing to the user's home folder if you don't run it as root.

Basically the only thing missing from pip install being a smooth experience is something like npx to cleanly run modules/binaries that were installed to that directory. You're still left futzing with the PATH variable to run those scripts correctly.
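For what it's worth, the directory in question can be found from Python itself (a sketch; the scheme handling is simplified, and macOS framework builds use "osx_framework_user" instead):

    # Where `pip install --user` puts console scripts; this directory
    # usually has to be added to PATH by hand (the npx-style gap above).
    import os
    import sysconfig

    scheme = "nt_user" if os.name == "nt" else "posix_user"
    print(sysconfig.get_path("scripts", scheme))
    # e.g. ~/.local/bin on Linux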


> Nowadays pip also defaults to installing to the user's home folder if you don't run it as root.

This could still cause problems if you run system tools as that user.

I haven't checked (because I didn't install my distro's system package for pip, and because I use virtual environments properly), but I'm pretty sure the same marker-file protection would apply to that folder (there's no such folder on my system).


The main reason is that it's hard to get certificates for intranets that all devices will properly trust.

Public CAs don't issue (free) certificates for internal hostnames, and running your own CA has the drawback that Android doesn't allow you to "properly" use a personal CA without root, splitting its CA list between the automatically trusted system CA list and the per-application opt-in user CA list. (It ought to be noted that Apple's personal CA installation method uses MDM, which is treated like a system CA.) There are also random/weird one-offs, like how Firefox doesn't respect the system certificate store, so you need to import your CA certificate separately in Firefox.

The only real option that avoids all those problems is to get a regular (sub)domain name and issue certificates for that, but that usually isn't free or easy. Not to mention that if you do the SSL flow "properly", you need to issue one certificate per device, which leaks your entire intranet to the certificate transparency log (this is the problem with Tailscale's MagicDNS as a solution). Alternatively, you can issue a wildcard certificate for your domain, but that means every device in your intranet holding it can present a valid certificate for any other hostname covered by that wildcard.
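The CT-log leak is easy to see for yourself: anyone can enumerate every hostname a certificate was ever logged for. A sketch against crt.sh's public JSON endpoint (the domain is a placeholder, and the field names assume crt.sh's current output format):

    # List every hostname that ever appeared in a logged certificate
    # for a domain, via crt.sh.
    import json
    import urllib.request

    domain = "internal.example.com"
    url = f"https://crt.sh/?q=%25.{domain}&output=json"

    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)

    names = set()
    for e in entries:
        names.update(e["name_value"].splitlines())
    print(sorted(names))    # every per-device hostname ever issued for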


If someone is in your LAN then you have bigger problems than them snooping on you while you talk to your fridge.



oh wow, port scanning with websockets! Interesting! Thanks for the link! :)


> get a regular (sub)domain name

You can get $2/yr domain names on weird TLDs like .site, .cam, .link, ...

> which leaks your entire intranet to the certificate transparency log

Not necessarily: you don't route the domain externally, and you use an offline DNS challenge to issue/renew the certificate.


> You can get $2/yr domain names on weird TLDs like .site, .cam, .link, ...

You can, but as stated, that's not free (or easy). That's still another fee you have to pay, which hurts adoption of HTTPS for intranets (not to mention it's not really an intranet if it relies on something entirely outside of that intranet).

If LetsEncrypt charged $1 to issue/renew a certificate, they wouldn't have made a dent in the public adoption of HTTPS certificates.

> Not necessarily: you don't route the domain externally, and you use an offline DNS challenge to issue/renew the certificate.

I already mentioned that one, that's the wildcard method.


At its core, the main drawbacks that need to be solved for passkeys to be a viable option are, imo:

* Improving OS flows. Every passkey implementer that's also an OS vendor gets really excited about enrolling you into their proprietary cloud, and the alternate flows that respect the user's wish to use their own manager are usually hidden behind confusing UI forms that don't feel consistent if you don't already know what you're doing.

* The device-loss scenario is already mentioned, although more broadly a lot of the reason people get leery is that all three major providers (Google, Microsoft, Apple) are notorious for their near black-box technical support. Losing access to one of these providers is already enough to heavily disrupt the average person's life. Having your login details stored with them makes this even worse.

* The FIDO Alliance Is Way Too Excited About Device Attestation And I Don't Like It. Basically, the FIDO Alliance's behavior around passkeys reeks of security theater, and their badgering of an open source password manager for daring to let users export passkeys in the format users preferred, rather than what the FIDO Alliance wanted (passkeys always encrypted with a password), is telling. If passkeys are as secure as promised, it's a bad look to start threatening device attestation as a means to get people to comply with your specific idea of security. The only real barrier right now to that kind of attestation outright becoming a thing is that Apple zeroes out the field, and when Apple is the only meaningful check on it, something has gone very wrong.

I think passkeys are interesting, but I just flat-out don't trust the FIDO Alliance with the idea. They're way too invested in big tech being good stewards of the ecosystem, which is becoming increasingly unpalatable as more and more evidence piles up that they're really bad actors on everything else. (So why should we trust them with our credentials?) The idea genuinely has value (it's literally the same kind of mechanism as SSH keys), but the hostility towards user freedom is deeply concerning and a blocker to getting people to use it. Even non-technical people seem leery of passkeys, just because of how aggressively big tech has been pushing them.


God, if it could just be a single key that you dump to paper or a titanium plate, without worrying about backing up a zoo of keys/passwords with a cloud. Just take my one and only public key. If you care about per-service privacy, you are welcome to use multiple. I don't think there is any compromise scenario where you would leak any single specific passkey, and they are not bruteforceable. Why is it not as simple as that?
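For what it's worth, the "one secret, many services" part is technically straightforward: deriving an independent per-service seed from a single master key is a solved problem. A toy sketch with stdlib keyed BLAKE2b (this is not how WebAuthn passkeys actually work today, just the derivation idea):

    # Toy sketch: derive a distinct per-service seed from one master secret
    # you could write down. NOT how passkeys work today; illustration only.
    import hashlib
    import secrets

    master = secrets.token_bytes(32)      # the one secret you'd back up

    def service_seed(service_id: str) -> bytes:
        # Keyed BLAKE2b as a PRF: same master + service -> same seed,
        # and one leaked seed says nothing about the others.
        return hashlib.blake2b(service_id.encode(), key=master,
                               digest_size=32).digest()

    print(service_seed("example.com").hex())
    print(service_seed("github.com").hex())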


> * Improving OS flows. Every passkey implementer that's also an OS vendor gets really excited about enrolling you into their proprietary cloud, and the alternate flows that respect the user's wish to use their own manager are usually hidden behind confusing UI forms that don't feel consistent if you don't already know what you're doing.

You're kidding yourself if you think that this is something Microsoft, Apple, or Google are incentivized to solve. Microsoft is especially bad here, pushing their crappy products in Windows every chance they get. Once some marketing director gets the idea that this can improve retention in Outlook or something, the UI will get more confusing and the dark patterns will get darker.


I never said they had an incentive to solve it. I said it's one of the big blockers to regular adoption. It ought to be obvious that none of these issues are a problem if you look at them through the big tech lens ("why is it a problem when we're providing the service?"). They're a problem when you're a normal person with a healthy distrust of big tech companies.

In practice, I expect someone to figure out a way to break into/bypass the OS flow entirely with a less "big tech wants your private details" solution and that's what winds up getting adoption.


They mention it later in the article; if they drop connections to internal networks from the graph, Linux shoots up all the way to 97%.

The answer is probably that people who run Linux are far more likely to run a homelab intranet that isn't secured by HTTPS, because internal IP addresses and hostnames are a hassle to get certificates for. (Not to mention that using HTTPS is slightly pointless on most intranets.)

