> It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
"append(s, ...)" without the assignment doesn't even compile. So your entire post seems like a strawman?
> So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated
No, I think it's more that Go carefully weighs the complexity that every new feature adds to the language, more so than other languages do.
Clipping doesn't seem to automatically move the data, so while it does mean appending will reallocate, it doesn't actually shrink the underlying array, right?
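Something like this sketch (Go 1.21+, variable names just for illustration) is how I understand it:

    package main

    import (
        "fmt"
        "slices"
    )

    func main() {
        backing := make([]int, 3, 10) // len 3, cap 10
        s := slices.Clip(backing)     // same backing array, capacity trimmed to len

        fmt.Println(len(s), cap(s))             // 3 3
        fmt.Println(len(backing), cap(backing)) // 3 10 -- the array itself isn't shrunk

        // Because len == cap, the next append must allocate a fresh array, so it
        // can no longer scribble over memory past len in the old one.
        s = append(s, 42)
        s[0] = 99
        fmt.Println(backing[0]) // 0 -- the write went to the new array
    }

As far as I can tell, the memory is only released once nothing references the original array and the GC collects it.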
The reason for US economic domination starting in the 50s is that society and infrastructure in the rest of the developed world had been utterly devastated by the Second World War. The rate of college education is irrelevant.
No, this is analogous to forcing everything into separate accounts in the name of "security" and then failing to implement any way to pass data between them. It would be fine to have optional protocols on top of the core wayland protocol, and it would be fine to require a single permission prompt, but only if they actually get implemented and there's actually a way to persistently give permission. Otherwise you've just reduced the functionality of the system.
And yet Unix account separation really did turn out to be overcomplicated and useless. Hosting providers were never able to separate untrusted users by user account; they either use VMs or containers or give up on offering shell access at all, and on home machines the whole effort falls prey to https://xkcd.com/1200/ .
Good point. I just submitted a pull request with new data for foot 1.25.0: https://github.com/jquast/ucs-detect/pull/17. The test suite is really easy to run (and fast, foot rocks!).
I have seen so many takes lamenting how this kind of supply chain attack is such a difficult problem to fix.
No, it really isn't. It's an ecosystem and cultural problem: npm encourages huge dependency trees that make it impractical to review dependency updates, so developers just don't.
> It's an ecosystem and cultural problem that npm encourages huge dependency trees
It is an ecosystem and culture that learned nothing from the left-pad debacle. And it is an affliction that many organizations face, and it is only going to get worse with the advent of AI-assisted coding (though it doesn't have to be).
There simply aren't enough adults in the room with the ability to tell the children (or VCs and business people) NO. And getting an "AI" to say no is next to impossible unless you're probing it on a "social issue".
The thing is, having access to such dependencies is also a huge productivity boost. It's not by accident that every single language whose name isn't C or C++ has pretty much moved to this model (or had it way before npm, in the case of Perl or Haskell).
The alternative is C++, where every project essentially starts by reinventing the wheel, which comes with its own set of vulnerabilities.
I'm saying this without a clear idea of how to fix this very real problem.
> The alternative is C++, where every project essentially starts by reinventing the wheel
Sure, in 1995.
Most C++ projects nowadays belong to some fairly well-understood domain, and for every broad domain there are usually one or two large 'ecosystem' libraries that come batteries-included: one huge monolithic dependency with well-established governance instead of 1,000 small ones.
Examples of such ecosystems are Qt, LLVM, ROOT, TensorFlow, etc. For smaller projects that want something slightly more than a standard library but don't belong to a clear ecosystem like the above, you have Boost, Folly, Abseil, etc.
Most of these started by someone deciding to reinvent the wheel decades ago, but there's no real reason to do that in 2025.
That is valid, though: if someone says "it hurts when I walk", it's not reasonable to tell them not to walk; you try to figure out why it hurts and whether it can be fixed.
Other languages have package managers similar to npm but with far fewer issues, so this can be fixed without changing the package manager completely.
I would say JavaScript's lack of a standard library is at least in part responsible for encouraging npm use; things just spiraled out of control from there.
[not a dev] Why isn't there the equivalent of "Linux distributions" for npm? I know, I know: because developers all need a different set of libs. But if there were thousands of packages required to provide basic "stdlib-like" functionality, couldn't there be an npm distribution that you can safely use as a starting point, avoiding imports of asinine stuff like 'istrue'? (Yeah, I'm kinda joking there.) Or is that just what bloated frameworks all start out as?
There could be; it would essentially take the form of a standard library. That would work until someone decides they don't like the form/naming conventions/architecture/ideology/lack of ideology/whatever else and reinvents everything to do the same thing, but in a slightly different way.
And before you know it, you have a multitude of distributions to choose from, each with their own issues...
Who is shipping/maintaining this? Even Node itself is maintained by the OSS community. That's one of the advantages of Microsoft's .NET ecosystem: you can do a lot of stuff without pulling in anything not shipped by Microsoft. I don't know of any other ecosystem that's as versatile with so much first-party support.
Source available beats open source from a security perspective.
I don't understand the point of this article. Container images are literally immutable packaged filesystems so old versions of affected packages are in old Docker images for every CVE ever patched in Debian.
The point seems to be that they're selling a product which (they say towards the end of the article) gives their customers "access to a precise analysis developed by our research team to detect IFUNC-based hooking, which is the same technique used in the XZ backdoor".
Unpatched long-lived VMs are much more cumbersome to fix than an outdated Docker image. And good luck reproducing your long-lived VM with years of mutation-via-patch.
You seem not to understand how to manage long-lived VMs.
Why are your VMs unpatched? Why do you not have a policy to synchronize package versions and configurations across VMs? Why do you not have a full policy on how to build the VM from scratch?
Sysadmins were orchestrating VMs using a plethora of popular tools before Linux failed to adopt containers correctly to get the job done. Docker just inherited all the bad practices without building a culture around the good practices.
Quite literally: everything you can say sysadmins did wrong back in the day is what Docker users typically do today. That right there underpins my entire view of Docker. Docker can be used correctly, but most don't, and I don't want to be in that blast radius when it inevitably goes off.
Sorry, that is incorrect: https://pkg.go.dev/slices#Clip