"Groupthink" informed by extremely broad training sets is more conventionally called "consensus", and that's what we want the LLM to reflect.
"Groupthink", as the term is used by epistemologically isolated in-groups, actually means the opposite. The problem with the idea is that it looks symmetric, so if you yourself are stuck in groupthink, you fool yourself into thinking it's everyone else doing it instead. And, again, the solution for that is reasonable references grounded in informed consensus. (Whether that should be a curated encyclopedia or an LLM is a different argument.)
> "Groupthink" informed by extremely broad training sets is more conventionally called "consensus", and that's what we want the LLM to reflect.
Definitely not! I absolutely do not want an LLM that gives much or any truth-weight to the vast majority of writing on the vast majority of topics. Maybe, maybe if they’d existed before the Web and been trained only on published writing, but even then you have stuff like tabloids, cranks self-publishing or publishing through crank-friendly niche publishers, advertisements full of lies, very dumb letters to the editor, vanity autobiographies or narrative business books full of made-up stuff presented as true, etc.
No, that’s good for building a model of something like the probability space of human writing, but an LLM that has some kind of truth-grounding wholly based on that would be far from my ideal.
> And, again, the solution for that is reasonable references grounded in informed consensus. (Whether that should be a curated encyclopedia or a LLM is a different argument.)
“Informed” is a load-bearing word in this post, and I don’t really see how the rest holds together if we start to pick at that.
> I absolutely do not want an LLM that gives much or any truth-weight to the vast majority of writing on the vast majority of topics.
I can think of no better definition of "groupthink" than what you just gave. If you've already decided on the need to self-censor your exposure to "the vast majority of writing on the vast majority of topics", you are lost, sorry.
A spectacular amount of extant writing accessible to LLM training datasets is uninformed noise from randos online. Not my fault the internet was invented.
I have to be misunderstanding you, though, because any time we want to build knowledge and skills for specialists their training doesn’t look anything like what you seem to be suggesting.
You're the second responder here who appears to think LLMs are "averaging" machines and that they need to be "protected" from wrong info. That's exactly the opposite of the way they work. You feed them the garbage precisely so they can explain to you why it's garbage. Otherwise we'd have just fed them Wikipedia and stopped, but clearly that doesn't work as well.
The issue is that on the open internet, the consensus is usually the one from 2000, 2010 at best. And since the social sciences are moving fast recently (I'm mostly thinking of modern history and linguistics here), I wouldn't trust the consensus to be at the edge of scientific knowledge (which is actually also _extremely_ true of Wikipedia).
Gotta be honest, when I go to an encyclopedia the last thing I want is what the mathematically average chronically online person knows and thinks about a topic. Because common misconceptions and the "facts" you see parroted on online forums on all sorts of niche topics look just like consensus but ya know… wrong.
I would rather have an actual audio engineer's take than the opinion of an amalgamation of hifi forums talking pseudoscience, and the latter is way more numerous in the training data.
> what the mathematically average chronically online person knows and thinks about a topic
Yes you do, often. Understanding ideas and consensus is part of understanding "topics". To choose a Godwinized existence proof: an LLM that didn't understand public opinion in, say, 1920s Germany is one that can't answer the question of how the war started.
You're making two mistakes here: one is that you're assuming that "facts" exist as a separate idea from "discourse". And the second is that you appear to think LLMs merely "average" the stuff they read instead of absorbing controversies and discourse on their own terms. The first I can't really help you with, but the second you can disabuse yourself of on your own just by pulling up a GPT chat and talking to it.
Really the whole theme that (from the article) "FreeBSD ships as a complete, coherent OS" is belied by this kind of nonsense. No, it's not. Or, sure, it is, but in exactly the same way that Debian or whatever is. It's a big soup of some local software and a huge ton of upstream dependencies curated for shipment together. Just like a Linux distro.
And, obviously, almost all those upstream dependencies are exactly the same. Yet somehow the BSD folks think there's some magic to the ports stuff that the Linux folks don't understand. Well, there isn't. And honestly, to the extent there's a delta in packaging sophistication, the Linux folks tend to be ahead (cf. Nix, for example).
> "FreeBSD ships as a complete, coherent OS" is belied by this kind of nonsense. No, it's not. Or, sure, it is, but in exactly the same way that Debian or whatever is.
Ehhh... not exactly. With nothing but the smallest FreeBSD installer image, you can, if you include just one optional package, have a system that is capable of entirely recompiling itself.
You might say "who cares?" and that's fine. But it is "complete" in a sense that no Linux system I know of is. I admit that I don't know what it would take to install from, say, almalinux-10.0-x86_64-minimal.iso, and end up with a system capable of recompiling itself, but I expect it would be a whole lot more work than that. Could be wrong.
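For the curious, the stock source-rebuild loop from the FreeBSD Handbook looks roughly like this; a sketch, assuming the one optional package is git (used only to fetch /usr/src) and run as root on a minimal install:

```shell
# Sketch: rebuilding FreeBSD entirely from itself, per build(7) and the
# Handbook's "Updating FreeBSD from Source" chapter. Assumes a minimal
# install plus the git package; everything else ships in the base system.
pkg install -y git
git clone --depth 1 https://git.freebsd.org/src.git /usr/src
cd /usr/src
make -j"$(sysctl -n hw.ncpu)" buildworld buildkernel
make installkernel
# reboot into the new kernel, then finish with:
#   cd /usr/src && make installworld && etcupdate
```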
The key thing is that on FreeBSD you do not risk bricking your system by installing a port. Even though this guarantee has become less true with PkgBase.
> The key thing is that on freebsd you do not risk bricking your system by installing a port
What specifically are you trying to cite here? Which package can I install on Debian or Fedora or whatever that "bricks the system"? Genuinely curious to know.
I was referring to the need to be careful to not modify/update packages also used by the base system. Since all packages are treated the same on Linux, you often can't tell which package can put you in trouble if you update it along with the dependencies it drags with it.
This kind of problem happens frequently when users add repositories such as Packman on Linux, providing dependency versions different from the ones used by the base system of the distro.
Experienced people know how to avoid these mistakes, but this whole class of problem does not exist on FreeBSD.
> Since all packages are treated the same on Linux
This is no longer the case in "immutable" distros such as Bluefin/Aurora, which use ostree for the "base" distro, while most other user packages are installed with homebrew. Nix and Guix solve it in a very different way. Then there's flatpak and snap.
A lot of poor *BSD advocacy likes to deride Linux for its diversity one moment, then switch to treating it as a monolith when it's convenient. It's a minority of the users for sure, but they naturally make an outsized share of the noise.
I think you missed the point in my original comment. I explained I moved my platform with all dependencies and had 1 bug which was actually a silent bug in Linux.
In other words, it works. Your particular stack might have a different snag profile but if I can move my giant complex app there, yours is worth a shot.
FreeBSD is more complete than you make out. They also have hard working ports maintainers.
Well, sure, but that's a ridiculous double standard. You're making the claim (or implying it, at least) that FreeBSD is fundamentally superior because it's a unified piece of software shipped as a holistic piece of artifice or whatever. And that by inference it's unlike all that kludgey Linux stuff that you can't trust because of politics or whatever.
But your evidence that it's actually superior? "it works". Well, gosh.
You'll tar the competition with all sorts of ambiguous smears, but all you ask from your favorite is... that you got your app to work?
> You’re making the claim that (or implying it) … unified piece of software shipped as a holistic piece
I never said anything about that. Again in my opening comment I listed the reasons I like it. It’s boring and stable which is what I like to center my work on. I even provided a specific technical example of superior memory management.
While there are legitimate/measurable performance and resource issues to discuss regarding Electron, this kind of hyperbole just doesn't help.
I mean, look: the most complicated, stateful and involved UIs most of the people commenting in this thread are going to use (are ever going to use, likely) are web stack apps. I'll name some obvious ones, though there are other candidates. In order of increasing complexity:
1. Gmail
2. VSCode
3. www.amazon.com (this one is just shockingly big if you think about it)
If your client machine can handle those (and obviously all client machines can handle those), it's not going to sweat over a comparatively simple Electron app for talking to an LLM.
Basically: the war is over, folks. HTML won. And with the advent of AI and the sunsetting of complicated single-user apps, it's time to pack up the equipment and move on to the next fight.
I actually avoid using VSCode for a number of reasons, one of which is its performance. My performance issues with VSCode are I think not necessarily all related to the fact that it's an electron app, but probably some of them are.
In any case, what I personally find more problematic than just slowness is electron apps interacting weirdly with my Nvidia linux graphics drivers, in such a way that it causes the app to display nothing or display weird artifacts or crash with hard-to-debug error messages. It's possible that this is actually Nvidia's fault for having shitty drivers, I'm not sure; but in any case I definitely notice it more often with electron apps than native ones.
Anyway one of the things I hope that AI can do is make it easier for people to write apps that use the native graphics stack instead of electron.
VSCode isn't a regular Electron crap application, in fact Microsoft has dozens of out-of-process plugins written in C++, Rust and C# to work around Electron crap issues, also the in-editor terminal makes use of WebGL instead of div and p soup.
Sigh. Beyond the deeply unserious hyperbole, this is a no-true-Scotsman. Yes, you can use native APIs in Electron. They can even help. That's not remotely an argument for not using Electron.
> the in-editor terminal makes use of WebGL
Right, because clearly the Electron-provided browser environment was insufficient and needed to be escaped by using... a browser API instead?
Again, folks, the argument here is from existence. If the browser stack is insufficient for developing UIs in the modern world, then why is it winning so terrifyingly?
Gen X and Boomers strangely enough managed to write portable native code, across multiple hardware architectures, operating systems and language toolchains.
Whereas mastering Web UI delivery, from system services and daemons to the default browser, the way UNIX administration tooling does, is apparently an insurmountable challenge.
> Again, folks, the argument here is from existence. If the browser stack is insufficient for developing UIs in the modern world, then why is it winning so terrifyingly?
If McDonald’s hamburgers taste like warmed-over shit, why are they the most popular in the world?
Using the terminal in vscode will easily bring the UI to a dead stop. iterm is smooth as butter with multiple tabs and 100k+ lines of scrollback buffer.
Try enabling 10k lines of scrollback buffer in vscode and print 20k lines.
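For anyone who wants to try the comparison themselves, a crude version of that stress test is a one-liner (20k lines, as suggested above):

```shell
# Dump 20k numbered lines and time it; run the same command in iTerm and
# in the VS Code integrated terminal (scrollback set to 10k) and compare
# wall-clock time and UI stutter.
time seq 1 20000
```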
I'm not sure what you're responding to. What I'm describing is my actual experience using Claude, and what I'm hoping for is that they'll spend something like two engineers for a quarter making the app more pleasant to use.
Setting that aside, I think you learned the wrong lesson here. There's no fight. Performance comes from app architecture engineering more than the underlying tools. Building on the trash fire that is the current JS ecosystem may make it harder, true, but apps like VS Code, Discord, Slack, etc show that with enough effort a team can use those tools to deliver something with relatively much better performance. The underlying browser engines are quite sophisticated and very efficient for what they are asked to do, it's just a question of good engineering on top of that. Based on the observable behavior I'm guessing the Claude app is doing something like triggering reflow for the entire chat thread every time they append a few characters to the chat. Totally avoidable.
The big reason web tech is ubiquitous is it has the best properties for distribution. That may last a little while or a long time, but there is no fundamental reason why it's more durable than say Win32 and MFC.
> complex UI that isn't a frustratingly slow resource hog
Maybe you can give examples of competing apps of comparable complexity that are clearly better?
Again, I'm just making a point from existence proof. VSCode wiped the floor with competing IDEs. GMail pushed its whole industry to near extinction, and (again, just to call this out explicitly) Amazon has shipped what I genuinely believe to be the single most complicated unified user experience in human history and made it run on literally everything.
People can yell and downvote all they want, but I just don't see it changing anything. Native app development is just dead. There really are only two major exceptions:
1. Gaming. Because the platform vendors (NVIDIA and Microsoft) don't expose the needed hardware APIs in a portable sense, mostly deliberately.
2. iOS. Because the platform vendor expressly and explicitly disallows unapproved web technologies, very deliberately, in a transparent attempt to avoid exactly the extinction I'm citing above.
> Maybe you can give examples of competing apps of comparable complexity that are clearly better?
Thunderbird is a fully-featured mail app and much more performant than Gmail. Neovim has more or less the same feature set as VSCode and its performance is incomparably better.
> Thunderbird is a fully-featured mail app and much more performant than Gmail.
TB is great and I use it every day. An argument for it from a performance standpoint is ridiculous on its face. Put 10G of mail in the Inbox and come back to me with measurements. GMail laughs at mere gigabytes.
Verifiably false. Like, this is just trivial to disprove with the "Reload" button in the browser (about 1.5s for me, FWIW). Why would you even try to make that claim?
That's just bluster. The IEEPA nonsense was already the creative trickery deployed in defense of a novel and prima facie unconstitutional policy. If they had a better argument, they would have made it.
And we know in practice that Trump TACOs out rather than pick real fights with established powers. Markets don't like it when regulatory agencies go rogue vs. the rule of law. They'll just shift gears to something else.
Trump Always Chickens Out (TACO) is a term that gained prominence in May 2025 after many threats and reversals during the trade war U.S. president Donald Trump initiated with his administration's "Liberation Day" tariffs.
The charitable explanation is that he chickens out when confronted with real backlash. The less charitable explanation is that he 'chickens' out after the appropriate bribe has been paid to him.
I think that the tariffs are what he said they were... a starting point for pushing (re)negotiation, and that has largely been successful. This ruling doesn't roll back all those trade deals.
1. That's transparently NOT what the white house said the tariffs were for.
2. There has been NO significant change (via negotiation or not) in non-tariff trade policy under this administration. Essentially all those "announcements" of "deals" were just the acts of rolling back the tariffs themselves. No one caved. We didn't get any advantage.
It's just absolutely amazing to me the degree of epistemological isolation the right has created for itself in the modern US.
Prices drop all the time. But no, they don't drop "automatically" as some kind of rules thing when regulations change. Prices drop when someone has extra inventory and needs to liquidate, or run a sale, or whatever.
Anthropomorphizing markets as evil cartels is 100% just as bad as the efficient market fetishization you see in libertarian circles. Markets are what markets do, and what they do is compete trying to sell you junk.
This falls under the "selling something" angle I mentioned. Yes yes yes, generality and abstraction are tradeoffs, and higher-level platforms lack primitives for things the lower levels can do.
That is, at best, a ridiculous and specious way to interpret the upthread argument (again c.f. "selling something").
The actual point is that all real systems involve tradeoffs, and one of the core ones for a programming language is "what problems are best solved in this language?". That's not the same question as "what problems CAN be solved in this language", and trying to conflate the two tells me (again) that you're selling something. The applicability of C to problem areas it "can" solve has its own tradeoffs, obviously.
> Those deps have to come from somewhere, right? Unless you're actually rolling your own everything
The point is someone needs to curate those "deps". It's not about rolling your own, it's about pulling standard stuff from standard places where you have some hope that smart people have given thought to how to audit, test, package, integrate and maintain the "deps".
NPM and Cargo and PyPI all have this disease (to be fair NPM has it much worse) where it's expected that this is all just the job of some magical Original Author and it's not anyone's business to try to decide for middleware what they want to rely on. And that way lies surprising bugs, version hell, and eventually supply chain attacks.
The curation step is a critical piece of infrastructure: think things like the Linux maintainer hierarchy, C++ Boost, Linux distro package systems, or in its original conception the Apache Foundation (though they've sort of lost the plot in recent years). You can pull from those sources, get lots of great software with attested (!) authorship, and be really quite certain (not 100%, but close) that something in the middle hasn't been sold to Chinese Intelligence.
But the Darwinian soup of Dueling Language Platforms all think they can short circuit that process (because they're in a mad evangelical rush to get more users) and still ship good stuff. They can't.
I mean somebody could make a singular rust dependency that re-packages all of the language team's packages.
But what's the threat model here? Does it matter that the Rust std library doesn't expose, say, "Regex" functionality, forcing you to depend on Regex [1], which is also written by the same people who write the std library [2]? Like, if they wanted to add a backdoor into Regex they could add a backdoor into Vec. Personally I like the idea of having a very small std library so that it's focused (as well as: if they need to do something, then it has to be allowed by the language, unlike say Go Generics or Elm).
Personally I think there's just some willful blindness going on here. You should never have been blindly trusting a giant binary blob from the std library. Instead you should have been vendoring your dependencies, and at that point it doesn't matter if it's 100 crates totaling 100k LOC or a singular std library totaling 100k LOC; it's the same amount to review (if not less, because the crates can only interact along `pub` boundaries).
[1]: https://docs.rs/regex/latest/regex/
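For what it's worth, Cargo ships the vendoring workflow described here out of the box; a minimal sketch (directory names are just the defaults):

```shell
# Sketch: check every line of third-party code into the repo for review.
# `cargo vendor` copies all dependency sources into ./vendor and prints
# the config snippet that redirects builds away from crates.io.
cargo vendor vendor > .cargo/config.toml
git add vendor .cargo/config.toml
git commit -m "Vendor all dependencies"
```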
> I mean somebody could make a singular rust dependency that re-packages all of the language team's packages.
That's not the requirement though! Curation isn't about packaging, it's about independent (!) audit/test/integration/validation paths that provide a backstop to the upstream maintainers going bonkers.
> But what's the threat model here.
A repeat of the xz-utils fiasco, more or less precisely. This was a successful supply chain attack that was stopped because the downstream Debian folks noticed some odd performance numbers and started digging.
There's no Debian equivalent in the soup of Cargo dependencies. That mistake has bitten NPM repeatedly already, and the reckoning is coming for Rust too.
> Wasn't that a suspected state actor? Against that threat model your best course of action is a prayer and some incense.
No? They caught it! But they did so because the software had extensive downstream (!) integration and validation sitting between the users and authors. xz-utils pushed backdoored software, but Fedora and Debian picked it up only in rawhide/testing and found the issue.
> Notably, xz utils didn't use any package manager ala NPM and it relied on package management by hand.
With all respect, this is an awfully obtuse take. The problem isn't the "package manager", it's (and I was explicit about this) the lack of curation.
It's true that xz-utils didn't use NPM. The point is that NPM's lack of curation is, from a security standpoint, isomorphic to not having any packaging regime at all, and equally dangerous.
> a Postgres dev running bleeding edge Debian
Exactly. Not sure how you think this makes the point different. Everything in Debian is volunteer, the fact that people do other stuff is a bonus. Point is the debian community is immunized against malicious software because everyone is working on validation downstream of the authors.
No one does that for NPM. There is no Cargo Rawhide or NPM Testing operated by attested organizations where new software gets quarantined and validated. If the malicious authors of your upstream dependencies want you to run backdoored software, then that's what you're going to run.
No? Who else has 2-3 years' worth of time to become a contributor and maintainer of obscure OSS utils?
Plus made sockpuppets to put pressure on OG maintainer to give Jia Tan maintainer privilege.
> Exactly. Not sure how you think this makes the point different. Everything in Debian is volunteer, the fact that people do other stuff is a bonus.
What do you mean exactly? This isn't curation working as intended. This is some random dev discovering it by chance, while it snuck past the maintainers and curators of both Debian and Red Hat.
> Everything in Debian is volunteer, the fact that people do other stuff is a bonus. Point is the debian community is immunized against malicious software because everyone is working on validation downstream of the authors.
You can do same in NPM and Cargo.
Release a v1.x.y-rc0, give everyone a trial run, see if anyone complains. If they do, it's downstream validation working as intended.
Then yank RC version and publish a non-RC version. No one is preventing anyone from making their release candidate version.
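Both ecosystems do support that flow today via prerelease versions; a hedged sketch with hypothetical version numbers:

```shell
# Sketch: a trial-run release. npm flavor:
npm version 1.2.0-rc.0
npm publish --tag next     # only installable via `npm install pkg@next`
# ...after downstream feedback, promote:
npm version 1.2.0 && npm publish

# Cargo flavor: set version = "1.2.0-rc.0" in Cargo.toml, then:
cargo publish              # semver treats -rc.0 as a prerelease, so a
                           # plain `cargo add pkg` won't resolve to it
```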
> No one does that for NPM. There is no Cargo Rawhide or NPM Testing
Because, it makes no more sense to have Cargo Rawhide than to have XZ utils SID.
Cargo isn't an integration point, it's infra.
Bevy, which integrates many different libs, has a Release Candidate. But a TOML/XYZ library it uses doesn't.
Isn't xz-utils exactly why you would want a lot of dependencies over a singular one?
If say Serde gets compromised, then only the projects depending on that version of Serde are compromised, as opposed to: if Serde were part of the std library, every Rust program would be compromised.
> That mistake has bitten NPM repeatedly already, and the reckoning is coming for Rust too.
Eh, the only thing that's coming is that using software expressly provided without a warranty will (predictably) mean that software causes you problems at an unknown time.
> 1. USAID was never purely a soft power instrument and has extensive integration with the IC, including providing cover for destructive and often illegal programs, i.e. clandestine infra.
That's... pretty much a good definition of soft power, and frankly not even a cynical one. Your argument presupposes a world where "clandestine infra" and whatnot simply wouldn't happen if we didn't do it. But obviously it would, it would just serve someone else's interests.
And fine, you think the cold war US was bad, clearly. And maybe it was, but it was better (for the US, but also for the world as a whole) than the alternatives at the time, and it remains so today. China's international aspirations are significantly more impactful (cf. Taiwan policy, shipping zone violations throughout the Pacific Rim, denial of access to internal markets, straight up literal genocide in at least one instance) and constrained now only by US "soft power".
The world sucks. Whataboutism only makes it worse.
USAID is nowhere near the most effective nor the most important source of soft power for the U.S., just a highly visible one.
Besides security guarantees/defense aegis, the heaviest lifters in U.S. soft power projection are structural and cultural forces that operate largely independent of government:
I'm somewhat ignorant on this subject (by design; my mental health cannot afford too much pondering on that which I cannot control), but in this instance I can't help but wonder, from a game theory standpoint: is there anything GAINED by affecting USAID in a way in which we clearly lose some (relatively small, per your comment) amount of soft power?
That is to say, a perfectly played game would involve not making any sacrifices unless it was to gain some value or reduce some loss. What is gained (or not lost) here?
Domestic 'gain' is fiscal + political + transparency. USAID was pass-through where taxpayer dollars flowed to NGOs and contractors whose missions aligned with whatever administration or congressional bloc was in power – but with enough layers of separation to obscure the nature of the spending.
Foreign 'gain' is a move away from liberal internationalism to transactional bilateralism/resetting expectations wrt American largesse. We were being outbid everywhere anyway, and the org was ineffectively doing something DoS should be doing.
Local producers can't compete with the aid (nor in trade). It's the same scheme China runs in the West. On the receiving end you don't just stop development, you actively shut down what you had and forget how to do it.
Yes, USAID was only one part of US soft power. Everything else you have listed though, the destructionists have done effective jobs of trashing those as well!
In a thread about USAID it makes sense to talk about the damage to USAID. If these other pillars of soft power matter more to you, then try writing productive comments lamenting their destruction rather than downplaying in this discussion.
I feel like currently, all four of those points you raised have also been significantly eroded too, and will continue to be for the following decades - countries seem to be rolling back US tech, contracts, dollars, and less people are going to the US for study.
>> The world sucks. Whataboutism only makes it worse.
> USAID is nowhere near the most effective nor the most important source of soft power for the U.S.
And the goalposts move again. Your original point was that soft power was bad. After pushback, now it's "soft power is good but USAID was inefficient".
I submit that neither of these arguments was presented in good faith and that your real goal is just defense of DOGE.
This is all debatably valid, except for the fact that the entrenched system produced massive fraud, money laundering, wagging-the-dog and worst of all, a decade of domestic propaganda and anti-democratic schemes in an attempt to protect the machine from widespread exposure.
Except all of that was widely recognized and reported on at the time. People just didn't care. Lots of people will argue about this stuff until they're blue in the face, but no one is "surprised" by any of the evidence. The malfeasance was going to happen anyway, it's an inevitable consequence of global realpolitik. There's no Rule of Law on the high seas, as it were.
My point really isn't that cynical, it's more optimistic: if you're going to do all that stuff (and let's be honest and admit upfront: we were 100% going to do all that stuff) you might as well feed a bunch of people and garner some good will along the way.
There's clearly a difference between "what about" as a distraction technique (introducing an unrelated argument to avoid having to defend the original) and pointing out the existence of a clearly related issue. This is "youforgotaboutism", if you must label it.
Basically: analysis of international relations and influence techniques can only be correct when it accounts for the influence of all parties, and not just the US. You agree with that framing, right?