jbreckmckye's comments | Hacker News

something something Goodhart's Law

Something "systems that are attacked by entities that adapt often need to be defended by entities that adapt".

> Labeling people as villains is almost always an unhelpful oversimplification of reality

This is effectively denying the existence of bad actors.

We can introspect into the exact motives behind bad behaviour once the paper is retracted. Until then, there is ongoing harm to public science.


IMHO, you should deal with actual events, if not ideas, instead of people. No two people share the exact same values.

For example, you assume the guy trying to cut the line is a horrible person and a megalomaniac because you've seen this a thousand times. He really may be that, or maybe he's having an extraordinarily stressful day, or maybe he's just not integrated with the values of your society ("cutting the line is bad, no matter what"), or anything else. But none of that really helps you think clearly. You just get angry, and maybe raise your voice when you warn him, because "you know" he won't understand otherwise. So now you've abandoned your values too, because you're busy fighting a stereotype.

IMHO, the correct course of action is to assume good faith even with bad actions, even with persistent bad actions, and to think about the productive things you can do to change the outcome - or to decide that you cannot do anything.

You can perhaps warn the guy, and then if he ignores you, you can even go to security or pick another hill to die on.

I'm not saying that I can do this myself. I fail a lot, especially when driving. It doesn't mean I'm not working on it.


I used to think like this, and it does seem morally sound at first glance, but it has the big underlying problem of creating an excellent context in which to be a selfish asshole.

Turns out that calling someone on their bullshit can be a perfectly productive thing to do: it not only deals with that specific incident, but also promotes a culture in which it's fine to hold each other accountable.


I think they're both good points. An unwillingness to call out bullshit leads to systemic dysfunction, but on the flip side, a culture where everyone just rages at everything simply isn't productive. Pragmatically, it's important to optimize for the desired end result. I think that's generally going to be fixing the system first and foremost.

It's also important to recognize that there are a lot of situations where calling someone out isn't going to have any (useful) effect. In such cases any impulsive behavior that disrupts the environment becomes a net negative.


You cannot call out all the bullshit. You need to call out what's important to you. That defines your values.

It's also important to base your actions on what's at hand, not on teaching a lesson to "those people".


I honestly think this would qualify as "ruinous empathy"

It's fine and even good to assume good faith, extend your understanding, and listen to the reasons someone has done harm - in a context where the problem was already redressed and the wrongdoer is labelled.

This is not that. This is someone publishing a false paper, deceiving multiple rounds of reviewers, manipulating evidence, knowingly and for personal gain. And they still haven't faced any consequences for it.

I don't really know how to bridge the moral gap with this sort of viewpoint, honestly. It's like you're telling me to sympathise with the arsonist whilst he's still running around with gasoline


> I don't really know how to bridge the moral gap with this sort of viewpoint, honestly. It's like you're telling me to sympathise with the arsonist whilst he's still running around with gasoline

That wasn't how I read it. Neither sympathize nor sit around doing nothing. Figure out what you can do that's productive. Yelling at the arsonist while he continues to burn more things down isn't going to be useful.

Assuming good faith tends to be an important thing to start with if the goal is an objective assessment. Of course you should be open to an eventual determination of bad faith. But if you start from an assumption of bad faith your judgment will almost certainly be clouded and thus there is a very real possibility that you will miss useful courses of action.

The above is on an individual level. From an organizational perspective, if participants know that a process could result in a bad faith determination against them, they are much more likely to actively resist the process. So it can be useful to guarantee that this won't happen (at least to some extent) in order to ensure that you can reliably get to the bottom of things. This is what we see in the aviation world, and it seems to work extremely well.


I thought assuming good faith does not mean you have to sympathize. English is not my native language and probably that's not the right concept.

I mean: do not put others into any stereotype. Assume nothing? Maybe that sounds better. Just look at the hand you are dealt and think objectively about what to do.

If there is an arsonist, do you deal with that a-hole yourself, call the police, or first take your loved ones to safety?

Getting mad at the arsonist doesn't help.


When bad behavior has been identified, reported, and repeated - as described in the article - it is no longer eligible for a good faith assumption.

I think they're actually just saying bad actors are inevitable, inconsistent, and hard to identify ahead of time, so it's useless to be a scold when instead you can think of how to build systems that are more resilient to bad acts

You have to do both. Offense and defense are closely related. You can make it hard to engage in bad acts, but if there are no penalties for doing so or trying to do so, then that means there are no penalties for someone just trying over and over until they find a way around the systems.

Academics that refuse to reply to people trying to replicate their work need to be instantly and publicly fired, tenure or no. This isn't going to happen, so the right thing to do is for the vast majority of practitioners to just ignore academia whilst politically campaigning for the zeroing of government research grants. The system is unsalvageable.


Perhaps start by defunding any projects by institutions that insist on protecting fraudsters, especially in the soft sciences. There is a lot of valuable hard science that IS real and has better standards.

But that would defund all of them. Plenty of fraud at 'top' institutions like Harvard, Stanford, Oxford etc...

If funding depended on firing former fraudsters and incompetents, they would find the will to fire them.

I don't think they would. They'd rather stage riots and try to unseat the government than change.

To which my reply would be, we can engage in the analysis after we have taken down the paper.

It's still up! Maybe the answer to building a resilient system lies in why it is still up.


Yes. It's one reason I've lost interest in OSS completely.

I'm doing a project with tree-sitter right now.

Any tips for keeping the grammar sizes under control? I'm distributing a CLI tool that needs to support several languages, and I can see the grammars gradually bloating the binary size.

I could build some clever thing where language packs are opt-in and distributed as WASM, maybe. But that could be complex.
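
For what it's worth, here's a minimal sketch of that opt-in idea, assuming web-tree-sitter (the WASM bindings) and a hypothetical grammars/ directory of per-language .wasm files shipped outside the main binary. Exact API names vary a little between web-tree-sitter versions, so treat this as a shape rather than a recipe:

    // Lazy-load tree-sitter grammars compiled to WASM, so the CLI stays
    // small and users only pay for the languages they actually enable.
    // Assumes: web-tree-sitter, plus a grammars/ directory of .wasm files.
    import Parser from 'web-tree-sitter';

    const cache = new Map<string, Parser.Language>();

    async function languageFor(name: string): Promise<Parser.Language> {
      let lang = cache.get(name);
      if (!lang) {
        // The .wasm is only read and compiled on first use of the language
        lang = await Parser.Language.load(`grammars/tree-sitter-${name}.wasm`);
        cache.set(name, lang);
      }
      return lang;
    }

    export async function parse(name: string, source: string) {
      await Parser.init();                    // one-time WASM runtime setup
      const parser = new Parser();
      parser.setLanguage(await languageFor(name));
      return parser.parse(source);
    }

Since the grammars are plain files, the "language pack" could even be a post-install download rather than part of the distribution at all. If I remember right, the tree-sitter CLI can emit the per-grammar .wasm itself (tree-sitter build --wasm in recent versions).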


I get a lot of unsolicited PRs on my projects that I don't actually want

Turning off PRs would be a good option for several of my repos


Great, we're on it

I actually kind of do the opposite to most developers.

Instead of having it write the code, I try to use it like a pair reviewer, critiquing as I go.

I ask it questions like "is it safe to pass null here", "can this function panic?", etc.

Or I'll ask it for opinions when I second guess my design choices. Sometimes I just want an authoritative answer to tell me my instincts are right.

So it becomes more like an extra smart IDE.

Actually writing code shouldn't be that mechanical. If it is, that may signify a lack of good abstractions. And some mechanical code is actually quite satisfying to write anyway.


I actually started doing this for side projects that I use as a vehicle for learning, not necessarily to solve a problem, and it's been great: 1) I don't feel like I just commissioned someone to do something and half-assed checking it, and 2) I actually learn about new stuff as well, even though sometimes it distracts me from the goal (but that's on me).

Idk what models you are using. I'm pretty sure I tried the best available, and arguing with them about pointers in non-trivial cases seems like a recipe for disaster.

> Sometimes I just want an authoritative answer to tell me my instincts are right.

You realize that LLM answers highly depend on how you frame your question?


They also depend on several categories of chance circumstance.

> You realize that LLM answers highly depend on how you frame your question?

Of course I do. I'm not a moron.

In these cases, I already know the most likely answer to my question. The LLM just helps me reduce my self-doubt.


I think a lot of programming - the more business-focused, application-minded stuff - is a kind of analytical philosophy. You spend a lot of time as a dev working on "domain logic", which requires you to be very exacting in your terminology and to carefully distinguish between ideas.

You can go too far with this and waste time on taxonomy, but for the most part it's a good idea to subject your classes, variables, and components to some kind of philosophical scrutiny.

(My degree was in English Literature though, which doesn't help as much, though I think I'm pretty good at naming variables. £40,000 well spent!)


Always appreciate an English major in the wild. But I think taxonomy is only wasteful if it doesn't map to real distinctions; good naming saves debugging time, like when untangling "what did we mean by 'user' here?"

Wittgenstein said the limits of language are the limits of the world after all


I don't have a very thorough understanding of the international money markets.

If Europe "dumps the dollar" - what does that mean in practice?

This article suggests that Europe could also call in US debt. Presumably the US could grandstand and not pay it. What are the consequences of that?


It is not possible. Europe's financial markets are the size of Europe's military.

Europe needs US financial markets (dollars) to finance its debt, as all European banks together could fit into JP Morgan.

Switching away from SWIFT or Visa/Mastercard is also improbable, as Europe lacks the tech skills to run such complex systems.


Banks #8 and #9 by assets are, combined, bigger than JP Morgan (which is #5 on the list) [https://en.wikipedia.org/wiki/List_of_largest_banks]

SWIFT sits in Belgium; why would anyone in Europe need to switch away from it? Is the US able to handle its (international) financial transactions without access to SWIFT?

The financial market is significantly smaller, sure, but will it stay like that?

Quickly summing up total spending of the European countries on this list, the Europeans seem to spend about half of what the US spends on the military, quite a lot more than I expected. [https://en.wikipedia.org/wiki/List_of_countries_with_highest...]


https://www.bloomberg.com/opinion/articles/2025-09-02/does-e...

As for SWIFT, it is the US executive branch that decides who gets taken off the system.


But that's a political thing, not a technical one.

> SWIFT’s data centers, located in the United States, the Netherlands, and Switzerland, act as the network’s central hubs, processing and routing messages across the network. The centralization at these data centers is critical for swift (no pun intended) and secure data transmission. These data centers are designed with redundancy and failover capabilities, so if one center is disrupted, the others take over, ensuring no interruptions to the SWIFT service.

[https://ahrvo.substack.com/p/how-does-swift-really-work]

To me, this sounds like SWIFT could possibly be split into three parts, without any redundancy: a US and an EU datacenter handling "local" business, with Switzerland possibly able to interact with either?


Because the idea that you can have all aspects of maintaining a complex piece of technology handled by a single cross-skilled team of interchangeable cogs is utopian and unworkable past any reasonable level of scale.

DevOps, shift left, full stack dev: it all reminds me of the Futurama episode where Hermes Conrad successfully reorgs the slave camp he's sent to, so that all physical labour is done by a single Australian man.

Speaking more darkly, there is a kind of - well, perhaps not misanthropy, but certainly a not-so-well-meaning dismissiveness - to the "silo breaking" philosophy that looks at complex fields and says "well, these should all just be lumped together as one thing, the important stuff is simple, I don't know why you're making all these siloes, man" - assuming that ops specialists, sysadmins, programmers, DBAs, frontend devs, mobile devs, data engineers and testers have just invented the breadth and depth and subtleties of their entire fields only as a way of keeping everybody else out.

But modern systems are complex, and they are only getting more so; the further you buy into the shift-left, everyone-is-everything, computer-jobs-are-all-the-same philosophy, the harder it will get to find employees who can straddle the exhausting range of knowledge required.


> the "silo breaking" philosophy that looks at complex fields and says "well these should all just be lumped together as one thing, the important stuff is simple,

I don’t think this is the right take. “Silos” is an ill-defined term, but let’s look at a couple of the negative aspects: “lack of communication” and “lack of shared understanding” (or different models of the world). I’m going to use an example from a different industry, as I think it helps to think about the problem more abstractly.

In the world of biomedical engineering, the types of products you are making require the expertise of two very different groups of people: engineers and doctors. Each group has its own in-group language, and there is an inherent power differential between them. Doctors are more “important” than engineers. But to get anything made, you need the expertise of both.

One way to handle this is to keep the engineers and doctors separate and to communicate primarily via documents. The doctor will attempt to detail exactly how a certain component should work. The engineer will attempt to detail the constraints and request clarifications.

The problem with this approach is that the engineer cannot speak “doctorese”, nor can the doctor speak “engineerese”; the consequence is a model in each person’s head that differs significantly from the other’s. There is no shared model, and the real-world product suffers as a result.

The alternative is to attempt to “break the silos”: force the engineers and doctors to sit with each other, learn each other’s language, and build a shared mental model of what is being created. This creates a far better product, one that is much closer to the “physical reality” it must inhabit.

The same is true across all kinds of business groups. If different groups of people are required to collaborate, in order to do something, those people are well served by learning each other’s languages and building a shared mental model. That’s what breaking silos is about. It is not “everyone is the same”, it’s “breaking down the communication barriers”.


I don't think that's like DevOps, though. A closer analogy would be a business that only hired EngDocs, doctors who had to be accredited engineers as well as vascular surgeons.

I don't think anyone thinks siloes are themselves a good thing, but they might be a necessary consequence of having specialists. Shift-left is mostly designed to reduce conversations between groups by having individuals straddle tasks. It's actually kind of anti-collaboration, or at least pessimistic that collaboration can happen.


Oh, I completely agree! We created “EngDocs”, as you say, and simply made the situation worse. An EngDoc is an obviously ludicrous concept, on its face. But by breaking down the silo in the biomedical example, each engineer becomes a bit knowledgeable about an aspect of medicine and each doctor gains some knowledge about aspects of engineering.

I am arguing that all such people, whether developers or ops or UX designers or product managers, need to engage in this learning as they collaborate. This doesn’t mean that we want “DevPM” as a resultant title, just that siloing these different groups will lead to perverse outcomes.

Dev and ops have been traditionally siloed. DevOps was a silly attempt to address it.


I wasn't sure for a while, but this must be satirical - mustn't it?
