I just want to thank you for taking the time to reply so thoughtfully to someone who is so intent on letting it all go to shit just so they can think themselves enlightened by predicting it.
I have the same response to people who ask me why I don’t leave the country since things are going so bad: fuck that, this is my home. I will always love this country. It is never beyond saving. We have been through worse (the Civil War being the most obvious, but there are plenty of other low points.) We can get through this. We can make it better, we can learn to love our neighbor again, we can learn to trust each other again. We can learn to avoid these tendencies towards hatred. We can’t give up.
I wouldn’t say they’re ignoring it so much as cheering it on, and falling over each other to voice their support for it. It’s liberals getting killed after all, and they’re not Americans like Republicans are Americans.
No, there’s people that love what ICE is doing, people that hate it, people who try and stop it, and the rest of us who look on in horror at the trainwreck and collapse happening in front of us…
But I can’t think of a single group of people who are ignoring it. Other than maybe for a lack of perceived other options and to keep from going insane.
Myself? I’m basically a coward. I have two young children. I don’t want to go protest ICE and get killed by one of these wannabe gestapos. I’m in a real state of fear for my children and the world they’re going to grow up in, but I literally don’t know what else I can do. Maybe I’ll join the campaign of a Democratic candidate this year and help them get elected.
But I’m not ignoring it. I can’t think of anyone who is.
If you're not doing anything about this, you're ignoring it. I don't say this as an insult. I say it as a wakeup call. I'm right there with you, only maybe a few degrees closer to not ignoring it. I have been to protests. I have been tear-gassed and seen people within a few yards of me bloodied up by the authorities, but in comparison to this man that is now dead, I have been ignoring this. For those of us that see that this is wrong, we all need to do more, for your children and mine.
So democracy is falling in the most militarily powerful country on earth, citizens are being executed and rights stripped… and you think it’s not OK to be angry?
You think we just shouldn’t discuss it at all because people are angry?
no, it’s perfectly fine to be angry, I would be very concerned with anyone’s mental health who isn’t angry about this.
> You think we just shouldn’t discuss it at all because people are angry?
Well, if we just want to discuss how angry we are, that’s just called venting. That’s fine, vent. But don’t confuse that with discussion. I don’t find venting about how angry something makes you to be all that compelling most of the time. Sometimes someone distills the issue at hand into something very poetic and poignant, and that can sometimes be cathartic, but other than that it’s just pure emotion being tossed around and it just amplifies hatred.
> Putting our heads in the sand will not help.
Not sure where you’re drawing this conclusion that I’m putting my head in the sand. Or that people posting their outrage on HN are somehow not sticking their head in the sand, as if the dispensing of internet hot takes is somehow “doing your part” (hint: it’s not.)
> That’s fine, vent. But don’t confuse that with discussion.
The idea that discussion should be dispassionate and analytical is just wrong. All that does is hide biases.
Discussion should be honest; often that means being messy and angry.
there are a lot of places to be angry on the internet. in fact, basically every single website other than HN is a place to be angry. HN is deliberately not that, or at least it aims to not be that.
But people on HN are angry in non-political threads all the time, to the point that there are several items in the guidelines about it (the latest being "don't be curmudgeonly".)
And not everyone in every political thread is simply expressing anger. The majority of comments in this very thread are reasonable. The ones that aren't have been flagged, which is proper.
But flagging all political threads for "anger," regardless of the actual anger on display, while being far more lenient towards it elsewhere (no one is flagging every thread where someone expresses rage about javascript or AI or the modern web) seems hypocritical.
If HN were only anger, then HN would suck in general, yes. The quality of discussion on this site generally coincides with how much people are able to separate their emotion from the facts at hand. For threads like these that basically drops to zero.
I haven’t really been able to find any comments here that are all that reasonable, other than the meta-discussion we’re having now (and trust me, I hate meta-discussion like this. Honestly I’m regretting even bringing any of this up at this point. I should have just flagged and moved on, and had a discussion with IRL friends or family about it instead. Or talked to a therapist, I dunno.)
It seems to me like you're the one with a strong negative emotional investment in this thread. I don't know what your bar for "reasonable" is but the entire top thread seems fine to me. It's certainly better than many discussions I've seen of LLMs or other controversial but technical subjects.
In any case, I disagree that this thread, much less all "political" threads, deserve to be flagged by default. This community's specific grudge against politics is weird given how much politics gets excused in "technical" contexts.
I have a strong emotional investment in what’s happening to my country. That’s why I’m getting upset. It has fuck all to do with HN. I’m getting upset about what’s happening with my country and predictably taking it out on other HN commenters, and other HN commenters are upset about what’s happening and are clearly taking it out on me. (I basically painted a big target on my forehead saying “I flagged this post” and people are talking to me as if I’m one of the ICE supporters or something.)
I’m not against politics on HN. I’m against anger-driven discussions on the internet in general. It’s not only bad discussion by this site’s standards, it’s bad for the world. As in, the internet causes us to hate each other more than we otherwise would, and those divisions are (I believe) directly responsible for the shape of the political landscape today. This is not a game… people talk online, they develop hatred for other people they wouldn’t otherwise have, and take that hatred to the voting booth.
And I don’t count myself as better than average here either: I’m just as likely to post flippant one-line hot take responses to someone else’s flippant one-line hot take responses. I’m just as angry as all of you. I’m not trying to ignore anything, I’m not trying to silence anyone. I’m only saying that arguing angrily at strangers on the internet is the opposite of constructive towards actually fixing any real problems, and we would be better off with “normal” journalism where we hear the news from experts and discuss it with people we generally trust.
Yes, you should discuss only what is allowed. If you use the technology to dissent against your rulers, then it should be switched off (until you come back to your senses and submit yourself to the mercy of mullahs).
> it's easy to see the very real authoritarian bend of comfortable professionals who are smart but also in favor of, say, summary execution of people for protesting
There’s that rhetoric I’m talking about! Thanks for giving a perfect example.
For topics like these, the expectation is that everyone comes in here and expresses sufficient levels of outrage. After all, if you’re scrolling through all the posts showing these awful things, you should have built up the requisite level of outrage by now, so if you post anything other than “HN is obviously ok with executions”, you must be one of them and therefore further evidence that these comfortable professionals are complacent and pro-murder.
The nuanced takes are nowhere to be found, because people who might want to come into these discussions with them see the rhetoric being tossed around, think “nope, this is all toxic, no way I’m joining in”, flag it, and move on.
But you can look at that exact situation (people flagging the post) and conclude “yup, the person doing the flagging is okay with executions.” It’s wild.
The sad thing is that there is a nuanced discussion to be had here. In fact it’s critical to this country’s survival that we are able to navigate our way through this. But this discussion, this navigation, needs to happen in small groups, where we can actually engage face-to-face. When we can see each other’s humanity, and know that the other person isn’t a monster, and doesn’t want to see innocent people die any more than you do. Where we can dissect each other’s viewpoints carefully.
None of that is really possible in online forums, because the group think is real, and the rhetoric destroys nuance, destroys compassion, destroys the ability to find common ground. It’s sheer toxicity.
I dunno, I’m still really angry about what I'm seeing. If I had anything to say it’s probably something I’d regret later. I’m talking with my loved ones about it and trying to come to grips with what’s happening to my country. It’s not really time for an internet hot take right now.
Fair enough to be angry, for sure. I am lucky enough to have found a therapist a couple of weeks ago, mostly quit drinking, and stopped FB doom scrolling.
It's hard.
If it helps, there are plenty of folks doing work. Specifically, get trained by whoever your local rapid response network is. That will put you in contact with actual humans in your area who are in similar situations; for me that has been invaluable.
You might consider (though perhaps not agree) that nowhere in my post did I insinuate that "flagging" was the same as taking a position on the subject:
my statement was about why I do, indeed, find utility in these conversations you don't find useful.
That a) says nothing about my take on your understanding/ position on the issues around protest or politics and b) is a request to understand my position and not, like, a statement about the morality of your position.
And further, to me "being okay with the summary execution of people for protesting" seems like a pretty specific sentiment, and one which I have heard echoed here quite a bit. I find it super useful to see demonstrated so frequently that a person with excellent technical chops in a domain may often have massive deficiencies in their reasoning, if for no other reason than that it helps me understand the weakness of my own cognition.
So, perhaps consider that it's you, in projecting a statement I didn't make in a very short and fairly clear post, who is "giving a perfect example" of the level of nuance-free assumptions that do (as you correctly point out) often run rampant, and not just on this site, but in discourse in general.
To push my point a bit further, I am not here to make moral judgements or change people's minds on these political topics; rather, over the decade during which I've been interacting on this site, it has been super informative to trace the nascent fascism that breeds in many of the confluences of technology and capitalism.
That may, to you, sound like hyperbolic rhetoric that is dismissive of other folks' opinions; from my position you're not understanding that this examination of (what is to me) highly disagreeable and almost sociopathic political discourse -is- the process of finding nuanced and useful understandings of our political situation.
The whole thread here started with “why do we continue to ignore this”, to which I replied “who’s ignoring this”, and the answer is “anyone who’s flagging this post.”
The conversation in this particular thread has gone off the rails, in large part because I am very angry about what’s happening, and I tend to get heated in replies. So I apologize for letting my anger get the best of me in this particular instance.
My only point was to say “I flagged this, but not because I’m ignoring it.”
I flagged it because I truly believe to my core that anonymous online discussion about emotional political topics is causing this country’s descent into fascism. Whipping people into a frenzy against one another, causes hatred to amplify past where it would be if it were just about the story itself. The discussions are where people go to out-signal each other, even if nobody’s there to argue the other point. Then if someone does end up saying something like “hmm, looks like the protester was actually carrying a gun” (or something equally not-wrong but clearly not the expected expression of ICE-hatred we all expect), they’re now the target of everyone’s anger. All that brewing hatred is now pointed at that one person, because they’re the closest thing on the site to someone who is actually pro-ICE. Then we have people like you casually saying things about this site being full of tech bros who are just fine with executions… I just feel like we need to tone everything way down. We need to be calm, to be honest. I know it’s hard. I don’t really know what else to say… it’s hard to formulate thoughts clearly in times like these.
I don’t know you but I hope wherever you are you’re safe and have a good idea of what your next steps are going to be. I’m getting increasingly distraught over what’s happening myself. It’s starting to affect my family life and I’m having a lot of trouble coping.
Venting on the internet is a way for a lot of people to come to grips with what they’re seeing. I understand. If this is what helps you cope, I won’t stop you.
Me, it’s especially difficult to see how this hatred is so self-amplifying. I see a president whose primary method of getting to where he is, is to make people hate each other to a maximum degree. I watch liberals like myself fall for it. I see how he intentionally puts armed agents in locations where he knows people will protest, then I see how those protesters are killed in the most predictable way imaginable, because they’re seen as a threat by the people with guns. Then I see the hatred get worse, the protests get larger, with more innocent people joining in, and meanwhile Trump is shipping more armed agents to the same cities.
I wish I had an answer. The answer isn’t “don’t protest”, nor is the answer “let’s all put ourselves in a position to be killed” either. I hope on some level that the images of these people getting executed are seared into enough people’s minds that Trump pays an actual political cost, but then I remember what the BLM protests turned into, and how public support tanked for what should have been an obvious issue. So whatever is going to happen, I’m not sure any of our anger is actually going to help. But I don’t really have an answer. I’m sorry.
My reason for flagging topics like this is that it fits a pattern of “administration does something abhorrent, people get mad, social media amplifies the anger, it turns into real world deaths.” I really don’t like seeing this happen. I don’t like the hatred amplification that the internet is doing to my country. I don’t know what else to say.
I don't hate you. I'm frustrated with your mental gymnastics. Because your entire thread here boils down to this:
> it just fits a pattern of “administration does something abhorrent, people get mad, social media amplifies the anger, it turns into real world deaths”
I need you to pay attention to how you're writing. You are laying the deaths of these people squarely on the left for how they react to abhorrence. And if you're going to blame people on the internet for these deaths, you should provide something concrete.
Please outline what bait you think the left took that caused the death of Renee Nicole Good. Explain what "Hatred" was leading us to this moment. Because her last words were "That's fine dude, I'm not mad at you." We must have a fundamental difference of opinion on cause and effect. Because what I'm seeing is a bunch of untrained goons getting frustrated with people and shooting them in anger, knowing that nobody can stop them, and most people making excuses for it.
The idea that the choice is up to us whether we get brutalized is insulting, and frankly, I've seen it being used to excuse political violence for far too long. The idea that the best thing for us now would be to bury these deaths as 6th page news is frustrating, because as someone actually close to this, I've talked with many victims of ICE, and universally, all of them are more upset with the silence and indifference than they are with ICE themselves.
Please, please, please tone it down. Really. This is the amplification I’m talking about. You say “I don’t hate you” and then spew a bunch of hate at me. I’m having a lot of trouble not getting extremely angry right now at your reply because you’re taking the least charitable possible interpretation of every fucking thing I’m saying. This is so fucking maddening. I’m going to reply but please understand this is fucking killing me.
> You are laying the deaths of these people squarely on the left for how they react to abhorrence
> Please outline what bait you think the left took that caused the death of Renee Nicole Good
> Explain what "Hatred" was leading us to this moment.
> The idea that the choice is up to us whether we get brutalized
> The idea that the best thing for us now would be to bury these deaths as 6th page news
I am going to reply to all of this in a chunk. And I want you to picture in your mind that I’m someone in your life that you respect (friend, neighbor, loved one, doesn’t matter) talking. Because otherwise I’m just a handle on a forum, and faceless, like the car driving in front of you when you have road rage. It’s so easy to forget we are human beings having a conversation here. Please remember that.
The hate amplification is not causing these deaths. I never said that. I’m saying it leads to them, in a sense that it puts people in the position where it’s going to happen. A death can have nearly infinite things that can lead to it, in the sense that if any one of those things didn’t happen, things could have gone differently.
This is a very important distinction to make. It’s the difference between giving your daughter advice for “don’t dress like that when you go out, there are a lot of dangerous people out there” and saying “it was her fault she was raped”. The former is practical advice you might give to someone you care about, and the latter is the abhorrent victim blaming you’re casually accusing me of doing.
The hatred leads to deaths because there are armed men who are itching to kill liberals, and the hatred makes people think “I ought to go put my car in the way of them.” (I’m not referring to how Renee moved her car in obvious compliance with the officer telling her to, I’m talking about the fact that she had her car there in the first place.) Renee herself probably didn’t have any hatred in her heart at that moment, but I can nearly guarantee she had a lot of conversations just like the ones others are having here, where they look at what’s happening and, in an echo chamber, escalate the rhetoric they’re using. It may go from “ICE is wrong” to “ICE is kidnapping people” to “ICE is the gestapo” to “We need to stop them” to “let’s stop them” to “ok I’ll bring a gun just in case” in the blink of an eye (the gun part being something that’s more relevant to yesterday’s story than the Renee one.)
No, it’s not the left’s fault they’re getting brutalized. It’s the brutalizer’s fault. It’s not fault I’m referring to; it’s “if things had happened differently, this death might have been avoided.” One of those things that could have happened differently is for people to choose a different avenue than directly interacting with the armed men.
No, I don’t have a better alternative than that in mind at the moment, I’m sorry. I know people have to do something, but there has to be something that is less likely to result in deaths than directly interfering with the armed men themselves (yes, filming isn’t interfering, but I dunno, get a zoom lens and maybe do it from farther away. Because if you’re standing right in front of them it very much does seem like interfering. Or in Renee’s case, maybe don’t intentionally park your car in front of the ice agents.)
I had the same thing in the house I bought, it was a nice surprise… there were 6 different phone jacks around the house in great locations for Ethernet (WiFi access points or just for a computer), and they all led down to the furnace room where they attached to a punch-down panel (basically they were all spliced into each other.)
To my surprise they were all cat5 cables. With the house being built in 2003 this was surprisingly forward-looking.
I capped all the cables that were on the punchdown panel and put a switch in there instead, and replaced all the wall jacks with RJ45, and bam, working gigabit around the house, including PoE for my WiFi access points. Still haven’t had to punch any holes in the walls.
Same; this was the nicest unexpected surprise about buying this place.
Condo built in 2006 with cat5. Two bedrooms + living room all wired with rj11 phone jacks. Just snipped those off, wired up rj45, and attached the other ends in my utility closet to a patch panel with rj45 as well.
I don't know if it's just cat5 or 5e, but it saturates a 2.5Gbe link and in-wall cable length is about 15-25 meters.
And you're lucky with that build time, if it was more recent it'd probably be CCA or even CCS. When we redid our place a few years ago I went and bought a drum of plenum cable and told the electricians to use that, so I know what went in there. Overprovisioned slightly but who cares, I had a whole drum of cable and a 48-port switch so may as well use it all.
The only problem with this is that for some god-awful reason, anything built before the 2010s (?) placed electrical and phone sockets at hip level instead of ankle level. So you're staring at ugly sockets all day.
So sadly you still have to punch holes.
Then again, it isn't that much of a bother if all you have to do is punch a lower hole, relocate the socket and then plaster both holes up and repaint. Especially if you make it a weekend job to do the whole house at once. Or rather, the way I look at it is that it's a weekend job that will improve how the house feels for decades. Doing blind wiring (gutters) for all the ceiling lights falls in the same category.
I think electrical/phone sockets were placed at that level because many telephones were designed to hang on the wall (docking onto and covering up the faceplate) for easy access. My childhood home had one that we used this way before we got a landline.
I was resigned to running cat6e up three floors because there was only coax and I needed a wifi AP up there. Came across the moca solution and it's great. I get flawless 2.5gbe from the basement switch to the third floor over coax. It's basically a little device that connects at each end of the coax and cat6 goes in and out.
Cat 6 would be better though so I could run POE from the basement switch to power the wifi AP, and instead I need to go do a much more complicated switch (cat6) -> moca adapter + power brick to power moca adapter -> coax -> moca adapter + power brick (cat6) -> POE injector (with power brick) -> wifi AP. So I'm adding at least three power bricks to the setup, which is annoying. Otherwise it would be one cat6 drawing POE from the switch and powering the AP.
You can run power over coax! You can buy power-injecting splitters that were used to power old analog cameras. They basically just connect the cable to the 12V, sometimes directly but usually through some current-limiting safety switch.
MoCA devices have a 100 Ohm internal resistor at the end to limit the cable echoes, so they are not affected by the DC on the cable.
It's worth remembering that UK coax is typically lower quality than that used in the US where these are designed to be used, due to UK coax only needing to transmit terrestrial TV compared to cable in the US.
+1 on MOCA 2 being excellent to solve gaps in wiring. We bought a 6000 sqft 2001 house built with in-wall RJ11, lots of coax runs and some Cat5e runs (but not enough). Due to the size of the house, the electrical, HVAC and cabling are roughly divided into two halves with separate electrical panels, HVAC pads, etc.
Unfortunately, all the RJ11 and alarm wiring runs to a closet in one half while all the coax and Cat5e run to a closet in the other half - with no RJ11 endpoints near the Cat5e/Coax closet and no Cat5e/Coax endpoints near the RJ11 closet (sigh). I tried Powerline data and it only works well in adjacent rooms and not at all between the halves due to separate electrical panels. Fortunately, there were a lot of coax runs set up for two separate nets (18-inch satellite and a huge attic antenna for OTA broadcast). So, by repurposing the now-unneeded antenna coax, MOCA 2.5 gbps mostly saved the day by filling in where the Cat5e should have gone but didn't.
My mother moved into a retirement village a while back and I was pleasantly surprised to find Ethernet jacks in every room, in some cases more than one. There was no patch panel or anything which was a bit odd, maybe hidden in a service cupboard, but initially I just needed to get a connection from the router to the bedroom and established that these two jacks there were connected. Hooked it up, nothing worked no matter what I did.
On the next visit, with diagnostic gear to look at the wiring map, I found out that the Ethernet jacks were wired up for phone lines. Some genius had decided to run Ethernet to every room, with RJ45 wall sockets, but wired it up for phone lines, so it was simultaneously unusable for either phones or networking.
My place had previous owners who had the foresight to thread the wire through PVC tube behind the wall. This means that when I wanted to add extra access points, it was easy to thread another cat5 through and pull it to where I wanted.
> It’s also common to find long-running idle queries in PostgreSQL. Configuring timeouts like idle_in_transaction_session_timeout is essential to prevent them from blocking autovacuum.
Idle transactions have been a huge footgun at $DAYJOB… our code base is full of “connect, start a transaction, do work, if successful, commit.” It means you’re consuming a connection slot for all work, even while you’re not using the database, and not releasing it until you’re done. We had to bump the Postgres connection limits by an order of magnitude, multiple times, and before you know it Postgres takes up more RAM than anything else just to support the number of connections we need.
The problem permeated enough of our (rust) codebase that I had to come up with a compile time check that makes sure you’re not awaiting any async functions while a Postgres connection is in your scope. Using the .await keyword on an async function call, but not passing the pg connection to that function, ends up being a nearly perfect proxy for “doing unrelated work while not releasing a connection”. It worked extremely well, the compiler now just straight up tells us where we’re doing it wrong (in 100+ places in fact.)
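To make the shape of the problem concrete, here's a minimal sketch of what the lint flags (assuming sqlx, which is what we use; `call_external_service` and the `jobs` table are made-up stand-ins, not our actual code):

    use sqlx::PgPool;

    // Hypothetical stand-in for the slow external call in the real codebase.
    async fn call_external_service() -> anyhow::Result<i64> { Ok(42) }

    async fn handle_request(pool: &PgPool) -> anyhow::Result<()> {
        let mut tx = pool.begin().await?; // connection checked out here

        // Flagged: we await something that never receives the connection,
        // so the slot sits idle-in-transaction for the whole external call.
        let job_id = call_external_service().await?;

        sqlx::query("UPDATE jobs SET finished_at = now() WHERE id = $1")
            .bind(job_id)
            .execute(&mut *tx)
            .await?;

        tx.commit().await?; // connection released; awaits after this are fine
        Ok(())
    }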
Actually getting away from that pattern has been the hard part, but we’re almost rid of every place we’re doing it, and I can now run with a 32-connection pool in load testing instead of a 10,000 connection pool and there’s no real slowdowns. (Not that we’d go that low in production but it’s nice to know we can!)
Just decreasing the timeout for idle transactions would have probably been the backup option, but some of the code that holds long transactions is very rarely hit, and it would have taken a lot of testing to eliminate all of it if we didn’t have the static check.
Why don’t you change the order to “do work, if successful, grab a connection from the Postgres connection pool, start a transaction, commit, release the connection to the connection pool”?
That’s what we should do, yes. The problem is that we were just sorta careless with interleaving database calls in with the “work” we were doing. So that function that calls that slow external service, also takes a &PgConnection as an argument, because it wants to bump a timestamp in a table somewhere after the call is complete. Which means you need to already have a connection open to even call that function, etc etc.
If the codebase is large, and full of that kind of pattern (interleaving db writes with other work), the compiler plugin is nice for (a) giving you a TODO list of all the places you’re doing it wrong, and (b) preventing any new code from doing this while you’re fixing all the existing cases.
One idea was to bulk-replace everything so that we pass a reference to the pool itself around, instead of a checked-out connection/transaction, and then we would only use a connection for each query on-demand, but that’s dangerous… some of these functions are doing writes, and you may be relying on transaction rollback behavior if something fails. So if you were doing 3 pieces of “work” with a single db transaction before, and the third one failed, the transaction was getting rolled back for all 3. But if you split that into 3 different short-lived connections, now only the last of the 3 db operations is rolled back. So you can’t just find/replace, you need to go through and consider how to re-order the code so that the database calls happen “logically last”, but are still grouped together into a single transaction as before, to avoid subtle consistency bugs.
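Roughly, the direction we're moving things looks like this (a sketch reusing the made-up names from above): do the slow, non-database work first, then open one short transaction for all of the writes so they still roll back together:

    async fn handle_request(pool: &sqlx::PgPool) -> anyhow::Result<()> {
        // Slow external work happens with no connection in scope.
        let job_id = call_external_service().await?;
        let summary = format!("job {job_id} done"); // placeholder for CPU-only work

        // Only now check out a connection, and keep the writes grouped in one
        // transaction so a failure still rolls all of them back, as before.
        let mut tx = pool.begin().await?;
        sqlx::query("UPDATE jobs SET finished_at = now() WHERE id = $1")
            .bind(job_id)
            .execute(&mut *tx)
            .await?;
        sqlx::query("INSERT INTO job_summaries (job_id, summary) VALUES ($1, $2)")
            .bind(job_id)
            .bind(summary.as_str())
            .execute(&mut *tx)
            .await?;
        tx.commit().await?;
        Ok(())
    }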
We have a similar check in our Haskell codebase, after running into two issues:
1. Nested database transactions could exhaust the transaction pool and deadlock
2. Same as you described with doing eg HTTP during transactions
We now have a compile time guarantee that no IO can be done outside of whitelisted things, like logging or getting the current time. It’s worked great! Definitely a good amount of work though.
I figured it’d be Haskell that is able to do this sort of thing really well. :-D
I had this realization while writing the rustc plugin that this is basically another shade of “function coloring”, but done intentionally. Now I wish I could have a language that lets me intentionally “color” my functions such that certain functions can only be called from certain blessed contexts… not unlike how async functions can only be awaited by other async functions, but for arbitrary domain-specific abstractions, in particular database connections in this case. I want to make it so HTTP calls are “purple”, and any function that gets a database connection is “pink”, and make it so purple can call pink but not vice-versa.
The rule I ended up with in the lint, is basically “if you have a connection in scope, you can only .await a function if you’re passing said connection to that function” (either by reference or by moving it.) It works with rust’s knowledge of lifetimes and drop semantics, so that if you call txn.commit() (which moves the connection out of scope, marking the storage as dead) you’re now free to do unrelated async calls after that line of code. It’s not perfect though… if you wrap the connection in a struct and hold that in your scope, the lint can’t see that you’re holding a connection. Luckily we’re not really doing that anywhere: connections are always passed around explicitly. But even if we did, you can also configure the lint with a list of “connection types” that will trigger the lint.
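In code terms, the rule ends up looking roughly like this (again a sketch with invented helpers, not our actual code):

    // OK to await while a connection is live: this function receives the
    // connection (here via the transaction, which derefs to &mut PgConnection).
    async fn bump_timestamp(conn: &mut sqlx::PgConnection, id: i64) -> anyhow::Result<()> {
        sqlx::query("UPDATE jobs SET touched_at = now() WHERE id = $1")
            .bind(id)
            .execute(conn)
            .await?;
        Ok(())
    }

    // Hypothetical unrelated async work (an HTTP call in the real codebase).
    async fn notify_webhook(_id: i64) -> anyhow::Result<()> { Ok(()) }

    async fn finish_job(pool: &sqlx::PgPool, id: i64) -> anyhow::Result<()> {
        let mut tx = pool.begin().await?;
        bump_timestamp(&mut *tx, id).await?; // allowed: the connection goes along
        notify_webhook(id).await?;           // flagged: tx is still live in this scope
        tx.commit().await?;                  // tx moved out of scope here...
        notify_webhook(id).await?;           // ...so this await is allowed
        Ok(())
    }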
It sounds super cool, your idea and implementation for await and transactions. Because of my limited Rust knowledge, it's hard for me to understand how difficult it was to implement such a plugin.
Also, your idea of using different domain specific colors is interesting. It might be possible to express this via some kind of effect system. I'm not aware of any popular Rust libraries for that, but it could be worth borrowing some ideas from Scala libraries.
It’s a compile-time check, and yeah it’s a lint rule. In fact it goes a little deeper than a lint can go, because it uses data from earlier compiler phases (in order to get access to what the borrow checker knows.) The correct terminology is a “rustc driver” from what I’ve heard. Lints like clippy run as a “LateLintPass”, which doesn’t have access to certain mir data that is intentionally deleted in earlier phases to lower the memory requirements.
Hopefully it’s something I can open source soon (I may upstream it to the sqlx project, as that is what we’re using for db connections.)
I would love to see how you implemented that (and also the lint itself). I so far haven't found a solid way to implement custom lints for Rust, so if you have any resources to share at some point, I would love to see them!
It feels like Apple lacks the institutional vocabulary to even think about fixing old bugs. The way the releases are structured, there’s a “zero bugs” day where all bugs are ceremonially kicked out of the current release, and the level of quality is deemed to be “what we’re shipping with”. On that day, it’s not like the bugs are fixed, they’re just bulk-modified to target “future os release” and that’s that.
Then the planning is made for next year’s release and they plan for X features, which require Y time and Z engineers, and some mild hand-waving later a schedule is made, and gee would you look at that, there’s no time anywhere for fixing existing bugs. But that’s ok because big rewrite of subsystem is gonna ship next release and it’ll probably make all the bugs invalid, right? Right? Well, it certainly won’t have more bugs, right? Right? Oops…
> But that’s ok because big rewrite of subsystem is gonna ship next release and it’ll probably make all the bugs invalid, right? Right? Well, it certainly won’t have more bugs, right? Right?
They keep doing them, but I wonder to what degree these rewrites are necessary, and whether your average Apple engineer is aware that they end up with more bugs and vulnerabilities than they started with. Surely they've gotta know?
People look at the backlog of issues, see all the things they don't like about $subsystem, and think to themselves "we ought to rewrite this". The incentives are all aligned to make this common. Project managers get a nice chunk of work to manage, engineers get to write things their way, managers get a nice thing to add to their accomplishments, everyone feels like progress is happening. Heck, sometimes there may actually be real deficiencies in the existing code that are being addressed! And in the end, the bug count is lower! (Never mind that it's only lower because it hasn't had the time in production to actually find the bugs yet...)
Large-scale software is hard. So hard nobody's really managed to do it well. By large-scale I don't mean "a lot of users" or "a large deployment"... I mean "a lot of engineers". Once the number of engineers gets large enough, they start making decisions that make the product worse, more bloated, more buggy, and no human is capable of keeping it in check, because the sheer amount of activity in the code is so large you can't possibly keep up with it. And the worst part is that orgs try to solve this by... hiring more engineers to wrangle the complexity. By this point you're already sunk, there's no going back.
Why not have a "rewrite policy", criteria for when a rewrite makes sense or doesn't? Surely a random engineer can't decide to rewrite things on his own.
There’s too many people with incentives to rewrite. It keeps all the gears turning, keeps everyone employed. You certainly need to justify any rewrites, but… people are really good at justifying rewrites.
This seems like it would work if you build a system on solid bedrock, but how often does that really happen? CarPlay, for example, started as a disaster. Unsurprisingly, it has changed a lot but remains one.
That wouldn’t be cynical at all! It would mean that the system works, albeit slowly.
The best we can hope for is a world where Amazon faces real financial pressure to prevent counterfeits. Thus far I haven’t seen much evidence this was happening, but this is a welcome sign.
Yeah we’re not supposed to talk about how “HN is turning into Reddit”, but it already has. For years now. A typical comment here has been indistinguishable from a typical Reddit comment for many years, with the exception that humor is a lot less common here (although even that has changed a ton.)
The argument is that “HN is turning into Reddit” has been said since the beginning of HN… but that doesn’t make it wrong. To me the transformation is already complete. Regression to the mean is unavoidable.
It's the inevitable result when you allow politics to enter a forum. It used to be that posts which were overtly political were considered off-topic, but they have become more normalized. Hell, ignoring this post, the top story right now is "American importers and consumers bear the cost of 2025 tariffs: analysis", which is just another political post masquerading as on-topic. I wish we would just ban politics or maybe find a middle-ground like allowing overtly political posts one day a week. It's probably too late to save HN though, the community has already normalized these posts.
> When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and then they look at the problem of people sending each other spreadsheets, and they realize that there’s a general pattern: sending files. That’s one level of abstraction already. Then they go up one more level: people send files, but web browsers also “send” requests for web pages. And when you think about it, calling a method on an object is like sending a message to an object! It’s the same thing again! Those are all sending operations, so our clever thinker invents a new, higher, broader abstraction called messaging, but now it’s getting really vague and nobody really knows what they’re talking about any more.
Hi, so I generally actually agree with you and your criticisms of this blog post (in your thread with the author). I think there's something pretty true in the blog post you shared from Joel (true in that it applies to more than just the software world) and looked at some of his more recent posts.
This one in particular reads similarly to what this comment section is about; it looks like Joel is basically becoming an architecture astronaut himself? Not sure if that's actually an accurate understanding of what his "block protocol" is, but I'm curious to hear from you what you think of that? In the 25 years since that post, has he basically become the thing he once criticized, and is that the result of just becoming a more and more senior thinker within the industry?
Author here! I grew up reading Joel's blog and am familiar with this post. Do you have a more pointed criticism?
I agree something like "hyperlinked JSON" maybe sounds too abstract, but so does "hyperlinked HTML". But I doubt you see web as being vague? This is basically web for data.
After taking the time to re-read the article since I initially posted my (admittedly shallow) dismissal, I realized this article is really a primer/explainer for the AT protocol, which I don't really have enough background in to criticize.
My criticism is more about the usefulness of saying "what if we treated social networking as a filesystem", which is that this doesn't actually solve any problems or add any value. The idea of modeling a useful thing (social media)[0] as a filesystem is generalizing the not-useful parts of it (i.e. the minutiae of how you actually read/write to it) and not actually addressing any of the interesting or difficult parts of it (how you come up with relevant things to look at, whether a "feed" should be a list of people you follow or suggestions from an algorithm, how you deal with bad actors, sock puppets, the list goes on forever.)
This is relevant to Joel's blog because of the point he makes about Napster: It was never about the "peer to peer" or "sharing", that was the least interesting part. The useful thing about Napster was that you could type in a song and download it. It would have been popular if it wasn't peer to peer, so long as you could still get any music you wanted for free.
Modeling social media as a filesystem, or constructing a data model about how to link things together, and hypergeneralizing all the way to "here's how to model any graph of data on the filesystem!" is basically a "huh, that's neat" little tech demo but doesn't actually solve anything. Yes, you can take any graph-like structured data and treat it as files and folders. I can write a FUSE filesystem to browse HN. I can spend the 20 minutes noodling on how the schema should work, what a "symlink" should represent, etc... but at the end of the day, you've just taken data and changed how it's presented.
There's no reason for the filesystem to be the "blessed" metaphor here. Why not a SQL database? You can `SELECT * FROM posts WHERE like_count > 100`, how neat! Or how about a git repo? You can represent posts as commits, and each person's timeline as a branch, and ooh then you could cherry-pick to retweet!
These kind of exercises basically just turn into nerd-sniping: You think of a clever "what if we treated X as Y" abstraction, then before you really stop to think "what problem does that actually solve", you get sucked into thinking about various implementation details and how to model things.
The AT protocol may be well-designed, it may not be, but my point is more that it's not protocols that we're lacking. It's a lack of trust, lack of protection from bad actors, financial incentives that actively harm the experience for users, and the negative effects of what social media does to people. Nobody's really solved any of this: Not ActivityPub, not Mastodon, not BlueSky, not anyone. Creating a protocol that generalizes all of social media so that you can now treat it all homogeneously is "neat", but it doesn't solve anything that you couldn't solve via a simple (for example) web browser extension that aggregated the data in the same way for you. Or bespoke data transformations between social media sites to allow for federation/replication. You can just write some code to read from site A and represent it in site B (assuming sites A and B are willing.) Creating a protocol for this? Meh, it's not a terrible idea but it's also not interesting.
- [0] You could argue whether social media is "useful", let's just stipulate that it is.
I think there was a bit of a communication failure between us. You took the article as a random "what if X was Y" exploration. However, what I tried to communicate was something more like:
1. File-first paradigm has some valuable properties. One property is apps can't lock data out of each other. So the user can always change which apps they use.
2. Web social app paradigm doesn't have these properties. And we observe the corresponding problems: we're collectively stuck with specific apps. This is because our data lives inside those apps rather than saved somewhere under our control.
3. The question: Is there a way to add properties of the file-first paradigms (data lives outside apps) to web social apps? And if it is indeed possible, does this actually solve the problems we currently have?
The rest of the article explores this (with AT protocol being a candidate solution that attempts to square exactly this problem). I'm claiming that:
1. Yes, it is possible to add file-first paradigm properties to web social apps
2. That is what AT protocol does (by externalizing data and adding mechanisms for aggregation from user-controlled source of truth)
3. Yes, this does solve the original stated problems — we can see in demos from the last section that data doesn't get trapped in apps, and that developers can interoperate with zero coordination. And that it's already happening, it's not some theoretical thing.
I don't understand your proposed alternative with web extension but I suspect you're thinking about solving some other problems than I'm describing.
Overall I agree that I sacrificed some "but why" in this article to focus on "here's how". For a more "but why" article about the same thing, you might be curious to look at https://overreacted.io/open-social/.
The problems with social media are not at all the fact that things are “locked up in apps”.
Again, you missed my point. Data sharing is the least interesting thing imaginable, has already been solved countless times, and is not the reason social media sinks or swims.
Social media sinks or swims based on one thing and one thing only: is it enjoyable to use. Are all the people on here assholes or do they have something interesting to say? Can I post something without being overrun by trolls? How good are the moderation standards? How do I know if the people posting aren’t just AI bots? What are the community standards? In short: what kind of interactions can I expect to have on the platform?
The astronaut types look at the abysmal landscape social media has become, and think “you know what the fundamental problem is? That all this is locked up in apps! Let’s make a protocol, that’ll fix it!”
Never mind that the profit seeking platforms have zero interest in opening up their API to competing sites. Never mind that any of the sites that are interested in openness/federating all universally have no answer to the problem of how you address content moderation, or at least nothing that’s any different from what we’ve seen before.
The problem in social media is not that things are locked up behind an app. There are apps/readers that combine multiple platforms for me (I remember apps that consolidated Facebook and twitter fully eighteen years ago. It’s not hard.)
The problem with social media is that it’s a wasteland full of bots and assholes.
A HN poster said it best 8 years ago about twitter, and I think it applies to all of social media: it’s a planetary scale hate machine: https://news.ycombinator.com/item?id=16501147
I actually agree with you on a lot of these things, I just think that they do relate to the technological shape.
To give you an example, Blacksky is setting up their alternative server, effectively forking the product, which gives them the ability to make different moderation decisions (they've restored the account of a user that is banned from Bluesky: https://bsky.app/profile/rude1.blacksky.team/post/3mcozwdhjo...).
However, unlike Mastodon and such, anyone on the Blacksky server will continue living in the same "world" as the Bluesky users, it's effectively just a different "filter" on the global data.
Before AT, it was not possible to do that.
You couldn't "fork" moderation of an existing product. If you wanted different rules, you had to create an entire social network from scratch. All past data used to stay within the original product.
AT enables anyone motivated to spin up a whole new product that works with existing data, and to make different decisions on the product level about all of the things you mentioned people care about. How algorithms run, how moderation runs, what the standards are, what the platform affordances are.
What AT creates here is competition because normally you can't compete until you convince everyone to move. Whereas with AT, everybody is always "already there" so you can create or pick the best-run prism over the global data.
Does this make more sense? It's all in service of the things you're talking about. We just need to make it possible to try different things without always starting from scratch.
> It has been three years and these tools can do a considerable portion of my day to day work.
Agreed.
> Unfortunately I think that many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase or soon to be.
Eh… some people maybe. But history shows nearly every time a tool makes people more efficient, we get more jobs, not less. Jevons paradox and all that: https://en.wikipedia.org/wiki/Jevons_paradox
Of course cars wouldn't lead to more horses, because horses were the thing being replaced. But cars sure as hell lead to a lot more drivers, which is more akin to the analogy.
To take a software engineer for an example, Jevons paradox would say that since software engineering is now so much easier due to LLM's, the demand will increase due to the reduced cost, which will lead to more software needing to be created, which paradoxically leads to more software engineers. There's no equivalent of the "horse" in the analogy, because the same people who were coding before ("driving" the horse) will be aided by LLM's in the future ("driving" a car.)
> But history shows nearly every time a tool makes people more efficient, we get more jobs, not less.
I hope so, but do you have any ideas what they could be? This time feels different, especially because all the ultra-pro-AI people keep saying that "this time it's different" from a technological revolution. This is aiming to replace people across many industries whereas historically it has been in smaller increments as new inventions are (more slowly) rolled out.
I’ve always found it crazy that my LLM has access to such terrible tools compared to mine.
It’s left with grepping for function signatures, sending diffs for patching, and running `cat` to read all the code at once.
I however, run an IDE and can run a simple refactoring tool to add a parameter to a function, I can “follow symbol” to see where something is defined, I can click and get all usages of a function shown at a glance, etc etc.
Is anyone working on making it so LLM’s get better tools for actually writing/refactoring code? Or is there some “bitter lesson”-like thing that says effort is always better spent just increasing the context size and slurping up all the code at once?
> Claude Code officially added native support for the Language Server Protocol (LSP) in version 2.0.74, released in December 2025.
I think from training it's still biased towards simple tooling.
But also, there is real power to simple tools, a small set of general purpose tools beats a bunch of narrow specific use case tools. It's easier for humans to use high level tools, but for LLM's they can instantly compose the low level tools for their use case and learn to generalize, it's like writing insane perl one liners is second nature for them compared to us.
If you watch the tool calls you'll see they write a ton of one-off small python programs to test, validate, explore, etc...
If you think about it any time you use a tool there is probably a 20 line python program that is more fit to your use case, it's just that it would take you too long to write it, but for an LLM that's 0.5 seconds
> but for LLM's they can instantly compose the low level tools for their use case and learn to generalize
Hard disagree; this wastes enormous amounts of tokens, and massively pollutes the context window. In addition to being a waste of resources (compute, money, time), this also significantly decreases their output quality. Manually combining painfully rudimentary tools to achieve simple, obvious things -- over and over and over -- is *not* an effective use of a human mind or an expensive LLM.
Just like humans, LLMs benefit from automating the things they need to do repeatedly so that they can reserve their computational capacity for much more interesting problems.
I've written[1] custom MCP servers to provide narrowly focused API search and code indexing, build system wrappers that filter all spurious noise and present only the material warnings and errors, "edit file" hooks that speculatively trigger builds before the LLM even has to ask for it, and a litany of other similar tools.
Due to LLM's annoying tendency to fall back on inefficient shell scripting, I also had to write a full bash syntax parser and shell script rewriting ruleset engine to allow me to silently and trivially rewrite their shell invocations to more optimal forms that use the other tools I've written, so that they don't have to do expensive, wasteful things like pipe build output through `head`/`tail`/`grep`/etc, which results in them invariably missing important information, and either wandering off into the weeds, or -- if they notice -- consuming a huge number of turns (and time) re-running the commands to get what they need.
Instead, they call build systems directly with arbitrary options, | filters, etc, and magically the command gets rewritten to something that will produce the ideal output they actually need, without eating more context and unnecessary turns.
LLMs benefit from an IDE just like humans do -- even if an "IDE" for them looks very different. The difference is night and day. They produce vastly better code, faster.
[1] And by "I've written", I mean I had an LLM do it.
Note that the Claude code LSP integration was actually broken for a while after it was released, so make sure you have a very recent version if you want to try it out.
However as parent comment said, it seems to always grep instead, unless explicitly said to use the LSP tool.
Correct. If you try to create a coding agent using the raw Codex or Claude code API and you build your own “write tool”, and don’t give the model their “native patch tool”, 70%+ of the time its write/patch fails because it tries to do the operation using the write/patch tool it was trained on.
> I however, run an IDE and can run a simple refactoring tool to add a parameter to a function, I can “follow symbol” to see where something is defined, I can click and get all usages of a function shown at a glance, etc etc
I am so surprised that all of the AI tooling mostly revolves around VSC or its forks and that JetBrains seem to not really have done anything revolutionary in the space.
With how good their refactoring and code inspection tools are, you’d really think they’d pass that context information to AI models and that they’d be leaps and bounds ahead.
Recently, all these agents can talk LSP (language server protocol) so it should get better soon. That said, yeah they don't seem to default to use `ripgrep` when that is clearly better than `grep`
Are you? I'm not surprised at all, considering that the biggest investment juggernaut in AI is also the author of VSC. I wonder what the connection is? ;)
I haven't seen JetBrains as 'great'. I think they have a strong marketing team that gets into universities and potentially astroturfs on the internet, but I have always found better tools for every language. Although, I can't remember what I ended up choosing for PHP.
LLMs aren't like you or me. They can comprehend large quantities of code quickly and piece things together easily from scattered fragments, so "go to reference" etc. become much less important. Of course things change as the number of usages of a symbol becomes large, but in most cases the LLM can just make perfect sense of things via grep.
To provide it access to refactoring as a tool also risks confusing it via too many tools.
It's the same reason that waffling for a few minutes via speech to text with tangents and corrections and chaos is just about as good as a carefully written prompt for coding agents.
Faster for worse results, though. Determining the source of a symbol is not as trivial as finding the same piece of text somewhere else; it should also reliably be able to differentiate among them. What better source for that than the compiler itself?
Yeah, especially for languages that make heavy use of type inference. There’s nothing you can really grep for most of the time… to really know “who’s using this code” you need to know what the compiler knows.
An LLM can likely approach compiler-level knowledge just from being smart and understanding what it’s reading, but it costs a lot of context to do this. Giving the LLM access to what the compiler knows as an API seems like it’s a huge area for improvement.
It depends on the language and codebase. For something very dynamic like Python it may be the case that grepping finds real references to a symbol that won’t be found by a language server. Also language servers may not work with cross-language interfaces or codegen situations as well as grep.
OTOH for a giant monorepo, grep probably won’t work very well.
Tidewave.ai does exactly that. It’s made Claude code so much more functional. It provides mcp servers to
- search all your code efficiently
- search all documentation for libraries
- access your database and get real data samples (not just abstract data types)
- allows you to select design components from your figma project and implements them for you
- allows Claude to see what is rendered in the browser
It’s basically the ide for your LLM client. It really closes the loop and has made Claude and myself so much more productive.
Highly recommended and cheap at $10/month
Ps: my personal opinion. I have Zero affiliation with them
JetBrains IDEs come with an MCP server that supports some refactoring tools [1]:
> Starting with version 2025.2, IntelliJ IDEA comes with an integrated MCP server, allowing external clients such as Claude Desktop, Cursor, Codex, VS Code, and others to access tools provided by the IDE. This provides users with the ability to control and interact with JetBrains IDEs without leaving their application of choice.
LLMs operate on text. They can take in text, and they can produce text. Yes, some LLMs can also read and even produce images, but at least as of today, they are clearly much better at using text[1].
So cat, ripgrep, etc are the right tools for them. They need a command line, not a GUI.
1: Maybe you'd argue that Nano Banana is pretty good. But would you say its prompt adherence is good enough to produce, say, a working Scratch program?
Inputs to functions are text, as in variables, file names, directory names, or symbol names with symbol searching. Outputs you get from these functions for things like symbol searching are text too, or at least easily reformatted to text. Like API calls are all just text input and output.
You can give agents the ability to check VSCode Diagnostics, LSP servers and the like.
But they constantly ignore them and use their base CLI tools instead, it drives me batty. No matter what I put in AGENTS.md or similar, they always just ignore the more advanced tooling IME.
Doesn't have to be a bad thing, not all languages have good LSP support. If the AI can optimize for simple cross-language tools it won't be as dependent on the LSP implementation.
I used grep and simple ctags to program in vanilla vim for years. It can be more useful than you'd think. I do like the LSP in Neovim and use it a lot, but I don't need it.
I also lived in ctags land, but gosh I don’t miss it. LSPs are a step change, and most languages do have either an actual implementation or something similar enough that’s still more powerful than bare strings.
It’s faster, too, as the model doesn’t need to scan for info, but again it really likes to try not to use it.
Of course I still use rg and fd to traverse things, cli tools are powerful. I just wish LLMs could be made to use more powerful tools reliably!
If you are willing to go language-specific, the tooling can be incredibly rich if you go through the effort. I’ve written some rust compiler drivers for domain-specific use cases, and you can hook into phases of the compiler where you have amazingly detailed context about every symbol in the code. All manner of type metadata, locations where values are dropped, everything is annotated with spans of source locations too. It seems like a worthy effort to index all of it and make it available behind a standard query interface the LLM can use. You can even write code this way, I think rustfmt hooks into the same pipeline to produce formatted code.
I’ve always wished there were richer tools available to do what my IDE already does, but without needing to use the UI. Make it a standard API or even just CLI, and free it from the dependency on my IDE. It’d be very worth looking into I think.
Well the point is to avoid them needing to swallow it in a single gulp… after all, the source code is already all the information you need to get all this metadata.
The use cases I have in mind are for codebases with many millions of lines of code, where just dumping it all into the context is unreasonably expensive. In these scenarios, it’d be beneficial to give the LLM a sort of SQL-like language it can use to prod at the code base in small chunks.
In fact I keep thinking of SQL as an example in my head, but maybe it’s best to take it literally: why don’t we have a SQL for source code? Why can’t I do “select function.name from functions where parameters contains …” or similar (with clever subselects, joins, etc) to get back whatever exists in the code?
It’s something I always wanted in general, not just for LLM’s. But LLM’s could make excellent use of it if there’s simply not enough context size to reasonably slurp up all the code.
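I don't have that tool, but as a crude sketch of the idea (using the syn crate to parse the source, nothing as rich as what the compiler itself knows), even the syntax level gets you a long way toward "select function names where parameter count > 2":

    use syn::{Item, ItemFn};

    // Crude stand-in for "SELECT name FROM functions WHERE parameter_count > 2":
    // parse a file with syn and filter over its items. A real tool would query
    // compiler metadata (resolved types, spans, usages) rather than raw syntax.
    fn functions_with_many_params(source: &str) -> syn::Result<Vec<String>> {
        let file = syn::parse_file(source)?;
        Ok(file
            .items
            .iter()
            .filter_map(|item| match item {
                Item::Fn(ItemFn { sig, .. }) if sig.inputs.len() > 2 => Some(sig.ident.to_string()),
                _ => None,
            })
            .collect())
    }

    fn main() -> syn::Result<()> {
        let source = std::fs::read_to_string("src/lib.rs").expect("read source");
        for name in functions_with_many_params(&source)? {
            println!("{name}");
        }
        Ok(())
    }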
LSP also kind of sucks. But the problem is all the big companies want big valuations, so they only chase generic solutions. That's why everything is a VS Code clone, etc..
Not coding agents but we do a lot of work trying to find the best tools, and the result is always that the simplest possible general tool that can get the job done always beats a suite of complicated tools and rules on how to use them.
It can be: What definition to jump to if there are multiple (e.g. multiple Translation Units)? What if the function is overloaded and none of the types match?
With grep it's easy: Always shows everything that matches.
Sure, there might be multiple definitions to jump to.
With grep you get lots of false positives, and for some languages you need a lot of extra rules to know what to grep for. (Eg in Python you might read `+` in the caller, but you actually need to grep for __add__ to find the definition.)
This isn’t completely the answer to what you want but skills do open a lot of doors here. Anything you can do on a command line can turn into a skill, after all.