I'm sure we'll start hearing "I AI-posted on HN for 4 years, got X karma, and no one suspected" type stories pretty soon. Eventually people could start asking for badges or flair that verify users as not being AI. It's simultaneously exciting and daunting to be standing at the dawn of this.
My experience is that choosing Rust just for performance gains usually doesn't pay off. In your case, Node already uses C/C++ under the hood, so some of what you'd be replacing would just be swapping that native code for Rust.
I primarily reach for it when I want the stability provided by the type system and runtime, and to prevent a litany of problems that impact other languages. If those problems aren't something I'm looking to solve, I'll usually reach for a different language.
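For a sense of what that swap can look like in practice, here's a minimal sketch of a native Node addon written in Rust. It assumes the napi-rs crates (napi/napi-derive); the function name and setup notes are illustrative, not a recipe for any particular project:

```rust
// Cargo.toml (roughly): crate-type = ["cdylib"], plus the napi and
// napi-derive crates and napi-rs's build setup. Built with the napi CLI,
// the result loads into Node like any other native module.
use napi_derive::napi;

// Exposed to JavaScript as `sum(a, b)` by the #[napi] attribute macro.
#[napi]
pub fn sum(a: i32, b: i32) -> i32 {
    a + b
}
```

On the Node side you'd require the built addon the same way you would a C/C++ one, which is what makes this kind of incremental replacement feasible.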
> choosing Rust just for performance gains usually doesn't pay off
Performance is a complex topic. Other languages can be fast, and you're likely right that in simple initial benchmarks, Rust isn't going to outperform them by enough to make much of a difference.
But what about consistency of performance? Is your 1,752,974,468th response going to be as fast as the ones in your benchmark? To me, that’s been the eye-opener of deploying Rust in production. We saw P100 response times within 10ms of P0: over many months of heavy use, the gap between the absolute worst case and the absolute best case stayed below the threshold of human perception. The metrics graphs were literal flat lines for months on end across tens of billions of requests. I have never seen that in any garbage-collected language.
That kind of consistency may not be necessary for your needs, and you may be able to live with occasional slowdowns. But there are plenty of cases where consistent performance is necessary or extremely desirable, and in those cases it’s nice to have Rust as an option.
The article isn't about repo structure; however, repo structure can relate to the core issue of how to share code and maintain domain boundaries - search for the "umbrella" thread elsewhere in these comments.
Just because repo structure can relate doesn’t mean that such a relationship involves any properties of monorepo structure vs polyrepo structure.
Any messy code organization or dependency issue you can have in a monorepo, you can also have in polyrepos. Any automation or central control you can have in a monorepo, you can also have in polyrepos. Any scale-out chores or overhead work you can have with polyrepos, you can also have with a monorepo.
The issue of monoliths / poorly decoupled applications vs microservices has absolutely nothing to do with monorepos vs polyrepos. Neither option makes any class of tooling easier or harder to build or maintain; neither makes conceptual organization easier or harder to get right; neither makes discoverability or cross-referencing easier or harder. That whole axis of concern is just fundamentally unrelated to the monolith vs microservices question.
This is what I do in several of my backend Kotlin code bases and it's worked very well. I've also heard it called "distributed monorepo". There are a handful of ways to share code, and the tradeoffs of this approach are very manageable.
The biggest downside I've encountered is that you need to figure out the deployment abstraction and then figure out how that impacts CI/CD. You'll probably end up doing some form of YAML templating, and things like canaries and hotfixes can be a bit trickier than normal.
Absolutely true that languages are built to fulfill specific roles and have associated trade-offs. I think you answered your own question - JavaScript's strengths are being very widespread in current software development and having a minimal syntax/type system. IMO it can be a good language for MVPs/POCs and can serve as a higher form of pseudocode. Perhaps the author chose JS as a way of reaching the widest possible audience and keeping the focus on the concepts more than the implementation details.
A relatively recent personal example: when working through some DSL and parser combinator examples in Rust, I sometimes get too distracted by lifetimes, annotations to handle deeply recursive functions, etc., and just search for examples in a different language that let me focus on the concepts I'm trying to learn.
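To make that concrete, here's a hand-rolled parser-combinator sketch in Rust (the names are made up for illustration) where the lifetime plumbing in the signatures starts to compete with the idea being studied:

```rust
// A parser takes an input slice and, on success, returns the unconsumed
// remainder plus the parsed value.
type ParseResult<'a, T> = Option<(&'a str, T)>;

// Build a parser that matches one expected character.
fn char_p<'a>(expected: char) -> impl Fn(&'a str) -> ParseResult<'a, char> {
    move |input| {
        let mut chars = input.chars();
        match chars.next() {
            Some(c) if c == expected => Some((chars.as_str(), c)),
            _ => None,
        }
    }
}

// Apply `parser` zero or more times, collecting the results.
fn many<'a, T>(
    parser: impl Fn(&'a str) -> ParseResult<'a, T>,
) -> impl Fn(&'a str) -> ParseResult<'a, Vec<T>> {
    move |mut input| {
        let mut out = Vec::new();
        while let Some((rest, value)) = parser(input) {
            input = rest;
            out.push(value);
        }
        Some((input, out))
    }
}

fn main() {
    let a_s = many(char_p('a'));
    assert_eq!(a_s("aaab"), Some(("b", vec!['a', 'a', 'a'])));
}
```

The same combinators in a garbage-collected language can be written without any of the `'a` bookkeeping, which is sometimes all the difference when the goal is understanding combinators rather than ownership.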
Since I am asked from time to time about the technology choices for my book, I decided to write a blog post explaining my reasoning: https://www.alex-lawrence.com/posts/why-my-book-uses-nodejs-... Some of my arguments pretty much match your statements.
At a more agile org, getting 2-3 stories done in one day isn't too extraordinary. A PR that follows some of the rules from the OP (especially 5/6/7) might only take 5-10 minutes to review completely.
If you can't slow down enough to do that, or your stories are so large they require more attention, that might be a red flag situation? Writing code faster than you can review it will eventually catch up to the team/org.
This is great! Many of these things are slowly learned over time, but listing them out explicitly is helpful both as a refresher and when teaching others.
The three that have been most helpful on our team are: #1 (review your own code first), #2 (write a changelist description), and #5/#6 (narrowly scope changes/separate functional/non-functional changes).
Tufte would disagree, and at least some consider his thoughts on design to be very solid.
Would you be amenable to "A well-designed web page does not need to rely on custom fonts, provided their impact on page load is absolutely minimal"?
You can unobtrusively serve up a single font embedded as base64 on a static site - a world of difference from some of these sites that load up 5MB of a UX person's vision off some remote CDN.
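If it helps, this is the kind of one-off I mean, sketched here in Rust (it assumes the base64 crate and a hypothetical MyFont.woff2; the equivalent is a one-liner in most languages). It inlines the font as a data URI inside an @font-face rule, so the page makes no extra font request at all:

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};
use std::fs;

fn main() -> std::io::Result<()> {
    // Hypothetical font file: whatever single face the site actually uses.
    let bytes = fs::read("MyFont.woff2")?;
    let encoded = STANDARD.encode(&bytes);

    // Print a self-contained @font-face rule to paste into the stylesheet.
    println!(
        "@font-face {{\n  font-family: \"MyFont\";\n  src: url(data:font/woff2;base64,{}) format(\"woff2\");\n}}",
        encoded
    );
    Ok(())
}
```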
I didn't downvote, but I'm assuming that suggesting an equivalence between concerns about GMOs and being anti-mask/anti-vax comes off as hyperbolic.
I sometimes have a difficult time separating the legitimate benefits of GMOs from shilling or astroturfing, which is unfortunate when trying to have a reasoned discussion.
Especially if you're using something like fzf that can easily search your command history, "every time someone has asked":