
I'm so sick of these performance benchmarks. I understand it's easy to spin them up to show that one framework is faster than another, but in general all these frameworks are fast enough for 99.9% of use cases.

Where frameworks lack today, in my opinion, is in providing the right tools to further optimize the UX of interacting with web sites. It's a constant struggle of loading spinners, flicker, and lost scroll positions.

The only framework I see that actually tries to resolve these very hard problems is React, through their work on new asynchronous primitives like startTransition. Yes, it's currently hard to understand how to use them properly, but I so wish the discourse were about how best to resolve these actual UX issues rather than who can create 50M divs the fastest.
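
For what it's worth, here's a minimal sketch of the kind of thing I mean (useTransition/startTransition are real React APIs; the filterHugeList helper and the component shape are just illustrative):

    import { useState, useTransition, type ChangeEvent } from "react";

    function Search({ filterHugeList }: { filterHugeList: (q: string) => string[] }) {
      const [query, setQuery] = useState("");
      const [results, setResults] = useState<string[]>([]);
      const [isPending, startTransition] = useTransition();

      function onChange(e: ChangeEvent<HTMLInputElement>) {
        setQuery(e.target.value); // urgent update: the input stays responsive
        startTransition(() => {
          // non-urgent update: React can interrupt this expensive re-render,
          // and the previous results stay on screen instead of a spinner/flicker
          setResults(filterHugeList(e.target.value));
        });
      }

      return (
        <div style={{ opacity: isPending ? 0.6 : 1 }}>
          <input value={query} onChange={onChange} />
          <ul>{results.map((r) => <li key={r}>{r}</li>)}</ul>
        </div>
      );
    }

The nice part is that the old results stay visible (just dimmed via isPending) while the new ones render, instead of flashing a spinner.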


IMO, we desperately need standards or tooling to make frameworks easier to swap out and interoperate. Web Components were supposed to be this, but they're not quite there yet and require awkward wrappers around everything.

No framework will stand the test of time. I encourage everyone to, at the very least, own your state and router libraries, as you'll be able to extend them when you want to jump ship in a more incremental fashion. Going all in on a single framework's state, router, and view libraries will create a ton of inertia...


At this point, just ditch the current browsers and instead have a browser with HTML, CSS, and Lua; maybe that will help the world (satire, but I genuinely want the world to somehow move to this if it ever becomes popular)

Would it make me a bad guy if I tried but couldn't find the link? Oof. Give me some time to go through the list of my GitHub stars, because it's hidden somewhere in there...

Edit: found it! https://github.com/outpoot/gurted

Here's a video as well, which I found later through the GitHub username and some YouTube searches:

https://www.youtube.com/watch?v=37ISfJ2NSXQ


To me the only framework that has really pulled this off is Phoenix LiveView and its spinoffs, because they solve the fundamental problem: pipelined latency. The frontend has to request object A, wait for the result, then request object B, then wait, etc. There are too many combinations of objects, so it would be impossible to have an endpoint per specific request (I suppose GraphQL has sort of done this, but it's still not flexible enough for complicated transforms). LiveView solves this problem by not really solving it, but by moving everything server-side so the pipelined latency is dramatically lower.
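
To make the pipelined-latency point concrete, here's a hedged sketch of the classic client-side waterfall (the endpoints and field names are made up). Each await is a full round trip, and the next request can't start until the previous one comes back; LiveView sidesteps this because the equivalent hops happen server-side, close to the data:

    // hypothetical endpoints, purely to illustrate the waterfall
    async function loadOrderPage(orderId: string) {
      const order = await fetch(`/api/orders/${orderId}`).then((r) => r.json());
      // can't start until we know order.customerId: +1 round trip
      const customer = await fetch(`/api/customers/${order.customerId}`).then((r) => r.json());
      // can't start until we know customer.accountId: +1 round trip
      const invoices = await fetch(`/api/accounts/${customer.accountId}/invoices`).then((r) => r.json());
      return { order, customer, invoices };
    }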


Vue has great tools for a lot of this: https://vuejs.org/guide/built-ins/transition
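
For anyone who hasn't used it, a minimal sketch of the built-in <Transition> wrapper (Vue 3 class names; assumes a `show` ref in the component, and the fade styling is just an example):

    <Transition name="fade">
      <p v-if="show">Hello</p>
    </Transition>

    <style>
    .fade-enter-active, .fade-leave-active { transition: opacity 0.3s ease; }
    .fade-enter-from, .fade-leave-to { opacity: 0; }
    </style>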


Yeah, I agree, these benchmarks are basically meaningless. E.g. they tout Vue's binding-based approach as being faster, but it also leads to spaghetti code. React was specifically designed to avoid that, so you can build big apps that aren't buggy.

Also isn't Preact meant to be a faster option if you really need performance?


Aren't buggy? React had to introduce a compiler because the masses only write buggy code. The reason they likely waited so long to do it is that they felt it was a waste to write software to fix something that theoretically has “no flaws” because it's “just JavaScript”.

Literally every other JS framework figured this out years and years ago and some over a decade ago. Compilers help to raise the floor for everyone so we don’t need to worry about making a dumb mistake and drastically slowing down our programs. Compilers are the evolution of software.


> I'm so sick of these performance benchmarks. I understand it's easy to spin them up to show that one framework is faster than another, but in general all these frameworks are fast enough for 99.9% of use cases.

Yes, any framework is fast enough. At this point, everybody probably knows already. Nobody would ever say React is not appropriate because it's slower than Svelte. No sane person would ever argue for a migration from React to Svelte based on this benchmark.

But being against performance benchmarks is such a weird take. It's so strange; many times there's a hidden agenda.

Many times it's because a person advocated for X over Y at Company Z. Then there's some random benchmark saying Y is faster. Now the person needs some way to cope. The best way is to refute the benchmark somehow, but that would take a huge amount of time and effort. The second best way is to simply say "it doesn't matter, I hate this useless benchmark, there are more important problems to solve!"... as if everyone on the planet always has to solve the most important problem first... only one problem and no more. Haha


Yes, because these benchmarks aren't akin to the real world; rendering a large list of data? You'd use a virtualized list, etc.


Most of that latency is coming from back ends across most major sites, anyway, so it's the wrong place to test.

As an addition to the general commentary here, "The Toilet Paper" is an unfortunate choice of label for this article, and maybe also indicative of the quality of the writing.


It really isn't - a UI framework should be able to properly handle backend latency and provide a great experience while waiting for a backend response, with no flicker and without locking the entire UI. It's just way harder to set up good benchmarks for this.


That's not handling latency.


The difficulty of deploying Next.js is greatly exaggerated in my opinion. It's mostly when you care about some of the more advanced features, like image optimization or hosting static assets on a different origin, that it can become difficult, but those are features no Next.js alternative generally provides anyway.


> hosting static assets on a different origin ... can become difficult

What's the alternative? Hosting the static assets in the same place as the backend? Usually adding the CORS headers (on the backend side) is enough to solve that; the frontend is still just HTML, CSS, and JS served from nginx.
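
A minimal sketch of what that looks like on the backend (plain Node; the app.example.com origin and port are made up):

    import http from "node:http";

    const server = http.createServer((req, res) => {
      // Allow the separately hosted frontend origin to call this backend.
      res.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
      res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
      res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");

      if (req.method === "OPTIONS") {
        // Preflight requests only need the headers above.
        res.writeHead(204);
        res.end();
        return;
      }

      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true }));
    });

    server.listen(3000);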

Is it common to do a different type of deployment with Next.js? It's a pretty basic deployment scenario (having the frontend on a different origin than the backend it communicates with), so I'm not sure why that'd be so difficult with Next.js compared to basically anything else.


It's the opposite, it's extremely easy to do that with Next.js - pretty much free - but only if you're deploying to Vercel. If you want to host somewhere else then you have to do that semi-manually the same way you would with any other framework.


Even with the optimizations it's not that difficult in my experience. Not terribly well documented (not worst-in-class either) but not that hard and mostly just works once you have a pipeline up and running. We set ours up about two years ago now and have had to make minor modifications maybe three times since then.


Same. I've deployed a half dozen or so Next.js apps and it's no more difficult than any other node app unless you're using some of the more advanced features. In fact, if you only need something static and can do SSG then it's far easier than other node apps because all you need is nginx.


I might have a blind spot from using npm exclusively for years at this point, but what would be a better way to support iteratively migrating from v3 to v4 without having to do it all in one large batch?


Using npm's built-in support for package aliases to simultaneously install zod@3 as zod and zod@4 as zod-next?
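
A minimal sketch of that alias setup (versions are illustrative). In package.json:

    "dependencies": {
      "zod": "^3.25.0",
      "zod-next": "npm:zod@^4.0.0"
    }

And in app code, both majors live side by side during the migration:

    import { z as z3 } from "zod";
    import { z as z4 } from "zod-next";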

Edit: reading the rationale, it's about peer dependencies rather than direct dependencies. I am still a little confused.


As you allude to: your aliased "zod-next" dependency wouldn't be able to satisfy the requirements of any packages with a peer dep on Zod. But this approach has a more fundamental flaw. My goal is to let ecosystem libraries support Zod 3 and Zod 4 simultaneously. There's no reliable way to do that if they aren't in the same package.[0]

[0] https://github.com/colinhacks/zod/issues/4371


One possible solution: Publish a new package with a new name. I've personally been thinking of doing that for some libraries, where I'd commit to never change the public API, and if I figure out a nicer way of doing something, I'd create a new library for it instead, and people who wanna use/maintain the old API can continue to do so.


People wouldn't change to the new one, because names are extremely sticky, so what's the point?


Ok, that's fine by me. They can continue using the original library for whatever reason they want, including because of the name.


If they need the new one, they will switch to it. And you have the right to drop support for the old one after a while (hopefully giving everyone the time to migrate).


> If they need the new one, they will switch to it.

I already gave my argument on why this won't happen. People stick to what they know and trust. They don't change because there is some other thing that is supposedly better. It has to be 10x better before people will migrate off one thing to another. If you want to do incremental improvements to a thing, then you'll have to do incremental improvements to that thing.


> They don't change because there is some other thing that is supposedly better. It has to be 10x better before people will migrate off one thing to another.

That's the point, and it makes sense from their perspective, I'd probably do the same as well.

Creating a new library instead of changing the existing one lets people choose between those two approaches. Want the latest and greatest, with API breakage? Use the new library. Wanna continue using the current API? Stick with the old one.

Instead, we kind of force the first approach on people, which I'm personally not too much of a fan of.


As I’ve said in other comments. If you’re developing a library, you can commit to its API and do security and optimization fixes and build a new one where you try a new design/approach. Merging the two together is always a bad idea.


The point is API stability. Not enough people in the JS world care about this.


Same reason as why it's harder to solve a sudoku than it is to verify its correctness.


I should have made my post clearer :)

There isn't one perfect solution to SQL queries against complex systems.

A sudoku has one solution.

A reasonably well-optimised SQL solution is what the good use of SQL tries to achieve. And it can be the difference between a total lock-up and a fast running of a script that keeps the rest of a complex system from falling over.


The number of solutions doesn't matter though. You can easily design a sudoku game that has multiple solutions, but it's still easier to verify a given solution than to solve it from scratch.

It's not even about whether or not the number of solutions is limited. A math problem can have an unlimited number of proofs (if we allow arbitrarily long proofs), but it's still easier to verify one than to come up with one.

Of course writing SQL isn't necessarily comparable to sudoku. But the difference, in the context of verifiability, is definitely not "SQL has no single solution."


If the current state of software is any indication, experts don't care much about optimization either.


I think arrogance has always been part of their culture – partners have _always_ hated working with Apple. Personally I believe it's the shifting dynamics of no longer being the underdog that have slowly eaten away at their core values.


Steve Jobs was able to get Microsoft to keep writing Office for the Mac, MS and Adobe to support OS X, the music labels to come to iTunes, Disney to iTunes, app developers to come to iPhone, and Netflix to come to the iPad.

Apple now can’t get anyone to support new platforms


> Steve Jobs was able to get Microsoft to keep writing Office for the Mac.

That was a contractual arrangement. Apple agreed to make Internet Explorer the default browser on the Mac for 5 years, and Microsoft agreed to continue to offer the Mac version of Microsoft Office. Also, Microsoft was allowed to purchase $150 million in non-voting Apple stock.

I was in the audience when Steve announced this deal at Macworld Boston in 1997. This was the famous "Microsoft doesn't have to lose in order for Apple to win" speech, after the audience booed when Bill Gates' face was displayed on a large screen.

And let's be clear: Microsoft had run afoul of antitrust laws, so having Apple being seen as a viable company was in Microsoft's best interest.

After the 5 years were up, Apple introduced Safari.


They went from the company that everyone loves to hate, to the company that everyone just hates.


As a web dev, it’s been fun listening to Accidental Tech Podcast where Siracusa has been talking (or ranting) about the ins and outs of developing modern mac apps in Swift and SwiftUI.


The part where he said making a large table in HTML and rendering it with a web view was orders of magnitude faster than using the SwiftUI native platform controls made me bash my head against my desk a couple times. What are we doing here, Apple.


SwiftUI is a joke when it comes to performance. Even Marco's Overcast stutters when displaying a table of a dozen rows (of equal height).

That being said, it's not quite an apples to apples comparison, because SwiftUI or UIKit can work with basically an infinite number of rows, whereas HTML will eventually get to a point where it won't load.


I love the new Overcast's habit of mistaking my scroll gestures for taps when browsing the sections of a podcast.


Shoutout to iced, my favorite GUI toolkit, which isn't even in 1.0 yet but can do that with ease and faster than anything I've ever seen: https://github.com/iced-rs/iced

https://github.com/tarkah/iced_table is a third-party widget for tables, but you can roll your own or use other alternatives too

It's in Rust, not Swift, but I think switching from the latter to the former is easier than when moving away from many other popular languages.


It's easy to write a quick and clean UI toolkit, but when you add all the stuff for localization (like support for RTL languages, which also means swapping over where icons go) and accessibility (all the screen reader support), that's where you really get bogged down and start wanting to add all these abstractions that slow things down.


RTL and accessibility are on the roadmap, the latter for this next version IIRC

I'd argue there's a lot more to iced than just being a quick toolkit. The Elm Architecture really shines for GUI apps


I wish there were modern benchmarks against browser engines. A long time ago native apps were much faster at rendering UI than the browser, but that was many performance rewrites ago, so I wonder how browsers perform now.


As a web dev I must say that this segment made me happy and thankful for the browser team that really knows how to optimize.


Hacker News loves to hate Electron apps. In my experience ChatGPT on Mac (which I assume is fully native) is nearly impossible to use because I have a lot of large chats in my history but the website works much better and faster. ChatGPT website packed in Electron would've been much better. In fact, I am using a Chrome "PWA App" for ChatGPT now instead of the native app.


It's possible to make bad apps with anything. The difference is that, as far as I can tell, it's not possible to make good apps with Electron.


Someone more experienced than me could probably comment on this more, but theoretically, is it possible for Electron production builds to become more efficient by having a much longer build process and stripping out all the unnecessary parts of Chromium?


> In my experience ChatGPT on Mac (which I assume is fully native)

If we are to believe ChatGPT itself: "The ChatGPT macOS desktop app is built using Electron, which means it is primarily written in JavaScript, HTML, and CSS"


It’s not a symbolic link - it copies on modification. No need to worry!


I've been a happily paying customer of iStat Menus [1] for many years, and this seems to be heavily inspired by it.

[1] https://bjango.com/mac/istatmenus/


Latest redesign is pretty garbage though. I want my precise graphs back!


I agree, verschlimmbesserung...

- "The German word "Verschlimmbesserung" is a compound noun that combines "verschlimmern" (to make worse) and "verbessern" (to improve). It refers to a situation where an attempt to improve something unintentionally makes it worse. This term is often used humorously or critically to describe well-intentioned actions or changes that backfire and lead to negative outcomes instead of the desired improvements"


I paid for it, and paid for one upgrade, but Stats looks like it covers all of what I'm interested in.


++ to this rec.

I've tried Stats over the years as the project has evolved and I keep coming back to iStat Menus. Stats feels very inspired by iStat Menus's design as well. The one thing I do appreciate about Stats, though, is that it supports more SMC sensor values.


I've lost count of how many years I've owned a Bjango license. Amazing software.


One, two, three…Christ, 16 years here. This made me feel terrible. Thanks!


I think I paid for it a few times because it had been so long. I also got work to buy 12 licenses to monitor edit bays when they were overheating all the time. I could read the machines' stats on an iPad!

Good times.


Been using Stats for 4 years now, never had any issue with it. Why pay when something free and open source is available?


So by increasing the cost of patient care, they can take out more profit.


He still is

