
Long article, and I have to say that I do not agree with ~90% of what is said.

The problem with web components is that they have a few gotchas one has to solve if he wants to use them effectively. I agree that SSR is a problem, if SSR even is a problem: crawlers now render JS, and I have not seen any benefit of SSR in a long time.

Also, there are a lot of mentions of Shadow DOM. Why would anyone want to use that? It is slower and detached from the main document. Just use the original DOM and modify it.

Also, if devs "want!" native components for React, Vue, etc., that does not mean Web components are bad, that only means devs want native components for their FW :).

I personally hacked together my own web components micro FW in 2 weeks, and started replacing Svelte and Rails/StimulusJS with it. I am very happy with it, but you need to know how to handle the gotchas.


> Also, if devs "want!" native components for React, Vue, etc., that does not mean Web components are bad, that only means devs want native components for their FW :).

You are right. For transparency, I also work for Corbado and was part of the discussion. I think most developers prefer a native component if they have a framework in use. In this case, because of the SSR trend, it's just a bit more difficult to use web components as a base for all of them. Would you not agree? Of course, you could offer native implementations in all variants, but that's very difficult.


You are right, and your points on SSR were all good. If SSR is a must-have, I do not know how to solve it. I see that riot.js manages to do it, but I did not go into the details.


Crawlers rendering JavaScript isn't good enough if you max out your crawl budget. This is a fundamental problem with CSR, and until crawl budget stops mattering, it always will be.


I agree the shadow DOM is annoying and best left alone - at my job we just avoid it completely, and our entire web component library is meant to just wrap ordinary light DOM elements with functionality.

Used this way, WCs are light, easy, and do basically anything you'd need.
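A minimal sketch of that light-DOM approach (the element, its attributes, and the behavior are invented for illustration, not taken from the commenter's actual library). The `Base` fallback only exists so the class can also be loaded outside a browser:

```javascript
// A custom element that enhances the markup it already contains,
// instead of rendering into a shadow root. Page CSS keeps applying
// to the children because they stay in the light DOM.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class CollapsiblePanel extends Base {
  connectedCallback() {
    this.toggleEl = this.querySelector('[data-toggle]');
    this.bodyEl = this.querySelector('[data-body]');
    this.toggleEl?.addEventListener('click', () => this.flip());
  }

  flip() {
    this.open = !this.open;
    if (this.bodyEl) this.bodyEl.hidden = !this.open;
  }
}

// Register only when running in a browser.
if (typeof customElements !== 'undefined') {
  customElements.define('collapsible-panel', CollapsiblePanel);
}
```

Markup like `<collapsible-panel><button data-toggle>…</button><div data-body hidden>…</div></collapsible-panel>` then works with no shadow root involved.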


I am a Rails dev, and I don't like the fact that every new version of Rails has to REINVENT ALL THE WHEELS, ALL THE TIME. That creates huge problems for Rails devs and stakeholders.

Rails is based on Ruby, which is a backend language, and Rails should stay focused more on the backend.

That means:

  - session management
  - routing and controllers
  - many API options (REST, json-rpc, ...) + JSON object exporters
  - ORM, Mailing, etc
  - server-side template rendering (ViewCells are still not part of Rails)
  - tasks, logging and other helpers

In the same way Rails adopted the rake gem as its default task runner, without re-inventing it as "ActiveCliTasks", they could have created a standardised connector layer that exchanges objects between backend and frontend.

So no Stimulus, Turbolinks, Hotwire and other stuff one HAS TO learn and re-learn between versions. Svelte, Vue and React should have been officially supported with thin connectors. Rails' server render layer can easily be integrated into any existing frontend tech, as an alternative to the React Server Components insanity.

Also, there is no reason to need a socket connection to pass HTML between server and client; good old RPC is just fine, as 99% of big-app scaling problems are database problems. No need for another layer of complexity that one has to keep working.

I think that in this way Rails could "shine" far into the future, playing to its strong points while adopting new frontend stuff along the way, instead of re-inventing it all the time.


And chances are, new Rails projects will just be API backends with React frontends.

Also, if trying to do lightweight Javascripty frontends within the framework is important, we might as well just move to Phoenix. Phoenix is arguably better already... it just requires a moderate jump to learn Elixir (which unfortunately looks very much like Ruby but requires a very different approach to writing).


Phoenix is super fun, but it is in a completely different realm than Rails :)

What about all the gems Ruby has? Does one really have everything he needs inside Elixir/Erlang? More complex stuff like Nokogiri, Haml, etc., that works well and is maintained.


Are you seriously comparing the speed of changes in Rails with React/Vue/Svelte?


People just look for ways to justify their already taken decisions.


I can't understand how Svelte is not more popular. From the first time I saw it 4 years ago I thought it was a game changer, and I have not changed my opinion.


REST is for noobs, JSON RPC is silent pro's choice :)

Make all requests POST and enjoy an easy life without useless debates on whether creation of a resource should be a POST or a PUT, or whether you should return HTTP status 404 or 200 if a resource/document on the server is not found (of course it should be 200, because the request itself was a success; 404 should only be used if the API method is not found).

I 100% agree with Troy Griffitts' beautiful take https://vmrcre.org/web/scribe/home/-/blogs/why-rest-sucks
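To make the shape concrete, here is a rough sketch of that style in JavaScript (the endpoint, method names, and error code are all made up for illustration; this is the general idea, not a formal JSON-RPC 2.0 client):

```javascript
let nextId = 0;

// Every call is a POST to one endpoint; routing lives in the `method`
// field of the body, not in the URL or the HTTP verb.
function rpcRequest(method, params) {
  return {
    url: '/api', // single endpoint (illustrative)
    init: {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ jsonrpc: '2.0', id: ++nextId, method, params }),
    },
  };
}

// The transport always says 200; whether the document was found is
// application-level signaling inside the payload.
function rpcResult(found, data) {
  return found
    ? { jsonrpc: '2.0', result: data }
    : { jsonrpc: '2.0', error: { code: -32001, message: 'document not found' } };
}
```

A caller would then do e.g. `fetch(r.url, r.init)` and branch on `error` in the JSON body rather than on the HTTP status.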


JSON RPC:

- Everything is a POST, so normal HTTP caching is out of the question.

- JSON RPC code generators are non-existent or badly maintained depending on the language. Same with doc generators.

- Batching is redundant with HTTP2, just complicates things.

- Because everything is a POST, normal logging isn't effective (i.e. seeing the URL in logs, easy filtering, etc.). You'll have to write something yourself.

- Not binary like Protobufs or similar

But yeah, "the silent pro's choice"... Let's keep it silent.

JSON RPC is pretty much dead at this point and superseded by better alternatives if you're designing an RPC service.


> - JSON RPC code generators are non-existent or badly maintained depending on the language.

Very much so. It's in a terrible state everywhere I've looked. Most of the tooling is built for OpenAPI or similar, which comes with a bloatload of crap that is only marginally better than, say, SOAP. It needs to be much simpler.

> - Not binary like Protobufs or similar

Agreed. This is not an issue for small things that can be base64 encoded, but once you need large blob transfers you don't have any reasonable option. This is a problem in e.g. GraphQL, which also misses the mark; you have to step outside it for things like file uploads.

It feels like the whole standardization effort around json rpc is weak. It doesn’t address the needs of modern RPC-like systems. Which is unfortunate because there’s a real opportunity to improve upon the chaos of REST.


It's not ideal, but in practice gzipped base64 is only marginally larger than gzipped binary


Indeed, good point, and worth clarifying. A lot of people think the size overhead is the problem, which usually it isn't, like you say, because of fairly cheap compression.

However, the main issue with big base64 blobs is that you should never assume that JSON parsers are streaming. So you may need to load the whole thing into memory, which of course isn't good.

Note that I'm not necessarily blaming JSON for this. My gut feeling is that crusading for streaming parsers is a Bad Idea. Instead, this is something that should probably be a higher-level protocol, either by streaming chunks (a la gRPC) or by having separate logical data streams (see e.g. QUIC). JSON RPC does not, afaict, solve these issues.


Base64 multiplies the GZIP size by 1.33x (4/3, or a 33.3% increase in size)

SO (https://stackoverflow.com/questions/4715415/base64-what-is-t...)


That's for Base64-encoded GZIP, not GZIP-encoded Base64 :)


Have you tried zstd, now widely supported?


Thanks for this. I felt I was going crazy, seeing many professional and smart engineers' work decried as not being 'expert' enough, as if they didn't weigh up and consider other options. Yes, there can be a bit of cargo culting, but to claim that only experts use JSON RPC is ridiculous.


I always fail to understand what kind of services there are that aren't at least RPC-ish.

There are the obvious thin CRUD wrappers, but usually when you are piping data from one source/format to another, you typically want to do something that is ever so slightly "not-CRUD" (call another API/service, etc.).


I'm with you on this one.

Probably the confusion comes from the fact that a lot of people think having a verb in their URI makes the API RPC, while having only nouns is proper REST.

But the whole verbs vs nouns debate in the context of REST sounds a bit like... arguing whether building a round or square control tower out of straw will attract more cargo.

HATEOAS is the cornerstone of REST, this is what sets it apart from RPC-style applications, not the absence or presence of verbs in URIs.

Think of a regular (that is non-SPA) Django, RoR, etc application.

The user points their browser to the app's home page. The backend receives the HTTP request, renders the HTML, and sends it back to the browser.

The browser renders the HTML and lets the user interact with all the control elements on the page. When the user clicks a button or follows a link, the browser sends the corresponding HTTP request to the backend, which inspects it and decides what next HTML page (or maybe the same one) representing the state of the app should be transferred to the user.

This is basically REST. The key thing to notice here is that at no point in this example does the browser get to decide what the app's "flow" is supposed to be -- that is the sole responsibility of the backend.

A consequence of this is that the entire structure of pages (aka resources) can undergo a drastic change at any moment, but as long as the home page URI stays the same, the user doesn't suddenly need another browser to access the app.

If changing a resource's URI or removing a resource altogether can break an existing client, or if an existing client cannot make use of a new resource without changes to the client's sources -- that's RPC, even if there's not a single verb in the API URIs.

Most likely this architectural style isn't something that first comes to mind when we think of today's mobile apps or SPAs as API clients. And in my opinion it's just not a good fit for most of them: the server isn't expected to drive their flow, it just exposes an API and lets each client come up with its own UX/UI.
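To illustrate the difference, a hypothetical HATEOAS-style payload and client (all field names and URIs here are invented): the client navigates by link relation, so the server stays in charge of the flow and can restructure URIs freely.

```javascript
// The server embeds the available transitions in the representation.
const orderResource = {
  id: 42,
  status: 'pending',
  _links: {
    self:   { href: '/orders/42' },
    items:  { href: '/orders/42/items' },
    cancel: { href: '/orders/42/cancellation' }, // only present while cancellable
  },
};

// A generic client only hard-codes relation names, never URIs.
function follow(resource, rel) {
  const link = resource._links[rel];
  return link ? link.href : null;
}
```

If the server later drops the `cancel` link, the client simply sees the transition as unavailable, instead of 404-ing on a hard-coded URI.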


Noob question, why is batching redundant in HTTP2?


It isn't.

Batching means combining multiple logical operations in a single physical request. HTTP/2 muxes N logical requests over 1 physical connection, which is good, but the application will still process requests independently. You always want to batch workloads into single requests if you can, HTTP/2 doesn't change this.
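A sketch of what batching buys on top of multiplexing (the array-form JSON-RPC batch is real, but the method names are invented): one physical body carries N logical calls, and the client correlates replies by id because a server may answer out of order.

```javascript
// Pack N logical calls into one physical request body.
function makeBatch(calls) {
  return calls.map(([method, params], i) => ({
    jsonrpc: '2.0',
    id: i + 1,
    method,
    params,
  }));
}

// Servers may answer a batch in any order, so match responses by id,
// not by position.
function matchResults(batchReqs, responses) {
  const byId = new Map(responses.map(r => [r.id, r]));
  return batchReqs.map(req => byId.get(req.id));
}
```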


Doesn't seem redundant to me. Even if you can multiplex requests, batches still have certain advantages, e.g.

- compression is often more efficient with larger payloads

- can reduce per-request overheads, e.g. do authentication once rather than X times

- easier to coalesce multiple queries, e.g. merge similar requests to enable data retrieval via a bulk query, instead of X individual queries


HTTP/2 supports multiplexing, so you can send multiple requests at once over the same connection


I don't like REST either, but JSON RPC is similarly hamstrung in some scenarios (examples: streaming, CDN caching, binary encoding).

I mostly dislike REST because nobody can agree on what it is and there are too many zealots who love to bikeshed. If you stick with the simple parts of REST and ignore the zealots, it's decent enough for many scenarios.

I've yet to find an RPC protocol that fills all requirements I've encountered, they all have tradeoffs and at this point you're better off learning the tradeoffs and how to deal with them (REST, JSON RPC, gRPC, WebSockets, etc.) and how they interact with their transports (HTTP/1.1, H2, QUIC, etc.), and then play the unfortunate game of balancing tradeoffs.


ReST makes sense in certain cases, where resources are a tree (like a typical web site is a tree), with collections of leaves, and these leaves make sense by themselves. Then you can go full HATEOAS and reap some actual benefits from that.

Most of the time (like 99.9%) what you actually need is JSON RPC. Even if some parts of your API surface look like they would fit the ReST model, the bulk does not. Ignore that; build a protocol along the lines of your subject area. Always return 200 if your server did not fail or reject the request, and use internal status signaling for the details. Limit yourself to GET and POST. Use HTTP as a mere transport.


These seem like arbitrary rules.

"Use internal status signaling" for example doesn't seem any better than deciding what status codes mean what; it's just a second layer of codes where the first one is now useless.

"Limit yourself to GET and POST." - delete and patch are pretty useful for documentation simplicity too. If there were a LIST verb that would be even handier, but nothing's perfect.

"build a protocol along the lines of your subject area" - I think you can do this (and well or badly) using REST or RPC forms.


+1 and I'll bump it up a notch... not only should you ignore REST you should ignore URLs. You want to write protocols, not APIs. Redis, for example, has a better "API" than any web API I've used. Easy to use, easy to wrap, easy to extend and version. HTTP is the other obvious example that I shouldn't have to go into.

If you'd like a good back and forth on the idea the classic c2 page is a great resource. http://wiki.c2.com/?ApiVsProtocol


Don't ignore URLs completely! They are great for namespacing and versioning.


Why add the additional complexity of multiple connection points? Protocols support both of those operations perfectly well and it seems that adding URLs would just confuse things.


Because at some point you will need to deprecate ciphers, and when you do you don't want old clients to explode. The domain is the way you version connection requirements, so you can support old clients with crappy SSL options without compromising the security of new clients.


HTTP is itself a protocol, and URLs are part of that protocol. They're not really "connection points" in any meaningful sense.


Sometimes all you've got is port 443, and adding subdomains is a non-zero hassle, especially if you serve all of the APIs from the same code anyway.


You don't need subdomains or other ports because you encapsulate everything in the protocol. A system that works on a protocol only really needs a data socket, which can be simulated easily enough via any URL, with POSTs working as a bursty stream.


Ahh, the 2000's called. They want their SOAP back.


I don't think the parent was referring to an XML-based protocol.


This article defines REST incorrectly, and doesn't seem to understand the concept of HTTP methods, calling them verbs (arguably fine) and types (huh?) seemingly arbitrarily. Methods are a core part of HTTP -- just because you can't specify them explicitly in a browser as a user doesn't mean they're "cryptic curl arguments" or worth ignoring. I'm not sure I'd put much stock in this perspective.


Thank you all for the great comments.

I want to emphasize that I was not thinking of JSON RPC as a specific protocol, but more as a JSON format to transfer data, similar to what REST APIs usually do, plus some kind of "HTTP method agnostic remote procedure call"; it does not have to be the JSON RPC standard.

Personally, I am a fan of just having API classes + methods that automatically map to API calls, with automatic API interface and doc builders. I would find it super strange if I had to prefix my internal methods with DELETE or PUT based on whether they remove from or add to some array. Using that logic, why do it in APIs?

I just find it super strange that people want to mirror their app logic + error response codes onto some protocol like HTTP – ridiculous :) Why not go even lower, to TCP, and use some of that spec for our client <> server API conn? Many people will laugh, but if you think about it, where is the difference?
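The "API classes + methods" idea can be sketched like this (the class, registry, and call syntax are invented for illustration): a plain class is exposed by reflection, with no per-verb routing, and errors are signaled in the payload while HTTP stays 200.

```javascript
// Plain classes whose public methods become API calls.
class Documents {
  get({ id }) { return { id, title: 'hello' }; }
  remove({ id }) { return { removed: id }; } // no DELETE verb required
}

const registry = { Documents: new Documents() };

// Dispatch a "Class.method" string; unknown targets become an
// application-level error, not an HTTP 404.
function dispatch(call, params) {
  const [klass, method] = call.split('.');
  const target = registry[klass];
  if (!target || typeof target[method] !== 'function') {
    return { error: `unknown method ${call}` };
  }
  return { result: target[method](params) };
}
```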


> I find that it would be super strange if I had to prefix my internal methods with DELETE or PUT based on do they remove or add to some Array. Using that logic, why do that in APIs.

It's true that POST ends up being a bit of a grab bag for all the non-CRUD API calls.

But I find it very useful when looking over someone's API to find them using PUT or DELETE. PUT in particular provides really useful signals about the nature of the resource we are dealing with.

And let's not get started on the built-in caching etc. you throw away by not using GET.


> I just find it super strange that people want to mirror their app logic + error response codes to some protocol like HTTP – ridiculous :)

Why is this ridiculous?

HTTP is the default protocol for network services, so it seems to me that it is perfectly sensible to design your API to be compatible with HTTP semantics.

> Why not go even lower as TCP and use some of that spec for our client <> server API conn. Many people will laugh, but if you think about it, where is the difference?

Because HTTP is the only protocol that can reliably transit arbitrary networks (middle-boxes, NAT, etc.) in practice.


REST conventions only make sense for externally consumed APIs. Even for those, there's GraphQL.


The Venn diagram overlap between REST and GraphQL is pretty small.


I've been a REST API developer for a few years now. For whatever reason, I've never bothered dipping my toes in the RPC realm. This article resonated with me. Looks like I'll be building an RPC API in the near future.


Ban begging and sleeping on the streets, build huge homeless shelters on the outskirts of a city, and provide basic healthcare to those people in need.

Is it that complicated and costly compared to this shame? USA people are so strange.


"The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread."

- Anatole France


The law also forbids rich-only things which poor people are not affected by, like white collar crimes.


I don't think I would compare shelter and food on the same level as "market manipulation" in this conversation.


I thought we were talking about the majestic equality of the law


The problem is they will stop after step one. They will make it illegal to beg or sleep on the streets, and then, they will expect those people to just magically disappear.


Because it is baked into the class itself. Ruby is a global-variable "fiesta": all classes and modules (and any constant that begins with an uppercase letter) become global once loaded.

Because of that, you can not have 2 parallel versions of the same lib in the same app; not possible. You load classes once, and they are accessible everywhere, magic! No need for import headers as in JS, Python, Java, Go, ...

That said, Ruby devs like to have "sane" extensions like to_json. People prefer to write "@object.to_h.to_json" instead of "JSON.generate(@object.to_h)", and not to think about loading the json lib. I agree that Rails abuses this too much.


> you can not have 2 parallel versions of the same lib in the same app

with the very latest ruby version, you actually can:

https://rubyreferences.github.io/rubychanges/3.1.html#kernel...


I suppose you're referencing:

    Kernel#load: module as a second argument
(the link jumps to some random place for me)

Well, you always could, it was just a bit more involved:

https://github.com/lloeki/pak


I think the point this guy tried to make is that a ton of big companies that would/could benefit from an in-house solution still go for a fully cloud-based one without thinking about alternatives.


I assume the readership base per particular publication is lower than before; plus, on the open market they get fewer $ for X views than they used to.


I can't understand how a developer can be aware of the existence of Svelte (https://svelte.dev/) and still have React problems.

Just start with creating web components in Svelte, include them in existing React code, and slowly phase out the slow, old tech that React is. This is just a frontend lib; why are people so religious about this? You still use Node, JS/TS, etc.

Am I a troll? Don't know, but that is just what I did a year ago. Now I just laugh when I see articles like this, but I also feel sad for small entrepreneurs who have to pay big $ for code in "dated tech", thinking "they are on the edge".


Svelte is recommended left and right as the "solution to React", but I think it's still way too young a project to be the cure-all that people think it is.

It's promising, especially for smaller projects/MVPs, but I still have no idea how well it will scale with app size/team size (and have yet to see good evidence that it does it well).

I just find it ironic that people dog on React for being the "default", but then also can't recommend Svelte fast enough, seemingly because it's just "not React".


Svelte is so pleasurable to work with. I'm still a beginner but making my own components that can interact with dom elements (for things like three.js/d3.js/peaks.js) was clear from the documentation almost immediately. I'm in the process of reworking some static sites with Svelte because I can re-use the components I make everywhere and the build pipeline is so simple.


We just tried Svelte in a small project at work, and it was a pleasant experience. I like its small API footprint, its closeness to plain HTML/JS, and its small bundle size.

In our setup with Parcel we didn't get good error messages and line numbers (it spat out internal compiler errors), but that might have to do with our setup. Trying it at home with Snowpack, it gave comprehensible error messages.


React's big enough that large dev orgs have made big investments into it and can't switch to Svelte on a dime.


It's not only that, but there are always dragons in scaling a framework to accommodate teams of hundreds of devs and supporting millions of users. Svelte is less known in this regard. Marginally better along one or two dimensions is not enough to justify the risk.


I'm interested in this - do you recommend any documentation or writing on mixing React and Svelte codebases?


I created a Ruby API lib named Joshua, out of frustration with Grape

https://github.com/dux/joshua

* Can work in REST or JSON RPC mode.

* Automatic routing + can be mounted as a Rack app, without framework, for unmatched speed and low memory usage

* Automatic documentation builder & Postman import link

* Nearly nothing to learn, pure Ruby classes

* Consistent and predictable request and response flow

* Errors and messages are localized

