
On the contrary, creating new HTTP connections introduces an irreducible source of latency compared to establishing and reusing a persistent connection.

You may end up building a single-tenant architecture where each tenant gets its own database and relatively few consumers, and those consumers can respond more quickly by sticking with a long-lived connection model.
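
To make the cost concrete, here is a minimal Go sketch (the URL and request count are placeholders) comparing a client that opens a fresh connection for every request against one that reuses a keep-alive connection. The absolute numbers depend entirely on the network path and whether TLS is involved; the point is only that the handshake cost disappears once the connection is reused:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // timeRequests issues n GET requests with the given client and reports
    // the total wall-clock time, so the two configurations can be compared.
    func timeRequests(client *http.Client, url string, n int) time.Duration {
        start := time.Now()
        for i := 0; i < n; i++ {
            resp, err := client.Get(url)
            if err != nil {
                panic(err)
            }
            io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
            resp.Body.Close()
        }
        return time.Since(start)
    }

    func main() {
        url := "https://example.com/" // placeholder endpoint

        // New TCP (and TLS) handshake on every request.
        fresh := &http.Client{Transport: &http.Transport{DisableKeepAlives: true}}

        // Default transport: keep-alive, so the connection is reused after the first request.
        reused := &http.Client{}

        fmt.Println("new connection each time:", timeRequests(fresh, url, 10))
        fmt.Println("persistent connection:   ", timeRequests(reused, url, 10))
    }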



I don’t understand your point.

These are long-polled connections.


If you look at other HTTP-based databases, like DynamoDB or S3, the latency involved in setting up new connections is a downside of those databases (not arguing that it's never worth it, architectural decisions are all trade-offs, but that is a trade-off).


HTTP doesn’t set up a new connection each time; it supports persistent connections (keep-alive).

And in the case of message queue long polling, the HTTP connection stays open and waits for a message to become available.
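
Roughly what the consumer side of that looks like, as a sketch in Go: a hypothetical /poll endpoint holds the request open (SQS-style) until a message arrives or the wait expires, and the client reuses one keep-alive connection across successive polls. The endpoint, query parameter, and 204-means-empty convention are assumptions for illustration:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // One client, so the underlying keep-alive connection is reused
        // across successive long-poll requests.
        client := &http.Client{Timeout: 35 * time.Second}

        // Hypothetical queue endpoint: the server holds the request open
        // for up to ~30s and responds as soon as a message is available.
        url := "https://queue.example.com/poll?wait=30"

        for {
            resp, err := client.Get(url)
            if err != nil {
                time.Sleep(time.Second) // transient error: back off briefly and retry
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()

            if resp.StatusCode == http.StatusNoContent {
                continue // wait expired with no message; poll again immediately
            }
            fmt.Println("got message:", string(body))
        }
    }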


Trying to build on persistent HTTP connections that never close is a recipe for frustration once you scale horizontally, which is presumably the plan, since the whole reason not to go with Postgres is to run more than one instance, right?

You can't juggle the connection between different servers. If a server drops out (because it's being scaled in), then you lose the connection, and you have to establish a new one, which introduces a hiccup/latency. No free lunches.
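
As a rough illustration of that reconnect hiccup, here is a Go sketch of a client holding a long-lived streaming connection to a hypothetical endpoint and re-establishing it with backoff whenever the instance it was pinned to goes away; the URL and line-delimited event format are assumptions:

    package main

    import (
        "bufio"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{}                      // no overall timeout: the stream stays open indefinitely
        url := "https://broker.example.com/subscribe" // hypothetical streaming endpoint
        backoff := time.Second

        for {
            resp, err := client.Get(url)
            if err != nil {
                // The instance we were pinned to may have been scaled in;
                // wait and establish a fresh connection (this is the hiccup).
                time.Sleep(backoff)
                if backoff < 30*time.Second {
                    backoff *= 2
                }
                continue
            }
            backoff = time.Second // connected: reset the backoff

            scanner := bufio.NewScanner(resp.Body)
            for scanner.Scan() {
                fmt.Println("event:", scanner.Text())
            }
            resp.Body.Close() // server closed the stream; loop and reconnect
        }
    }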



