On the contrary, creating new HTTP connections introduces an irreducible source of latency compared to establishing and reusing a persistent connection.
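As a rough illustration, here's a minimal Python sketch (assuming the `requests` library and a hypothetical https://api.example.com endpoint) of the difference between paying connection setup on every request and reusing a keep-alive session:

```python
# Minimal sketch: fresh connection per request vs. a reused keep-alive session.
# `https://api.example.com` is a hypothetical endpoint, not a real service.
import time
import requests

URL = "https://api.example.com/item/42"  # hypothetical

def avg_latency(fn, n=20):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# New connection each time: pays DNS + TCP + TLS setup on every request.
cold = avg_latency(lambda: requests.get(URL))

# Persistent session: the underlying connection is kept alive and reused.
session = requests.Session()
warm = avg_latency(lambda: session.get(URL))

print(f"new connection per request: {cold * 1000:.1f} ms avg")
print(f"reused keep-alive session:  {warm * 1000:.1f} ms avg")
```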
You may end up building a single-tenant architecture where each tenant gets its own database and has relatively few consumers, and those consumers could respond quicker by sticking with a long-lived connection model.
If you look at other HTTP-based databases, like DynamoDB or S3, the latency of setting up new connections is a real downside of those databases (not that it's never worth it; architectural decisions are all trade-offs, but it is a trade-off).
Trying to build on persistent HTTP connections that never close is a recipe for frustration when scaling horizontally, which is presumably the plan, since the whole reason not to go with Postgres is so you can run more than one instance, right?
You can't carry the connection across different servers. If a server drops out (because it's being scaled in), you lose the connection and have to establish a new one, which introduces a hiccup of latency. No free lunches.
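To be concrete about that hiccup, here's a rough sketch (same assumptions: `requests` plus a hypothetical endpoint) of what a client ends up doing when the server holding its connection disappears, i.e. retry with backoff and dial a fresh connection:

```python
# Sketch of the reconnect cost when the server behind a pooled connection
# is scaled in. The endpoint and payload are hypothetical.
import time
import requests

URL = "https://api.example.com/query"  # hypothetical

def query_with_reconnect(session, payload, retries=3):
    delay = 0.1
    for attempt in range(retries):
        try:
            return session.post(URL, json=payload, timeout=5)
        except requests.ConnectionError:
            # The server holding the pooled connection went away; back off
            # briefly, then retry, which dials a brand-new connection.
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("gave up after repeated connection failures")

session = requests.Session()
resp = query_with_reconnect(session, {"q": "select 1"})
```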