Strictly speaking we charge for compute and storage. You can create variously-sized Gel instances and within those instances an arbitrary number of branches.
We give you 1GB of space for free. I think you can fit three or four branches in that. But we'll announce a cheaper tier this week (spoiler!) with the same amount of disk space.
The paper seems to mostly focus on the quality of cardinality estimation (mostly driven by statistics) which is admittedly one of the frequent sore points in Postgres. There's been some progress in that area though (CREATE STATISTICS being a highlight).
Which is arguably the most important part of a planner. If you don't have good cardinality information, it doesn't matter if you have fancy planner strategies. They'll be employed in the wrong situation, and won't produce the good plans we all want.
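To make the stakes concrete, here's a toy simulation (not Postgres internals; the column names and numbers are made up) of why correlated columns wreck cardinality estimates under the planner's default independence assumption:

```python
import random

random.seed(0)

# Two perfectly correlated columns, e.g. city and zip_code: knowing one
# determines the other. A planner that assumes independence multiplies
# the per-column selectivities and badly underestimates the row count.
N = 10_000
rows = [(random.randrange(10),) * 2 for _ in range(N)]  # col_a == col_b always

sel_a = sum(1 for a, b in rows if a == 3) / N          # ~0.1
sel_b = sum(1 for a, b in rows if b == 3) / N          # ~0.1
estimated = N * sel_a * sel_b                          # independence: ~100 rows
actual = sum(1 for a, b in rows if a == 3 and b == 3)  # really ~1000 rows

print(f"estimated={estimated:.0f} actual={actual}")
```

This ~10x underestimate is exactly what extended statistics address on the Postgres side, e.g. `CREATE STATISTICS s (dependencies) ON city, zip_code FROM t;` teaches the planner about the functional dependency.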
You can use Docker (and Docker Compose) with Gel for local development [1], but of course you'd miss out on most of the CLI's management features, because it isn't built to supplant the docker/docker-compose CLI. Are there any particular issues you have currently with the Docker image approach?
> If I use some other extension like timescale is that compatible with gel [...] Postgres is so powerful partly because of its ecosystem, so I want to know how much of that ecosystem is still useable if I’m using gel on top of Postgres
Playing nice with the ecosystem is the goal. We started off with more of a walled garden, but with 6.0 a lot of those walls came down with direct SQL support and support for standalone extensions [1]. There is a blog post coming about this specifically tomorrow (I think).
> And is there a story for replication
Replication/failover works out of the box.
> subscribing to queries for real time updates?
Working on it.
> so can I add gel to an existing Postgres instance and get the benefits of the nicer query language or does it rely on special tables?
Gel is built around its schema, so you will need to import your SQL schema. After that you can query things with EdgeQL (or SQL/ORM as before).
EdgeDB is NOT an object store. It is an "enhanced relational model" in which the set is a fundamental building block [1]: not only are relations sets, but attributes can be sets too.
If your database client is any good, it should do the retries for you. EdgeDB uses serializable isolation (as the only option), and all our bindings are coded to retry on transaction serialization errors by default.
Transaction deadlocks are another common issue triggered by concurrent transactions, even at lower isolation levels, and should be retried as well.
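The retry pattern the bindings implement looks roughly like this (a minimal sketch; `SerializationError` and `run_transaction` are stand-ins, not any driver's actual API):

```python
class SerializationError(Exception):
    """Stand-in for the driver's 'could not serialize access' error."""

def run_transaction(tx_fn, max_retries=3):
    # Generic retry wrapper of the kind good bindings provide: the whole
    # transaction body re-runs from scratch on a serialization failure,
    # and the error is surfaced only once retries are exhausted.
    for attempt in range(max_retries):
        try:
            return tx_fn()
        except SerializationError:
            if attempt == max_retries - 1:
                raise

# Demo: fails twice under contention, succeeds on the third attempt.
attempts = {"n": 0}

def transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError
    return "committed"

print(run_transaction(transfer))  # committed
```

The important part is that the application never sees the intermediate failures; it just hands over a function and gets a committed result back.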
I'm curious how you can handle transaction deadlocks at a low level - there might have been a lot of non-SQL processing code that determined those values, and blindly replaying the transactions could result in incorrect data.
We handle this by passing our transaction a function to run - it will retry a few times if it gets a deadlock. But I don't consider this to be very low level.
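To address the replay concern specifically: the pattern is safe only when every value the transaction depends on is computed inside the retried function, so each attempt re-reads fresh state rather than replaying stale values. A hedged sketch (names like `DeadlockError` and `with_retries` are illustrative, not a real API):

```python
class DeadlockError(Exception):
    """Stand-in for the database's deadlock-detected error."""

def with_retries(tx_fn, retries=3):
    # Retry the whole function, not just the final writes: the non-SQL
    # processing that derives the values runs again on every attempt.
    for attempt in range(retries):
        try:
            return tx_fn()
        except DeadlockError:
            if attempt == retries - 1:
                raise

balance = {"amount": 100}
calls = {"n": 0}

def withdraw():
    calls["n"] += 1
    current = balance["amount"]   # re-read inside the function, never cached
    if calls["n"] == 1:
        raise DeadlockError       # first attempt hits a deadlock
    balance["amount"] = current - 30
    return balance["amount"]

print(with_retries(withdraw))  # 70
```

If `current` had been captured outside `withdraw` before the first attempt, a retry could commit a value based on a stale read - which is exactly the "incorrect data" failure mode raised above.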
> We handle this by passing our transaction a function to run - it will retry a few times if it gets a deadlock. But I don't consider this to be very low level.
Oh neat, I was just thinking about something like this the other day.
[1] https://www.phoronix.com/news/Arch-Linux-WoW64-Wine