chuckhend's comments | Hacker News

Love SQLx for my Rust projects. I would like to figure out a great way to use the compile time checks in python or js projects, but haven't explored it yet.


LiteLLM is quite battle tested at this point as well.

> it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected behavior modifications

Leveraging official SDKs also does not solve compatibility issues. any_llm would still need to maintain compatibility with those official SDKs. I don't think one way is clearly better than the other here.


That's true. We traded API compatibility work for SDK compatibility work. Our bet is that providers are better at maintaining their own SDKs than we are at reimplementing their APIs. SDKs break less often and more predictably than APIs, plus we get provider-implemented features (retries, auth refresh, etc) "for free." Not zero maintenance, but definitely less. We use this in production at Mozilla.ai, so it'll stay actively maintained.


Being battle tested is the only good thing I can say about LiteLLM.


You can add that it's still 10x better than LangChain.


Very relevant talk if anyone is interested in learning a bit more about how the project is so fast. https://www.youtube.com/watch?v=gSKTfG1GXYQ


Never thought to check whether Jane Street had a YouTube channel, or whether they had programming-related stuff on it, but here we are thanks to you! Makes sense that they do, and that it's Python related.


Automating this with a linter and formatter is great. It moves the argument over style and format to a one-line change to a lint config instead of mingling it with the main code change.


It also hopefully only happens once.

If you continue to have people bringing up arguments over the linter and formatter after an initial agreement is made, then you can talk to those people.


Congrats to the paradedb team!


You can send or read a single message at a time, or as many as you want in a batch.

https://github.com/tembo-io/pgmq?tab=readme-ov-file#read-mes...
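For example (a rough sketch; the queue name and payloads here are made up, and the README above has the exact signatures):

    -- send a batch of messages, one msg_id returned per message
    select * from pgmq.send_batch('my_queue', array['{"id": 1}', '{"id": 2}']::jsonb[]);

    -- read up to 10 messages, hiding them from other readers for 30 seconds
    select * from pgmq.read('my_queue', 30, 10);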


pop() or read() in a loop, yes. You can read 1 message or many messages at a time.

What we do at Tembo in our infrastructure is pause for up to a few seconds if a read() returns no messages. When there are messages, we read() with no pause in between. When the queues are empty it amounts to less than one query per second. There is not much cost to reading frequently if you use a client-side connection pool, or a server-side pool like PgBouncer.
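A sketch of the calls such a loop would issue (queue name and msg_id are hypothetical):

    -- pop: fetch a single message and delete it in one call
    select * from pgmq.pop('my_queue');

    -- or read: fetch up to 10 messages with a 30 second visibility timeout,
    -- then delete (or archive) each one after it has been processed
    select * from pgmq.read('my_queue', 30, 10);
    select pgmq.delete('my_queue', 42);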


The client libs are a nice convenience, but most users write the SQL directly when integrating with other SQL statements, something like:

begin; select * from my_table group by ...; select pgmq.send(...); commit;
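A slightly fuller sketch of that pattern (table, queue name, and payload are made up), where the enqueue commits atomically with the rest of the work:

    begin;
    -- do some work in the same transaction
    insert into order_events (order_id, status) values (42, 'shipped');
    -- enqueue a message; it only becomes visible if the transaction commits
    select pgmq.send('my_queue', '{"order_id": 42, "status": "shipped"}');
    commit;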


There is a long poll, https://tembo-io.github.io/pgmq/api/sql/functions/#read_with...
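Something like this should block for up to 5 seconds waiting for a message to arrive (queue name and numbers are made up; the linked docs have the exact parameters):

    -- read up to 1 message with a 30s visibility timeout,
    -- polling for up to 5 seconds at 100ms intervals
    select * from pgmq.read_with_poll('my_queue', 30, 1, 5, 100);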

We have been talking about a push model using Postgres 'notify', or even via HTTP, but we don't have a solid design for it yet.


This is exactly how pgmq is implemented, plus the use of a visibility timeout (VT).
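Roughly, a pgmq read looks something like this under the hood (simplified from memory; the real table and column names may differ):

    -- lock the next batch of visible messages, skipping rows held by other readers
    with next_msgs as (
        select msg_id from pgmq.q_my_queue
        where vt <= clock_timestamp()
        order by msg_id
        limit 10
        for update skip locked
    )
    -- push the visibility timeout forward so other consumers skip these rows for 30s
    update pgmq.q_my_queue q
    set vt = clock_timestamp() + interval '30 seconds',
        read_ct = read_ct + 1
    from next_msgs
    where q.msg_id = next_msgs.msg_id
    returning q.*;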

