Hacker News

You have the full power of PostgreSQL at your disposal, so there are many ways you can effectively tackle this issue.

- Decrement the priority of a tenant's jobs by 1 each time one of their jobs is executed, or increment the other tenants' priorities (more ops but better behavior)

- Maintain a separate tenant priority table and join it + use for ordering when fetching the next job

And so on
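The second option might look roughly like this (the tenant_priority table and all column names are hypothetical, not from the thread):

```sql
-- Sketch: order pending jobs by a per-tenant priority held in its own
-- table, so adjusting fairness only touches one small row per tenant.
SELECT j.*
FROM jobs j
JOIN tenant_priority tp ON tp.tenant_id = j.tenant_id
WHERE j.status = 'pending'
ORDER BY tp.priority DESC, j.created_at
LIMIT 1
FOR UPDATE OF j SKIP LOCKED;
```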



I can think of plenty of inefficient ways to do this. The nice thing about the SKIP LOCKED queue is that it is very simple and pretty fast. Postgres just has to use an index to look at the jobs in some defined order and take the first one that isn't locked.
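Concretely, the whole fetch is a single statement (table and column names here are illustrative):

```sql
-- Claim the next unlocked job in queue order; rows locked by other
-- workers are skipped rather than waited on.
-- An index on (status, priority, created_at) keeps this an index scan.
SELECT *
FROM jobs
WHERE status = 'pending'
ORDER BY priority, created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
```

The worker then runs the job and updates or deletes the row in the same transaction.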

The first option here would create an enormous number of writes for each job fetched and would likely slow things to a crawl if enough jobs are in the queue.


Many scheduling systems have a "time in queue increases priority" behavior; it is not an exotic proposition and could be implemented efficiently in PostgreSQL.
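One way to get that efficiently is to fold age into the ordering at read time instead of rewriting priorities (a sketch, assuming a created_at column; the 60-second scale factor is arbitrary):

```sql
-- Effective priority grows with time in queue; no per-fetch writes.
SELECT *
FROM jobs
WHERE status = 'pending'
ORDER BY priority + EXTRACT(EPOCH FROM (now() - created_at)) / 60.0 DESC
LIMIT 1
FOR UPDATE SKIP LOCKED;
```

The trade-off is that an expression involving now() can't be served by a plain index, so the sort cost moves to the read side.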

Having too many jobs in queue is a problem on its own that should be addressed. Each tenant should be rate-limited or have a reasonable cap on number of waiting jobs.
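A per-tenant cap could be enforced at enqueue time, along these lines (a hypothetical sketch; the limit of 100 and the parameter placeholders are made up, and a concurrent-insert race is possible without stricter isolation):

```sql
-- Refuse the insert if the tenant already has 100 waiting jobs.
INSERT INTO jobs (tenant_id, payload)
SELECT $1, $2
WHERE (
  SELECT count(*) FROM jobs
  WHERE tenant_id = $1 AND status = 'pending'
) < 100;
```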


As long as you don't need a strict priority and a worker can just grab the next waiting job:

  SELECT * FROM jobs ORDER BY RANDOM() LIMIT 1 FOR UPDATE SKIP LOCKED
should do the trick


This will give more resources to tenants that schedule more jobs.

If tenant A schedules 99 jobs and tenant B schedules 1 job, a "fair" algorithm would pick B's job either first or second; RANDOM() will not.
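A fairer pick can be expressed by first reducing the queue to each tenant's oldest pending job, then taking the oldest of those candidates (a sketch with assumed column names; if the chosen row happens to be locked, SKIP LOCKED returns nothing and the worker simply retries):

```sql
-- Reduce to one candidate per tenant, then lock the globally oldest
-- candidate, so no tenant's backlog crowds out the others.
SELECT *
FROM jobs
WHERE id = (
  SELECT id
  FROM (
    SELECT DISTINCT ON (tenant_id) id, created_at
    FROM jobs
    WHERE status = 'pending'
    ORDER BY tenant_id, created_at
  ) oldest_per_tenant
  ORDER BY created_at
  LIMIT 1
)
FOR UPDATE SKIP LOCKED;
```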



