I can think of plenty of inefficient ways to do this. The nice thing about the SKIP LOCKED queue is that it is very simple and pretty fast. Postgres just has to use an index to look at the jobs in some defined order and take the first one that isn't locked.
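A minimal sketch of that fetch, assuming a hypothetical `jobs` table with `status` and `priority` columns and an index covering the ordering:

```sql
-- Claim the next job in priority order, skipping rows that
-- other workers have already locked. Table and column names
-- here are illustrative, not from the original post.
UPDATE jobs
SET status = 'running'
WHERE id = (
    SELECT id
    FROM jobs
    WHERE status = 'queued'
    ORDER BY priority, id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;
```

The inner `SELECT ... FOR UPDATE SKIP LOCKED` is what keeps concurrent workers from blocking on each other: each worker simply walks the index past any row another worker holds.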
The first option here would generate an enormous number of writes for each job fetched and would likely slow the queue to a crawl once enough jobs pile up.
Many scheduling systems have a "time in queue increases priority" behavior; it is not an exotic proposition and could be implemented efficiently in PostgreSQL.
Having too many jobs in the queue is a problem on its own that should be addressed. Each tenant should be rate-limited or given a reasonable cap on the number of waiting jobs.
- Decrement the priority of all of a tenant's jobs by 1 each time one of their jobs is executed, or increment the priorities of all other tenants' jobs (more writes, but better behavior)
- Maintain a separate tenant-priority table, join it in when fetching the next job, and use it for ordering
And so on.
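The second option above might look something like this, assuming a hypothetical `tenant_priority` table keyed by `tenant_id`:

```sql
-- Hypothetical schema: per-tenant priority kept in its own small
-- table, joined in at fetch time so ordering reflects fairness.
SELECT j.id
FROM jobs j
JOIN tenant_priority tp ON tp.tenant_id = j.tenant_id
WHERE j.status = 'queued'
ORDER BY tp.priority DESC, j.id
LIMIT 1
FOR UPDATE OF j SKIP LOCKED;

-- After executing a tenant's job, nudge that tenant down so
-- other tenants' queued work rises relative to it (one write
-- per fetch instead of one write per queued job).
UPDATE tenant_priority
SET priority = priority - 1
WHERE tenant_id = $1;
```

The appeal of this shape is that the hot `jobs` rows are never rewritten for priority adjustments; only a single small row per tenant changes, which keeps the per-fetch write amplification constant regardless of queue depth.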