To understand the problem one first has to know that, for better or worse, Postgres does not use index-organized tables. Additionally, this ought to work not only on a primary key but also on additional unique constraints, including partial unique indexes, potentially over expressions (say lower(username) or something).
That makes some approaches harder.
The bigger problem is that 'update if the insert fails' is harder than it sounds. When you find a conflicting row in the index, that row's creating transaction might not have committed yet. So you need to first wait for it to commit, and then retry. But that transaction could also roll back, or delete the row again. And suddenly there's no row to UPDATE anymore. Thus you need to retry, right?
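To make the retry dance concrete, here's a minimal toy sketch in Python. It models a table as a dict of key -> (value, creating transaction), with transaction states 'in-progress', 'committed', and 'aborted'. All names here are invented for illustration; this is not how Postgres implements it, just the shape of the loop the above argues is unavoidable.

```python
class Txn:
    """Toy transaction: just a visible state."""
    def __init__(self):
        self.state = 'in-progress'

def upsert(table, key, value, my_txn, wait_for_commit):
    """Loop until we either inserted a new row or updated a committed one."""
    while True:                               # the unavoidable retry loop
        row = table.get(key)
        if row is None:
            table[key] = (value, my_txn)      # no conflict: plain INSERT
            return 'inserted'
        _, owner = row
        if owner.state == 'in-progress':
            wait_for_commit(owner)            # block until the other txn resolves
            continue                          # then re-check: the row may be gone
        if owner.state == 'aborted':
            del table[key]                    # creator rolled back: dead row, retry
            continue
        table[key] = (value, my_txn)          # creator committed: UPDATE in place
        return 'updated'
```

Note the case the text describes: if the conflicting transaction aborts while we wait, the row we meant to UPDATE no longer exists, and the loop falls back to inserting.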
There are also some issues around avoiding deadlocks. If you "naively" just acquire a 'value lock' on the to-be-inserted row, you can easily get into deadlock territory if more than one row is inserted in one transaction. Such value locks have to be released after an attempt to update, which then necessitates retrying in a loop...
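The deadlock hazard can be sketched the same way: if two transactions each insert the same pair of keys but in opposite orders, and each holds its first value lock while waiting for the second, they wait on each other forever. One classic way out is to take the locks in a globally consistent (here: sorted) order. Again a toy model with invented names, not Postgres's actual mechanism:

```python
import threading

# One 'value lock' per unique key value.
value_locks = {k: threading.Lock() for k in ('a', 'b')}

def insert_rows(keys, inserted):
    # Sorting gives every transaction the same lock-acquisition order,
    # so no cycle of waiters can form.
    for k in sorted(keys):
        value_locks[k].acquire()
    try:
        inserted.extend(keys)                 # "insert" both rows
    finally:
        for k in sorted(keys, reverse=True):  # release after the attempt
            value_locks[k].release()

inserted = []
t1 = threading.Thread(target=insert_rows, args=(['a', 'b'], inserted))
t2 = threading.Thread(target=insert_rows, args=(['b', 'a'], inserted))
t1.start(); t2.start()
t1.join(timeout=5); t2.join(timeout=5)
```

With unsorted acquisition, t1 holding 'a' and t2 holding 'b' could each block forever on the other's lock; with the consistent order both threads finish.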
Does that start to explain the problems a bit?