I would have thought they'd be more rigorous, since mistakes for them could threaten the very viability of the business. That's also why I assume most are still on mainframes. (I've never worked at a bank.)
Banks have existed since long before computers, and thus have ways to detect and correct errors that are not purely technological (double-entry bookkeeping, backups, supporting documentation, separate processes). So a bank can survive a DB doing nasty things at a low enough frequency that it isn't detected beforehand, which means they don't need to "prove in Coq" that everything is correct.
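To illustrate the double-entry point: every transaction is recorded twice, as a balanced pair of entries, so a single corrupted or missing entry is detectable without any knowledge of the underlying system. A minimal sketch (account names here are made up for illustration):

```python
# Minimal sketch of double-entry bookkeeping as an error check.
# Every transaction posts two entries that sum to zero, so the
# running total over the whole ledger must always be zero.
from decimal import Decimal

ledger = []  # list of (account, amount) entries

def post(debit_account, credit_account, amount):
    """Record one transaction as a balanced pair of entries."""
    amount = Decimal(amount)
    ledger.append((debit_account, amount))
    ledger.append((credit_account, -amount))

post("cash", "customer_deposits", "100.00")
post("loans", "cash", "40.00")

# Invariant: all entries sum to zero. If one amount gets silently
# corrupted, this check fails and flags the books for reconciliation.
assert sum(amount for _, amount in ledger) == 0
```

The point isn't that this is how a bank's software works; it's that the invariant exists independently of any particular database, which is exactly why a flaky DB isn't fatal.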
Mistakes don't threaten them that much. When Equifax (admittedly not a bank) can make massive negligent fuckups and still be a going concern, there isn't much heat there. Most fuckups a bank makes can be unwound.
Mainframe systems aren't tested to the Jepsen level of standard just because they were built on mainframes in the 70s/80s. In fact, quite the opposite.
Banks are not usually run by people who chase the first fad.js they see; they can also usually think further ahead than five minutes.
Also, I'm sure they engineer their systems so that every operation and action is logged multiple times, with multiple layers of redundancy.
A main transaction DB will not be a "single source of truth" for any event. It will be the primary source of truth, but the ledger you see in your online bank is only a simplified view into it.
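A rough sketch of that idea, with an entirely hypothetical schema: the customer-facing ledger is a filtered projection of the raw transaction records, which carry extra state (settlement status, audit references) that never reaches the online-banking UI.

```python
# Hypothetical schema, for illustration only: raw records hold more
# state than the simplified ledger a customer sees.
from decimal import Decimal

raw_transactions = [
    {"id": 1, "account": "A", "amount": Decimal("100.00"), "status": "settled", "audit_ref": "x1"},
    {"id": 2, "account": "A", "amount": Decimal("-30.00"), "status": "settled", "audit_ref": "x2"},
    {"id": 3, "account": "A", "amount": Decimal("-5.00"),  "status": "pending", "audit_ref": "x3"},
]

def customer_ledger(account):
    """The online-banking view: settled entries only, customer-relevant fields only."""
    return [
        {"amount": t["amount"]}
        for t in raw_transactions
        if t["account"] == account and t["status"] == "settled"
    ]
```

Other views (reconciliation, audit, regulatory reporting) would project the same raw records differently, which is why no single view is "the" truth.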