There was this one incident in China where a facial recognition system mistook the face of a Chinese celebrity on the side of a bus for a jaywalker... so the system isn't perfect in unusual conditions/environments yet. However, I do believe that today's results are already outstanding and will only get better.
Funny you should say that. In another post I compared sitting congressmen to convicted felons:
- 440 images of congressmen
- 1,756 mugshots
- 10 mismatches (?) with 70+% certainty
The highest was 86%, but to be fair, I wouldn't be able to tell you confidently for all of them that the convicts aren't the same person. And anything under 80% should be suspect anyway. It's just that you need to use the right statistical methods when comparing one person against a large pool, because you'll get spurious matches.
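To see why, here's a rough back-of-the-envelope sketch. The per-pair false-match rate below is an illustrative assumption (the real system's rate isn't public), but even a tiny rate blows up once you multiply it by hundreds of thousands of comparisons:

```python
# Sketch: why a large pool produces spurious matches even when each
# individual comparison is fairly reliable.
# NOTE: false_match_rate is an assumed, illustrative figure.

pool_size = 1_756          # mugshots
probes = 440               # congressmen images
false_match_rate = 1e-5    # assumed per-pair probability of a spurious "match"

comparisons = pool_size * probes
expected_false_matches = comparisons * false_match_rate

# Probability that at least one spurious match shows up anywhere,
# treating comparisons as independent.
p_probe_clean = (1 - false_match_rate) ** pool_size
p_any_false = 1 - p_probe_clean ** probes

print(f"Total comparisons: {comparisons:,}")                     # 772,640
print(f"Expected spurious matches: {expected_false_matches:.1f}")  # ~7.7
print(f"P(at least one spurious match): {p_any_false:.4f}")        # ~0.9996
```

So a handful of "matches" out of 440 x 1,756 comparisons is roughly what you'd expect from chance alone, which is why a per-match confidence score on its own doesn't tell you much.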
This attitude disturbs me more than any other single aspect of the mass idiocy around adopting AI for critical things. 80% is horribly low accuracy for anything even remotely important.
For example, imagine you went to a store and could tell the cashier any price for anything so long as it was 80% accurate, as in, 80% of the original price. Just a 20% potential discount, nbd.
Or put another way: 80% of your items have to have a perfectly accurate price, but you bought 5 items and one of them was a PlayStation 5 you priced at $1. It's fine, the rest were accurate!
80% is extremely low accuracy. It's absurd to think that's an acceptable cutoff. We should demand systems like these demonstrate 99% or better accuracy. Until then they should be illegal to apply in any scenario where a decision is made about another person.
If public shaming like that were attempted in the US, I imagine in some circles it would become a goal to repost on social media a photo of yourself that the system caught jaywalking. Probably holding a sign with some meme.
https://www.theverge.com/2018/11/22/18107885/china-facial-re...
https://you.com/search?q=facial+recognition+in+china