
Evidence for safe and fair AI systems is possible as long as you define what "safe" and "fair" mean for your use case. Fairness might look like "no cohort has a false positive rate more than 5% higher than any other cohort", and safety might mean "the model must have a false negative rate below 15%". Safety also encompasses the workflows around the model, including human intervention, auditing, monitoring, etc.
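
As a rough illustration, here's a minimal sketch of what checking those two criteria could look like, assuming binary labels/predictions and a cohort label per record. The function names and threshold values (5% FPR gap, 15% FNR) are just the illustrative numbers from above, not a standard:

    from collections import defaultdict

    def false_positive_rate(y_true, y_pred):
        # FP rate = false positives / all actual negatives
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        negatives = sum(1 for t in y_true if t == 0)
        return fp / negatives if negatives else 0.0

    def false_negative_rate(y_true, y_pred):
        # FN rate = false negatives / all actual positives
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        positives = sum(1 for t in y_true if t == 1)
        return fn / positives if positives else 0.0

    def check_criteria(y_true, y_pred, cohorts, max_fpr_gap=0.05, max_fnr=0.15):
        # Group records by cohort and compute per-cohort FPR.
        by_cohort = defaultdict(lambda: ([], []))
        for t, p, c in zip(y_true, y_pred, cohorts):
            by_cohort[c][0].append(t)
            by_cohort[c][1].append(p)
        fprs = {c: false_positive_rate(ts, ps) for c, (ts, ps) in by_cohort.items()}
        fair = max(fprs.values()) - min(fprs.values()) <= max_fpr_gap
        safe = false_negative_rate(y_true, y_pred) <= max_fnr
        return fair, safe, fprs

In practice you'd do this kind of check on held-out data for every cohort you care about, and keep monitoring it after deployment as part of the workflow mentioned above.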

Here's a good overview of fairness: https://learn.microsoft.com/en-us/azure/machine-learning/con... and there are plenty of papers discussing how to safely use predictive analytics and AI in healthcare.

I don't know whether this product can provide proof of safe and fair ML systems, but it's not impossible to use these things safely and fairly.


