If you have a program that looks at CCTV footage and IDs animals that go by... is a human supposed to validate every single output? What if it's thousands of hours of footage?
I think the parent comment is right. It's just a platitude for administrators to cover their backs, and it doesn't hold up to actual use cases.
I don't see it so bleakly. Using your analogy, it would simply mean that if the program underperforms compared to humans and starts making a large number of errors, the human who set up the pipeline will be held accountable. If the program is responsible for a critical task (i.e. the animal will be shot depending on the classification), then yes, a human should validate every output or be held accountable in case of a mistake.
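For the critical case, I'm imagining something like this toy sketch (the names, labels, and threshold are all made up for illustration, not any real system):

    from dataclasses import dataclass

    @dataclass
    class Detection:
        frame_id: int
        label: str        # e.g. "deer", "dog", "nothing"
        confidence: float  # model confidence in [0, 1]

    def route(det: Detection, critical_labels: set, auto_threshold: float = 0.99) -> str:
        # Anything with irreversible consequences (e.g. the animal gets shot)
        # goes to a human, no matter how confident the model is.
        if det.label in critical_labels:
            return "human_review"
        # Low-confidence but low-stakes detections still get a second look.
        if det.confidence < auto_threshold:
            return "human_review"
        # Everything else can be logged automatically.
        return "auto_accept"

The point is just that for anything with irreversible consequences, the default is a human sign-off, not auto-accept.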
I take an interest in plane crashes and human factors in digital systems. Complacency is a very human trait that shows up again and again in reports of true disasters, usually well after it has crept deep into an organization.
When you put something on autopilot, you also massively accelerate the process of becoming complacent about it, which is normal; it is the process of building trust.
When that trust is given but not deserved, problems develop. Often the system affected by complacency drifts, and nobody is looking closely enough to notice the problems until they become proto-disasters. When the human is finally put back in control, it may be to discover that the system is heading toward catastrophe too rapidly for them to regain situational awareness and intervene appropriately. It is for this reason that many aircraft accidents occur in the seconds and minutes following an autopilot cutoff. Similarly, every Tesla that ever slammed into the back of an ambulance stopped at the side of the road was a) being driven by an AI, b) which the driver had learned to trust, and c) whose driver, though theoretically responsible, had become complacent.
Theoretically responsible? I don't see why complacency would be fine in science. If it's a high school science project and you don't actually care at all about the results, sure.
The problem is that the original statement is too black and white. We make trade-offs based on costs and feasibility.
"if the program underperforms compared to humans and starts making a large amount of errors, the human who set up the pipeline will be held accountable"
Like... compared to one human? Or to an army of a thousand humans tracking animals? There is no nuance at all. It's just unreasonable to make a blanket statement that humans always have to be accountable.
"If the program is responsible for a critical task .."
See how your statement has some nuance? and recognizes that some situations require more accountability and validation that others?
If some dogs chew up an important component, the CERN dog-catcher won't avoid responsibility just by saying "Well, the computer said there weren't any dogs inside the fence, so I believed it."
Instead, they should be taking proactive steps: testing and evaluating the AI, adding manual patrols, etc.
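Concretely, even a crude spot-check loop would go a long way. A sketch, with made-up names, and a sample rate that would really depend on how costly a miss is:

    import random

    def spot_check(outputs: list, sample_rate: float = 0.02, seed: int = 0) -> list:
        # Pull a random sample of the classifier's decisions for a human to audit.
        if not outputs:
            return []
        rng = random.Random(seed)
        n = max(1, int(len(outputs) * sample_rate))
        return rng.sample(outputs, n)

    def estimated_error_rate(audited_pairs: list) -> float:
        # audited_pairs: (model_label, human_label) tuples from the manual check.
        if not audited_pairs:
            return 0.0
        wrong = sum(1 for model, human in audited_pairs if model != human)
        return wrong / len(audited_pairs)

If the estimated error rate drifts above what a human baseline would produce, that's the cue to stop treating the system as "set and forget".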