The problem is that the original statement is too black and white. We make tradeoffs based on costs and feasibility.
"if the program underperforms compared to humans and starts making a large amount of errors, the human who set up the pipeline will be held accountable"
Like... compared to one human? Or to an army of a thousand humans tracking animals? There's no nuance at all. It's just unreasonable to make a blanket statement that humans always have to be accountable.
"If the program is responsible for a critical task .."
See how your statement has some nuance, and recognizes that some situations require more accountability and validation than others?
"if the program underperforms compared to humans and starts making a large amount of errors, the human who set up the pipeline will be held accountable"
Like.. compared to one human? Or an army of a thousand humans tracking animals? There is no nuance at all. It's just unreasonable to make a blanket statement that humans always have to be accountable.
"If the program is responsible for a critical task .."
See how your statement has some nuance? and recognizes that some situations require more accountability and validation that others?