> Taking 1 millisecond from 1000 people does not translate to 1 second of lost human productivity
I believe that it does. If you average over a large population, even tiny, imperceptible increases in e.g. page load time have a measurable effect. Perhaps 1000 people isn't a large enough sample to detect the effect of a 1 ms delay given all the confounding factors at play, but I absolutely believe you could see it with precise enough measurement and a large enough sample.
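To put a rough number on "large enough sample": here's a back-of-the-envelope power calculation for detecting a 1 ms shift in mean load time with a two-sample z-test. The 500 ms standard deviation, the 0.05 significance level, and the 80% power target are assumptions for illustration, not measured values.

    from statistics import NormalDist

    sigma = 500.0   # assumed std dev of page load time, in ms
    delta = 1.0     # effect size we want to detect: 1 ms
    alpha = 0.05
    power = 0.80

    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)

    # standard per-group sample size for comparing two means
    n_per_group = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    print(f"~{n_per_group:,.0f} users per group")   # on the order of 4 million

So 1000 people is hopeless, but a few million per arm would do it under these assumptions.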
Hmm, I think the heart of the question is whether productivity is linear in time. I guess he's right that, for small values of t, productivity is highly superlinear. So giving 1 person an extra minute has an impact on that person, but because of that superlinearity, distributing the same minute among 60 people means the extra second each won't enable anything new, like an original thought or whatever.
You're thinking about it wrong. Of course if you give 60 people 1 second each, they won't be able to accomplish a task that takes a minute; that's obvious. Instead, what if you give one group of people 59 seconds each to complete a task that takes 1 minute ± 10 seconds, and a different group 60 seconds each to complete the same task? More people in the second group will complete the task.
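A quick Monte Carlo sketch of that argument, assuming (purely for illustration) that "1 minute ± 10 seconds" means the task time is uniform on [50 s, 70 s]:

    import random

    random.seed(0)
    N = 100_000

    def completion_rate(budget_s: float) -> float:
        # fraction of simulated people whose task fits in the budget
        done = sum(1 for _ in range(N) if random.uniform(50, 70) <= budget_s)
        return done / N

    print("59 s budget:", completion_rate(59))   # ~0.45
    print("60 s budget:", completion_rate(60))   # ~0.50

The extra second per person raises the completion rate by about 5 percentage points in this toy model.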
Well, in your case there's a "phase transition" for the task around 1 minute, so the return function is sublinear in t (if you assume the given time G starts at 0, i.e. G = t) or linear in it (if you assume it starts at 50, i.e. G = 50 + t). I just wanted to highlight this principle: his statement is equivalent to "for small t, the return from freeing t seconds from computer tasks is superlinear in t". And in general: "the analysis of time redistribution hinges on the linearity of the return function for the freed time t".
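Under the same assumed uniform-on-[50 s, 70 s] task model as above, you can see the linear regime directly: between 50 s and 70 s each freed second adds a constant 5 percentage points of completion probability, and outside that window it adds nothing.

    def p_complete(budget_s: float) -> float:
        # P(task fits in budget) for task time uniform on [50, 70]
        return min(max((budget_s - 50) / 20, 0.0), 1.0)

    for budget in range(50, 71, 5):
        gain_per_extra_second = p_complete(budget + 1) - p_complete(budget)
        print(budget, round(p_complete(budget), 2), round(gain_per_extra_second, 2))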
I do believe that most people, in most situations, do tasks in a segmented fashion, working each to completion. So if you give them a small amount of extra time, it just shifts their segments until they hit a synchronized event, like going to sleep at a predetermined time or having a meeting -- so all that happens is they get e.g. an extra millisecond before they sleep. Under this model it's quite clear to me that for very small t the return function is superlinear, and it's better to give 1 person 100 seconds than 100k people a millisecond each (assuming this is one-off, of course): for the 100k people, the millisecond is insufficient to elicit any change whatsoever. It's much less than a neuron's firing time.
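A toy version of that segmented-work model, where a freed slice of time is worth nothing unless it exceeds some minimal useful chunk (the 1 second threshold here is an assumption, just to make the comparison concrete):

    T_MIN = 1.0  # assumed minimal useful chunk of time, in seconds

    def useful_return(freed_seconds: float) -> float:
        # below the threshold the time is absorbed before the next sync point
        return freed_seconds if freed_seconds >= T_MIN else 0.0

    one_person = useful_return(100.0)                   # 100.0 s of value
    hundred_k_people = 100_000 * useful_return(0.001)   # 0.0 s of value
    print(one_person, hundred_k_people)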