
That's surprising. You'd think spreading the workload's start over 10 seconds would lower the size of spikes (integrated over a second) by at most a factor of 10.

But the above point is still true: many jobs take a few minutes to run. 60s of dispersion in start time is better than nothing, but you really want more.

(In this case, things are still quantized to a minute boundary, so you'd really want both.)
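
To make that concrete, here's a minimal sketch (Python, with a hypothetical run_job standing in for the actual workload) of adding a random splay on top of a minute-boundary schedule, so N clients that all wake on the same minute still disperse their starts across it:

    import random
    import time

    MAX_SPLAY_SECONDS = 60  # disperse starts across the whole minute

    def run_job():
        pass  # hypothetical placeholder for the real workload

    def jittered_start():
        # Sleep a uniformly random amount before doing the work, so
        # clients quantized to the same minute boundary spread their
        # load over the following 60 seconds instead of one instant.
        time.sleep(random.uniform(0, MAX_SPLAY_SECONDS))
        run_job()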



> That's surprising. You'd think spreading the workload's start over 10 seconds would lower the size of spikes (integrated over a second) by at most a factor of 10.

If the delay is on the reading side, away from Akamai and behind a cache, then 10 concurrent requests for X could result in ten separate data transfers, since X isn't in the cache yet. But 10 requests with a short delay between them is enough for the first request to prime the local cache before the rest start.
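
As a toy illustration of that effect (a sketch only, not how Akamai's caches actually behave; the cache here is a plain dict with no request coalescing):

    import threading
    import time

    cache = {}
    origin_fetches = 0

    def fetch_origin(key):
        time.sleep(0.5)  # simulate the origin round trip
        return "payload for " + key

    def get(key):
        # Naive check-then-fill cache: concurrent misses each go to origin.
        global origin_fetches
        if key not in cache:
            origin_fetches += 1
            cache[key] = fetch_origin(key)
        return cache[key]

    def run(stagger):
        global origin_fetches
        cache.clear()
        origin_fetches = 0
        threads = [threading.Thread(target=get, args=("X",)) for _ in range(10)]
        for i, t in enumerate(threads):
            if i > 0:
                time.sleep(stagger)  # delay everyone after the first request
            t.start()
        for t in threads:
            t.join()
        print("stagger=%.1fs -> %d origin fetches" % (stagger, origin_fetches))

    run(0.0)  # all at once: every request misses, ~10 origin transfers
    run(1.0)  # short delay: the first request primes the cache, 1 transfer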

There are a number of reasons a sudden glut of activity could balloon bandwidth or CPU/memory costs more than you might expect.

Without a chunk more detail about the system in question, this is just speculation, of course.


Good points.

Thinking about this: this is Akamai, which has historically charged for midgress. Cache liveness could be very important.



