A random sleep of up to 10 seconds turned a 150 Gbit/s spike into a 5 Gbit/s spike on the Akamai bill for a newspaper app I once worked on...
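
For illustration only (the poster doesn't describe the actual mechanism), the technique amounts to sleeping a random interval before each scheduled fetch, so clients that would otherwise hit the CDN at the same instant get spread across a window. A minimal sketch in Python; the URL and timeout are placeholders, and the 10-second bound comes from the comment above:

    import random
    import time
    import urllib.request

    FEED_URL = "https://example.com/feed.json"  # placeholder, not the real endpoint
    MAX_JITTER_S = 10                           # "random sleep of up to 10 seconds"

    def fetch_with_jitter():
        # Spread otherwise-simultaneous requests across a 10-second window
        # so they don't all arrive at the CDN in the same instant.
        time.sleep(random.uniform(0, MAX_JITTER_S))
        with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
            return resp.read()

    if __name__ == "__main__":
        print(f"fetched {len(fetch_with_jitter())} bytes")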


That's surprising. You'd think spreading the workload start over 10 seconds would lower the size of spikes (integrated over a second) by at most a factor of 10.

But the above point is still true: many jobs take a few minutes to run. 60s of dispersion in start time is better than nothing, but you really want more.

(In this case, things are still quantized to a minute boundary, so you'd really want both).
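
As an illustration of "both" (not how either cron implementation actually works), a sketch that spreads hosts across a multi-minute window using a stable per-host offset and then adds sub-minute jitter on each run; the 30-minute window and 59-second bound are assumed values:

    import hashlib
    import random
    import socket
    import time

    SPREAD_MINUTES = 30  # assumed window over which hosts disperse their start times
    MAX_JITTER_S = 59    # sub-minute jitter layered on top of the minute-level offset

    def stable_minute_offset(spread_minutes: int) -> int:
        # Hash the hostname so each host lands on the same "random" minute every
        # time, spreading a fleet across the window without any coordination.
        digest = hashlib.sha256(socket.gethostname().encode()).digest()
        return int.from_bytes(digest[:4], "big") % spread_minutes

    def delay_before_job():
        minutes = stable_minute_offset(SPREAD_MINUTES)
        time.sleep(minutes * 60 + random.uniform(0, MAX_JITTER_S))

    # delay_before_job() would run at the top of the periodic job, before it
    # touches the shared backend.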


> That's surprising. You'd think spreading the workload start over 10 seconds would lower the size of spikes (integrated over a second) by at most a factor of 10.

If the delay is on the reading side, away from Akamai and behind a cache, then 10 concurrent requests for X could each trigger a transfer because nothing is in the cache yet, whereas 10 requests with a short random delay give the first one time to prime the local cache before the rest start.

There are a number of reasons a sudden glut of activity could balloon bandwidth or CPU/memory costs more than you might expect.

Without a chunk more detail about the system in question, this is just random speculation of course.
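
As a toy model of that cache effect (the object size and cache behavior are made up), ten truly simultaneous cold-cache requests each pull the full object from the origin, while a short stagger lets the first request prime the cache for the other nine:

    ORIGIN_BYTES = 50_000_000  # assumed size of the object being fetched

    def origin_transfer(clients: int, staggered: bool) -> int:
        cache = {}       # stands in for the local/edge cache
        total = 0
        for _ in range(clients):
            if "X" in cache:
                continue             # cache hit: no transfer from origin
            total += ORIGIN_BYTES    # cache miss: full transfer from origin
            if staggered:
                cache["X"] = True    # the earlier request completed and primed the cache
        return total

    print(origin_transfer(10, staggered=False))  # 10x ORIGIN_BYTES: all in flight at once, all miss
    print(origin_transfer(10, staggered=True))   #  1x ORIGIN_BYTES: first miss primes the cache

In that model the stagger cuts origin transfer by the number of clients rather than by the length of the window, which is one way a small delay could shrink a bill by more than the naive factor-of-10 bound above.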


Good points.

Thinking about this: this is Akamai, which has historically charged for midgress. Cache liveness could be very important.


I'm not disputing that it prevents a subset of the same class of problem. It's just so incomplete compared to the OpenBSD implementation that it's disingenuous to say NetBSD already implemented it.


FreeBSD, not NetBSD.

They're on different timescales. The OpenBSD start times are still quantized to the minute, I believe.

Both solutions would complement each other.


ah you're right, my bsd



