
Huh, never heard of that before. An interesting paper!

Running the numbers - assuming a 4k record size instead of 1k, ignoring data size changes, cache, electricity, and rack costs, and picking a $60 Samsung 980 (PCIe x4) and a $95 set of 2x16GB DDR5-6400 DIMMs - I get $0.003 per disk access per second and $0.0000113 for 4k of RAM, a ratio of 264.

That is remarkably close to the original paper's ratio of 400, even though their disks only got 15 random reads per second, not 20,000, and cost $15,000, and their memory cost $1000/MB not $0.002/MB.
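
For concreteness, here is that break-even arithmetic as a small Python sketch. The part prices and the 20,000 sustained random reads per second are the assumptions above, not measured figures:

    # Five-minute-rule break-even for a 4 KiB page: keep it in RAM, or
    # re-read it from the SSD each time it is needed?
    SSD_PRICE_USD = 60.0                 # Samsung 980 (assumed)
    SSD_RANDOM_READS_PER_SEC = 20_000    # assumed sustained 4 KiB random reads

    RAM_PRICE_USD = 95.0                 # 2x16GB DDR5-6400 kit (assumed)
    RAM_BYTES = 32 * 2**30
    PAGE_BYTES = 4 * 2**10

    # Capital cost of enough SSD to deliver one random read per second.
    cost_per_access_per_sec = SSD_PRICE_USD / SSD_RANDOM_READS_PER_SEC   # ~$0.003

    # Capital cost of keeping one 4 KiB page resident in RAM.
    cost_per_cached_page = RAM_PRICE_USD * PAGE_BYTES / RAM_BYTES        # ~$0.0000113

    # A page touched once every T seconds needs 1/T accesses per second of SSD,
    # so caching pays off when T is below the ratio of the two costs.
    break_even_s = cost_per_access_per_sec / cost_per_cached_page
    print(f"break-even interval: {break_even_s:.0f} s (~{break_even_s / 60:.1f} min)")
    # -> roughly 265 s, i.e. still in five-minute territory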

I'm not sure the "spend 10 bytes of memory to save 1 instruction per second" rule holds up equally well, especially now that processors are multi-core, pipelined, complex beasts. But working naively - dividing a CPU's price by its frequency times core count - you get roughly $0.01/MIPS (instead of $50k). $0.01 is about the cost of 3 MB of RAM, so dividing both sides by a million, you should spend about 3 bytes, not 10, to save 1 instruction per second.
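
As a rough sketch of that second calculation - the CPU price, clock, and core count below are made-up but plausible numbers, and "one instruction per cycle per core" is deliberately naive:

    # Redoing the "10 bytes of RAM per saved instruction/second" rule naively.
    CPU_PRICE_USD = 500.0     # assumed
    CPU_HZ = 3.2e9            # assumed clock
    CPU_CORES = 16            # assumed core count

    RAM_USD_PER_MB = 95.0 / (32 * 1024)          # same $95 / 32GB kit as above

    naive_mips = CPU_HZ * CPU_CORES / 1e6        # ~51,200 MIPS at 1 instr/cycle/core
    usd_per_mips = CPU_PRICE_USD / naive_mips    # ~$0.01 per MIPS

    # MB per MIPS scales down to bytes per (instruction per second).
    bytes_per_saved_instr_per_sec = usd_per_mips / RAM_USD_PER_MB
    print(f"${usd_per_mips:.4f}/MIPS -> spend ~{bytes_per_saved_instr_per_sec:.1f} "
          f"bytes of RAM per instruction/second saved")
    # -> roughly 3 bytes, versus the paper's 10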



> $60 Samsung 980

If this is a Hetzner machine then yes, but enterprise SSDs cost more, especially when bought from enterprise vendors. That only drives the storage cost up, though.

More so, if you only send some big chunk of data every 5 minutes and it would take a noticeable share of your memory (32 / 1000 x 100 = 3.2%), then it is easier to just read it from storage again - if you are not constrained by storage bandwidth, of course.
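
A toy version of that decision, taking the ~265 s break-even from upthread and treating the 32/1000 figure as a fraction of RAM (both are assumptions for illustration):

    # Cache-or-reread helper: keep a chunk resident only if it is touched more
    # often than the break-even interval and it actually fits in memory.
    BREAK_EVEN_S = 265.0      # from the SSD/RAM prices assumed upthread

    def keep_in_ram(access_interval_s: float, ram_fraction: float) -> bool:
        # Assumes SSD bandwidth is not the bottleneck, as noted above.
        if ram_fraction >= 1.0:
            return False                    # does not fit, must re-read
        return access_interval_s < BREAK_EVEN_S

    # The scenario above: a chunk worth ~3.2% of RAM, sent once every 5 minutes.
    print(keep_in_ram(access_interval_s=300, ram_fraction=32 / 1000))   # False -> re-read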

And by the way, the latest gaming consoles (at least the PlayStation?) are designed around this concept - they trade having a big amount of RAM (which in the case of the PS5 is shared between the GPU and the OS) for loading assets from storage extremely fast, 'just in time'. Which works fine for games.



