
Wow, it's less than $7/hour on AWS for 1 TB of RAM on demand, and $27/hour for 4 TB. If you reserve an instance (for 3 years) you can even go up to 24 TB.


Not that it's entirely equivalent (man-hours of maintaining local hardware/software vs. man-hours of managing AWS), but you could buy ~22.5 TB of RAM outright for the price of running the 1 TB AWS instance at $7/hour for a year, assuming a current price of $2.75/GB of RAM. If you only need the 1 TB (about 5% of that cost), you could put the difference (~$58,000) towards the rest of the hardware and an engineer at quarter time to maintain it (rough numbers sketched below).

Bonus: you get to keep the hardware.
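
To make the comparison concrete, here is a back-of-the-envelope sketch in Python. The prices ($7/hour on demand, $2.75/GB of RAM) are the figures quoted in this thread, not AWS or vendor quotes, and the always-on usage and 1000 GB/TB rounding are my own assumptions.

    # Back-of-the-envelope numbers from this thread; prices ($7/hr on demand,
    # $2.75/GB RAM) come from the comments above, not from AWS or a vendor.
    HOURS_PER_YEAR = 24 * 365          # assume the instance runs around the clock
    aws_yearly = 7 * HOURS_PER_YEAR    # ~$61,320/yr for the 1 TB instance
    ram_price_per_gb = 2.75

    ram_tb_outright = aws_yearly / ram_price_per_gb / 1000   # ~22.3 TB (decimal TB)
    own_1tb = 1000 * ram_price_per_gb                        # ~$2,750, roughly 5%
    leftover = aws_yearly - own_1tb                          # ~$58,570 left over

    print(f"One year on demand: ${aws_yearly:,}")
    print(f"RAM you could buy outright: {ram_tb_outright:.1f} TB")
    print(f"Buy just 1 TB: ${own_1tb:,.0f}, leftover: ${leftover:,.0f}")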


Unfortunately, big corporations think it's a "not scalable" solution and prefer to overpay $60k to outsource it to AWS...


For the "big data" stuff under discussion, you're probably spinning up an instance for some training then turning it off again. $7/hour for an hour or two a day, is about $4k/year. Maybe you retrain weekly and it's only $500/year. At that kind of price point, who cares? As the person making these algorithms, I'd hate to have to wait to order new machines for me to run something new, or to schedule time on the cluster or whatever, instead of just throwing an elastic amount of money at it.


I can see the same argument being used for workloads you could run locally, up to about $500k/month in infrastructure. People are so used to managed servers that they don't even see the possibility of using anything else. Now everyone wants to be cloud agnostic (judging by the conferences), so they want to have their cake and eat it too. Meanwhile, in all the consulting I've done in this area, the companies with mixed on-premises/cloud setups pay the least.



