
At scale, algorithms are commonly limited by memory bandwidth, not concurrency. Most code can be engineered with enough cheap concurrency to efficiently saturate memory bandwidth.

This explains why massively parallel HPC codes are mostly minimal-mutable-state designs despite seemingly poor theoretical properties for parallelism. Real-world performance and scalability are dictated by minimizing memory copies and maximizing cache disjointness.



It's certainly true that some things are limited by memory bandwidth. But it's also common to be limited in other ways.



