Hacker News

It looks like a dozen people found this helpful. As a related idea, it's one of the main reasons batch inference is so much more efficient in ML: the model weights are read from memory once and reused across the whole batch, which turns the problem from memory-bound into compute-bound.
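A minimal sketch of the idea with NumPy, assuming a toy linear layer (the dimensions and the weight-traffic accounting are illustrative, not from any particular model): processing requests one at a time re-reads the weights for every input, while one batched matmul reads them once and reuses them, multiplying the FLOPs done per byte of weights moved.

```python
import numpy as np

# Toy linear layer: d_in -> d_out, with weight matrix W.
rng = np.random.default_rng(0)
d_in, d_out, batch = 512, 512, 64
W = rng.standard_normal((d_in, d_out)).astype(np.float32)
x = rng.standard_normal((batch, d_in)).astype(np.float32)

# One request at a time: W is streamed from memory for every input,
# so memory traffic scales with the batch while FLOPs/byte stay low.
seq = np.stack([xi @ W for xi in x])

# Batched: a single matmul reads W once and reuses it for all inputs,
# raising arithmetic intensity (FLOPs per byte of weights moved).
bat = x @ W

# Same result either way; only the memory-traffic profile differs.
assert np.allclose(seq, bat, atol=1e-4)

# Back-of-envelope arithmetic intensity on weight traffic alone
# (assumed simple model: 2*d_in*d_out FLOPs per matvec, fp32 weights):
flops_per_req = 2 * d_in * d_out
weight_bytes = 4 * d_in * d_out
ai_single = flops_per_req / weight_bytes            # 0.5 FLOP/byte
ai_batched = batch * flops_per_req / weight_bytes   # 32.0 FLOP/byte
print(ai_single, ai_batched)
```

Counting only weight traffic, the batched version's arithmetic intensity grows linearly with batch size, which is why the same hardware can serve a batch almost as fast as a single request.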


