
Inference throughput scales really well with larger batch sizes (at the cost of latency): arithmetic intensity rises with the batch size, and decoding is almost always memory-bandwidth limited.
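
To make the intuition concrete, here's a rough back-of-the-envelope sketch (my own assumed layer size and dtype, not from the comment above): for a single weight-matrix multiply, the weights dominate the memory traffic, so FLOPs per byte grow roughly linearly with the batch size until the GPU hits its compute roofline.

  # Sketch: arithmetic intensity (FLOPs per byte) of one dense layer at batch size B.
  # Numbers below (4096x4096 layer, fp16) are illustrative assumptions.
  def arithmetic_intensity(d_in, d_out, batch, bytes_per_elem=2):
      flops = 2 * batch * d_in * d_out                      # multiply-accumulates
      bytes_moved = bytes_per_elem * (d_in * d_out          # weights (dominant term)
                                      + batch * d_in        # activations in
                                      + batch * d_out)      # activations out
      return flops / bytes_moved

  for b in (1, 8, 64, 512):
      print(b, round(arithmetic_intensity(4096, 4096, b), 1))
  # prints roughly 1.0, 8.0, 63.0, 455.7 -- intensity grows ~linearly with batch
  # while weight traffic dominates, so batching buys throughput almost for free
  # until the kernel becomes compute-bound.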

