Hacker News

GPUs are notoriously bad at exploiting sparsity. I wonder if this architecture can do a better job. To the Groq engineers in this thread: if a neural network had, say, 60% of its weights set to 0, what would that do to cost and speed on your hardware?
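For context on what's at stake in the question: a rough sketch (not anything Groq-specific, just scipy on CPU) of the ideal arithmetic savings at 60% sparsity. A dense matvec does n*n multiply-accumulates regardless of zeros, while a sparse kernel that skips zeros only touches the stored nonzeros, so the best-case speedup is 1/(nonzero fraction), here ~2.5x. The sizes and library choice are illustrative assumptions.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n = 512
W = rng.standard_normal((n, n))
# Zero out ~60% of the weights, mimicking the sparsity level in the question.
W[rng.random((n, n)) < 0.60] = 0.0
x = rng.standard_normal(n)

W_csr = sparse.csr_matrix(W)
dense_macs = n * n          # dense matvec cost, independent of zeros
sparse_macs = W_csr.nnz     # a sparse kernel only visits stored nonzeros
print(f"nonzero fraction:    {W_csr.nnz / (n * n):.2f}")
print(f"ideal MAC reduction: {dense_macs / sparse_macs:.2f}x")

# The results agree: sparsity changes the cost, not the math.
assert np.allclose(W @ x, W_csr @ x)
```

In practice hardware rarely achieves this ideal: unstructured sparsity breaks the regular memory access patterns that dense matrix units rely on, which is exactly why the question of how a given architecture handles it is interesting.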

