Hacker News

People are using "throughput" and "latency" differently in different contexts. Here they are referring to per-user token throughput and first-token/chunk latency. They don't mention the aggregate token throughput of the entire 576-chip system[0] running Llama 2 70B, which is the number we're looking for.

[0] https://news.ycombinator.com/item?id=38742581
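The distinction can be made concrete with a quick sketch (all numbers below are hypothetical placeholders, not figures from the linked thread):

```python
# Hypothetical figures to illustrate the two different "throughput" numbers.
tokens_per_second_per_user = 300.0   # per-user decode throughput (the quoted metric)
time_to_first_token_s = 0.2          # first-token/chunk latency (the other quoted metric)
concurrent_users = 48                # assumed number of simultaneous requests served

# Aggregate system throughput is the figure missing from the quoted numbers:
system_tokens_per_second = tokens_per_second_per_user * concurrent_users
print(system_tokens_per_second)  # 14400.0 tokens/s across the whole system
```

The per-user number and the latency can both look impressive while saying nothing about how many requests the system serves at once, which is why the aggregate figure matters.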


