I'm confused by all the takes implying decode is more important than prefill.
There are an enormous number of use cases where the prompt is large and the expected output is small.
E.g. providing data for the LLM to analyze, after which it gives a simple yes/no Boolean response. Or selecting a single enum value from a set.
This pattern seems far more valuable in practice than the common, lazy, open-ended chat-style implementations (lazy from a product perspective).
Obviously decode will be important for code generation or search, but that's such a small set of possible applications, and you'll probably always do better being on the latest models in the cloud.
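To make the pattern concrete, here's a rough sketch of the large-prompt / tiny-output style against any OpenAI-compatible endpoint (the base URL, model name, and labels are placeholders, not anything from the article): essentially all of the work is prefill, and decode is capped at a few tokens.

    # Sketch: classify a large document into a single enum value.
    # Assumes some OpenAI-compatible server; base_url/model are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    LABELS = ["approve", "reject", "escalate"]

    def classify(document: str) -> str:
        resp = client.chat.completions.create(
            model="local-model",
            messages=[
                {"role": "system",
                 "content": f"Answer with exactly one word from: {', '.join(LABELS)}."},
                {"role": "user", "content": document},  # large prompt: all prefill
            ],
            max_tokens=3,   # tiny output, so decode speed barely matters
            temperature=0,
        )
        return resp.choices[0].message.content.strip().lower()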
The gain is only in prefill, and if the task/output is complex the gain will be marginal. So the numbers here are quite exaggerated, based on a prompt that takes less than 2s to decode. I guess we're not talking about complex tasks with outputs of hundreds or thousands of tokens.
For the cost of an M3 Ultra + DGX the gain seems minimal. Above all, exo didn't clarify which model was used here, and it's surely not a dense model or an MoE with 1B or 2B experts, otherwise the Mac Ultra would suffer a lot too and the layers would be bigger!
Anecdotally, even medium-sized prompts (a few thousand tokens) on pretty small models (2-8B) have resulted in extremely noticeable slowdowns (the vast majority of total processing time) on my M1 Mac, which made me appreciate the significance of the prefill step (and the difficulty of processing large contexts locally).
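If anyone wants to reproduce this kind of observation, splitting prefill from decode is easy with a streaming request against a local OpenAI-compatible server (llama.cpp server, LM Studio, etc.); the URL and model name below are placeholders, and time-to-first-token is only an approximation of prefill time:

    # Rough timing of prefill (TTFT) vs decode against a local server.
    import time
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    def time_phases(prompt: str, max_tokens: int = 128):
        start = time.perf_counter()
        first_chunk_at = None
        n_chunks = 0
        stream = client.chat.completions.create(
            model="local-model",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
            stream=True,
        )
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                if first_chunk_at is None:
                    first_chunk_at = time.perf_counter()  # ~ prefill time
                n_chunks += 1
        end = time.perf_counter()
        print(f"prefill ~{first_chunk_at - start:.2f}s, "
              f"decode: {n_chunks} chunks in {end - first_chunk_at:.2f}s")

With a long prompt and a short max_tokens, the first number dominates, which is exactly the effect described above.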
This is a wonderful explanation of the two phases! I appreciate the hardware concerns for both now.
Reading the article I wished for a device that just does both things well and on that topic it might be noteworthy that Apple's just-released M5 has approximately 3.5x-ed TTFT performance compared to M4, according to their claims!
Blog author here. Actually, no. The model can be streamed into the DGX Spark, so we can run prefill of models much larger than 128GB (e.g. DeepSeek R1) on the DGX Spark. This feature is coming to EXO 1.0 which will be open-sourced soonTM.
IP over Thunderbolt is definitely a thing; I don't know whether IP over USB is as well. USB4x2 or TB5 can do 80 Gbit/s symmetric or 120/40 Gbit/s asymmetric (and boy is this a poster child for the asymmetric setup). The Mac definitely supports that fine, so as long as the Spark plays nice, USB is actually a legitimately decent choice.
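If you want to sanity-check what the bridge interface actually delivers, a crude point-to-point throughput test is easy to throw together (iperf3 does this better; the port and the bridge IP you pass in are placeholders):

    # Crude throughput check over a Thunderbolt/USB4 bridge.
    # Run "server" on one machine, "client <bridge-ip>" on the other.
    import socket, sys, time

    PORT = 5201
    CHUNK = 4 * 1024 * 1024          # 4 MiB per send
    TOTAL = 8 * 1024 * 1024 * 1024   # push 8 GiB total

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(CHUNK):
                    pass

    def client(host):
        buf = b"\0" * CHUNK
        sent = 0
        start = time.perf_counter()
        with socket.create_connection((host, PORT)) as s:
            while sent < TOTAL:
                s.sendall(buf)
                sent += CHUNK
        secs = time.perf_counter() - start
        print(f"{sent * 8 / secs / 1e9:.1f} Gbit/s")

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])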
How do you know this happened? I thought it was an abandoned project until I saw this post. I've been diligently checking weekly for new releases but nothing for almost a year...