According to [0], it looks like the "Umbrel Home" device they sell (16GB RAM, Intel N150 CPU) runs a 7B model at about 2.7 tokens/sec, or a 13B model at 1.5 t/s.
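Those numbers are roughly what you'd expect for memory-bandwidth-bound CPU decode. A quick sanity check (my assumptions, not theirs: ~Q4 quantization at ~0.56 bytes/param, every weight read once per generated token, and maybe ~10 GB/s effective bandwidth out of the N150's single-channel DDR5):

```python
# Back-of-envelope: tokens/sec ~= effective memory bandwidth / bytes read per token.
# Assumptions (mine): Q4_K-style quantization ~0.56 bytes/param, and decode is
# memory-bandwidth-bound, so all weights stream through RAM once per token.
GB = 1e9

def tokens_per_sec(params_billions, bandwidth_gbs, bytes_per_param=0.56):
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return bandwidth_gbs * GB / bytes_per_token

for params in (7, 13):
    print(f"{params}B model @ ~10 GB/s effective: {tokens_per_sec(params, 10):.1f} t/s")
```

That lands near 2.5 t/s for 7B and 1.4 t/s for 13B, in the same ballpark as the reported figures, so the bottleneck is the hardware, not their software.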
Especially since they seem to be aiming at a not-terribly-technical market segment, there's a pretty big mismatch between that performance and their website claims:
> The most transformative technology of our generation shouldn't be confined to corporate data centers. Umbrel Home democratizes access to AI, allowing you to run powerful models on a device you own and control.
It's all subjective, of course. Personally I think that borders on useless for local inference, but maybe some people are happy with low-quality models at slow speeds.