
The 800 GB/s bandwidth is amazing. I ordered a maxed-out Mac mini with 32 GB RAM and 200 GB/s bandwidth. For the LLMs I want to run right now, that is sufficient for my needs, although I did consider over-buying and getting an M2 Ultra. I also pay Google for Colab, and as long as I don't over-use it, I can almost always get an A100. My strategy is to split my work as appropriate between the Mac mini when I get it in a week or two, and Colab. I used to run on Lambda Labs, also excellent, but setup time was non-negligible.
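Why the bandwidth number matters for LLMs: token generation is usually memory-bandwidth-bound, since each new token requires reading every weight once, so bandwidth divided by model size gives a rough throughput ceiling. A minimal sketch of that back-of-the-envelope math (the 7B model size and 4-bit quantization are assumptions for illustration, not from the comment):

```python
# Rough upper bound on tokens/sec for a memory-bandwidth-bound LLM.
# Each generated token streams all weights through memory once,
# so tokens/sec <= bandwidth / model size in bytes.

def max_tokens_per_sec(bandwidth_gb_s: float,
                       params_billion: float,
                       bytes_per_param: float) -> float:
    """Bandwidth-bound throughput ceiling, ignoring compute and KV cache."""
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb

# Hypothetical 7B model at 4-bit quantization (0.5 bytes/param):
print(round(max_tokens_per_sec(200, 7, 0.5), 1))  # 200 GB/s Mac mini -> 57.1
print(round(max_tokens_per_sec(800, 7, 0.5), 1))  # 800 GB/s Ultra   -> 228.6
```

Real throughput lands well below these ceilings, but the ratio explains why quadrupling bandwidth is attractive for local inference.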


I have an M2 Pro with 16 GB. Anybody on Apple Silicon can download DiffusionBee.app and immediately be generating images from text prompts with its default model/engine. It's drag-and-drop.

Incredible what a desktop Mac mini can accomplish, even with the limitations of a single $1000 computer that costs less than a single NVIDIA 4090.


For comparison: the SSD in the M2 Mini is faster than a Mac Pro 5,1's RAM!
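The claim above is easy to sanity-check on your own machine with a quick sequential-write benchmark. A minimal sketch (the 256 MB test size, 8 MB chunk size, and tempfile location are arbitrary choices; a real benchmark would also test reads and bypass the page cache):

```python
# Quick-and-dirty sequential-write throughput check.
# fsync forces the data to actually hit the drive before timing stops.
import os
import tempfile
import time

def write_throughput_mb_s(size_mb: int = 256, chunk_mb: int = 8) -> float:
    """Write size_mb of zeros in chunk_mb chunks; return MB/s achieved."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.remove(path)

print(f"{write_throughput_mb_s():.0f} MB/s sequential write")
```

For scale: DDR3-1333 in a Mac Pro 5,1 peaks around 10 GB/s per channel in theory, while the M2 Mini's SSD benchmarks at roughly 3 GB/s, so the comparison holds against older or single-channel configurations rather than the Mac Pro's full triple-channel bandwidth.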


The M1 Max and M2 Max are quite serviceable too if you don't want to jump to an Ultra.



