Minimax has been great for super high-speed web/JS/TS work. In my experience it compares to Claude Sonnet, and at times produces results similar to Opus.
Design-wise, it produces some of the most beautiful AI-generated pages I've seen.
GLM-4.7 feels like a mix of Sonnet 4.5 and GPT-5 (the first version, not the later ones). It has deep, deep knowledge, but it's often just not as good at execution.
They're very cheap to try out, so you should see how your mileage varies.
Of course, for the hardest possible tasks that only GPT 5.2 approaches, they're not up to scratch. And on the hard-ish tasks, in C++ for example, that Opus 4.5 tackles, Minimax feels closer, but just doesn't "grok" the problem space well enough.
To prove you right, you can read up on the incredible giga-brained countrywide experiments by Kardelj in Socialist Yugoslavia [0]. The result was a country where no one wanted to work and everyone had a great standard of living (as long as the IMF didn't call in its loans). And then the entire country collapsed all at once under the accumulated mismanagement.
It's wild to me that despite tremendous resources and 100+ years of time, capitalism still kills millions of people a year through starvation and preventable disease. But every right-winger has a pet Wikipedia page about a failed communist state, with no critical examination of why it failed beyond "communism bad".
To clarify my stance: I'm an anarchist, and that page has a lot of good examples of successful worker-owned collectives.
There are good, critically examined rebuttals if you actually look for them beyond Wikipedia, which is not designed for that purpose. I recently read a book called "Socialism: The Failed Idea That Never Dies", and while it has a clickbait title, the arguments are pretty cogent as to why people throughout history have wanted to enact socialism-based systems and why those systems eventually fail.
To me it's equally wild that you say such a thing when no system in human history has done as much as capitalism to alleviate hunger and disease. In fact, all other systems combined still can't touch the progress we've made toward eradicating famine and disease while "under capitalism".
> Letting people download 400GB just to find that out is also .. not optimal.
Letting people download any number of bytes just to find out they got something else isn't optimal. So what to do? Highlight the differences when you reference the models, so people understand what they're getting.
> DeepSeek's first-generation reasoning models are achieving performance comparable to OpenAI's o1 across math, code, and reasoning tasks! Give it a try! 7B distilled: ollama run deepseek-r1:7b
Claims like that are really misleading. Reading the first part, you'd think the second part refers to the model that gives "performance comparable to OpenAI's o1", but it doesn't; it's a distilled model with far worse performance. Yes, they do say it's the distilled model, but I hope I'm not alone in seeing how less careful people could confuse the two.
If they're doing this on purpose, it leaves a very bad taste in my mouth. If they're doing it accidentally, it still gives me reason to pause and re-evaluate what they're doing.
I was personally affected by this fire, although I've always kept three months of production backups, encrypted, on-site, just in case of emergencies like this. I haven't touched their services for anything production-related since.