Hacker News: drilbo's comments

I recommended FUTO Keyboard in a sibling comment. FlorisBoard is a nice FOSS option, but some features are still WIP. Personally, I've switched fully to ThumbKey, but that's got quite a learning curve.


You can always opt out of using gboard altogether.

FUTO Keyboard is quite nice.


Thanks for the suggestion. It supports multilingual typing, which is a requirement for me. I haven't checked other keyboards in a long time, so perhaps that has become more common.

The integration with whisper is nice too.


I don't find the multilingual features as polished as Gboard's, and this is what prevents me from switching. In Gboard you can install multiple languages and write without having to switch between them; it will provide autosuggestions and spelling support based on the language you're typing, without you having to manually change the language.


Heliboard is another good one: https://github.com/Helium314/HeliBoard


erm acktually not an anime


while this is obviously a very damning example, tbf it does seem to be an extreme outlier.


Well, Elon Musk is definitely an extremist, and he's certainly a bald-faced out liar, and he's obviously the tin-pot dictator of the prompt. So you have a great point.

Poor Grok is stuck in the middle of denying the Jewish Holocaust on one hand, while fabricating the White Genocide on the other hand.

No wonder it's so confused and demented, and wants to inject its cognitive dissonance into every conversation.


that looks like a dinosaur now


>It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.

in this case it looks like this is the higher parameter count version, the 2b was released previously. (Not that it excludes them from making an even larger one in the future, altho that seems atypical of video/image/audio models)

re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.


Good point! I forgot there was a smaller one out there already.

OpenAI’s naming conventions have gotten out of hand.

I believe the “o” is supposed to mean “Omni” and indicate that the model is multi-modal.


Days? Pretty sure I could survive at least a couple years off Snickers bars.
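Back-of-envelope on the calories alone (assuming roughly 250 kcal per standard bar and a ~2000 kcal/day maintenance intake; both figures are approximations, not from the thread):

```python
# Rough energy math: calories are easy to cover; it's the
# micronutrients (vitamin C, thiamine, etc.) that run out.
KCAL_PER_BAR = 250    # approximate, standard-size Snickers
KCAL_PER_DAY = 2000   # rough adult maintenance intake

bars_per_day = KCAL_PER_DAY / KCAL_PER_BAR
print(bars_per_day)  # 8.0 bars/day keeps you in energy balance
```

So eight-ish bars a day covers energy needs indefinitely; deficiency diseases, not starvation, would be the limiting factor.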


Probably not years. My guess is that scurvy, beriberi, or some other deficiency would get you in at most a year.


Though most of these deficiency diseases could be avoided with some minimal fortification of the Snickers bar, which wouldn't add noticeably more energy.


>You probably want to replace Llama with Qwen in there. And Gemma is not even close.

Have you tried the latest, Gemma 3? I've been pretty impressed with it. Altho I do agree that Qwen3 quickly overshadowed it, it seems too soon to dismiss it altogether. E.g., the 3-4B and smaller versions of Gemma seem to freak out way less frequently than similar-param-size Qwen versions, tho I haven't been able to rule out quant and other factors in this just yet.

It's very difficult to fault anyone for not keeping up with the latest SOTA in this space. The fact we have several options that anyone can serviceably run, even on mobile, is just incredible.

Anyway, I agree that Mistral is worth keeping an eye on. They played a huge part in pushing the other players toward open weights and proving smaller models can have a place at the table. While I personally can't get that excited about a closed model, it's definitely nice to see they haven't tapped out.


It's probably subjective to your own use, but for me Gemma 3 is not particularly usable (i.e., not competitive enough or delivering particular value for me to make use of it).

Qwen 2.5 14B blows Gemma 27B out of the water for my use. Qwen 2.5 3B is also very competitive. The 3 series is even more interesting with the 0.6B model actually useful for basic tasks and not just a curiosity.

Where I find Qwen relatively lackluster is its complete lack of personality.


I feel like these are all great examples of things people think they want. Making a post about it is one thing; actually buying or using the product is another. I think the majority of nostalgic people will quickly remember why they don't actually want it in their adult life.


I see this a lot in vintage computing. What we want is the feelings we had back then, the context, the growing possibilities, youth, the 90s, whatever. What we get is a semi-working physical object that we can never quite fix enough to relive those experiences. But we keep acquiring and fixing and tinkering anyway hoping this time will be different while our hearts become torn between past and present.


Yeah this is not even faster horses. It's horses that can count like Clever Hans.


unless I somehow skimmed over it, they only appear to refer to "prompt injection"

