
That makes sense for code or technical text, but it is less relevant for car UIs. In an infotainment system you almost never see ambiguous strings where O vs 0 or I vs l matters. Everything is highly contextual, short, and glance-based. These fonts are tuned for distance, motion, glare, and quick recognition, not for reading arbitrary identifiers. If it tested poorly in real driving conditions that would be a real problem, but judging it by programmer font rules feels like the wrong yardstick.

Have you looked at tldr/tealdeer[0]? It may do much of what you're looking for, albeit without LLM assistance.

0: https://tealdeer-rs.github.io/tealdeer/


I had no issues using Firefox on a 2021 M1 Pro or my Framework 13. Reader mode does not work, however.


I've had good results from the James Hoffmann recipe [0], although I brew inverted. You can push the plunger down with just the weight of your arm resting on it. For something very different, you can brew something not-quite-espresso using the Fellow Prismo cap for the AeroPress.

0: https://www.youtube.com/watch?v=j6VlT_jUVPc


Personal opinion is that the whole point of the AeroPress is that you don't need to follow any recipes to get a good result. The parameters are extremely flexible to the point of being close to foolproof. Start with good beans and water. Grind anywhere between French press and very fine pourover level. Brew for anywhere between 1 and 8 minutes. Add anywhere between 100ml and 200ml of water. Press reasonably slowly.

The results will always be good. Maybe not the level you'd get with extremely high-quality light-roasted beans and a very careful pourover technique, but maybe an AeroPress isn't the best brewer for those beans in the first place.


We have been fine-tuning models using Axolotl and Unsloth, with a slight preference for Axolotl. Check out the docs [0] and fine-tune or quantize your first model. There is a lot to be learned in this space, but it's exciting.

0: https://axolotl.ai/ and https://docs.axolotl.ai/


When do you think fine tuning is worth it over prompt engineering a base model?

I imagine with the finetunes you have to worry about self-hosting, model utilization, and then also retraining the model as new base models come out. I'm curious under what circumstances you've found that the benefits outweigh the downsides.


For self-hosting, there are a few companies that offer per-token pricing for LoRA finetunes of certain base models (LoRAs are basically efficient-to-train, efficient-to-host finetunes; see the sketch after this list):

- (shameless plug) My company, Synthetic (https://synthetic.new), supports LoRAs for Llama 3.1 8b and 70b. All you need to do is give us the Hugging Face repo and we take care of the rest. If you want other people to try your model, we charge usage to them rather than to you. (We can also host full finetunes of anything vLLM supports, although we charge by GPU-minute for full finetunes rather than the cheaper per-token pricing for supported base-model LoRAs.)

- Together.ai supports a somewhat wider range of base models than we do, with a bit more config required, and any usage is charged to you.

- Fireworks does the same as Together, although they quantize the models more heavily (FP4 for the higher-end models). However, they support Llama 4, which is pretty nice although fairly resource-intensive to train.
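
For intuition on why LoRAs are cheap to train and host: instead of updating the full weight matrix W, you learn two small matrices A and B and compute y = Wx + (alpha/r) * B(Ax). A toy sketch of the forward pass (hypothetical shapes, plain arrays, nothing vendor-specific):

    // Toy LoRA forward pass: y = W x + (alpha/r) * B (A x).
    // W is frozen (d_out x d_in); A (r x d_in) and B (d_out x r) are the
    // only trained weights, with rank r much smaller than d_in/d_out.
    final class LoraLinear {
        final double[][] w, a, b;
        final double scale; // alpha / r

        LoraLinear(double[][] w, double[][] a, double[][] b, double alpha) {
            this.w = w; this.a = a; this.b = b;
            this.scale = alpha / a.length;
        }

        static double[] matVec(double[][] m, double[] x) {
            double[] y = new double[m.length];
            for (int i = 0; i < m.length; i++)
                for (int j = 0; j < x.length; j++)
                    y[i] += m[i][j] * x[j];
            return y;
        }

        double[] forward(double[] x) {
            double[] base = matVec(w, x);             // frozen base output
            double[] delta = matVec(b, matVec(a, x)); // low-rank update
            for (int i = 0; i < base.length; i++)
                base[i] += scale * delta[i];
            return base;
        }
    }

Because only A and B differ per finetune, a host can keep one copy of the base weights and swap tiny adapters per customer, which is what makes per-token pricing workable.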

If you have reasonably good data for your task, and your task is relatively "narrow" (i.e. find a specific kind of bug, rather than general-purpose coding; extract a specific kind of data from legal documents rather than general-purpose reasoning about social and legal matters; etc), finetunes of even a very small model like an 8b will typically outperform — by a pretty wide margin — even very large SOTA models while being a lot cheaper to run. For example, if you find yourself hand-coding heuristics to fix some problem you're seeing with an LLM's responses, it's probably more robust to just train a small model finetune on the data and have the finetuned model fix the issues rather than writing hardcoded heuristics. On the other hand, no amount of finetuning will make an 8b model a better general-purpose coding agent than Claude 4 Sonnet.


Do you maybe know if there is a company in the EU that hosts models (DeepSeek, Qwen3, Kimi)?


Most inference companies (Synthetic included) host in a mix of the U.S. and EU — I don't know of any that promise EU-only hosting, though. Even Mistral doesn't promise EU-only AFAIK, despite being a French company. I think at that point you're probably looking at on-prem hosting, or buying a maxed-out Mac Studio and running the big models quantized to Q4 (although even that couldn't run Kimi: you might be able to get it working over ethernet with two Mac Studios, but the tokens/sec will be pretty rough).


When prompt engineering isn't giving you reliable results.


Only for narrow applications, mostly: where a fine-tune lets you use a smaller model locally, specialised and trained for your specific use case.


Finetuning rarely makes sense unless you are an enterprise, and even then it generally doesn't in most cases.


What hardware do you train on when using Axolotl? I use Unsloth with Google Colab Pro.


This is what I use, and it works well. It's very straightforward to add apps and automatically update them as new releases are pushed to GitHub or wherever they are hosted.


This idea feels a little like bullet journaling or logseq [0] to me. For what it's worth, I do this in Obsidian and clean-up my thoughts on a regular basis. It hits the right balance of minimalism and usefulness for me.

0: https://logseq.com/


I tried to create more of an Apple Notes-type of app, but I get the appeal of full-featured apps like Obsidian and Logseq.


Is this quote real? I'm familiar with George Pólya's "If you cannot solve the proposed problem, try to solve first a simpler related problem," but I cannot find any source for the Lenstra quote.


I also found it connected to Polya [1]

[1] https://www.pleacher.com/mp/mquotes/mobquote.html



I’ve heard him say it myself in a lecture on the AKS primality test. So, ehh, the source is oral tradition I guess.


The challenge I have is how to get bounding boxes for the OCR, for things like redaction/de-identification.


AWS Textract works pretty well for this and is much cheaper than running LLMs.


Textract is more expensive than this (for your first 1M pages per month at least) and significantly more than something like Gemini Flash. I agree it works pretty well though - definitely better than any of the open source pure OCR solutions I've tried.


Yeah, that's a fun challenge — what we've seen work well is a system that forces the LLM to generate citations for all extracted data, maps those back to the original OCR content, and then generates bounding boxes from there. Tons of edge cases for sure, which we've built up a suite of heuristics for over time, but overall it works really well.
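
Concretely, the exact-match core of that mapping looks something like this (illustrative types and names only, not our actual code; the real pipeline is where the fuzzy heuristics live):

    import java.util.List;

    // Illustrative types only, not any vendor's API.
    record Box(double x0, double y0, double x1, double y1) {}
    record OcrWord(String text, Box box) {}

    final class CitationBoxes {
        // Union of the boxes covering a run of OCR words.
        static Box union(List<OcrWord> run) {
            double x0 = run.stream().mapToDouble(w -> w.box().x0()).min().orElseThrow();
            double y0 = run.stream().mapToDouble(w -> w.box().y0()).min().orElseThrow();
            double x1 = run.stream().mapToDouble(w -> w.box().x1()).max().orElseThrow();
            double y1 = run.stream().mapToDouble(w -> w.box().y1()).max().orElseThrow();
            return new Box(x0, y0, x1, y1);
        }

        // Exact-match case: find the run of OCR words whose concatenated text
        // equals the LLM's citation (ignoring whitespace and case), and cover it.
        static Box boxForCitation(List<OcrWord> words, String citation) {
            String target = citation.replaceAll("\\s+", "").toLowerCase();
            for (int start = 0; start < words.size(); start++) {
                StringBuilder run = new StringBuilder();
                for (int end = start; end < words.size(); end++) {
                    run.append(words.get(end).text().replaceAll("\\s+", "").toLowerCase());
                    if (run.toString().equals(target))
                        return union(words.subList(start, end + 1));
                    if (run.length() > target.length()) break;
                }
            }
            return null; // citations that don't reproduce the OCR text exactly
                         // fall through to fuzzy-matching heuristics
        }
    }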


Why would you do this and not use Textract?


I too have this question.


I'm working on a project that uses PaddleOCR to get bounding boxes. It's far from perfect, but it's open source and good enough for our requirements. And it can mostly handle a 150 MB single-page PDF (don't ask) without completely keeling over.


This is definitely surprising given they announced a KMP standalone IDE only a few months ago. For now, Flutter still seems to make more sense than KMP while the KMP world is still maturing.


Dart is an amazing and underrated language too. It compiles to native assembly, and has pattern matching, async/await, and null safety. The only thing it's missing, in my opinion, is some form of checked errors; currently it only has unchecked exceptions.


Oddly, I’m conflicted on Flutter so far, but I have loved working in Dart.

So much so that I ended up writing a queueing app for scheduling batches of sequential tasks on the server in Dart just to see how it could work as a NodeJS replacement, and thought the whole dev experience was great.


I just don't trust Google with a programming language. I feel like Golang has escaped the orbit of Google and could survive without it (I might be wrong). But for Dart I'm pretty sure it would die fast and I don't want to invest time into it as a result.


I read that AdWords is built on Dart's JS transpiler, so I wouldn't expect them to just get rid of it. I really wish they would have pushed it over Go, tbh. I'd love to use it for back-end services.


It looks cool ngl, and I might get around to trying it out, but I wouldn't bring it to work, for example.


> The only thing it's missing

I think the biggest thing it's missing is any kind of Google commitment to its long-term usage.


The modern language landscape is backing away from checked exceptions. Funnily enough Kotlin eschewed them as well, converting checked to unchecked exceptions on the JVM.


The modern language landscape has not backed away from checked errors. Rust is praised for its checked errors; countless posts on this forum praise Result<T> in multiple languages. Swift has checked errors, and Kotlin is implementing them via union types in the near future.

Checked errors, via results or exceptions, have never been the problem. It has always been Java the language that hasn't provided the syntactic sugar for handling checked errors effectively.

There is no difference between:

    A doIt() throws B
    fun doIt(): Result<A, B>
It all comes down to what the language lets you do once you encounter that error.
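
To make the comparison concrete, a rough Java sketch (the Result type is hand-rolled here since Java has no built-in one, and the switch assumes modern pattern matching):

    // Hand-rolled Result; illustrative, not a JDK type.
    sealed interface Result<A, B> {
        record Ok<A, B>(A value) implements Result<A, B> {}
        record Err<A, B>(B error) implements Result<A, B> {}
    }

    class B extends Exception {}

    class Demo {
        // Failure declared in the signature.
        static String viaThrows() throws B { return "ok"; }

        // Failure encoded in the return type.
        static Result<String, B> viaResult() { return new Result.Ok<String, B>("ok"); }

        // Both force the caller to acknowledge the error path.
        static void useBoth() {
            try {
                System.out.println(viaThrows());
            } catch (B b) {
                System.out.println("failed");
            }

            switch (viaResult()) {
                case Result.Ok<String, B> ok -> System.out.println(ok.value());
                case Result.Err<String, B> err -> System.out.println("failed");
            }
        }
    }

Either way the failure type is part of the signature and the caller can't silently ignore it; the differences are ergonomic.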


There is a huge difference: the first is an _exception_, which:

- Unwinds the stack to a try/catch or exception handler, making exceptions practically difficult to deal with in concurrent programming.

- If unchecked, can be ignored, silently propagating during stack unwinding.

- If checked, infects the call stack with 'throws' annotations.

The second is a normal return value: no try/catch is needed, handling the error case is mandatory in order to get at the success case, and there is no separate execution regime whenever an error case is encountered.


I genuinely disagree. There is no difference between a checked exception and a Result.

- In concurrent programming, uncaught exceptions won't leave the future; both values are available to you, just like Results. I also don't think the concurrent-programming argument is valid, though: 99% of all code is single-threaded.

- It is checked.

- Result infects the call stack as well.

- Handling the error case with a checked exception is also mandatory in order to handle the success case. There is no separate “execution regime”. What is the difference here:

    val a = try {
        doIt();
    } catch (b) {
        onB();
    }

    val a = match (doIt()) {
        Success(a) => a
        Failure(b) => onB()
    }


The difference is you can just call

    val a = doIt();
And start the stack unwinding. The `try/catch` is not mandatory to call doIt().


You definitely just don't know how checked exceptions work. This is not true at all. The compiler will not let you call doIt() without handling the exception or declaring it further up the stack, which is the same way Results work.


Is Rust praised for its checked errors? I've personally found it extremely verbose, since there is essentially no possibility of unchecked errors.

Also, external crates like "anyhow" are required if you don't want to account for literally every single possible error case. Really seems like a pedant's dream but a burden to everyone else.

Effective Java recommends checked exceptions only in the case where the caller may recover, but in practice you're usually just propagating the error to the caller in some form, so almost everything just becomes unchecked runtime exceptions.


I'm only saying what I've seen here. I typically see praise for Rust's checked errors, especially since they provide ? to propagate them and unwrap to panic and uncheck them. Personally I disagree with Bloch: if you are the thrower, you can't possibly know whether the caller can recover from your error, so in my opinion it's best to check it. If you are not the thrower and you can't recover from it, I prefer to uncheck it, because if I can't recover, my caller most likely can't either.

The issue really just arises with Java not giving you the capability to uncheck that error easily if you can't recover from it. For example, you need a ton of lines to uncheck:

    A doIt() throws B {
       throw new B();
    }

    void usingIt() {
       A a;
       try {
           a = doIt();
       } catch (B b) {
           throw new RuntimeException(b);
       }
       
       a.woohoo();
    }

My ideal situation would be for some sort of throws unchecked operator (or whatever syntax we want to bikeshed over) that turns them into unchecked exceptions.

    void usingIt() throws unchecked B {
        var a = doIt();
        a.woohoo();
    }
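
In the meantime, a small hand-rolled helper gets most of the way there (a sketch; ThrowingSupplier is made up for this example, and it uses the same RuntimeException wrapping as above):

    @FunctionalInterface
    interface ThrowingSupplier<T> {
        T get() throws Exception;
    }

    final class Exceptions {
        // Rethrow any checked exception as unchecked.
        static <T> T uncheck(ThrowingSupplier<T> body) {
            try {
                return body.get();
            } catch (RuntimeException e) {
                throw e; // already unchecked: pass through untouched
            } catch (Exception e) {
                throw new RuntimeException(e); // wrap checked as unchecked
            }
        }
    }

    void usingIt() {
        A a = Exceptions.uncheck(() -> doIt());
        a.woohoo();
    }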


Have you heard of the language Elm? Elm is so safe that it is literally impossible to crash a program, short of a memory error.

That's essentially the direction of modern programming languages. It's part of that feeling you get when programming in Haskell: once you get it running, it just works. It's very different from the old paradigm where, even once you get it working, it can still crash, and you have to debug and do more to get it working better.


Looks like a huge difference to me. The first function throws an error. The second function may not even return an error.


The first function may not return an error either.


I find its biggest problem is its JSON ecosystem; very clunky and boilerplate-y.


I always thought a really good use of KMP would be writing shared non-visual code, e.g. a library that interacts with your API(s) and any other non-visual logic like that. Then paint a dumbish, platform-specific frontend over the top and link it all together.


As someone who has to manage native iOS and Android apps, I thought this would be the perfect solution as well. I wanted to write all my data models, API calls, SQL cache, and business logic as a separate library written with KMP, but what I didn't like was that the generated iOS framework was a black box with just Obj-C headers. If it generated full Swift code that I could inspect for correctness and tweak if needed, I would have jumped on using it right away.


That's interesting - I can sort of see it both ways. Would applying unit tests to the exposed functions not have sufficed?


Not the OP, but I had a similar experience.

KMP exposes everything as Obj-C, meaning C headers and not very good type annotations (enums are int-only so you can't get full checking on each switch, everything has Obj-C reference semantics so multi-threading gets tricky, Kotlin exceptions are not catchable from Swift). So there are a lot of edge cases to write unit tests for (on the client side), which negates a lot of the point of using KMP, and that's in addition to all of the Kotlin-isms that leak out...


Ah, right. Yeah, that is surprisingly bad! Why wouldn't they at least generate enums!


  > Why wouldn't they at least generate enums!
They do, but it's going through Objective-C, which inherits C enums (basically integers). So when you use it from Swift, it's like switching over an unbounded set: you always have to handle "default" cases, leading to bugs compared to usage from Android.

