Hacker News | Daneel_'s comments

On Windows at least, I almost always use Alt+Space, then X to maximise windows, as well as Win+Left/Right/Up/Down, which is really the only resizing I do. Having to use the mouse is a pain.

I fully agree, but the Win+Tab order is simply most-recently-used, with the most recent window at the upper left and the oldest at the lower right.

Interesting! Thanks for that

Not having to make small talk with someone is a positive for many people. That might make it worth it?


Interesting little read. I always find it fascinating when old code holds up really well - especially structurally. Great trip down memory lane!


Try Pass4Wallet. It has a long list of supported barcode types and it's free, so win-win.

I'm not affiliated, I've just found it to be very flexible over a few years of using it.


I've been using Pass4Wallet (the app you linked) for a number of years and it's been fantastic. I'd recommend it.


Look at Pass4Wallet - free from the start.


As I commented in another reply, this is the answer. Works great and supports a dozen barcode types.


Try Pass4Wallet from the App Store. It's free and supports a huge array of barcode types, including Codabar. It's been my go-to custom card app for a number of years.


I’m curious. What are the critical features you’re looking for? I always like to hear the specifics of how people want to use fonts.


To me it seems that they're banking on it becoming indispensable. Right now I could go back to pre-AI and be a little disappointed but otherwise fine. I figure all of these AI companies are in a race to make themselves part of everyone's core workflow in life, like clothing or a smartphone, such that we don't have much of a choice as to whether we use it or not - it just IS.

That's what the investors are chasing, in my opinion.


It'll never be literally indispensable, because open models exist - either served by third-party providers, or even run locally in a homelab setup. A nice thing that's arguably unique about the latter is that you can trade scale for latency: you get to run much larger models on the same hardware if they can chug on the answer overnight (with offload to a fast SSD for bulk storage of parameters and activations) instead of answering on the spot. Large providers don't want to do this, because keeping your query's activations around is just too expensive when scaled to many users.

