
Below a particular table size it’s literally faster to do a full table scan.

Business people have a nasty habit of identifying two independent pieces of data you have and finding ideas to combine them to do something new. They aren’t happy until every piece of data has been combined with every other piece, and then they still aren’t happy, because now everything is coupled to everything and the whole thing is horrible.

Any robust system isn’t going to rely on reading logs to figure out what to do about undelivered email anyway. If you’re doing logistics, the failure to send an order confirmation needs to show up in your data model in some manner. Managing your application or business by logs is amateur hour.

There’s a whole industry of “we’ll manage them for you” which is just enabling dysfunction.


Log4j has had the ability to filter log levels by subject matter for twenty years. In Java you end up having to use that a lot for this reason.

Logging in Rust also does that: you can set log levels for individual modules deep within your dependency tree.
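
For example, with the log and env_logger crates — a sketch, and the "hyper" target is just an illustration, not something from the parent:

    use log::LevelFilter;

    fn main() {
        // Default everything to Debug, then quiet one chatty dependency.
        env_logger::Builder::new()
            .filter_level(LevelFilter::Debug)
            .filter_module("hyper", LevelFilter::Warn)
            .init();
        log::debug!("our own debug output still shows");
    }

Same thing at runtime without touching code: RUST_LOG=debug,hyper=warn.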

Oh, that library that gives you a write() wrapper in exchange for RCE vulns.

Log4j is basically a design pattern. If you don’t like the library, Slf4j/logback are based on the same principles.

I think unfortunately we sometimes ascribe supernatural powers to powers of two that are really just about caches being built in powers of two.

Intel still uses 64-byte cache lines, as it has for quite a long time, but it also does some shenanigans on the bus where it tries to fetch two lines when you ask for one. So there’s ostensibly some benefit to aligning data to 128 bytes, particularly on linear scans with a cold cache.
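
If you want to play with that in Rust, a minimal sketch (struct and field sizes invented for illustration):

    // 128-byte alignment so each struct starts on an adjacent-prefetch
    // pair boundary; 8 + 28*4 = 120 bytes of fields, padded to 128.
    #[repr(align(128))]
    #[allow(dead_code)]
    struct HotRow {
        key: u64,
        vals: [u32; 28],
    }

    fn main() {
        assert_eq!(std::mem::align_of::<HotRow>(), 128);
        assert_eq!(std::mem::size_of::<HotRow>(), 128);
    }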


But there's a reason that caches are always sized in powers of two as well, and that same reason is applicable to high-performance ring buffers: division by a power of two is easy, and easy is fast. It's reliably a single cycle (a shift, or a mask for modulo), compared to division by an arbitrary 32-bit integer, which can take 8-30 cycles depending on the CPU.
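
A minimal sketch of what that buys a ring buffer (names invented): with a power-of-two capacity, the wraparound is a single AND.

    const CAP: usize = 1024; // must be a power of two

    struct Ring {
        buf: [u64; CAP],
        head: usize, // counts up forever; wrapped on access
    }

    impl Ring {
        fn push(&mut self, v: u64) {
            // Equivalent to self.head % CAP, but a one-cycle AND
            // instead of a hardware divide.
            self.buf[self.head & (CAP - 1)] = v;
            self.head = self.head.wrapping_add(1);
        }
    }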

Also, there's another benefit downstream of that one: powers of two work as a Schelling point for allocations. Picking powers of two for resizable vectors maximizes "good luck" when you malloc/realloc in most allocators, in part because e.g. a buddy allocator is probably also implemented using power-of-two allocations for the above reason, but also for the plain reason that other users of the same allocator are more likely to have requested power-of-two allocations. Spontaneous coordination is a benefit all its own. Almost supernatural! :)


CPU caches are powers of two because retrieval involves a logarithmic number of gates that all have to fire within a clock cycle. There is a saddle point where adding more cache starts to make instructions per second go back down again, and that number will be a power of two.

That has next to nothing to do with how much of your 128 GB of RAM should be dedicated to any one data structure, because working memory for a task is the sum of a bunch of different data structures that have to fit into both the caches and main memory. Main memory used to be a power of two, but now it's often 2^n x 3.

And as someone else pointed out, the optimal growth factor for resizable data structures is not 2 but the golden ratio, ~1.618. Most implementations use 1.5, aka 3/2.
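
A sketch of the usual 3/2 compromise (hand-rolled, nothing standard):

    fn grow(cap: usize) -> usize {
        // 1.5x stays below the golden ratio, so the space freed by
        // earlier reallocations can eventually be reused for a later
        // one instead of fragmenting the heap.
        (cap + cap / 2).max(4)
    }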


Fwiw, in this application you would never need to divide by an arbitrary integer each time; you'd pick it once, plumb it into libdivide, and get something significantly cheaper than 8-30 cycles.
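
In the same spirit, a sketch (not libdivide itself): if the divisor is a compile-time constant, the compiler already does the precomputation for you.

    const CAP: u32 = 1000; // fixed, non-power-of-two divisor

    #[inline]
    fn wrap(i: u32) -> u32 {
        // With CAP constant, LLVM strength-reduces this to a
        // multiply-and-shift; libdivide does the equivalent
        // precomputation at runtime for divisors chosen on the fly.
        i % CAP
    }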

Powers of two are problematic with growable arrays on small heaps. You risk ending up with fragmented space you can't reuse unless you keep growth below 1.61x, which necessitates data structures that can deal with arbitrary sizes.

It's not just Intel, is it? AMD also uses 64-byte cache lines, afaik.

I just tried it, and it did something that seemed weird but reasonable.

CSS outline would be a good visual indicator here as it won’t modify the page layout.

I wonder if a stack metaphor would work for that, so that N items are no bigger than 2 items.

These feel like fixable things.

Somewhat shockingly, the DOM apparently does not give access to the raw scroll-wheel data.

The ones that are moving the wrong item to the wrong place are fixable, but there will always be a problem when scrolling is quantized if there is a possibility that any item in the list is shorter than the scroll quantum.

DOM scroll event interceptor, maybe?

It’s been a long time since I’ve said “woah” out loud to something that wasn’t a science video.

This is much better on mobile and I suspect for accessibility.

