How many bytes is a node, in practice? If you stuff hundreds or thousands of pointers into a node, plus a bunch of log entries, isn't that a lot of data to clone when you do a path-copy? It seems like there's a major trade-off there, unless you always write in large batches. I suppose you could have fast "cons-ing" of events onto the root log without any copying. It would be interesting to know what choices lead to good performance in a sample use case.
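To make the cost concrete, here's a minimal sketch of path-copying on an immutable tree (the `Node` shape, `log` field, and `append_event` helper are my own illustration, not the project's actual types). A write clones one node per level, and each clone copies that node's whole pointer array, so the per-write cost is roughly fanout × depth:

```python
from dataclasses import dataclass, replace
from typing import Tuple

# Hypothetical immutable node: a tuple of child references plus a log.
@dataclass(frozen=True)
class Node:
    children: tuple          # child Nodes (empty tuple for leaves)
    log: tuple = ()          # per-node event log entries

def append_event(root: Node, path: Tuple[int, ...], event) -> Node:
    """Path-copy: clone every node from the root down to the target,
    leaving all untouched siblings shared with the old tree."""
    if not path:
        return replace(root, log=root.log + (event,))
    i, rest = path[0], path[1:]
    new_child = append_event(root.children[i], rest, event)
    # Cloning one node copies its whole pointer array: O(fanout) per level,
    # so the write costs O(fanout * depth) pointer copies in total.
    new_children = root.children[:i] + (new_child,) + root.children[i + 1:]
    return replace(root, children=new_children)

leaf = Node(children=())
root = Node(children=(leaf,) * 4)
root2 = append_event(root, (1,), "ev")
assert root2.children[0] is root.children[0]   # siblings are shared, not copied
assert root2.children[1].log == ("ev",)        # only the target path was cloned
```

This also shows why "cons-ing onto the root log" is cheap: with an empty path, only the root itself is cloned.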
The code actually includes a benchmarking tool that's meant to help with exactly this decision. Once you've selected your key size (or distribution), value size (or distribution), and backend, you can play with these factors. Intuitively, targeting the backend's "block" size should improve performance: roughly 4 KB-1 MB if you're in memory or on local disk, 1.5-9 KB if you're going over the network (and depending on configuration).
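The project's own benchmark tool runs against a real backend, but the shape of the experiment can be sketched generically (the payload size, `zlib` compression as a stand-in for encode+write, and the specific block sizes below are all my assumptions for illustration):

```python
import os
import time
import zlib

# Hypothetical micro-benchmark: for several candidate node/block sizes,
# time how long it takes to encode-and-"write" a fixed payload.
PAYLOAD = 8 * 1024 * 1024  # total bytes pushed per configuration

def bench(block_size: int) -> float:
    """Return rough throughput in MB/s for a given block size."""
    data = os.urandom(block_size)          # stand-in node payload
    start = time.perf_counter()
    written = 0
    while written < PAYLOAD:
        zlib.compress(data)                # stand-in for encode + backend write
        written += block_size
    return PAYLOAD / (time.perf_counter() - start) / 1e6

for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    print(f"{size:>8} B blocks: {bench(size):7.1f} MB/s")
```

Against a real backend you'd replace the `zlib.compress` call with an actual write, and the sweet spot should land near the block/packet sizes mentioned above.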
I did some work to split nodes based on size rather than number of children, but that requires accurate & fast size estimation of the decompressed objects, which isn't possible in general.