First of all, hello Hacker News :)
Many of the comments seem to address the design of key hashing. The reason for using hashed keys inside B-tree nodes instead of the string keys directly is threefold:
1) The implementation is simplified.
2) When performing a lookup, it is faster to compare fixed-size elements than to do variable-length string comparison (see the sketch after this list).
3) Key length is unlimited, since the node only ever stores the fixed-size hash.
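To make point 2 concrete, here is a minimal sketch of the idea, assuming a simplified node layout (the structs and names below are made up for illustration; this is not the actual Lite³ format):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical node layout: entries carry a fixed-size key hash
     * instead of a variable-length string key. */
    typedef struct {
        uint32_t key_hash;   /* fixed-size hashed key */
        uint32_t value_off;  /* offset of the value elsewhere in the buffer */
    } entry_t;

    typedef struct {
        uint16_t count;
        entry_t  entries[16];
    } node_t;

    /* Lookup compares 32-bit integers: no pointer chasing into string
     * storage and no byte-by-byte strcmp per entry. A real node would
     * keep the hashes sorted and binary-search them. */
    static const entry_t *node_find(const node_t *n, uint32_t h)
    {
        for (uint16_t i = 0; i < n->count; i++)
            if (n->entries[i].key_hash == h)
                return &n->entries[i];
        return NULL;
    }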
I should say the documentation page is out of date regarding hash collisions. The format now supports probing thanks to a PR merged yesterday. So inserting colliding keys will actually work.
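Roughly, probing means a colliding key keeps trying alternative slots until it finds a free one. A generic sketch, using a flat table for simplicity (the real collision handling lives in the B-tree code and differs in detail):

    #include <stdint.h>
    #include <string.h>

    #define TABLE_SIZE 64   /* toy size, for illustration only */

    typedef struct {
        uint32_t    hash;
        const char *key;    /* kept so colliding keys can be told apart */
        int         used;
    } slot_t;

    /* Generic linear probing: on a collision, step to the next slot
     * until a free one (or an entry with the same key) is found. */
    static int probe_insert(slot_t table[], uint32_t h, const char *key)
    {
        for (uint32_t i = 0; i < TABLE_SIZE; i++) {
            slot_t *s = &table[(h + i) % TABLE_SIZE];
            if (!s->used || strcmp(s->key, key) == 0) {
                s->hash = h;
                s->key  = key;
                s->used = 1;
                return 0;
            }
        }
        return -1;  /* table full */
    }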
It is true that databases and other formats do store string keys directly in the nodes. However, for a memory format, runtime performance is very important: there is no disk or I/O latency to 'hide behind'.
Right now the hash function used is DJB2. It has the interesting property of somewhat preserving the lexicographical ordering of the key names, so hashes for keys like "item_0001", "item_0002" and "item_0003" are likely to end up next to each other inside the B-tree nodes as well. This is useful when doing a sequential scan in semantic key order; otherwise you end up doing a lot more random access.
Also, DJB2 is so simple that for literal keys the hash can be expanded by C preprocessor macros into a constant expression and folded at compile time, so you are not actually paying the runtime cost of hashing.
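For reference, the runtime form of DJB2 is just a few lines, and the compile-time trick looks roughly like the macros below (these particular macros are illustrative, not the ones in the repo):

    #include <stdint.h>

    /* Classic DJB2: hash = hash * 33 + c, seeded with 5381. */
    static uint32_t djb2(const char *s)
    {
        uint32_t h = 5381;
        while (*s)
            h = (h << 5) + h + (unsigned char)*s++;   /* h * 33 + c */
        return h;
    }

    /* One DJB2 round as a constant expression. Chaining it over the
     * characters of a literal key yields an integer constant that the
     * compiler folds at build time, so hashing a known key costs
     * nothing at runtime. (Illustrative macros, not the project's.) */
    #define DJB2_STEP(h, c) (((uint32_t)(h) << 5) + (uint32_t)(h) + (uint32_t)(c))

    #define KEY_HASH_NAME \
        DJB2_STEP(DJB2_STEP(DJB2_STEP(DJB2_STEP(5381u, 'n'), 'a'), 'm'), 'e')

Because KEY_HASH_NAME is an ordinary integer constant expression, it can be used in case labels, array sizes, or static initializers.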
We will be doing a lot more testing before DJB2 is finalized in the spec, but we might later end up with a 'better' hash function such as XXH32.
Finally, compared to other binary JSON formats (BSON, MsgPack, CBOR, Amazon Ion), TRON/Lite³ is different in that:
1) none of the formats mentioned provide direct zero-copy indexed access to the data
2) none of the formats mentioned allow for partial mutation of the data without rewriting most of the document
This last point 2) is especially significant. For example, JSONB in Postgres is immutable: when replacing or inserting a single value inside an object or array, JSONB rewrites the entire document, even if it is several megabytes large. If you are performing frequent updates inside JSONB documents, this causes severe write amplification. This is the case for all current Postgres versions.
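A toy illustration of the contrast (nothing Lite³- or JSONB-specific, just the two update patterns):

    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Immutable-style update: changing one field means producing a
     * whole new copy of the document. */
    void update_by_rewrite(uint8_t *dst, const uint8_t *doc, size_t doc_len,
                           size_t field_off, int32_t new_val)
    {
        memcpy(dst, doc, doc_len);                      /* copy the full doc */
        memcpy(dst + field_off, &new_val, sizeof new_val);
    }

    /* In-place update: only the bytes of the field itself are written. */
    void update_in_place(uint8_t *doc, size_t field_off, int32_t new_val)
    {
        memcpy(doc + field_off, &new_val, sizeof new_val);
    }

For a multi-megabyte document, the first pattern writes the whole document for every 4-byte change, which is where the write amplification comes from.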
TRON/Lite³ is designed to blur the line between memory and serialization format.
Really cool project. Any plans to port the API to other languages? My use case for something like this is to represent types that are used on either side of FFI (Rust <--> Some other language). Pairing this with a code generator for the shared types would be great. That's something flatbuffers/capnproto do well that isn't just pure speed.
That's really impressive. Since you wrote it in C, it can automatically be compiled to WebAssembly and used from JS. I wonder how Java would behave here... since JNI is not the fastest (or used to not be the fastest?).
Hey, I'm sorry, but your Postgres example is completely wrong: because of MVCC, a new version of the data will be stored on every update regardless of the choice of data representation, making the in-place mutability much less of an advantage. (It might be faster than a pair of a compact immutable format + mutable patch layer on top, or it might be slower; the answer ain't immediately obvious to me!)
What you should be imagining instead is a document database entirely built around Lite³-encoded documents, using something like rollback journals instead of MVCC.
We're doing something similar at my company: storing zero-serialization immutable [1] docs in a key-value store (read via mmap, with no copying between disk and point of use) and using a mutable [2] overlay patch format for updates. In our analytics use cases, compact storage is very important, in-place mutability is irrelevant (again because of copy-on-write at the key-value store level), and the key advantage is zero serialization overhead.
What I'm saying is that Lite³ is a very timely and forward-looking format, but merging immutable and mutable formats into one carries tradeoffs that you probably want to discuss, and the discussion of the appropriate use cases is very much worth having.
Hi, you are right in calling out the Postgres example in the context of DBs/MVCC. The purpose of JSONB is to be an indexable representation of JSON inside a Postgres database. It is not trying to be a standalone format for external interchange and therefore it is fulfilling very different requirements.
A serialization format does not care about versioning or rollbacks. It is simply trying to organize data such that it can be sent over a network. If updates can be made in-place without requiring re-serialization, then that is always a benefit.
Write amplification is still a real issue, though, and I think it deserves to be mentioned. To tackle it in the context of DBs/MVCC, you would have to use techniques other than in-place mutation, like you mention: overlays/copy-on-write. Basically, LMDB-style.
And yes, I think databases are where this technology will eventually have the greatest potential, so that is where I am also looking.