I also use UUIDs everywhere - primarily due to concurrency requirements - but I ended up writing a bunch of functions (`MACRO`s) that look up UUIDs and join them to other tables where needed, instead of using VIEWs or JSON queries.
I've not seen this approach documented much online - but it works really well for me. It has the advantage of keeping all my tables flat, while still being able to encode business logic into the database.
A typical query I might type would look something like:
`SELECT *, trip_start_date(id) FROM trips WHERE trip2region(id) = 'AU'`
Where the two macros are just simple table joins, saving me the boilerplate work.
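To make this concrete, here's a rough sketch of what such macros might look like in DuckDB syntax - all table and column names here are my own guesses, since the original schema isn't shown:

```sql
-- Hypothetical macros of the kind described: thin wrappers that resolve a
-- UUID into a value from another table (schema names are assumptions).
CREATE MACRO trip_start_date(trip_id) AS (
    SELECT start_date FROM trip_details WHERE trip_details.trip_id = trip_id
);
CREATE MACRO trip2region(trip_id) AS (
    SELECT region FROM trip_regions WHERE trip_regions.trip_id = trip_id
);
```

Each macro is just an inlined subquery, so the base tables stay flat while the lookup logic lives in the database.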
Where this approach has really shined, though, is through function composition...
`SELECT * FROM trips WHERE loc2region(trip_origin(id)) NOT IN loc2region(location_is_hotel(trip_locations_visited(id)))`
So something like the above, which is a mix of scalar and table functions, would give me all
the trips where the traveler stayed in hotels outside of their home region.
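For comparison, the composed-macro query would expand to roughly this join-heavy form once inlined - again, every table and column name below is hypothetical:

```sql
-- Roughly the explicit-join equivalent of the macro-composed query:
-- trips whose visited hotels fall outside the trip's origin region.
SELECT t.*
FROM trips t
JOIN locations origin ON origin.id = t.origin_id
WHERE origin.region NOT IN (
    SELECT l.region
    FROM trip_locations tl
    JOIN locations l ON l.id = tl.location_id
    WHERE tl.trip_id = t.id
      AND l.is_hotel
);
```

The macro version reads top-down while the expanded form buries the intent in join plumbing, which is presumably why the approach feels so pleasant.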
Maybe not the best example, because I don't actually work with trip/hotel/travel data at all, but I'd be interested in reading more about this approach. I was surprised how well-optimized the queries are... \endyap
I am beaming with joy upon opening this game, I've had a long day and this is just what I needed. Perfect game, the alert math question popup made me laugh out loud
I've been learning Postgres and SQL on the job for the first time over the last six months - I can confirm I've learnt all of these the hard way!
I'd also recommend reading up on the awesome pg statistics tables, and leveraging them to benchmark things like index performance and macro call speeds.
Definitely checking this out today! I use postgres for ~30 GB of machine learning data (object detection) and have a couple workflows which go through the Postgres->Parquet->DuckDB processing route.
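(For anyone curious, that Postgres->Parquet->DuckDB hop can be done entirely from DuckDB with its postgres extension - the connection string and table name below are placeholders, not my real setup:)

```sql
-- Sketch: pull a Postgres table into Parquet via DuckDB's postgres extension,
-- then query the Parquet file directly. Names are placeholders.
INSTALL postgres;
LOAD postgres;
ATTACH 'dbname=mldata host=localhost' AS pg (TYPE postgres, READ_ONLY);
COPY (SELECT * FROM pg.detections) TO 'detections.parquet' (FORMAT parquet);
SELECT count(*) FROM 'detections.parquet';
```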
A couple questions, if you have time:
1. How do you guys handle multi-dimensional arrays? I've had issues with a few postgres-facing interfaces (libraries or middleware) where they believe everything is a 1D array!
2. I saw you are using pg_duckdb/duckdb under the hood. I've had issues calling plain-SQL functions defined on the postgres server, when duckdb is involved. Does BemiDB support them?
1. good to hear!
2. The bulk of them are convenience wrappers which resolve UUIDs into other values, so most are read-only with only a single table lookup.
Regarding the custom PID controller script: I could have sworn the Linux kernel had a generic PID controller available as a module, which you could set up via the device tree, but I can't seem to find it! (grepping for 'PID' doesn't provide very helpful results lol).
I think it was used on nVidia Tegra systems, maybe? I'd be interested to find it again, if anyone knows. :)
A lotta negativity in here... My immediate reaction was "fuck yeah I always wanted to host OSM but I couldn't figure out how last time". thanks dude :)