
Yes — PMTiles is exactly that: a production-ready, single-file, static container for vector tiles built around HTTP range requests.

I’ve used it in production to self-host Australia-only maps on S3. We generated a single ~900 MB PMTiles file from OpenStreetMap (Australia only, up to Z14) and uploaded it to S3. Clients then fetch just the required byte ranges for each vector tile via HTTP range requests.

It’s fast, scales well, and bandwidth costs are negligible because clients only download the exact data they need.

https://docs.protomaps.com/pmtiles/
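For anyone curious what the client side looks like: a rough TypeScript sketch with the pmtiles npm package and MapLibre GL, following the protomaps docs. The bucket URL, layer name, and styling here are placeholders, not our actual setup:

    import maplibregl from "maplibre-gl";
    import { Protocol } from "pmtiles";

    // Register the pmtiles:// protocol so MapLibre turns tile requests
    // into HTTP range requests against the single archive on S3.
    const protocol = new Protocol();
    maplibregl.addProtocol("pmtiles", protocol.tile);

    const map = new maplibregl.Map({
      container: "map",
      center: [133.8, -25.3], // roughly the centre of Australia
      zoom: 4,
      style: {
        version: 8,
        sources: {
          osm: {
            type: "vector",
            url: "pmtiles://https://my-bucket.s3.amazonaws.com/australia.pmtiles",
          },
        },
        layers: [
          {
            id: "roads",
            type: "line",
            source: "osm",
            // source-layer names depend on how the tiles were generated
            "source-layer": "roads",
            paint: { "line-color": "#888" },
          },
        ],
      },
    });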

PMTiles is absurdly great software.

I know, right? I'd never heard of HTTP Range requests until PMTiles, but gee, it's an elegant solution.
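For the curious, the whole trick is one header. A minimal TypeScript sketch (the URL is a placeholder); per the v3 spec, the 127-byte header and the root directory fit within the first 16,384 bytes, so one request bootstraps everything:

    const url = "https://example.com/australia.pmtiles"; // placeholder
    const res = await fetch(url, { headers: { Range: "bytes=0-16383" } });
    if (res.status !== 206) throw new Error("server ignored the Range header");
    const firstChunk = new Uint8Array(await res.arrayBuffer());
    console.log(`got ${firstChunk.length} bytes out of a multi-GB file`);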

Hadn't seen PMTiles before, but that matches the mental model exactly! I chose physical file sharding over Range requests on a single db because it felt safer for 'dumb' static hosts like CF: less risk of a single 22GB file getting stuck or cached weirdly. Maybe it would have worked, though.

My only gripe is that the tile metadata is stored as JSON. I get that this is for compatibility with existing software, but it means that for, e.g., a simple C program to implement the full spec, you need to ship a JSON parser on top of the PMTiles parser itself.
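To make that concrete, here's a TypeScript sketch of pulling out the metadata (byte offsets per the v3 spec; assumes gzip internal compression, which is what common tooling writes; the URL is a placeholder). In a browser or Node 18+ the gzip and JSON parts come for free, which is exactly what a freestanding C program has to replicate by hand:

    const url = "https://example.com/australia.pmtiles"; // placeholder

    // 1. Fetch the fixed 127-byte header.
    const head = await fetch(url, { headers: { Range: "bytes=0-126" } });
    const view = new DataView(await head.arrayBuffer());
    const metaOffset = Number(view.getBigUint64(24, true)); // bytes 24-31
    const metaLength = Number(view.getBigUint64(32, true)); // bytes 32-39

    // 2. Fetch exactly the metadata bytes.
    const metaRes = await fetch(url, {
      headers: { Range: `bytes=${metaOffset}-${metaOffset + metaLength - 1}` },
    });

    // 3. Gunzip and parse: this is the JSON parser the gripe is about.
    const body = metaRes.body!.pipeThrough(new DecompressionStream("gzip"));
    const metadata = JSON.parse(await new Response(body).text());
    console.log(metadata);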

A JSON parser is less than a thousand lines of code.
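Roughly, yes. A minimal recursive-descent parser covering the whole grammar fits in well under a hundred lines of TypeScript (a sketch with terse error handling, not production code):

    type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

    function parseJson(src: string): Json {
      let i = 0;
      const ws = () => { while (" \t\n\r".includes(src[i])) i++; };
      const expect = (c: string) => {
        if (src[i] !== c) throw new SyntaxError(`expected '${c}' at ${i}`);
        i++;
      };
      const lit = (word: string) => {
        if (src.slice(i, i + word.length) !== word) throw new SyntaxError(`bad literal at ${i}`);
        i += word.length;
      };
      function value(): Json {
        ws();
        const c = src[i];
        if (c === "{") return object();
        if (c === "[") return array();
        if (c === '"') return str();
        if (c === "t") { lit("true"); return true; }
        if (c === "f") { lit("false"); return false; }
        if (c === "n") { lit("null"); return null; }
        return num();
      }
      function str(): string {
        expect('"');
        let out = "";
        const esc: Record<string, string> = { '"': '"', "\\": "\\", "/": "/", b: "\b", f: "\f", n: "\n", r: "\r", t: "\t" };
        while (src[i] !== '"') {
          if (i >= src.length) throw new SyntaxError("unterminated string");
          if (src[i] === "\\") {
            i++;
            if (src[i] === "u") { // \uXXXX escape
              out += String.fromCharCode(parseInt(src.slice(i + 1, i + 5), 16));
              i += 5;
            } else if (esc[src[i]] !== undefined) {
              out += esc[src[i]]; i++;
            } else throw new SyntaxError(`bad escape at ${i}`);
          } else out += src[i++];
        }
        i++; // closing quote
        return out;
      }
      function num(): number {
        const m = /^-?(0|[1-9]\d*)(\.\d+)?([eE][+-]?\d+)?/.exec(src.slice(i));
        if (!m) throw new SyntaxError(`bad number at ${i}`);
        i += m[0].length;
        return Number(m[0]);
      }
      function array(): Json[] {
        expect("["); ws();
        const out: Json[] = [];
        if (src[i] === "]") { i++; return out; }
        for (;;) {
          out.push(value()); ws();
          if (src[i] === ",") { i++; continue; }
          expect("]"); return out;
        }
      }
      function object(): { [k: string]: Json } {
        expect("{"); ws();
        const out: { [k: string]: Json } = {};
        if (src[i] === "}") { i++; return out; }
        for (;;) {
          ws(); const key = str(); ws(); expect(":");
          out[key] = value(); ws();
          if (src[i] === ",") { i++; continue; }
          expect("}"); return out;
        }
      }
      const result = value();
      ws();
      if (i !== src.length) throw new SyntaxError("trailing characters");
      return result;
    }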

And that's where most of the CPU time will go, if you care about profiling and improving responsiveness.

At that point you're just I/O bound, no? I can parse JSON at a few GB/s per core on commodity hardware, but I'm going to have a much harder time actually delivering that much data to parse.

What's a better way?

How would you store it?

That's neat, but... is it just for cartographic data?

I want something like a db with indexes.


Look into using DuckDB with remote HTTP/S3 Parquet files. Parquet stores data column by column, grouped into row groups, and each row group carries metadata (e.g. min/max statistics) about the rows it contains, which the query engine uses to prune data that doesn't need to be scanned. https://duckdb.org/docs/stable/guides/performance/indexing
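A sketch of the pattern with the duckdb Node package (the dataset URL and columns are made up for illustration); httpfs fetches the Parquet footer, then only the row groups whose min/max stats can satisfy the filter:

    import duckdb from "duckdb";

    const db = new duckdb.Database(":memory:");

    db.exec("INSTALL httpfs; LOAD httpfs;", (err) => {
      if (err) throw err;
      db.all(
        `SELECT station_id, avg(temp_c) AS avg_temp
         FROM read_parquet('https://example.com/weather.parquet')
         WHERE obs_date >= DATE '2024-01-01'  -- pruned via row-group stats
         GROUP BY station_id
         LIMIT 10;`,
        (err, rows) => {
          if (err) throw err;
          console.table(rows);
        }
      );
    });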

LanceDB has a similar mechanism for operating on remote vector embeddings/text search.

It’s a fun time to be a dev in this space!


> Look into using DuckDB with remote HTTP/S3 Parquet files.

But when using this on the frontend, are portions of the files fetched specifically with HTTP range requests? I tried to search for it but couldn't find details.
