You're confusing network proximity with application architecture. Edge deployment helps connection latency. Stateless runtime destroys it by forcing every cache access through the network.
The whole point of edge is NOT to make latency-critical APIs with heavy state requirements faster. It's to make stateless operations faster. Using it for the former is exactly the mismatch I'm describing.
Their 30ms+ cache reads against a sub-10ms latency target prove this. Edge proximity can't save you when your architecture adds 3x your latency budget per cache hit.
Realistically, they should be able to get sub-millisecond cache hits that land in the same datacenter. I know Cloudflare doesn't have "named" datacenters the way other providers do, but at the end of the day there are servers somewhere, and if your lambda runs twice on the same one, there's no reason a pull-through cache can't see standard intra-datacenter latency.
I wonder if anything other than good engineering is getting in the way of this, or even of sub-microsecond in-process pull-through caches for busy lambda functions. After all, if my lambda is getting called 1000x per second from the same point of presence, why wouldn't they keep the process in memory?
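To make the in-process idea concrete, here's a minimal sketch of what a warm runtime could enable: state held at module scope survives across invocations for as long as the process/isolate stays alive, so repeat hits are just a hash-map lookup with no network hop. All names here (`pullThrough`, `fetchFromOrigin`) are hypothetical, not any provider's actual API.

```typescript
// Hypothetical in-process pull-through cache. Module-scope state survives
// across invocations while the runtime keeps this process/isolate warm.
type Entry<T> = { value: T; expiresAt: number };

const cache = new Map<string, Entry<unknown>>();

async function pullThrough<T>(
  key: string,
  ttlMs: number,
  fetchFromOrigin: () => Promise<T>, // cold path: pays the network latency
): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    // Warm path: a plain Map lookup, no network round trip at all.
    return hit.value as T;
  }
  const value = await fetchFromOrigin();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

At 1000 calls/second from one point of presence, nearly every request after the first would take the warm path; the catch is that nothing guarantees the runtime routes consecutive requests to the same process, which is exactly the scheduling question raised above.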