First: for simple use cases, sure, it's great. But imagine you have to update some element outside of the swap target's tree. Now you need OOB swaps, and your HTML response must contain that fragment.
Not just that: your server template code now has to determine whether it is an htmx request and only render the OOB fragments if so.
Even in a decent-sized app, this soon turns super brittle.
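To make that branching concrete, here is a minimal sketch of a fetch-style TypeScript handler; the renderCartRow / renderCartBadge helpers, ids, and markup are made up for illustration. htmx does mark its requests with the HX-Request header, and out-of-band fragments carry hx-swap-oob so htmx swaps them in by element id.

    // Hypothetical template helpers; the ids and markup are illustrative only.
    function renderCartRow(id: string): string {
      return `<tr id="cart-row-${id}"><td>Item ${id}</td></tr>`;
    }

    function renderCartBadge(count: number): string {
      // hx-swap-oob tells htmx to swap this fragment into #cart-count,
      // wherever that element lives in the page.
      return `<span id="cart-count" hx-swap-oob="true">${count}</span>`;
    }

    export async function handleAddToCart(req: Request): Promise<Response> {
      const { id, cartCount } = (await req.json()) as { id: string; cartCount: number };

      // htmx sets this header on every request it makes.
      const isHtmx = req.headers.get("HX-Request") === "true";

      // The fragment for the element actually targeted by hx-target...
      let html = renderCartRow(id);

      // ...plus an out-of-band fragment for an element elsewhere in the page,
      // rendered only when the request came from htmx. Every extra element
      // you need to touch means another branch like this one.
      if (isHtmx) {
        html += renderCartBadge(cartCount);
      }

      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }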
And we haven't even talked about complicated interfaces. Let's not go that far; just think of product variants in an e-commerce admin panel.
Three variant options with five values each means 5 × 5 × 5 = 125 SKU rows that must be collapsed group-wise.
htmx can do it, but it's going to be very, very difficult and brittle.
So it is certainly very useful, but it is NOT the only tool for all use cases.
Charging for self-hosted runners: that's close to a flat $90 per month for a machine that you host yourself, no matter how small or large the machine is.
IMO the minimum is to be able to read a “hello world / first triangle” example for any of the modern graphics APIs (OpenGL/WebGL doesn’t count, WebGPU does), and have a general understanding of each step performed (resource creation, pipeline setup, passing data to shaders, draws, synchronization). Also to understand where the pipeline explosion issue comes from.
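For reference, a sketch of roughly what those steps look like in TypeScript with WebGPU, assuming a browser with WebGPU support, a <canvas id="gpu-canvas"> element, and @webgpu/types for the type names; the shader, ids, and structure are illustrative and error handling is omitted.

    // WGSL: hard-coded triangle, so no vertex buffer is needed.
    const shaderSource = `
      @vertex
      fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
        var pos = array<vec2f, 3>(vec2f(0.0, 0.5), vec2f(-0.5, -0.5), vec2f(0.5, -0.5));
        return vec4f(pos[i], 0.0, 1.0);
      }
      @fragment
      fn fs_main() -> @location(0) vec4f {
        return vec4f(1.0, 0.4, 0.2, 1.0);
      }
    `;

    async function drawTriangle() {
      // 1. Resource creation: adapter, device, canvas surface.
      const adapter = await navigator.gpu.requestAdapter();
      if (!adapter) throw new Error("WebGPU not available");
      const device = await adapter.requestDevice();

      const canvas = document.getElementById("gpu-canvas") as HTMLCanvasElement;
      const context = canvas.getContext("webgpu") as GPUCanvasContext;
      const format = navigator.gpu.getPreferredCanvasFormat();
      context.configure({ device, format });

      // 2. Pipeline setup: shader module plus fixed-function state baked into one object.
      const module = device.createShaderModule({ code: shaderSource });
      const pipeline = device.createRenderPipeline({
        layout: "auto",
        vertex: { module, entryPoint: "vs_main" },
        fragment: { module, entryPoint: "fs_main", targets: [{ format }] },
        primitive: { topology: "triangle-list" },
      });

      // 3. Record a render pass and issue the draw.
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginRenderPass({
        colorAttachments: [{
          view: context.getCurrentTexture().createView(),
          clearValue: { r: 0, g: 0, b: 0, a: 1 },
          loadOp: "clear",
          storeOp: "store",
        }],
      });
      pass.setPipeline(pipeline);
      pass.draw(3); // 3 vertices, 1 instance
      pass.end();

      // 4. Submission; the queue handles synchronization with presentation.
      device.queue.submit([encoder.finish()]);
    }

    drawTriangle();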
Bonus points if you then look at CUDA “hello world” and consider that it can do nontrivial work on the same hardware (sans fixed function accelerators) with much less boilerplate (and driver overhead).
Thanks. It was a stupid idea for MOST shops. I think maybe it works for AWS, Google, and Netflix, but everywhere else in my career I saw that 90% of the problems were due to microservices.
Dividing a system into composable parts is already a very difficult problem, and it is only foolish to introduce further network boundaries between those parts.
The next comeback I see is a move away from React and SPAs as view transitions become more common.
If anyone from Cloudflare comes here: it's not possible to create D1 databases on the fly and interact with them, because databases must be listed in the worker's bindings.
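To illustrate the constraint (the names here are made up): the Worker can only reach a D1 database through a binding declared ahead of time in wrangler.toml, so a database created at runtime has no handle the code can use.

    // wrangler.toml must already contain something like:
    //   [[d1_databases]]
    //   binding = "DB"
    //   database_name = "users-db"
    //   database_id = "<uuid>"
    // A database created on the fly never appears here, so the Worker cannot reach it.

    interface Env {
      DB: D1Database;
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        const { results } = await env.DB
          .prepare("SELECT id, email FROM users WHERE active = ?")
          .bind(1)
          .all();
        return Response.json(results);
      },
    } satisfies ExportedHandler<Env>;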
Try Durable Objects. D1 is actually just a thin layer over Durable Objects. In the past D1 provided a lot of DX benefits like better observability, but those are increasingly being merged back into DO directly.
What is a Durable Object? It's just a Worker that has a name, so you can route messages specifically to it from other Workers. Each one also has its own SQLite database attached. In fact, the SQLite database is local, so you can query it synchronously (no awaits), which makes a lot of stuff faster and easier. You can easily create millions of Durable Objects.
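A sketch of what that can look like, assuming the RPC-style API from "cloudflare:workers"; the UserDB class, the USER_DB binding, and the notes table are invented for illustration, and the binding plus a SQLite-backed class migration still have to be declared once in the wrangler config.

    import { DurableObject } from "cloudflare:workers";

    interface Env {
      USER_DB: DurableObjectNamespace<UserDB>;
    }

    export class UserDB extends DurableObject<Env> {
      constructor(ctx: DurableObjectState, env: Env) {
        super(ctx, env);
        // Local SQLite, queried synchronously -- no await.
        ctx.storage.sql.exec(
          "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        );
      }

      addNote(body: string): void {
        this.ctx.storage.sql.exec("INSERT INTO notes (body) VALUES (?)", body);
      }

      listNotes(): unknown[] {
        return this.ctx.storage.sql.exec("SELECT id, body FROM notes").toArray();
      }
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // One object (and one SQLite database) per user, addressed by name.
        const userId = new URL(request.url).searchParams.get("user") ?? "anonymous";
        const stub = env.USER_DB.get(env.USER_DB.idFromName(userId));
        return Response.json(await stub.listNotes()); // RPC call into the object
      },
    } satisfies ExportedHandler<Env>;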
Thank you! That's great, and it is possible, but... with some limitations.
The idea is to go from a sign-up form to a per-user D1 database that can be accessed from the worker itself.
That's not possible without updating the worker bindings like you showed. And further, there is an upper limit of 5,000 bindings per worker, so just 5,000 users becomes the ceiling, even though D1 easily allows 50,000 databases, with more possible by requesting a limit increase.
Transactions are supported in Durable Objects. In fact, with DO you are interacting with the SQLite database locally and synchronously, so transactions are essentially free with no possibility of conflicts and no worry about blocking other queries.
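A hedged sketch of that (the Ledger class, accounts table, and transferCredits method are invented for illustration), using the synchronous transactionSync() helper that SQLite-backed objects expose on ctx.storage:

    import { DurableObject } from "cloudflare:workers";

    export class Ledger extends DurableObject {
      transferCredits(from: string, to: string, amount: number): void {
        // Runs both statements in one SQLite transaction, synchronously.
        this.ctx.storage.transactionSync(() => {
          this.ctx.storage.sql.exec(
            "UPDATE accounts SET balance = balance - ? WHERE user = ?", amount, from);
          this.ctx.storage.sql.exec(
            "UPDATE accounts SET balance = balance + ? WHERE user = ?", amount, to);
          // If anything throws in here, both updates are rolled back.
        });
      }
    }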
Extensions are easy to enable; file a bug on https://github.com/cloudflare/workerd. (Though this one might be trickier than most, as we might have to do some build engineering.)
I'm always a little hesitant to use D1 because of some of these constraints. I know I may never hit 10 GB on my side projects, so I just don't bother with sharding, but it also unsettles me that it's a hard cap.