
Their problem isn't serverless, but rather Cloudflare Workers and WebAssembly.

All major cloud vendors have serverless solutions based on containers, with longer managed lifetimes between requests, and naturally the ability to use properly AOT-compiled languages in those containers.



At that point, why should I use serverless at all, if I have to think about the lifetime of the servers running my serverless functions?


Serverless only makes sense if the lifetime doesn't matter to your application; if you find that you need to think about lifetimes, then serverless is simply not the right technology for your use case.


Because it is still less management effort than taking full control of the whole infrastructure.

Usually the decision comes down to more serverless or more DevOps salaries.


I would doubt that this is categorically true. Serverless inherently makes the whole architecture more complex with more moving parts in most cases compared to classical web applications.


> Serverless inherently makes the whole architecture more complex with more moving parts

Why's that? Serverless is just the generic name for CGI-like technologies, and CGI is exactly how classical web applications were typically deployed historically, until Rails became such a large beast that it was too slow to keep using CGI. Running your application as a server to work around that problem in Rails then pushed it to become the norm across the industry, at least until serverless became cool again.

Making your application the server is what is more complex with more moving parts. CGI was so much simpler, albeit with the performance tradeoff.

Perhaps certain implementations make things needlessly complex, but it is not clear why you think serverless must fundamentally be that way.
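
To make the comparison concrete, here is a rough sketch of the CGI model (Node-flavoured TypeScript; the response body is made up): one short-lived process per request, with state kept elsewhere, which is essentially the execution model serverless platforms bring back.

    #!/usr/bin/env node
    // Minimal CGI-style handler sketch: the web server spawns one short-lived
    // process per request and passes request data via environment variables.
    const { REQUEST_METHOD = "GET", QUERY_STRING = "" } = process.env;

    // A CGI response is just headers, a blank line, then the body, on stdout.
    process.stdout.write("Content-Type: text/plain\r\n\r\n");
    process.stdout.write(`You sent a ${REQUEST_METHOD} request with query "${QUERY_STRING}"\n`);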


That depends a lot on where those classical web applications are hosted, how big the infrastructure taking care of security, backups, scalability, and failovers is, and how many salaries are being paid, including on-call bonuses.


Serverless is not a panacea. And the alternative isn't always "multiple DevOps salaries", unless the only two options you see are serverless vs an outrageously, stupidly complicated Kubernetes cluster to host a website.


There's a huge gap between serverless and full infra management. Also, IMO, serverless still requires engineers just to manage that. Your concerns shift, but then you need platform experts.


A smaller team, and from a business point of view someone else takes care of SLAs, which matters in cost-center budgets.


Pay 1 devops engineer 10% more and you'll get more than twice the benefit of 2 average engineers.


It can be good for connecting AWS stuff to AWS stuff. "On s3 update, sync change to dynamo" or something. But even then, now you've got a separate coding, testing, deployment, monitoring, alerting, debugging pipeline from your main codebase, so is it actually worth it?
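
As a rough sketch of that kind of glue Lambda (TypeScript with AWS SDK v3; the "object-index" table and item shape are assumptions for illustration, not anything from the article):

    import { S3Event } from "aws-lambda";
    import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

    const ddb = new DynamoDBClient({});

    // Triggered by S3 object-created events; mirrors each object into a
    // (hypothetical) "object-index" DynamoDB table.
    export const handler = async (event: S3Event): Promise<void> => {
      for (const record of event.Records) {
        const bucket = record.s3.bucket.name;
        const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

        await ddb.send(
          new PutItemCommand({
            TableName: "object-index", // hypothetical table name
            Item: {
              pk: { S: `${bucket}/${key}` },
              size: { N: String(record.s3.object.size ?? 0) },
              updatedAt: { S: record.eventTime },
            },
          })
        );
      }
    };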

But no, I'd not put any API services/entrypoints on a lambda, ever. Maybe you could manufacture a scenario where like the API gets hit by one huge spike at a random time once per year, and you need to handle the scale immediately, and so it's much cheaper to do lambda than make EC2 available year-round for the one random event. But even then, you'd have to ensure all the API's dependencies can also scale, in which case if one of those is a different API server, then you may as well just put this API onto that server, and if one of them is a database, then the EC2 instance probably isn't going to be a large percentage of the cost anyway.


Actually I don't even think connecting AWS services to each other is a good reason in most cases. I've seen too many cases where things like this start off as a simple solution, but eventually you get a use case where some s3 updates should not sync to dynamo. And so then you've got to figure out a way to thread some "hints" through to the lambda, either metadata on the s3 blob, or put it in a redis instance that the lambda can query, etc., and it gets all convoluted. In those kinds of scenarios, it's almost always better just to have the logic that writes to s3 also update dynamo. That way it's all in one place, can be stepped through in a debugger, gets deployed together, etc.
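
A sketch of that alternative, with the S3 write and the index update in one code path; the function name, table name, and syncToIndex flag are purely illustrative:

    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
    import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

    const s3 = new S3Client({});
    const ddb = new DynamoDBClient({});

    // Write the object and, when wanted, update the index in the same code path.
    // The "hint" about skipping the sync is just an argument, not metadata
    // smuggled to a separate Lambda.
    export async function saveObject(bucket: string, key: string, body: string, syncToIndex = true): Promise<void> {
      await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body }));

      if (syncToIndex) {
        await ddb.send(
          new PutItemCommand({
            TableName: "object-index", // hypothetical table name
            Item: {
              pk: { S: `${bucket}/${key}` },
              updatedAt: { S: new Date().toISOString() },
            },
          })
        );
      }
    }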

There are probably exceptions, but I can't think of a single case where doing this kind of thing in a lambda didn't cause problems at some point, whereas I can't really think of an instance where putting this kind of logic directly into my main app has caused any regrets.


For something that has permanent load, it makes little sense.

It can make sense if you have very variable load with a few notable spikes, or if you're all in on managed services, where serverless functions act as event collectors for other services ("new file in object store" triggers a function to update some index).


Agree, it seems like they decided to use Cloudflare Workers and then fought them every step of the way instead of going back and evaluating if it actually fit the use case properly.

It reminds me of the companies that start building their application using a NoSQL database and then start building their own implementation of SQL on top of it.


Ironically, I really like cloudflare but actively dislike workers and avoid them when possible. R2/KV/D1 are all fantastic and being able to shard customer data via DOs is huge, but I find myself fighting workers when I use them for non-trivial cases. Now that Cloudflare has containers I'm pushing people that way.


Hey! Bet I can guess who


In that scenario, how do you keep cold startup as fast as possible?

The nice thing about JS workers is that they can start really fast from cold. If you have low or irregular load, but latency is important, Cloudflare Workers or equivalent is a great solution (as the article says towards the end).

If you really need a full-featured container with AOT compiled code, won't that almost certainly have a longer cold startup time? In that scenario, surely you're better off with a dedicated server to minimise latency (assuming you care about latency). But then you lose the ability to scale down to zero, which is the key advantage of serverless.
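
To make the cold-start point concrete, a minimal Worker is just a short script loaded into an already-warm V8 isolate. A hedged sketch in module syntax (the response is made up):

    // Minimal Worker in module syntax: the deployable unit is a small script
    // evaluated in an already-running V8 isolate, so cold starts are tiny.
    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        return new Response(`hello from ${url.pathname}\n`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };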


Apparently not nice enough, given that they rewrote the application in Go.

Serverless with containers is basically managed Kubernetes, where someone else has the headache of keeping the whole infrastructure running.


Cloudflare has containers now too, and having used AppRunner and Cloud Run, I find Cloudflare's much easier to work with. Once they get rid of the container caps and add more flexibility in terms of container resources, I would never go back to the big-cloud containers; the price and ease of use of Cloudflare's containers just destroy them.


I doubt that the bill would be that much cheaper; nonetheless, thanks for making me aware they're a thing now.


They're much cheaper, they're just DOs, and they get billed as such. They also have faster cold start times and automatic multi-region support.


What does DO mean in this context?


Durable Object


Indeed.

They get to the bottom of the post and drop:

> Fargate handles scaling for us without the serverless constraints

They dropped Workers for containers.


You're saying serverless can have really low latency and be fast 24/7?

Isn't serverless basically the old model of shared VMs, except with a ton of people on them?

I'm old school I guess, bare metal for days...


Yes, check out Cloud Run, AWS Lambda, or Azure Functions with containers.



