
It can be good for connecting AWS stuff to AWS stuff. "On S3 update, sync the change to DynamoDB" or something like that. But even then, you've now got a coding, testing, deployment, monitoring, alerting, and debugging pipeline that's separate from your main codebase, so is it actually worth it?
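
For concreteness, that sync pattern is usually just a small handler, roughly like this (a sketch only; the table name and key schema here are made up):

    # Hypothetical "on S3 update, sync to DynamoDB" Lambda handler.
    import boto3
    from urllib.parse import unquote_plus

    table = boto3.resource("dynamodb").Table("object-index")  # made-up table name

    def handler(event, context):
        # S3 notifications deliver a batch of records per invocation.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            # Object keys arrive URL-encoded in the event payload.
            key = unquote_plus(record["s3"]["object"]["key"])
            table.put_item(Item={
                "pk": f"{bucket}/{key}",                        # made-up key schema
                "size": record["s3"]["object"].get("size", 0),
                "event": record["eventName"],
            })

The handler itself is trivial; the cost is everything around it: its own deploy, its own logs, its own alarms.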

But no, I'd not put any API services/entrypoints on a Lambda, ever. Maybe you could manufacture a scenario where the API gets hit by one huge spike at a random time once per year, you need to handle the scale immediately, and so Lambda is much cheaper than keeping EC2 capacity available year-round for that one random event. But even then, you'd have to ensure all of the API's dependencies can also scale. If one of those is a different API server, you may as well just put this API onto that server; and if one of them is a database, then the EC2 instance probably isn't going to be a large percentage of the cost anyway.



Actually, I don't even think connecting AWS services to each other is a good reason in most cases. I've seen too many cases where things like this start off as a simple solution, but eventually you get a use case where some S3 updates should not sync to DynamoDB. So then you've got to figure out a way to thread some "hints" through to the Lambda, either as metadata on the S3 blob, or in a Redis instance the Lambda can query, etc., and it gets all convoluted. In those scenarios, it's almost always better to just have the logic that writes to S3 also update DynamoDB (roughly like the sketch below). That way it's all in one place, can be stepped through in a debugger, gets deployed together, etc.
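
Something like this, to be concrete (names are made up); the "should this sync?" decision becomes an ordinary parameter instead of a hint smuggled through S3 metadata:

    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("object-index")  # made-up table name

    def save_blob(bucket, key, body, sync_to_dynamo=True):
        # Write the blob and update the index in one code path.
        s3.put_object(Bucket=bucket, Key=key, Body=body)
        if sync_to_dynamo:  # the "hint" is just an argument you can step through
            table.put_item(Item={"pk": f"{bucket}/{key}", "size": len(body)})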

There are probably exceptions, but I can't think of a single case where doing this kind of thing in a Lambda didn't cause problems at some point, and I can't think of one where putting the logic directly into my main app caused any regrets.



