
BEAM will automatically distribute processes across a cluster, right?


No, typically you register a node and tell it which processes to run. But there are libraries that help implement this kind of behavior.

For Elixir:

- https://github.com/derekkraan/horde

- https://github.com/bitwalker/swarm
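To give a sense of what these libraries look like in practice, here is a rough supervision-tree fragment using Horde; the module names (`MyApp.DistributedRegistry`, `MyApp.Worker`, etc.) are placeholders, not anything from this thread:

```elixir
# Horde's registry and dynamic supervisor are cluster-aware drop-in
# replacements for Registry and DynamicSupervisor.
children = [
  {Horde.Registry, name: MyApp.DistributedRegistry, keys: :unique, members: :auto},
  {Horde.DynamicSupervisor,
   name: MyApp.DistributedSupervisor,
   strategy: :one_for_one,
   members: :auto}
]

# Start a process "somewhere" in the cluster: Horde picks a node, and if
# that node goes down, the child is restarted on a surviving node.
Horde.DynamicSupervisor.start_child(
  MyApp.DistributedSupervisor,
  {MyApp.Worker,
   name: {:via, Horde.Registry, {MyApp.DistributedRegistry, "worker-1"}}}
)
```

The `:via` registration means callers can reach the worker by name without knowing which node it landed on.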


A zero-cost alternative that has worked well for me so far is to use a front-end load balancer to distribute requests to multiple Phoenix instances (in k8s), and then just let each request's background tasks run on the node that received it.

The whole app is essentially a websocket-based chat app (with some other stuff), and the beauty of OTP + libcluster is that the websocket processes can communicate with each other whether or not they're running on the same OTP node.
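For reference, wiring Phoenix nodes into one cluster with libcluster in Kubernetes usually comes down to a topology config like this (the service and app names are placeholders):

```elixir
# config/runtime.exs: libcluster discovers peer pods via Kubernetes DNS
# and connects them into a single Erlang cluster automatically.
config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "myapp-headless",   # headless k8s Service (placeholder name)
        application_name: "myapp"    # node name prefix, e.g. myapp@<pod-ip>
      ]
    ]
  ]
```

With the cluster formed, `Phoenix.PubSub` broadcasts reach channel/websocket processes on any node, which is what makes the cross-node chat behavior described above work.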


Not automatically, but it's pretty easy to configure in your supervision tree. I don't know the details because whatever happens by default has taken care of our needs so far.

It does automatically distribute processes across all the cores of a CPU, though.
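By way of illustration, "configure it in your supervision tree" usually means an ordinary `Application` callback like this (module names are hypothetical):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # These children run on the local node; the BEAM starts one
      # scheduler thread per core and spreads process execution across
      # them with no extra configuration.
      MyApp.Repo,
      {Task.Supervisor, name: MyApp.TaskSupervisor},
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

Distribution-aware supervisors (Horde, etc.) slot into the same `children` list, which is why the configuration stays in code.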


What I like about the Erlang platform is that it seems like it has the most sensible “microservice” story: deploy your language runtime to all the nodes of a cluster and then configure distribution in code. Lambdas, containers, etc. all push this stuff outside your code into deployment tooling that is, inevitably, less pleasant to manage than your codebase.
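As a concrete sketch of "configure distribution in code": once two nodes are connected, talking to a process on the other node is plain Elixir, with no deployment tooling involved. The node and module names below are hypothetical:

```elixir
# Connect to a peer node (libcluster typically does this for you).
Node.connect(:"myapp@10.0.0.2")

# Call a GenServer registered under a name on that node; the
# {name, node} tuple is all the "service discovery" required.
GenServer.call({MyApp.Worker, :"myapp@10.0.0.2"}, :get_state)

# Or spawn a function directly on the remote node.
Node.spawn(:"myapp@10.0.0.2", fn -> IO.puts("running remotely") end)
```

This assumes both nodes were started with the same distribution cookie and reachable names, which is the part deployment tooling still has to provide.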



