Last year I wrote a simple system using Semantic Kernel, backed by functions inside Microsoft Orleans, that was essentially an LLM acting as a business-logic DSL processor. Your business logic was just text, and you gave it the operation as text.
Nothing could be relied upon to be deterministic; it was very funny to watch it try to carry out operations.
Recently I re-ran it with newer models and it was drastically better, especially with some temperature tweaks.
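In spirit it looked something like the Go sketch below (the original used Semantic Kernel in .NET; here llmComplete is a hypothetical stand-in for whatever chat-completion client you use). The point is just that both the rules and the operation are plain text handed to the model:

```go
package main

import (
	"context"
	"fmt"
)

// llmComplete is a hypothetical stand-in for the real chat-completion call
// (Semantic Kernel in the original system). A low temperature helps a lot here.
func llmComplete(ctx context.Context, system, user string) (string, error) {
	// ... call your model of choice ...
	return "stub answer", nil
}

func main() {
	// The "DSL" is just prose rules; the operation is prose too.
	rules := "Orders over $100 ship free. Orders to Alaska always pay a $15 surcharge."
	operation := "Compute the shipping cost for a $120 order shipped to Alaska."

	answer, err := llmComplete(context.Background(),
		"You are a business-rules engine. Apply the rules exactly as written.",
		rules+"\n\n"+operation)
	if err != nil {
		panic(err)
	}
	fmt.Println(answer)
}
```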
What's the technology behind this? I'm working on something myself, using a distributed actor model (set up like a graph) to create a living, reactive model.
The model is a multi-threaded Go program running on a 512-thread AMD EPYC server. It's a trend-based model, so it's just trying to figure out how best to measure and predict trend changes. Not day trading or HFT.
It conducts millions of simulations daily for each asset, then provides a snapshot of the top-performing results to GPT-4o for final selection.
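Very roughly, the pipeline has the shape of the sketch below; simulate and askModel are hypothetical placeholders, and the fan-out across worker goroutines plus the top-N snapshot handed to the model are the point:

```go
package main

import (
	"context"
	"fmt"
	"sort"
	"sync"
)

type Result struct {
	Params string  // whatever identifies the simulated configuration
	Score  float64 // its backtested performance
}

// simulate is a placeholder for one trend-model simulation run.
func simulate(i int) Result {
	return Result{Params: fmt.Sprintf("run-%d", i), Score: float64(i % 97)}
}

// askModel is a placeholder for the GPT-4o (or o1) call that picks a winner.
func askModel(ctx context.Context, prompt string) (string, error) {
	return "stub choice", nil
}

func main() {
	const runs = 1_000_000
	const workers = 512 // one per hardware thread

	jobs := make(chan int)
	results := make(chan Result, workers)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				results <- simulate(i)
			}
		}()
	}
	go func() {
		for i := 0; i < runs; i++ {
			jobs <- i
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	all := make([]Result, 0, runs)
	for r := range results {
		all = append(all, r)
	}
	sort.Slice(all, func(i, j int) bool { return all[i].Score > all[j].Score })

	// Snapshot only the top performers for the model to choose between.
	prompt := "Pick the most robust configuration:\n"
	for _, r := range all[:10] {
		prompt += fmt.Sprintf("%s scored %.2f\n", r.Params, r.Score)
	}
	choice, _ := askModel(context.Background(), prompt)
	fmt.Println(choice)
}
```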
I'm really pushing the limits of GPT-4o currently. I started testing with o1 just last week and it performs better. It's just so much more expensive.
My sentiments exactly - well said. The big mistake is forcing OOP onto everything. It's why I really have an affinity for the virtual actor: it has just enough OOP to give me classes, methods, and internal state. I don't need inheritance, polymorphism, etc. - just naive little classifications of things we call objects.
I also don't have to think of my system in the hierarchical, supervised manner of earlier actor models. I just need cloud-native, distributed little objects.
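A single-process Go sketch of what I mean by naive little objects: state plus methods, activated on demand by key, with messages serialized through a mailbox. The real virtual actor runtimes (Orleans and friends) add distribution, placement, and persistence on top of this; the names here are illustrative only.

```go
package main

import "fmt"

type deposit struct {
	amount int
	done   chan int // replies with the new balance
}

// account is a naive little "grain": internal state plus a mailbox.
type account struct {
	id      string
	balance int
	mailbox chan deposit
}

func (a *account) run() {
	for msg := range a.mailbox {
		a.balance += msg.amount // only this goroutine touches the state
		msg.done <- a.balance
	}
}

// registry activates grains on demand, keyed by identity.
type registry struct {
	grains map[string]*account
}

func (r *registry) get(id string) *account {
	if g, ok := r.grains[id]; ok {
		return g
	}
	g := &account{id: id, mailbox: make(chan deposit)}
	r.grains[id] = g
	go g.run()
	return g
}

func main() {
	r := &registry{grains: map[string]*account{}}

	done := make(chan int)
	r.get("alice").mailbox <- deposit{amount: 50, done: done}
	fmt.Println("alice's balance:", <-done)

	r.get("alice").mailbox <- deposit{amount: 25, done: done}
	fmt.Println("alice's balance:", <-done)
}
```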
You can see some modern products coming out that are just that:
I like supervision enough that I reimplemented it in Go myself, but I have never found a very compelling reason for hierarchical supervision despite using supervision in one form or another for well over a decade now. I support it, because the supervisors themselves are also services pretty trivially. I've gotten a request to implement the other restart strategies, but when I asked why, there was basically no answer other than "Erlang implemented it".
I find it useful architecturally to be able to have "a thing that is a service", and if that turns into "two services", the "thing" can simply wrap the two services internally with its own supervisor instead of changing its public interface to expose multiple services.
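Roughly, and assuming a "service" is just anything with a Run(ctx) method (the interface and names below are illustrative, not my actual library), those two ideas look like this: a supervisor that only restarts its child, is itself a Service, and a "thing" that quietly becomes two supervised services behind one public interface.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Service is the assumed shape of "a thing that is a service":
// run until the context is cancelled, return an error if you die.
type Service interface {
	Run(ctx context.Context) error
}

// supervise restarts its child whenever it panics or exits, until the
// context ends. This is "if this crashes, please restart it" and nothing
// more. Note that supervise is itself a Service.
type supervise struct {
	child Service
}

func (s supervise) Run(ctx context.Context) error {
	for ctx.Err() == nil {
		func() {
			defer func() {
				if r := recover(); r != nil {
					fmt.Println("child panicked, restarting:", r)
				}
			}()
			if err := s.child.Run(ctx); err != nil {
				fmt.Println("child exited, restarting:", err)
			}
		}()
		time.Sleep(100 * time.Millisecond) // naive restart backoff
	}
	return ctx.Err()
}

// thing started life as one service; internally it is now two, each under
// its own supervisor, but its public interface is still a single Service.
type thing struct {
	ingest Service
	serve  Service
}

func (t thing) Run(ctx context.Context) error {
	ing := supervise{child: t.ingest}
	go ing.Run(ctx)
	return supervise{child: t.serve}.Run(ctx)
}

// flaky is a demo child that falls over shortly after starting.
type flaky struct{ name string }

func (f flaky) Run(ctx context.Context) error {
	time.Sleep(200 * time.Millisecond)
	panic(f.name + " fell over")
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	thing{ingest: flaky{name: "ingest"}, serve: flaky{name: "serve"}}.Run(ctx)
}
```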
Reading the Erlang documentation leads people to believe they're going to create elaborate hierarchies of actors that crash other actors when they crash, with complicated restart patterns... and I've never found anything useful beyond "if this crashes, please restart it".
I'm sure the other use cases exist. It's a big world and there are lots of needs out there. However, from what I can tell, at least 90% of the actors in the world don't need much more than basic supervision. That is pretty useful, though. It's amazing what it can paper over out in the field sometimes.
As I mentioned, there's nothing novel in this post, especially for a senior engineer. This is more about getting some context out of the way so that I can show some techniques in a future post.