
The observability world still regards itself as a system for monitoring, but reading (and sometimes seeing firsthand) how badly these systems can go wrong keeps driving my conviction that their strategies and tools should grow in scope. That they should converge with business pipelines.

We shouldn't just have wide events/big spans emitted... We should have those spans drive the pipeline. Rather than observability being a passive monitoring system, if we write code that reacts to the events we are capturing, we shuffle towards event sourcing (a sketch of which follows).
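To make that concrete, here's a rough sketch of a span processor that fires business logic as spans complete, using the OpenTelemetry Python SDK. The "checkout" span name, the order.id attribute, and the fulfillment handler are all invented for illustration, not any real system's API:

  # Sketch: reacting to finished spans as events, nudging observability
  # toward event sourcing. Uses the OpenTelemetry Python SDK; the span
  # name, attribute, and handler below are hypothetical.
  from opentelemetry import trace
  from opentelemetry.sdk.trace import ReadableSpan, SpanProcessor, TracerProvider

  def handle_order_span(span: ReadableSpan) -> None:
      # Hypothetical business reaction: enqueue fulfillment once a
      # checkout span completes with an order attached.
      order_id = span.attributes.get("order.id")
      if order_id:
          print(f"enqueue fulfillment for {order_id}")

  class PipelineDrivingProcessor(SpanProcessor):
      # Treats each finished span as an event that can drive the pipeline.
      def on_end(self, span: ReadableSpan) -> None:
          if span.name == "checkout":
              handle_order_span(span)

  provider = TracerProvider()
  provider.add_span_processor(PipelineDrivingProcessor())
  trace.set_tracer_provider(provider)

  tracer = trace.get_tracer(__name__)
  with tracer.start_as_current_span("checkout") as span:
      span.set_attribute("order.id", "A-123")  # the wide event carries the data

In practice you'd publish to a queue from on_end rather than call the handler inline, which is exactly where the "mission critical" downside mentioned below starts to bite.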

Given how many systems are coupled together with shoestring glue & good wishes, and how opaque those pain zones are, centralizing on existing industry-standard protocols for capturing events (which imo include traces) feels like a clear win.

(Obvious downside, these systems become mission critical, business process & monitoring both.)



What’s this look like in practice? Is this something like business process modeling and workflow engines, or something else?


Totally agree. Observability is just another dataset and should be modeled, managed, and governed like other datasets. Its data quality controls should be held to an equal or higher standard than those for regular datasets.

Monitoring, dashboarding, and alerting should leverage the same BI-class tooling.
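As a toy example of what a quality gate on telemetry could mean, here's a minimal check over exported spans treated as rows. The required fields and thresholds are assumptions for illustration, not any particular tool's schema:

  # Sketch: telemetry treated as a governed dataset with quality gates.
  # Field names and checks are made up for illustration.
  from typing import Iterable, Mapping

  REQUIRED_FIELDS = {"trace_id", "span_id", "service.name", "duration_ms"}

  def quality_report(spans: Iterable[Mapping]) -> dict:
      rows = list(spans)
      missing = sum(1 for s in rows if not REQUIRED_FIELDS <= s.keys())
      negative = sum(1 for s in rows if s.get("duration_ms", 0) < 0)
      return {
          "rows": len(rows),
          "missing_required_fields": missing,
          "negative_durations": negative,
          "passes": missing == 0 and negative == 0,
      }

  batch = [
      {"trace_id": "t1", "span_id": "s1", "service.name": "api", "duration_ms": 12.5},
      {"trace_id": "t2", "span_id": "s2", "service.name": "api"},  # incomplete row
  ]
  assert not quality_report(batch)["passes"]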



