I went for Cursor on the $200 plan, but I hit those limits within a few days. Claude Code came out after I'd gotten used to Cursor, but I've been meaning to switch in the hope that the cost works out better.
I go to the API directly after I hit those limits. That's where it gets expensive.
I haven’t used Cursor since I’m on Neovim and it’s hard to move away from it.
The auto-complete suggestions from FIM models (either open source, or even something like Gemini Flash) punch far above their weight. That combined with CC/Codex has been a good setup for me.
> another factor to consider is that if you have a typical Prometheus `/metrics` endpoint that gets scraped every N seconds, there's a period in between the "final" scrape and the actual process exit where any recorded metrics won't get propagated. this may give you a false impression about whether there are any errors occurring during the shutdown sequence.
Have you come across any convenient solution for this? If my scrape interval is 15 seconds, I can't exactly keep the process alive for another 30 seconds just to get two more scrapes in.
This behavior is partly why our services still use statsd: the push-based model doesn't have this problem.
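Not a real fix, but one workaround I've seen is pushing the final registry state to a Pushgateway right before exit, so the last few increments don't depend on the scraper coming around again. A rough sketch with client_golang (the gateway URL and job name here are just placeholders):

```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	// Hypothetical counter tracking errors seen during shutdown.
	shutdownErrors := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "shutdown_errors_total",
		Help: "Errors recorded during the shutdown sequence.",
	})
	reg := prometheus.NewRegistry()
	reg.MustRegister(shutdownErrors)

	// ... normal serving and graceful-shutdown work happens here ...

	// Just before exit, push the registry's current state to a
	// Pushgateway so the final samples aren't lost between scrapes.
	if err := push.New("http://pushgateway:9091", "my_service").
		Gatherer(reg).
		Push(); err != nil {
		log.Printf("final metrics push failed: %v", err)
	}
}
```

It does mean running a Pushgateway and dealing with its staleness quirks, so it's more of a band-aid than an argument against statsd.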