
They got big with MySQL plus optimizations (maybe MariaDB?). Neon would be Postgres-as-a-service.


Vitess (sharded MySQL) is how they became relevant. But broadly, they've spent a lot of time making a great DBaaS. Their plan is to do the same with Postgres.
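For the unfamiliar: "sharded MySQL" means rows are routed to one of many MySQL instances by a sharding key. A toy Python sketch of just the routing idea (the shard list and hash choice here are made up; Vitess's real keyspace routing is far more involved):

```python
import hashlib

# Hypothetical shard DSNs; Vitess manages this mapping for you.
SHARDS = ["mysql://shard0", "mysql://shard1", "mysql://shard2", "mysql://shard3"]

def shard_for(user_id: str) -> str:
    """Map a sharding key to one backend via a stable hash."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

print(shard_for("user-42"))  # every caller agrees on the same shard
```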


Can someone explain Neon's pricing to me?

5 minutes of inactivity makes it idle.

If I get one query every 5 minutes, and each query takes 100 ms, for a whole month, do I get charged for 720 hours or for 14 minutes (total compute time)?


Update: it's 720 hours of compute cost. Not really serverless. It's just a managed database service that can scale to zero. That's it.
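Back-of-the-envelope for the scenario in the question, assuming a 30-day month and that a query arriving every 5 minutes keeps resetting the 5-minute idle timer, so the instance never suspends:

```python
queries = 30 * 24 * 12          # one query every 5 minutes for 30 days = 8640
active = queries * 0.1 / 60     # total query time at 100 ms each: ~14.4 minutes
wall_clock = 30 * 24            # instance never idles long enough to suspend: 720 hours
print(f"pure compute: {active:.1f} min, billed wall clock: {wall_clock} h")
```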


5 bucks gets you 8 GB RAM, 4 vCPUs, and 75 GB NVMe at Contabo, actually.

I know this is apples and oranges, but that's 16 times the RAM.


You get all of those resources except the thing you need: a managed PostgreSQL.

The difference in price is really the value added by having someone else manage PostgreSQL for you.


What is there to manage on a single instance, single VM...


PITR (point-in-time recovery), setting up a replica, observability, performance reports, etc.
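Each of those is doable by hand, but it's ongoing work. As a small taste, a hedged sketch of the kind of health check a managed service automates for you: asking a standby how far its replay is behind the primary, via psycopg2 (the DSN is a placeholder):

```python
import psycopg2

# Connect to the standby; the DSN here is a placeholder.
conn = psycopg2.connect("dbname=app host=replica.example.internal")
with conn.cursor() as cur:
    # How far behind the primary this standby's WAL replay is.
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp()")
    (lag,) = cur.fetchone()
    print(f"replication lag: {lag}")
conn.close()
```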


When I first saw their LLM integration on Facebook, I thought the screenshot was fake, a joke.


"Greater China" is never used to describe a region. According to Apple, it means China, Tibet, Macao, Hong Kong, and Taiwan.


That has been solved with RAG, OCR-ish image encoding (DeepSeek, recently), and just long context windows in general.
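For context, the simplest form of RAG is just nearest-neighbor search over embedded notes, with the hits pasted into the prompt. A minimal sketch, assuming you already have embeddings from whatever model you use:

```python
import numpy as np

def retrieve(query_vec: np.ndarray, doc_vecs: np.ndarray,
             docs: list[str], k: int = 3) -> list[str]:
    """Return the k docs whose embeddings are most cosine-similar to the query."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [docs[i] for i in np.argsort(-sims)[:k]]

# The retrieved snippets then get prepended to the prompt before generation.
```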


RAG is like constantly reading your notes instead of integrating experiences into your processes.


Not really. For example, we still can't get coding agents to work reliably, and I think it's a memory problem, not a capabilities problem.


On the other hand, test-time weight updates would make model interpretability much harder.


Haven't RLHF, and RL with LLM feedback, been around for years now?


Large latent flow models are unbiased. On the other hand, if you purely use policy optimization, RLHF will be biased towards short horizons. If you add in a value network, the value has some bias (e.g., MSE loss on the value implies a Gaussian bias). Also, most RL has some adversarial loss (how do you train your preference network?), which makes the loss landscape fractal, which SGD smooths incorrectly. So basically, there are a lot of biases that show up in RL training, which can make it both hard to train and, even if successful, not necessarily optimizing what you want.
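To unpack the "MSE implies a Gaussian bias" step: minimizing squared error on the value head is exactly maximum likelihood under a Gaussian model of the return, so it bakes in a symmetric, unimodal error assumption:

```latex
% MSE on the value head is Gaussian MLE (up to constants independent of theta):
\arg\min_\theta \, \mathbb{E}\big[(R - V_\theta(s))^2\big]
  \;=\; \arg\max_\theta \, \mathbb{E}\big[\log \mathcal{N}\big(R \mid V_\theta(s), \sigma^2\big)\big]
```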


We might not even need RL, as DPO has shown.
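For reference, DPO (Rafailov et al., 2023) replaces the reward model and rollouts with a direct loss on preference pairs, where y_w and y_l are the chosen and rejected completions and beta is a temperature:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
  - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \right)\right]
```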


> if you purely use policy optimization, RLHF will be biased towards short horizons

> most RL has some adversarial loss (how do you train your preference network?), which makes the loss landscape fractal, which SGD smooths incorrectly


I love umami. I tried several solutions, but umami

- is a breeze to set up

- has every data point you need, but no bloat

- has an intuitive UI


*anywhere that has the Python runtime installed

Ironically, that makes Jupyter more portable via Colab.


The title and first paragraph make it sound like this is a project by the same people as Jupyter (or endorsed by them). Apparently that's not the case, and it looks very similar to Google Colab, so: Jupyter + better UI + some LLM integrations.

But kudos for going OSS.


They are essentially digital private equity.

