I haven't seen any evidence that these systems are capable of structuring and executing large-scale changes across entire repositories, but given you're still a student, your definition of "large" might be different.

From the middle, an S-curve is hard to distinguish from an exponential, and that's roughly where we are right now. There's no guarantee that we'll see the same kind of exponential growth we saw over the past three years again. In fact, there are plenty of reasons to believe that we won't: models are becoming exponentially more expensive to train; the internet has been functionally depleted of virgin training tokens; and chinks in the armor of AI's capabilities are starting to dampen the appetite for investment in the space.
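
To make that concrete, here's a toy sketch (my own made-up numbers, not a claim about any real capability metric): fit an exponential to the first half of a logistic curve and the extrapolation sails right past the curve's actual ceiling.

    import numpy as np

    # Toy S-curve (logistic) with a ceiling of 100 and inflection at t = 10.
    t = np.arange(0, 21)
    L, k, t0 = 100.0, 0.6, 10.0
    s = L / (1.0 + np.exp(-k * (t - t0)))

    # Fit an exponential a*exp(b*t) to the first half only (t <= 10)
    # by doing a linear fit on log(s).
    half = t <= 10
    b, log_a = np.polyfit(t[half], np.log(s[half]), 1)
    forecast = np.exp(log_a + b * t)

    print(round(s[-1], 1))         # close to the ceiling of 100
    print(round(forecast[-1], 1))  # wildly overshoots the ceiling and keeps going

The early data fits an exponential almost perfectly; the fit only falls apart once the curve starts to bend, which by definition you can't see until you're past it.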

Everyone says "this is the worst they'll be" as if it were a fact. Imagine it's 2011 and you're running Windows 7. You state: "This is the worst Windows will ever be." Software is pretty unpredictable. It does not only get better. In fact, software (which absolutely includes AI models) has this really strange behavior of fighting for its life to get worse and worse unless an extreme amount of craft, effort, and money is put into grabbing the reins and pulling it back from the brink, day in, day out. Most companies barely manage to keep quality constant, let alone increase it.

And that's traditional software. We don't have any real capability to judge the quality of AI models. We basically just give each new one the SAT and watch the score go up. We can't say for certain that they're actually getting better at the full scope of everything people use them for; a feat we can barely accomplish for any traditionally observable software system. One thing we can observe about AI systems very consistently, however, is their cost, and you can bet that decision makers at Microsoft, Anthropic, Meta, whoever, obsess over that just as much as, if not more than, capability.


