
The obvious difference between UNIX tools and LLMs is the non-determinism. You can't necessarily reason about what the output of one stage will be, yet the idea is to keep piping it into another LLM, etc., and eventually `eval` the result. From a technical perspective you can do this, but the hard part seems like it would be making sure it doesn't do something you really don't want it to do. I'd imagine that any deviations from your expectations in a given stage would be compounded as you continue to pipe along into additional stages that have similar deviations of their own.
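To make that failure mode concrete, here's a minimal sketch of the naive pipeline. The `llm()` helper is hypothetical (it stands in for whatever model call you'd use), and none of this is a real API; it just shows where the drift accumulates:

```python
# Hypothetical llm() helper: sends a prompt to some model, returns its text.
def llm(prompt: str) -> str:
    """Stand-in for a real model call; not a real API."""
    raise NotImplementedError

spec = "Write a Python function `slugify(title)` that lowercases and hyphenates."
draft = llm(spec)                             # stage 1: generate code
reviewed = llm(f"Fix any bugs in:\n{draft}")  # stage 2: 'review' the draft

# Stage 2 inherits stage 1's drift on top of its own. Evaluating the end
# product runs whatever actually came out, not what you meant.
exec(reviewed)  # the `eval` step: risky without constraints between stages
```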

I'm not saying it's not worth doing, considering that the software development process we've been using as an industry already ends up with a lot of bugs in our code. (When talking about this with people who aren't technical, I sometimes like to say that software has bugs because we don't really have a good process for writing software without bugs at any significant scale, and software turns out to be useful for enough things that we keep writing it knowing this.) I'd still be pretty concerned about how to model constraints in this type of workflow, though. Right now, my fairly naive sense is that we've already moved the needle so far toward making it easier to create new code than to review it and notice bugs (despite starting from a place that was already tilted in favor of creation over review) that I'm not convinced making creation even more efficient and powerful is something I'd find useful.
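For what "modeling constraints" could look like, here's one sketch under the same hypothetical `llm()` helper: demand machine-checkable output at each pipe boundary (here, JSON with a fixed shape) and retry or bail when a stage deviates, so drift gets caught per stage instead of compounding silently. The prompts, the `constrained()` wrapper, and the `is_test_plan` check are all illustrative assumptions, not any particular tool's API:

```python
import json

def llm(prompt: str) -> str:
    """Stand-in for a real model call; not a real API."""
    raise NotImplementedError

def constrained(prompt: str, check, retries: int = 3) -> str:
    """Re-prompt until the output passes `check`, or give up."""
    for _ in range(retries):
        out = llm(prompt)
        try:
            if check(out):
                return out
        except Exception:
            pass  # malformed output counts as a failed check
    raise ValueError("stage kept violating its constraint")

def is_test_plan(text: str) -> bool:
    # The constraint: output must parse as JSON with a 'cases' list.
    plan = json.loads(text)
    return isinstance(plan, dict) and isinstance(plan.get("cases"), list)

stage1 = constrained("Emit a JSON test plan with a 'cases' list for slugify.",
                     is_test_plan)
# Only validated output flows into the next stage.
stage2 = llm(f"Write tests implementing this plan:\n{stage1}")
```

This doesn't remove the non-determinism, of course; it just converts "the output drifted" from a silent compounding error into a loud per-stage failure.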


