
You can call it perfectionism or you can call it "doing it right." I think this gets at a fundamental difference in philosophy among [software] engineers: we have a problem with a lot of edge cases, where a "good enough" solution can be done quickly. What do we do? There's a class of engineers who say 1. Do the "good enough" solution and ignore or error out on the edge cases--we'll fix them later somehow (with or without an actual plan to do so). And there's a class of engineers who say 2. We cannot solve this problem correctly yet and need more research and better data.
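To make the contrast concrete, here's a minimal sketch of what approach #1 often looks like (the date-parsing scenario and names are hypothetical, purely for illustration): the common case works today, and the edge cases become an error and a TODO.

    from datetime import datetime

    def parse_date(s: str) -> datetime:
        # Approach #1: handle the one format we saw in the sample data,
        # ship it, and error out on everything else "for now."
        try:
            return datetime.strptime(s, "%Y-%m-%d")
        except ValueError:
            # TODO: locale-specific formats, two-digit years, timezones...
            # (may or may not ever get done)
            raise ValueError(f"unsupported date format: {s!r}")

Approach #2 has no code to show at this point; its whole position is that we don't yet know the real distribution of inputs well enough to write the function.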

Unfortunately (in my view), group #1 is making all the products and is responsible for the majority of applications of technology that get deployed. Obviously this is the case because they will take on projects that group #2 cannot, and have no compunction about shipping them. And we can see the results with our own eyes: terrible software that constantly underestimates the number and frequency of these "edge cases" and defects; terrible software that often still requires the user to do legwork because the developers made an incorrect assumption or had bad input data.

AI is making this problem even worse, because now we don't even know what the systems can and cannot do. LLMs fail nondeterministically, sometimes in ways that can't even be directly corrected with code, and all engineering can do is stochastically fix defects by "training with better models."
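As a hedged sketch of what "stochastically fixing defects" looks like from the application side (call_model is a hypothetical stand-in, not any particular API): nothing in the code can patch the failure deterministically, so the wrapper just validates the output and retries.

    import json

    def call_model(prompt: str) -> str:
        # Hypothetical stand-in for whatever LLM API is in use; assume it
        # returns the model's raw text completion.
        raise NotImplementedError

    def get_structured_answer(prompt: str, max_retries: int = 3) -> dict:
        # No line of code here *makes* the model emit valid JSON; all the
        # caller can do is check the output and roll the dice again.
        for _ in range(max_retries):
            raw = call_model(prompt)
            try:
                return json.loads(raw)
            except json.JSONDecodeError:
                continue  # nondeterministic failure: retry and hope
        raise RuntimeError("model never produced valid JSON")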

I don't know how we get out of this: Every company is understandably biased towards "doing now" rather than "waiting" to research more and make a better product, and the doers outcompete the researchers.



> Unfortunately (in my view), group #1 is making all the products and is responsible for the majority of applications of technology that get deployed.

This is an interesting take, and I think I see where you're coming from.

My first thought on "why" is that so many products today are free to the user, meaning the money is made elsewhere, so the experience presented to the user can be far more imperfect or non-exhaustive than it would have to be if someone were paying for it.

So edge cases can be ignored because really you're looking for a critical mass of eyeballs to sell to advertisers, or usage data to harvest, etc. If a small portion of your users has a bad time or hits errors, well, you get what you pay for, as they say.

And does that kind of pervasiveness now mean that many engineers think this is just the way to go no matter what?


Engineering is about tradeoffs -- are the resources invested in improving a system worth the return on that improvement? We know full well how to build bridges that will last a thousand years; we just choose not to, because it's not an effective use of public funds compared to a fifty-year bridge.

The same applies to software engineering -- each additional edge case you handle increases cost but yields diminishing returns. At some point you have to say "good enough" and ship. The cost of perfection is infinite -- you have finite resources, and a great part of engineering is deciding how to allocate them.
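A toy calculation with made-up numbers shows why the returns diminish: if each successive edge case affects about half as many users as the last but costs roughly the same to handle, coverage flattens while cost grows linearly.

    # Made-up numbers, purely illustrative: the happy path covers 60% of
    # users, and each successive edge case affects half as many users as
    # the last while costing the same constant effort to handle.
    happy_path = 0.60
    edge_cases = [0.20, 0.10, 0.05, 0.025, 0.0125]
    cost_per_case = 1.0  # e.g. engineer-weeks, assumed constant

    coverage, cost = happy_path, 0.0
    for i, frac in enumerate(edge_cases, start=1):
        coverage += frac
        cost += cost_per_case
        print(f"edge case {i}: coverage {coverage:.1%}, total cost {cost:.0f} week(s)")
    # The fifth case buys ~1% of users for the same price the first paid for 20%.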



