Hacker News

It's not "spurious psychological reasons". It is being honest that issues will never, ever meet the bar to be fixed. Pretending otherwise by leaving them open and ranking them in the backlog is a waste of time and attention.


I've seen both types of organizations:

1. The bug tracker is there to document and prioritize the list of bugs that we know about, whether or not they will ever be fixed. In this world, if it's a real issue, it's tracked and kept while it exists in the software, even though it might be trivial, difficult, or just not worth fixing. There's no such thing as closing the bug as "Won't Fix" or "Too Old". Further, there's no expectation that any particular bug is being worked on or will ever be fixed. Teams might run through the bug list periodically to close issues that no longer reproduce.

2. The bug tracker tracks engineering load: the working set of bugs that are worth fixing and have a realistic chance of being fixed. Just because an issue is real doesn't mean it will be fixed. So file the bug, but expect it to be closed if it isn't going to be worked on, or if it gets old and it's obvious it never will be. In this model, every bug in the tracker is expected to be resolved at some point. Teams will run through the bug list periodically to close issues that we've lived with for a long time and will never fix.

I think both are valid, but as a software organization, you need to agree on which model you're using.


A couple of times in the past I've run into an issue marked WONTFIX and then resolved it on my end (because it was, luckily, an open source project). If the ticket had still been open, it would have been trivial to put up a fix; instead it was a lot more annoying (and in one of the cases, I just didn't bother). Sure, maybe the issue is so low priority that it wouldn't even be worth reviewing a fix, and this doesn't apply to closed source projects, but otherwise you're just losing out on other people doing free fixes for you.


It's more fun/creative/CV-worthy to write shiny new features than to fix old problems.


I think a more subtle issue is that fixing old bugs can cause new bugs. It's easier to fix something new, for instance because you understand it better. At some point it can be safest to just not touch something old.

Also, old bugs can get fixed by accident, by the environment changing, or by the whole subsystem getting replaced, and if most of your long tail of bugs is already fixed, triaging it just wastes people's time.


> I think a more subtle issue is that fixing old bugs can cause new bugs.

Maybe it's years of reading The Old New Thing and similar, maybe it's a career spent supporting "enterprise" software, but in my experience fixing old bugs causes new bugs only occasionally. Far more often, fixing an old bug reveals other old bugs that always existed but were never triggered: the software was "bug compatible" with the host OS, it assumed that because old versions never went outside a certain range no newer version ever would, or it straight up tinkered with internal structures it should never have been touching, which were then legitimately changed.

Over my career I have chased down dozens of compatibility issues between software packages my clients used and new versions of their respective operating systems. Literally 100% of those, in the end, were the software vendor doing something that was not only wrong for the new OS but was well documented as wrong for multiple previous releases. A lot of blatant wrongness was unfortunately tolerated for far too long by far too many operating systems, browsers, and other software platforms.

Windows Vista came out in 2006, and every single thing that triggered a UAC prompt was something normal user-level applications were NEVER supposed to be doing on an NT system, and for the most part shouldn't have been doing on a 9x system either. As recently as 2022, a software vendor (I forget the name, but it was a trucking load board app) told me that I needed to disable UAC during installs and upgrades for their software to work properly. In reality, I just needed to mount the appropriate network drive from an admin command prompt so the admin session saw it the same way as the user session. I had been telling the vendor the actual solution for years, but they refused to acknowledge it and fix their installer. That client got bought out, so I haven't seen how it works in 2024, but I'd be shocked if anything had changed.

I have multiple other clients using a popular dental software package where the vendor (famous for suing security researchers) still insists that everyone needs local admin to run it properly. Obviously I'm not an idiot: their users have NEVER had local admin in the decades I've supported this package, but the vendor's support still gets annoyed about it half the time we report problems.
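For the curious, the workaround amounts to re-mapping the share from an elevated prompt, because under UAC the elevated session keeps its own drive-letter table and doesn't see mappings made in the user session. A sketch (the drive letter and share path are placeholders):

```bat
rem Run from an elevated (admin) command prompt.
rem Drives mapped in the normal user session are invisible here,
rem so the installer can't see the user's mapped drive until you re-map it.
rem S: and \\server\share are placeholder names.
net use S: \\server\share /persistent:no
```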

As you might guess, I'm not particularly favorable toward Postel's Law with respect to anything "big picture". I don't necessarily want the XHTML-style "a single missing close tag means the entire document is invalid", but I also don't want bad data or bad software to persist without everyone being aware of its badness. There is a middle ground: issue warnings that make it clear something is wrong and who's at fault, without preventing the rest of the system from working. Call out the broken software aggressively.
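That middle ground can be sketched concretely. A minimal, hypothetical example: a parser for "key=value" lines that loudly names each malformed line instead of either aborting the whole document or silently swallowing the bad input.

```python
import warnings

def parse_records(lines):
    """Parse 'key=value' lines. Malformed lines are skipped, but each one
    triggers a warning naming the offender, so the badness is visible
    without blocking the rest of the input from being processed."""
    records = {}
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue  # blank lines are fine
        if "=" not in line:
            # Call out the broken input, then keep going.
            warnings.warn(f"line {lineno}: malformed record {line!r} (missing '=')")
            continue
        key, _, value = line.partition("=")
        records[key.strip()] = value.strip()
    return records
```

Here `parse_records(["a=1", "oops", "b=2"])` still yields both valid records, while a warning pinpoints line 2 as broken.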

tl;dr: If software B depends on a bug or unenforced boundary in software A, and software A fixing that bug or enforcing that boundary causes software B to stop working, that is 100% software B's problem, and software A should in no way be expected to care. Place the blame where it belongs: software B was broken from the beginning; we just hadn't noticed yet.



