I hate this. Delaying real bugfixes to achieve some nebulous poorly defined security benefit is just bad engineering.


The point is to apply a cooldown to your "dumb" and unaccountable automation, not to your own professional judgment as an engineer.

If there's a bugfix or security patch that applies to how your application uses the dependency, then you review the changes, manually update your version if you feel comfortable with those changes, and accept responsibility for the intervention if it turns out you made a mistake and rushed in some malicious code.

Meanwhile, most of the time, most changes pushed to dependencies are not even in the execution path of any given application that integrates with them, and so don't need to be rushed in. And most others are "fixes" for issues that were apparently not presenting an imminent test failure or support crisis for your users and don't warrant being rushed in.

There's not really a downside here, for any software that's actually being actively maintained by a responsible engineer.
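
To make "cooldown for the dumb automation" concrete, the mechanics are trivial; here's a minimal Python sketch against the npm registry's public "time" metadata (the package name and the 7-day window are placeholders, and tools like Renovate expose this kind of rule as configuration rather than requiring you to write it yourself):

    import json
    import urllib.request
    from datetime import datetime, timedelta, timezone

    def newest_cooled_version(package: str, cooldown_days: int = 7):
        """Newest version of `package` (by publish date) that is at least
        `cooldown_days` old, per the npm registry's `time` metadata."""
        with urllib.request.urlopen(f"https://registry.npmjs.org/{package}") as resp:
            meta = json.load(resp)

        cutoff = datetime.now(timezone.utc) - timedelta(days=cooldown_days)
        candidates = []
        for version, published in meta.get("time", {}).items():
            if version in ("created", "modified"):  # bookkeeping keys, not versions
                continue
            published_at = datetime.fromisoformat(published.replace("Z", "+00:00"))
            if published_at <= cutoff:
                candidates.append((published_at, version))

        # Most recently published release that has already survived the cooldown.
        return max(candidates)[1] if candidates else None

    if __name__ == "__main__":
        print(newest_cooled_version("left-pad", cooldown_days=7))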


You're not thinking about the system dependencies.

> Meanwhile, most of the time, most changes pushed to dependencies are not even in the execution path of any given application that integrates with them

Sorry, this is really ignorant. You don't appreciate how much churn there is in things like the kernel and glibc, even in stable branches.


> You're not thinking about the system dependencies.

You're correct, because it's completely neurotic to worry about phantom bugs that have never actually shown themselves but must absolutely, positively be resolved as soon as a candidate fix has been pushed.

If there's a zero day vulnerability that affects your system, which is a rare but real thing, you can be notified and bypass a cooldown system.

Otherwise, you've presumably either adapted your workflow to work around a bug or you never even recognized one was there. Either way, waiting an extra <cooldown> before applying a fix isn't going to harm you, but it will dampen the much more dramatic risk of instability and supply chain vulnerabilities associated with being on the bleeding edge.
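
To sketch what that notification-driven bypass might look like, here's a Python example using the public OSV.dev query API (the package name and version below are made-up examples, and this isn't any particular tool's implementation):

    import json
    import urllib.request

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"

    def known_vulnerabilities(package: str, version: str, ecosystem: str = "npm"):
        """Advisory IDs that OSV.dev reports for this exact package version."""
        query = {"version": version,
                 "package": {"name": package, "ecosystem": ecosystem}}
        req = urllib.request.Request(
            OSV_QUERY_URL,
            data=json.dumps(query).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        return [v["id"] for v in result.get("vulns", [])]

    def should_bypass_cooldown(package: str, installed_version: str) -> bool:
        """Skip the cooldown only when the version we are *currently running*
        has a published advisory, i.e. waiting keeps us exposed."""
        return bool(known_vulnerabilities(package, installed_version))

    if __name__ == "__main__":
        # Hypothetical pinned dependency; bypass the cooldown if it's affected.
        print(should_bypass_cooldown("lodash", "4.17.20"))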


> You're correct, because it's completely neurotic to worry about phantom bugs that have never actually shown themselves but must absolutely, positively be resolved as soon as a candidate fix has been pushed.

Well, I've made a whole career out of fixing bugs like that. Just because you don't see them doesn't mean they don't exist.

It is shockingly common to see systems bugs that don't trigger for a long time by luck, and then suddenly trigger out of the blue everywhere at once. Typically it's caused by innocuous changes in unrelated code, which is what makes it so nefarious.

The most recent example I can think of was an uninitialized variable in some kernel code: hundreds of devices ran that code reliably for a year, but an innocuous change in the userland application made the device crash on startup almost 100% of the time.

The fix had been in stable for months, they just hadn't bothered to upgrade. If they had upgraded, they'd have never known the bug existed :)

I can tell dozens of stories like that, which is why I feel so strongly about this.


If your internal process is willing to ship and deploy upgrades--whether to your code or that of a third party--without even enough minimal testing to notice that they cause a crash almost 100% of the time, you need the advice to slow down your updates more than anyone...


Obviously, they caught the bug and didn't deploy it.

The point is that a change to a completely unrelated component caused a latent bug to make the device unusable, which ended up delaying a release for weeks and forced them to pay me a bunch of money to fix it for them.

If they'd been applying upgrades, they would have never even known it existed, and all that trouble would have been avoided.

I mean, I'm sort of working against my own interest here: arguably I should want people to delay upgrades so I can get paid to backport things for them :)


Do you upgrade all your dependencies every day? If not, then there's no real difference in upgrading to whatever was current 7 days ago.


Unattended upgrades for server installations are very common. For instance, on Ubuntu/Debian this runs daily by default (source: https://documentation.ubuntu.com/server/how-to/software/auto...). No cooldown implemented, AFAIK.

Of course we're talking about OS security upgrades here, not library dependencies. But the attack vector is similar.


I upgrade all dependencies every time I deploy anything. If you don't, a zero day is going to bite you in the ass: that's the world we now live in.

If upgrading like that scares you, your automated testing isn't good enough.

On average, the most bug free Linux experience is to run the latest version of everything. I wasted much more time backporting bugfixes before I started doing that, than I have spent on new bugs since.


> zero day is going to bite you in the ass

Maybe your codebase truly is that riddled with flaws, but:

1) If so, updating will not save you from zero days, only from whatever bugs the developers have found.

2) Most updates are not zero day patches. They are as likely to (unintentionally) introduce zero days as they are to patch them.

3) In the case where a real issue is found, I can't imagine it's hard to use the aforementioned security vendors and their recommendations to force updates outside of a cooldown period.


My codebase runs on top of the same millions of lines of decades old system code that yours does. You don't seem to appreciate that :)


If you mean operating system code, that is generally opaque, and not quite what the article is talking about (you don't use a dependency manager to install reviewed code when performing operating system updates - you can, and that is fantastic for you, but I imagine that's not what you mean).

Although, even for operating systems, cooldown periods on patches are not only a good thing, but something that e.g. a large org that can't afford downtime will employ (when managing Windows or Linux software patches, say). The reasoning is the same - updates have just as much chance to introduce bugs as to fix them, and although you hope your OS vendor does adequate testing, especially when you cannot audit their code, you have to wait so that either some 3rd party security vendor can assess system safety, or you are able to perform adequate testing yourself.


Upgrading to a new version can also introduce new exploits, and no amount of testing will find those.

Some of these can be short-lived, existing only in one minor patch and promptly fixed in the next, but you'll catch them if you blindly and constantly upgrade to the latest.

There are always risks either way, but the latest version doesn't mean the "best" version: mistakes, errors, performance degradations, etc. all happen.


Personally, I choose to aggressively upgrade and engage with upstreams when I find problems, not to sit around waiting and hoping somebody will notice the bugs and fix them before they affect me :)


That sounds incredibly stressful.


> I upgrade all dependencies every time I deploy anything. If you don't, a zero day is going to bite you in the ass: that's the world we now live in.

I think you're using a different definition of zero day than the standard one. A zero day vulnerability is, by definition, not going to have a patch you can get with an update.


Zero days often get fixed sooner than seven days. If you wait seven days, you're pointlessly vulnerable.


Only if you already upgraded to the one with the bug in it, and then only if you ignore "this patch is actually different: read this notice and deploy it immediately". The argument is not "never update quickly": it is don't routinely deploy updates constantly that are not known to be high priority fixes.


> The argument is not "never update quickly": it is don't routinely deploy updates constantly that are not known to be high priority fixes.

Yes. I'm saying that's wrong.

The default should always be to upgrade to new upstream releases immediately. Only in exceptional cases should things be held back.


But that isn't what you said? ;P "If you wait seven days, you're pointlessly vulnerable." <- this is clearly a straw man, as no one is saying you'd wait seven days to deploy THAT patch... but, if some new configuration file feature is added, or it is ported to a new architecture you aren't using--aka, the 99.99% of patches--you don't deploy THOSE patches for a while (and I'd argue seven days is way way too short) until you get a feel that it isn't a supply chain attack (or what will become a zero day). Every now and then, someone tries to fix a serious bug... most of the time, you are just rolling the die on adding a new bug that someone can quickly find and use to exploit you.


You're completely missing the point.

> this is clearly a straw man, as no one is saying you'd wait seven days to deploy THAT patch...

The policy being proposed is that upgrades are delayed. So in a company where that policy was enforced, I would be required to request an exception to the policy for your hypothetical patch.

That's unacceptable for me. That's requiring me to do extra work for a nebulous poorly quantified security "benefit". It's a waste of my time and energy.

I'm saying the whole policy is unjustified and should never be applied by default. At all. It's stupid. It's harmful for zero demonstrable benefit.

I'm being blunt because you seem determined to somehow misconstrue what I'm saying as a nitpicky argument. I'm saying the whole policy is terrible and stupid. If it were forced on me by an employer, I would quit. Seriously.


Known vulnerabilities often get fixed sooner than seven days.

You will not know how long it takes to get a zero day fixed, because the "zero day" window ends when the vendor is informed:

> "A zero day vulnerability refers to an exploitable bug in software that is unknown to the vendor."


Renovate (a Dependabot equivalent, I think) creates PRs; I usually walk through them every morning or when there's a bit of downtime. I'm playing with the idea of automerging patches and maybe even minor updates, but up until now it hasn't been that hard to keep up.
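
The automerge decision mostly comes down to classifying the bump; here's a rough Python sketch of the shape of that policy, assuming plain MAJOR.MINOR.PATCH versions with no pre-release tags (Renovate can express this kind of rule natively in its configuration, so this is just for illustration):

    def update_type(old: str, new: str) -> str:
        """Classify a bump as 'major', 'minor' or 'patch',
        assuming plain MAJOR.MINOR.PATCH versions (no pre-release tags)."""
        old_major, old_minor, _ = (int(p) for p in old.split(".")[:3])
        new_major, new_minor, _ = (int(p) for p in new.split(".")[:3])
        if new_major != old_major:
            return "major"
        if new_minor != old_minor:
            return "minor"
        return "patch"

    def can_automerge(old: str, new: str, allow=("patch",)) -> bool:
        """Automerge only the update types you've decided to trust."""
        return update_type(old, new) in allow

    print(can_automerge("1.4.2", "1.4.3"))                            # True: patch bump
    print(can_automerge("1.4.2", "1.5.0", allow=("patch", "minor")))  # True if minors are trusted
    print(can_automerge("1.4.2", "2.0.0"))                            # False: major bump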


Your CI/CD might be set up to upgrade all your dependencies on every build.


I’ve seen a lot of CI/CD setups and I’ve never seen that. If that were common practice, it would certainly simplify the package manager, since there would be no need for lockfiles!


I do see some CI running without lockfiles, and there's still a contingent that believes that libraries should never commit their lockfiles. It's a reasonably good idea to _test_ a configuration without the lockfile, since any user of your dependency is using _their_ lockfile that their local solver came up with, not yours, but this ought to be something you'd do alongside the tests using the lockfile. So locking down the CI environment is a good idea for that and many other reasons.

Realistically, no one does full side-by-side tests with and without lockfiles, but it's a good idea to at least do a smoke test or two that way.
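
A smoke test like that doesn't need much machinery; here's a rough Python sketch assuming an npm project (the `npm ci` and `--no-package-lock` flags are standard npm; everything else is made up for illustration):

    import shutil
    import subprocess
    import tempfile
    from pathlib import Path

    def run_suite(project: Path, install_cmd) -> bool:
        """Copy the project to a throwaway dir, install deps with `install_cmd`,
        then run the test suite; True means the suite passed."""
        with tempfile.TemporaryDirectory() as tmp:
            workdir = Path(tmp) / "checkout"
            shutil.copytree(project, workdir,
                            ignore=shutil.ignore_patterns("node_modules", ".git"))
            subprocess.run(install_cmd, cwd=workdir, check=True)
            return subprocess.run(["npm", "test"], cwd=workdir).returncode == 0

    if __name__ == "__main__":
        project = Path(".")
        results = {
            # Exactly what the lockfile pins: your CI's usual configuration.
            "locked": run_suite(project, ["npm", "ci"]),
            # Fresh solve that ignores the lockfile: roughly what a downstream
            # consumer's own solver would hand them.
            "unlocked": run_suite(project, ["npm", "install", "--no-package-lock"]),
        }
        print(results)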


I didn't necessarily say they were good CI/CD practices.



