Microsoft’s February security update release delayed to March (technet.microsoft.com)
54 points by amitmittal1993 on Feb 16, 2017 | 50 comments


Since the Flash update is now bundled with Windows Update, Edge users will be stuck with a vulnerable Flash for one more month, wow :/


The "Disable Falsh" button is under Advanced Settings on Edge. Switched it off and I barely notice anything is missing these days.


True, but for the 90% of Edge users who aren't technical, going into advanced settings and disabling Flash is probably beyond their abilities.


Wasn't Edge originally supposed to be updated via the Store?


Flash still exists?

I used to avoid annoyances by not having Flash. Now, thanks to the hard work of WHATWG on HTML5, I'm scrod.



I'd like to know whether Project Zero publishes the security issues it discovers in Google products to everybody before Google has a chance to update the software, or does that policy exist only for other companies? Can we even know?



I don't get it. So because of one issue, they're not going to deliver any other security patch either?


Microsoft stopped distributing individual patches, all updates are now rolled up into one package. Therefore, if one patch causes issues, none of them can be released.

Windows 10 already worked like that; last fall they started doing the same for older OSes.

See: https://blogs.technet.microsoft.com/windowsitpro/2016/08/15/...


All the more reason to call that bundling "dumb".

Seriously, think about all the different issues Windows 10 updates have caused on people's computers. So if one update damages a particular model of laptop, they can't send any of the other updates to that model?

They should be striving towards higher modularity, not lower.



I've followed the links. The argument "if the patch is broken on, for example, Windows 10 but not on Windows 7, they could still release for Windows 7" is wrong: once the patches for Windows 7 are available, any criminal can investigate them and then apply the results to the unpatched (delayed) Windows 10 systems. So it's either all platforms or none.

Now that they distribute a whole month's patches together, they have arguably limited themselves to "all or nothing" within a single platform too.

But regarding all platforms, their only choice was how long to delay everything: one or two weeks, say, or the whole month. If the issue is not trivial, and it probably isn't, the whole-month alternative wins.


One foreign government organisation must be getting hacked this month, but the NSA doesn't have enough time, so they asked MS to delay the patches.


I wish people thought a bit more critically when invoking NSA conspiracies in these matters. If the NSA was the primary cause, wouldn't it be much easier to simply silently exclude those specific unwanted updates from an otherwise regular Patch Tuesday, instead of having Microsoft announce very publicly and vocally that something is 'off' in this patching round?

Not saying the NSA doesn't influence Microsoft or others to withhold patches, but seeing the invisible hand of the NSA everywhere is not helpful for determining and criticizing when they do influence things. People seem to be able to suspend their critical thinking too easily whenever the NSA can be invoked.


It was a joke.


Poe's law I guess then, sorry about that. You didn't make it easy to see, judging by the serious responses you got!


>judging by the serious responses you got!

Yup, I'm surprised too ;) At least it shows how much respect MS and NSA have now.


It did cross my mind as well. Considering the whole Russia drama right now, I wonder if the NSA just asked Microsoft to delay its patches for this month so they don't interrupt the agency's ongoing operations against Russia.

It doesn't help that Microsoft has been moving in a direction where it provides less and less information about what its updates do, while sneaking in dozens of new root certificates at once every now and then.

http://www.theverge.com/2017/1/25/14381174/microsoft-thailan...

http://www.networkworld.com/article/2348143/security/microso...

https://hexatomium.github.io/2016/10/11/unannounced-root-cer...

https://hexatomium.github.io/2015/06/26/ms-very-quietly-adds...


What do you base that on? Faith? :)


Previous leaks :)

Google is in a relationship with the NSA

Yahoo let them tap their cables

Now it's MS's turn


Wow. That is a BIG screw-up if they're having to push back an entire month's security updates across the board.

If anyone from Microsoft reads this: This is why cumulative updates suck, and you shouldn't force them on everyone. :)


It has nothing to do with cumulative updates.

They push once a month because back in the day they pushed whenever they had an update, and enterprises really hated that because it meant that sometimes thousands of computers were all out of commission running updates at the same time.

So MS and the enterprises agreed on a specific day of the month that updates would get pushed, so that the enterprises could plan accordingly as best fit their needs.

Some enterprises just run the updates that night and let everyone know to expect some slowness or downtime, and some of them only let the update run on their testing machines so they can validate the update in their environment before allowing it out to all the other machines.

But the main point is that the updates are predictable because that is what the customers asked for.


> enterprises really hated that because it meant that sometimes thousands of computers were all out of commission running updates at the same time.

If a computer has to go out of commission for a security update, you are doing it wrong (as an OS vendor). Cumulative updates are only a band-aid. The real solution is to make the OS modular and reliable enough to replace/restart components while it is running.


To the downvoters: Red Hat et al. can roll out security updates on running systems, except for kernel updates, though kexec avoids long restarts there.


This is a somewhat unpleasant semi-misconception. You can, indeed, update everything but the kernel without rebooting. In fact, I suspect you could even replace the kernel image and the modules while they're running (but this will certainly break any attempt to load modules at a later point without rebooting first). (Edit: most distributions choose to keep the old image around in case the new one breaks. It's relatively infrequent now, but back in 2003...)

Generally, however, processes don't get restarted after updates and libraries don't get reloaded, so without rebooting, you're still running the unpatched versions.

I don't know if RHEL has a clever way to figure out what needs to be restarted (it's not entirely impossible, thanks to systemd), but pretty much everyone under "et al." has this problem.

See Peter Larsen's comment here: https://lwn.net/Articles/702664/ for a more authoritative take on this; I deserted to BSD land long ago...

tl;dr Rolling out the updates without restarting is one thing; it's done, and Microsoft could do it too, they just take the easy route. Applying them without restarting is a very different and far murkier story.


An important point to me is that usual Linux updates don't cause the next reboot to take longer, or require multiple reboots. You can install the update without rebooting; it only takes full effect on the next reboot.


> Generally, however, processes don't get restarted after updates and libraries don't get reloaded, so without rebooting, you're still running the unpatched versions.

It's possible to determine which processes are running outdated library code. There are tools that hook into the package manager to do this, like https://github.com/liske/needrestart
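
The core trick is simple enough to sketch. Here's a toy, Linux-only version of that check in Python (nothing like what needrestart actually ships, which also handles interpreters and more): a process that still maps a library file marked "(deleted)" is executing pre-update code.

    # Toy "needrestart": list processes still mapping files that have
    # been deleted or replaced on disk (maps entries end in "(deleted)").
    import glob

    for maps in glob.glob("/proc/[0-9]*/maps"):
        pid = maps.split("/")[2]
        try:
            with open(maps) as f:
                stale = {line.split(None, 5)[5].strip()
                         for line in f
                         if line.rstrip().endswith("(deleted)")}
        except OSError:
            continue  # process exited, or we lack permission
        if stale:
            print(pid, sorted(stale))

Run it as root right after an upgrade, and anything it prints is a restart candidate.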


Like I said, it's not entirely impossible (a while ago I was using checkrestart on my Debian machine with pretty good results), but the result is still somewhat clunky. There are a lot of things that aren't so easily checked: changes in interpreted code (needrestart can, fortunately, deal with Java, Perl, Python and Ruby, but I don't know how well, and there's no shortage of packages that rely on old-fashioned bash scripts or -- hey, Polkit! -- JavaScript), changes in configuration files and so on.

This is a reasonable solution if you're running relatively non-critical processes on a single machine. If you really want to avoid downtime, a cluster with rolling updates seems like a solution with far fewer headaches.

Non-critical covers a lot these days, fortunately :-).


Yes, clustering and virtualizing things is pretty much how this has been handled at scale, be it modern web applications, (Open)VMS or mainframes. Architecting the application for this is simpler, and has other advantages, compared to doing the custom integration work required to make it work at the process/application level.


> (Open)VMS

It'll eventually get reinvented :-).


If memory serves, Microsoft cannot actually do it, due to differences in file system semantics. In Windows, it's not possible to replace a file that's in use.


Unfortunately, the last time I had to do system-level Windows programming was such a long time ago that all I remember is a bunch of things starting with hwndsomethingsomething, so I certainly don't remember if this is the case, nor the specifics (if I ever knew them; I was very young and therefore very stupid at the time).

However, the opposite problem - that of (thread-safely) ensuring that you're not stepping on another process's file when you're writing, wiping or moving it - is pretty tedious under Unix. The only way to do it reliably - that I know of - is via flock, which is opt-in and therefore not always an option (e.g. when the other process is a third-party application that doesn't lock its files), and which doesn't work on remote filesystems.
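
To make the opt-in part concrete, here's roughly what every cooperating process has to do (a minimal Python sketch; the file path is made up):

    # Advisory locking with flock(2): it only protects you against
    # other processes that also take the lock. Anyone who skips this
    # dance can still scribble over the file.
    import fcntl

    with open("/tmp/example-state.txt", "a") as f:  # hypothetical file
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until we hold the lock
        try:
            f.write("an update no cooperating writer will interleave\n")
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

A third-party process that never calls flock bypasses all of this, which is exactly the problem described above.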

There is no design decision without at least one compromise hanging on its tail.


DLLs and other components installed system-wide are almost never the same file: updates install new versions of most DLLs into the SxS store, and compatible applications load the newer versions when they are restarted.
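
You can see the side-by-side versions piling up on any Windows box; an illustrative Python snippet (Windows-only, and the exact WinSxS directory names vary from machine to machine):

    # List every side-by-side copy of comctl32.dll in the WinSxS store;
    # its component directories are named after "common-controls".
    import glob, os

    pattern = r"C:\Windows\WinSxS\*common-controls*\comctl32.dll"
    for path in sorted(glob.glob(pattern)):
        print(os.path.getsize(path), path)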


Unless they bring their own version, something that has been an issue at least once.


Your memory is serving you incorrectly.

* https://news.ycombinator.com/item?id=11415366


Unlinking and replacing are not the same thing.
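
Right. And for a file that genuinely can't be swapped live, the time-honoured Windows mechanism is to schedule the replacement for the next boot instead. A rough ctypes sketch of that API (Windows-only, must run elevated, and the paths here are hypothetical):

    # Record the rename in PendingFileRenameOperations; the session
    # manager performs it early on the next boot, before the target
    # file is back in use. Requires administrator rights.
    import ctypes

    MOVEFILE_REPLACE_EXISTING   = 0x1
    MOVEFILE_DELAY_UNTIL_REBOOT = 0x4

    ok = ctypes.windll.kernel32.MoveFileExW(
        r"C:\staging\new.dll",        # hypothetical source
        r"C:\app\inuse.dll",          # hypothetical in-use target
        MOVEFILE_DELAY_UNTIL_REBOOT | MOVEFILE_REPLACE_EXISTING)
    if not ok:
        raise ctypes.WinError()

This is essentially what installers have long done for in-use files, which is part of why Windows updates tend to end in a reboot.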


Applying updates without degrading service has always been somewhat difficult to do, but it can be done in ways similar to the "graceful reloading" that is relatively widely used in web servers. However, it requires custom integration work, which means you need a capable sysadmin and perhaps a developer or two to implement it, so usually you just don't. Clean restarts also avoid bugs that you might introduce there.


Even if that were the case, many enterprises would still want to perform testing before deployment, or require updates to be part of change management.


I could just as easily argue the opposite: if your operations can't handle individual system reboots, you are doing it wrong. Rebooting and even rebuilding entire systems are not bad things in my book. In any case, saying that this needs to be dealt with solely at the OS layer does not make sense to me.

Having said that, it is nice to be able to fully update an OS, including the kernel, while it is running, but ultimately it is just a matter of abstraction levels and marketing.


Please note that some security patches can be installed using hotpatching. Here, have some sample code:

https://www.codeproject.com/Articles/1043089/HotPatching-VER...

The executable needs to be compiled with support for hotpatching, and this doesn't address all possible exploit vectors.


> It has nothing to do with cumulative updates.

Well, let's say you have 10 security flaws to patch. Nine patches are fine, but in one you have detected a show-stopping issue.

If all you can deploy is one cumulative update, then that one issue stops the whole update. If you can deploy patches one by one (all on the same patch day, yes, but still separate from each other), then you can ship nine patches for nine holes and hold the one patch back until next month.

Of course you could also create a new cumulative update containing only 9 patches, but I assume that's more difficult to do and will require more testing.


If anything, the cumulative patch is better, not worse. It's much harder to validate nine individual patches than a single cumulative update.

What happens if patch seven fails? How about six? How about six and seven? There is an exponential number of failure cases with multiple patches versus one.
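
To put a rough number on "exponential": if nine patches can each independently be present or missing, a fleet can land in any subset-of-installed state. A trivial Python check of that count:

    # With 9 independent patches, every subset of {1..9} is a possible
    # machine state; count them all by enumeration.
    from itertools import combinations

    n = 9
    states = sum(1 for k in range(n + 1)
                 for _ in combinations(range(n), k))
    print(states)   # 512 == 2**9; a single cumulative has just 2 states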


What is it about these security fixes that they fail so much? There are of course tough bugs with tough fixes, but the vast majority presumably consists of off-by-ones, simple buffer overflows, use-after-frees, etc.


Because the PC is an open platform, people can modify it in all kinds of ways. A patch can, for example, fail if the user has for one reason or another removed or disabled a system component that is needed to validate or process that patch.

I think the cumulative patches work fairly well; they install much faster than the old ones. I am, however, troubled by the number of security fixes Microsoft has to do every month. This shows that security is mostly an afterthought at Redmond.


Not really. It was in the bad old days, pre-XP SP1. Nowadays it's part of the design process for everything.

I think it's more due to the sheer attack surface.

And for comparison, I think Ubuntu updates almost as much. It just reboots less due to architectural differences.


I think Ubuntu (and almost all other Linux distros) suffers from the same problem: there is no real security coordination during development. Compare that to OpenBSD, which very rarely needs to do emergency security updates.

With that said, Canonical updates the whole system and all applications; Microsoft updates only the core system, and even Office is not updated unless you manually opt in.


Then patch seven fails, and it is re-offered. Pre-cumulative updates, it was very rare for there to be a prerequisite/dependency list each month.


Indeed. A single bad update that breaks everything can also cost an enterprise the ability to get security updates in general. About a year ago we had to choose between having current security updates and being able to print, because Microsoft screwed something up in an update.


As an enterprise IT admin who has handled Windows update deployment for over six years... you missed the point.

The updates should all ship on Patch Tuesday for the reasons above. The problem is that "an issue" has interfered with "all updates" because "all updates" is now "one update".

For instance, one of the things that most articles did not cover about this issue is that Adobe released a new version of Flash Player on Tuesday to coincide with Patch Tuesday, when Microsoft would've released theirs as well. IE and Edge get their Flash Player updates from Microsoft through Windows Update now.

However, that didn't happen this month, because the cumulative update broke. And so now people can look at the Chrome and Firefox Flash Player updates and exploit the remote code execution vulnerabilities in IE and Edge, which are now at risk.

In other cases, Microsoft has released updates which broke things like network printing on Windows 10. Enterprises had to choose between not being able to print and not getting security updates until Microsoft finally fixed it two months later. Without the ability to pick and choose updates, when an update has a problem or compatibility issue, companies are going to end up just stopping updates altogether, which is bad for everyone. In the past, we'd just hold back the problematic update, but with cumulatives, that's not possible.



