The platform feature in question was normally cheap and was only made artificially expensive by Defender intercepting calls to it and blocking until analysis was performed. I don't think it's the Firefox team's responsibility to be aware of, and take into account, arbitrary software intercepting system calls.
> I don't think it's the Firefox team's responsibility to be aware of, and take into account, arbitrary software intercepting system calls.
One of the first, hard lessons I had to learn about web development (like, stare-at-a-wall-and-consider-my-career hard) is that web development is way more about network effects than application architecture.
Real people run systems with real configurations, and when you're targeting "the public" as your userbase you must account for that. And Mozilla knows this: if you went into the source code (circa 2009, YMMV) and looked through the initialization and boot-up logic, you would find places where the system used heuristics to figure out whether some extensions had been installed in odd places instead of the "Extensions" directory (because the tool had been installed before Firefox), and hot-patched paths to pull in that component. Because if a user installs Flash and then installs Firefox and Flash doesn't work in Firefox, it's not Flash that's broken... It's Firefox.
It doesn't matter if the bug is in "Microsoft's code" or "Mozilla's code." That's unimportant. If you're a Mozilla engineer, all that matters is whether this bug would cause a user to get pissed off and uninstall Firefox.
I completely agree with you and have been on the other side of this too, having worked on a native enterprise app running on various macOS, Windows, iOS, and Android versions. Customers don't care if you have a great explanation for why stuff in your app doesn't work. That being said, it's completely unreasonable to expect developers to anticipate that something working well today (writing many files) will break tomorrow (due to Defender heuristics changing) and to proactively optimize against it. Mozilla reacting by both reporting the bug to Microsoft and optimizing to work around the problem is really the best you can do.
"They shouldn't have written so many files in the first place" is not a valid preventative strategy, but a one way road to premature optimization hell.
It's the application owner's responsibility to make the app run as well as it can on a given platform. Platforms are messy, but you have to deal with it. You should escalate to the platform owner, sure, but you can't rely on them fixing it in any reasonable time frame.
I worked on a desktop<->cloud file sync app. On Windows, only one badge can show up on a file's icon in Explorer. If there are multiple apps trying to set the badge, who wins? Well, it depends on the lexicographical order of the registrants' names. So what did we do? We added some spaces to our registration name to make it sort first. Good for the user, as best as we can know, since the user or their admin had to install the app to get these badges in the first place. And they were useful ones too: whether a file was synced or not. We tried our best, and escalated.
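For the curious, that registration lives in the registry, and Explorer's pick falls out of the subkey sort order (it only honors the first ~15 overlay identifiers). A minimal sketch of the trick, with a placeholder key name and CLSID rather than any real product's:

```cpp
// Sketch only: registers a (fake) icon overlay identifier whose key name
// starts with spaces, so it sorts ahead of other registrants in
// Explorer's lexicographical ordering of ShellIconOverlayIdentifiers.
#include <windows.h>
#include <cwchar>

int main() {
    // Leading spaces make "  MySyncOverlay" sort before e.g. "OneDrive...".
    const wchar_t* keyPath =
        L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Explorer\\"
        L"ShellIconOverlayIdentifiers\\  MySyncOverlay";
    // Placeholder CLSID; a real handler would register a COM object
    // implementing IShellIconOverlayIdentifier under this ID.
    const wchar_t* clsid = L"{00000000-0000-0000-0000-000000000000}";

    HKEY key;
    if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, keyPath, 0, nullptr, 0,
                        KEY_SET_VALUE, nullptr, &key, nullptr) != ERROR_SUCCESS)
        return 1;  // typically requires admin rights
    RegSetValueExW(key, nullptr, 0, REG_SZ,
                   reinterpret_cast<const BYTE*>(clsid),
                   static_cast<DWORD>((wcslen(clsid) + 1) * sizeof(wchar_t)));
    RegCloseKey(key);
    return 0;
}
```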
They also don't say anything about "sane" usage, and while I don't have an MBA, I'm pretty sure they don't teach anything about `VirtualProtect` ratios when doing competitor analysis.
One possibility is that the Chrome team's implementation was more efficient by luck, or because they invested the resources to identify the performance characteristics of this function call, whereas the Firefox team missed it. I don't think "Chrome has more development resources than Firefox" is news to anybody.
Did you read the bug report? This is literally about writing to files in a temp folder. Surely you can optimize that, but you should also be able to assume that it does not use excessive amounts of CPU on a modern operating system.
Why is Search Indexer constantly rescanning the same files? Can they not cache the results from the previous scan? That and OneDrive are constantly making my work laptop scream.
Come on, anyone who has even unzipped Linux-centric stuff on Windows knows how slow individual file operations are compared to Mac or Linux.
It's very common knowledge that on Windows you will get terrible performance if you have many, many small files.
I don't know why Microsoft doesn't fix that. Maybe they can't for compatibility reasons or something. But that's the way it is, and any software that wants to run well on Windows needs to deal with it by using fewer bigger files.
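In practice, "fewer, bigger files" usually means packing many small records into one file, so the per-file open/close (and the per-file scan by Defender or the indexer) happens once instead of N times. A minimal sketch of the idea; the file name and length-prefixed record format are invented for illustration:

```cpp
// Sketch: append all records to a single pack file instead of writing
// one file per record, trading N open/close round trips for one.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> records = {"alpha", "beta", "gamma"};

    FILE* pack = std::fopen("cache.pack", "wb");  // one open for everything
    if (!pack) return 1;
    for (const std::string& r : records) {
        // Length prefix so the pack can be parsed back into records later.
        unsigned len = static_cast<unsigned>(r.size());
        std::fwrite(&len, sizeof(len), 1, pack);
        std::fwrite(r.data(), 1, r.size(), pack);
    }
    std::fclose(pack);  // one close, one AV/indexer scan
    return 0;
}
```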
Yes, I have read the bug report. It mentions that Firefox writes wayyyyy too much in the temp folder. It also mentions that the team should fix this behaviour independently of the fact that some of those calls are more costly than they should be because of the bug in Defender:
> With a standard Firefox configuration, the amount of calls to VirtualProtect is currently very high, and that is what explains the high CPU usage with Firefox. The information that the most impactful event originates from calls to VirtualProtect was forwarded to us by Microsoft, and I confirm it. In Firefox, disabling JIT makes MsMpEng.exe behave much more reasonably, as JIT engines are the source of the vast majority of calls to VirtualProtect.
> On Firefox's side, independently from the issue mentioned above, we should not consider that calls to VirtualProtect are cheap. We should look for opportunities to group multiple calls to VirtualProtect together, if possible. Even after the performance issue will be mitigated, each call to VirtualProtect will still trigger some amount of computation in MsMpEng.exe (or third-party AV software); the computation will just be more reasonably expensive.
> It mentions that Firefox writes wayyyyy too much in the temp folder.
> > the amount of calls to VirtualProtect is currently very high
Calling VirtualProtect is not writing to the temp folder. The VirtualProtect call changes the permissions of in-memory pages. It should be an inexpensive system call (other than the cost of TLB flushes and/or shootdowns).
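For concreteness, the batching the bug report suggests looks roughly like the following. This is a minimal sketch, not Firefox's actual JIT code; the 16-stub layout is made up for illustration:

```cpp
// Sketch: a hypothetical JIT emits 16 small stubs into one contiguous
// allocation, then flips the whole region from writable to executable
// with a single VirtualProtect call instead of one call per stub. Every
// VirtualProtect triggers inspection work in MsMpEng.exe, so batching
// reduces that overhead proportionally.
#include <windows.h>
#include <cstdio>

int main() {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const SIZE_T stubCount = 16;
    const SIZE_T size = stubCount * si.dwPageSize;

    // One writable region for all stubs.
    auto* region = static_cast<unsigned char*>(
        VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE));
    if (!region) return 1;

    // "Emit" the stubs: a single x86 RET opcode each, one per page.
    for (SIZE_T i = 0; i < stubCount; ++i)
        region[i * si.dwPageSize] = 0xC3;

    // Costly pattern: one protection change per stub (16 inspections):
    //   for (SIZE_T i = 0; i < stubCount; ++i)
    //       VirtualProtect(region + i * si.dwPageSize, si.dwPageSize,
    //                      PAGE_EXECUTE_READ, &oldProt);
    // Cheaper pattern: flip the whole span in a single call.
    DWORD oldProt;
    if (!VirtualProtect(region, size, PAGE_EXECUTE_READ, &oldProt)) return 1;

    reinterpret_cast<void (*)()>(region)();  // run the first stub
    std::puts("stub executed");
    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}
```

The point is that what Defender's hook amplifies is the number of VirtualProtect calls, not the number of bytes protected.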
- we implement a feature, test it thoroughly for functional and non-functional requirements
- when we are happy, we release it
I don't see myself being responsible for a third-party software company coming along years later and introducing a bug in code that injects itself between my software and the operating system, and that users of my software happen to install at some point.
Maybe you're not responsible, but if someone says "something changed in the OS and your previous method is now adding substantial overhead", you can either (a) report the change to the OS vendor and mitigate, or (b) report the change to the OS vendor and ignore the problem for years. It sounds like Mozilla chose (b), for whatever reason.
As a software developer, I've had to work around many, many bugs in OSes, especially when dealing with updates to Android. It's just part of the job.
The OS isn't some random third-party software, it's one of your dependencies. Your software doesn't work without the OS, and if it also doesn't work with the OS, it just plain doesn't work.
That's really not a tenable mindset to be taking these days. With how much Windows has become a constantly-moving target rather than a stable platform, you need to regard it first and foremost as your adversary, whether you are developing against it or are simply an end user. And the days of being able to thoroughly test against every relevant version of the OS are long gone; Microsoft has ensured your QA will be Sisyphean.
If your users are on Windows, you have to be where they are. Moving target, wonky API, warts, and all.
Yes, it's Sisyphean. That's why my shop had a whole room stuffed with parallel Windows installs. We couldn't afford to have our users be the first ones to notice Microsoft pulled the rug out from under us again.
Windows Defender isn't "arbitrary software" - it's built into the OS and enabled by default. To anyone building an application for Windows, it should be considered part of the platform.