I think it goes beyond just this particular attempt. I bet they were looking for other targets, targets that are widely used but maintained by resource-deficient projects. They wanted to do maximum damage with the fewest possible supply chain attacks.
The fact that many projects are now searching deep and wide through their commit histories, looking at just who they're taking code from, beginning to develop frameworks for attack mitigation and remediation... an entire type of previously very promising attack is completely burned. This has been a massive defeat. And it happened all by chance.
>an entire type of previously very promising attack is completely burned.
I fear it's not just the attack that is burned. If new contributors have to be distrusted and/or go through some sort of vetting that isn't based on the merit of their contributions, that is a terrible blow to the entire open source movement.
The threshold for young coders without a great deal of history and without a network of contacts to become contributors to important open source projects has just gone up massively.
Nah. Hardly going to change anything for the worse, only for the better.
The bar is still trivial. The worst part is that it will be difficult to be anonymous. There are a lot of valid, even life-and-death reasons for anonymity, and that will now be much harder.
So the loss is all the contributors who don't dare let their government or employer see them helping the wrong projects, or in some cases don't want to disclose that they are any good at coding or particularly interested in security, etc.
I bet a bunch of employers don’t want any unauthorized contributions to open source. For governments it seems much more niche: only a few specific projects would raise red flags.
The average business still isn’t going to have any clue who GitHub user coderdude57 is in real life. But coderdude57 may be forced to hop on a video meeting to prove who he is to a project lead before his code is accepted.
> The main vector of attack was patching the code as part of running tests.
No, it was not.
The test files were used as carriers for the bulk of the malicious code, but running the tests did nothing malicious; the supposedly corrupted test file really did behave as just another corrupt file when the tests ran. What extracted and executed the code hidden in these test files was a bit of code injected into the configuration step, which modified the compilation steps to extract and execute more code hidden in these test files, which in turn extracted and injected the backdoor code. All of this happened even if the tests were never run.
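For anyone who wants the shape of it without wading through the full writeups, here's a deliberately simplified Python sketch of that staged-extraction pattern. The real chain was obfuscated shell hidden in an m4 macro and the generated makefiles; the marker and extraction scheme below are invented for illustration, and only the carrier file name is real:

```python
# Simplified sketch of the staged extraction; NOT the real xz code, which
# was obfuscated shell in an m4 macro and the makefiles, not Python.
# The marker and extraction scheme are invented; only the carrier name is real.
import subprocess

CARRIER = "tests/files/bad-3-corrupt_lzma2.xz"  # ships as a "corrupt" test input

def extract_between(path: str, marker: bytes) -> bytes:
    """Return the bytes hidden between two copies of `marker` in the carrier."""
    data = open(path, "rb").read()
    start = data.index(marker) + len(marker)
    return data[start:data.index(marker, start)]

# Stage 1: code injected into the configure step pulls a small script out of
# the test file and runs it -- no test suite involved at any point.
stage1 = extract_between(CARRIER, b"####hidden####")
# Stage 2: that script modifies the compilation steps so that *they* extract
# and link the actual backdoor object from the same carrier files.
subprocess.run(["sh"], input=stage1, check=True)
```

The point being: the test files are inert data until the build machinery itself has been subverted to go digging in them, which is why running or not running the tests changes nothing.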
> I would expect to see more projects try to protect against this in general by attempting to separate the building and testing procedures.
The suggestions I've seen along this line of thinking were not only to separate the building and testing steps (so that the testing steps cannot affect the output of the build step), but also to remove all test files before doing the build, adding them back for the testing step. The important part is removing all binary test files before doing the build; having two separate steps is only a side effect of not being able to test without putting the test files back.
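A minimal sketch of that flow, assuming a hypothetical project that keeps all of its binary test data under tests/files/ and builds with make (a real version would live in CI config or the build system itself):

```python
# Minimal sketch of "strip test files before the build, restore them for the
# tests". The tests/files/ layout and make targets are assumptions about a
# hypothetical project, not any particular real one.
import shutil
import subprocess
import tempfile
from pathlib import Path

TEST_DATA = Path("tests/files")

def build_then_test() -> None:
    with tempfile.TemporaryDirectory() as stash:
        quarantined = Path(stash) / "files"
        shutil.move(str(TEST_DATA), str(quarantined))      # 1. build can't see test data
        try:
            subprocess.run(["make"], check=True)           # 2. do the build without it
        finally:
            shutil.move(str(quarantined), str(TEST_DATA))  # 3. put it back
    subprocess.run(["make", "check"], check=True)          # 4. test as usual

if __name__ == "__main__":
    build_then_test()
```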
I don't think this approach is going to be adopted, as it's too much work for little gain; people are focusing more on making sure all files in the tarball used for the build either come from the corresponding tag in the repository, or can be proven to have been deterministically generated from files found in that tag. That check would have either flagged the key file added to the tarball which started the extraction and injection steps, or excluded it from the build entirely.
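The membership half of that check is easy enough to sketch; proving that generated files were deterministically produced from the tag is the hard part, which I've stubbed out below as a hypothetical allowlist:

```python
# Sketch of the tarball-vs-tag membership check. Assumes the repository (with
# the release tag) is available locally; the ALLOWLIST of legitimately
# generated files is a hypothetical stand-in for the hard part, namely
# proving those files were deterministically produced from the tag.
import subprocess
import tarfile

ALLOWLIST = {"configure", "Makefile.in"}  # hypothetical generated-file names

def files_in_tag(tag: str) -> set[str]:
    """All paths tracked by git at the given tag."""
    out = subprocess.run(["git", "ls-tree", "-r", "--name-only", tag],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.splitlines())

def unexplained_files(tarball: str, tag: str) -> list[str]:
    """Files present in the release tarball but absent from the tag."""
    tracked = files_in_tag(tag)
    with tarfile.open(tarball) as tf:
        # Drop the leading "project-x.y.z/" component from archive paths.
        names = {m.name.split("/", 1)[1] for m in tf.getmembers()
                 if m.isfile() and "/" in m.name}
    return sorted(n for n in names if n not in tracked and n not in ALLOWLIST)

# Illustrative invocation (the backdoored xz release, for concreteness):
print(unexplained_files("xz-5.6.1.tar.gz", "v5.6.1"))
```

Against the actual backdoored release, something like this would have surfaced the injected m4 file as not-in-the-tag, unless it was blanket-allowlisted as an autotools artifact, which is exactly why the "proven deterministically generated" half matters.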