All of this complexity that's being added to lock a system down, purportedly in the name of "security", is actually being done for control by the vendor, at the expense of control by the user. Microsoft wants an enforceable walled garden just like Apple's.
The increased attack surface that all this brings, implemented in a way that's opaque and non-removable, makes me incredibly nervous. You think cars and IoT devices are insecure? We're going to end up with that level of danger baked into every PC on the planet.
I thought this was initially FUD, but this in particular sounds bad. TPMs really do enhance security, since they massively increase the security of even weak passwords (which 90-95% of them are). However, any security feature should be zero-cost: if you don't use it or don't want it, it shouldn't restrict what you can do.
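A back-of-the-envelope sketch of why (the guess rates here are illustrative assumptions, not any particular TPM's spec): an attacker with a leaked password hash can guess offline at GPU speed, but a key sealed in a TPM forces every guess through the chip's anti-hammering lockout.

```python
# Toy comparison: offline brute force vs. TPM-gated guessing.
# Both rates are assumptions for illustration only.
OFFLINE_GUESSES_PER_SEC = 10_000_000_000  # GPU rig vs. a fast hash
TPM_GUESSES_PER_SEC = 0.5                 # say, one attempt per 2 s under lockout

weak_password_space = 95 ** 6             # 6 printable ASCII chars, ~7.4e11

offline_secs = weak_password_space / OFFLINE_GUESSES_PER_SEC
tpm_years = weak_password_space / TPM_GUESSES_PER_SEC / (86_400 * 365)

print(f"offline brute force: ~{offline_secs:.0f} seconds")  # about a minute
print(f"TPM-gated guessing:  ~{tpm_years:,.0f} years")      # ~46,600 years
```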
Secure Boot, at least, can be disabled most of the time, and that is one of the most locked-down features.
Random tangent: I really wish we had a better way to customize code signing and verification. For very sensitive environments, you should be able to trust only certain certs/roots for particular binaries. I'd like to enforce that some particular Python script has been signed by one of *my* CI servers. Obviously this is possible, but I think you would need to roll your own code, something like the sketch below.
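A minimal sketch of that "roll your own" check, assuming the `cryptography` package and a pinned RSA public key exported from one of your CI servers (the file names here are hypothetical):

```python
# Refuse to run a Python script unless a detached signature verifies
# against a pinned CI public key. Sketch only; a real deployment would
# also pin the hash algorithm and handle key rotation.
from pathlib import Path
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

PINNED_CI_KEY = Path("ci-server.pub").read_bytes()  # only *this* key is trusted

def run_if_signed(script: Path, sig: Path) -> None:
    public_key = serialization.load_pem_public_key(PINNED_CI_KEY)
    public_key.verify(                    # raises InvalidSignature on mismatch
        sig.read_bytes(),
        script.read_bytes(),
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    exec(compile(script.read_text(), str(script), "exec"))

run_if_signed(Path("deploy.py"), Path("deploy.py.sig"))
```

The crypto is the easy part; the pain is wiring a check like this into every path a script can be launched from.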
Microsoft wants OEMs that manufacture Windows-on-ARM devices to not allow users to disable Secure Boot, and to not allow users to add CAs or trust individual certificates. In essence, they want Secure Boot to not be subvertible.
This is both good and bad: for the generic end user, not being able to 'accidentally' disable the root of trust via Secure Boot, and not having 'tech savvy nephews' try to 'help' them (without foreseeing the long-term consequences), is a net win. It's going to be pretty hard to rootkit a system that has a secured root of trust. On the other hand, the device is now worthless the second the OEM, ODM, or Microsoft decides they don't want to support it. They also completely control all features, so even if the hardware were capable of doing something, you simply aren't going to be able to do it.
There is a third aspect: it is very hard to inspect the device as a researcher. Knowing that closed-chassis debugging is a point of vulnerability, it would be great if it were still possible to boot a research OS and check the device. At this point, you can only do that with reverse-engineered drivers, a shim loader, and the hope that the CA in the firmware is up to date enough to contain a trust relation for the intermediate CA used to sign the shim certificate...
This only applies to the (dead) Windows RT and Windows Phone/10 Mobile platforms.
Windows on Arm64 devices are regular Windows desktop devices, just running on a different CPU architecture. As such, they share the same security policies.
You can disable UEFI Secure Boot just fine on a Surface Pro X, and use custom keys there too. This also applies for devices from other OEMs.
This is useful info - I didn't know that they'd eased up now, and assumed they were still playing the locked-down game with at least their own arm devices.
So ... theoretically I could put a linux distro or android build on a Surface Pro X?
Cool. I'm not exactly interested in getting one, there are enough devices in my life at the moment, but it's definitely an improvement that Microsoft is allowing this on its own devices, rather than mandating that the entire ecosystem stay locked down, as it previously attempted.
> It is decreed that ARM systems not be user deactivatable.
There are certain consumer Android ARM tablets/phones that, for whatever reason, do not have secure boot enabled (and which are not devkits from AliExpress). So not all of them. But they are indeed quite rare, and secure boot is one of the reasons why you can't just replace the Android bootloader with something else (you need the appropriate signing keys to do so).
On Qualcomm devices, secure boot, once enabled, blows an irreversible hardware fuse. You then need the appropriate signing keys and software to sign images and you can, I believe, continue to blow fuses to overwrite old signing keys should you want to invalidate them. Thanks to the fuses, you can never deactivate secure boot (without, like, replacing the SoC with a miraculous solder job, or whatever). It's really irreversible.
> continue to blow fuses to overwrite old signing keys
That is definitely wrong.
By their very nature efuses can be programmed only once. You can never overwrite the old values with arbitrary new values.
In theory you can flip more zeroes to ones, since the fusing operation is only performed on the ones (or only on the zeroes -- the polarity choice is arbitrary but it is made at manufacturing time). But this is not really helpful; you can't feasibly generate your own signing key under the constraint that certain bits must be one... the cost of doing this grows exponentially with the number of constrained bits.
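A toy illustration of that exponential blow-up (the "burned bits" here are just low bits of a SHA-256 hash; nothing vendor-specific):

```python
# Search for a random key whose hash keeps k already-burned bits set.
# Expected cost is 2**k tries, so it explodes quickly with k.
import hashlib, itertools, os

def hash_bits(key: bytes) -> int:
    return int.from_bytes(hashlib.sha256(key).digest(), "big")

def tries_for_mask(burned_mask: int) -> int:
    for tries in itertools.count(1):
        if hash_bits(os.urandom(32)) & burned_mask == burned_mask:
            return tries

for k in (4, 8, 16):            # 16 bits: ~65k tries; 128 bits: hopeless
    mask = (1 << k) - 1         # pretend the low k bits are already burned
    print(f"{k} constrained bits: {tries_for_mask(mask)} tries")
```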
In practice I have never ever seen an efuse array that didn't also have a "permanent write-protect" fuse. If that fuse exists, the programming software will certainly activate it after verifying that the key was programmed correctly.
> By their very nature efuses can be programmed only once. You can never overwrite the old values with arbitrary new values.
My understanding is this: you cannot overwrite the old values, but you can blow an additional fuse. If you have five fuses, you blow the first fuse for the first key, then the second fuse for the second key. Once the second fuse is blown, the system only looks at the second fuse, then the third, and so on.
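A toy model of that slot scheme (names and layout are illustrative; real fuse maps are vendor-specific):

```python
# One-way key-slot revocation: the active key is the first slot whose
# revocation fuse hasn't been blown, and fuses only ever go 0 -> 1.
class FuseBank:
    def __init__(self, key_hashes):
        self.key_hashes = key_hashes            # burned in at manufacturing
        self.revoke_fuses = [0] * len(key_hashes)

    def active_key(self) -> str:
        for burned, h in zip(self.revoke_fuses, self.key_hashes):
            if not burned:
                return h
        raise RuntimeError("all slots revoked: device will not boot")

    def revoke_current(self) -> None:
        """Blow the revocation fuse for the active slot. Irreversible."""
        self.revoke_fuses[self.revoke_fuses.index(0)] = 1

bank = FuseBank(["hash-of-key-1", "hash-of-key-2", "hash-of-key-3"])
print(bank.active_key())   # hash-of-key-1
bank.revoke_current()
print(bank.active_key())   # hash-of-key-2; key 1 can never come back
```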
> In practice I have never ever seen an efuse array that didn't also have a "permanent write-protect" fuse. If that fuse exists, the programming software will certainly activate it after verifying that the key was programmed correctly.
Please refer only to iOS as a walled garden. I can easily install Windows and Linux (natively) on my MacBooks, and this is no more cumbersome than on a regular PC (disabling Secure Boot). With Apple Silicon it's not that easy, but mostly because there are just not mature-enough drivers (Linux) or no interest from the manufacturer (Windows). macOS is also not very locked down: yes, you might need to disable some security feature and reboot to install certain kernel modules, but this is nothing out of the ordinary. I can still run kernel-level code without jailbreaking/rooting the hardware on Macintoshes.
> I can still run kernel-level code without jailbreaking/rooting the hardware on Macintoshs.
I don’t think you’ll be able to say that in a few releases. Apple is doing all that it can to make it almost impossible. Currently it’s already comically hard.
I mean, installing Audio Hijack on an M1 is a quick 19 steps involving a reboot (I kid you not):
> I mean, installing Audio Hijack on an M1 is a quick 19 steps involving a reboot (I kid you not):
As a Mac user, installing software that can capture user input (like audio or keyboard strokes) in as few steps as possible is not a priority for me.
I think there's a fine line between (a) requiring extra steps as part of a genuine concern for user security, and (b) preventing the user from running custom code to "protect them from themselves". As a developer, I'm willing to tolerate the former but not the latter.
Sure, it's tolerable and somewhat understandable. I'm just worried if the curve continues on this trend it will eventually go from “very hard” to touching the “not allowed” axis.
> I can still run kernel-level code without jailbreaking/rooting the hardware on Macintoshes.
Only if you’re willing to accept the loss of Apple Pay via Touch ID as well as the entire iOS-apps-on-Mac feature. It’s been years since Apple allowed you to run kernel-level code without losing features.
iOS apps on Mac work when your security policy is modified, it’s just FairPlay decryption that fails. I’m running several iOS apps on my Mac right now that are not encrypted (either because I built them from source or stripped it off) including some “unsupported” apps like Slack and Discord, which quite frankly provide a better experience as iOS apps than their Electron versions do.
(I don’t fundamentally disagree with your point, though I am also obligated to mention that you can load kexts from certain developers without disabling SIP, which I guess is truly “without compromise” if you were lucky enough to get one of these certificates years ago, when Apple handed them out like candy. But I digress.)
Can you elaborate on what you mean here? Maybe I missed something, but Apple does not provide any iOS-apps-on-Mac feature? Except of course through their development tools, but there's no App Store thingy or the like.
No, I didn't. The first link is third-party software (iMazing) and the second is a developer guide on how to develop an app in a way that it will also run on macOS without recompilation. Also note that just because software exists that checks whether the system is running outside of secure mode and then refuses to run, that doesn't make the system a "walled garden". There is a whole class of software that has been doing this for ages; it's called anti-cheat software, and it's mostly at home on Windows.
M1 Macs can run iOS apps natively. The only catch is that it's up to devs to allow them to be installed from the Mac App Store. Using iMazing is a workaround for apps whose devs don't enable installation from the App Store.
True, but you can't deny that Microsoft has moved a long way in the direction of a walled garden / cloud-based model recently. I don't have the feeling that I own my PC as much as I did in the past (e.g. on Windows 7).
This is revisionism. Walled gardens exist on a spectrum. Even AOL was a walled garden. Both Windows and macOS have varying degrees of walled-garden features and restrictions.
The ISA is really irrelevant for these discussions on freedom. Once it gets popular for long enough, companies will start putting in these hostile features. x86, RISC-V, ARM, POWER, whatever. They all started out without the "security". I could just as easily imagine an alternative world where "Power11 gets worse, but Intel might get better" or any other combination.
>By the way, don't expect AMD to act any better. Remember that they're the company bringing you Pluton: quoted from the article, "Pluton will also prevent people from running software that has been modified without the permission of developers." It wouldn't be surprising to see AMD's Platform Security Processor pick up additional lock-in capabilities to reinforce this and other vendor controls.
So the title should be "Intel and AMD are getting worse".
They learned from the whole "trusted computing" experiments in the late 90s and early 2000s that restrictions need to be introduced slowly. It's not hard to see where they're going though.
Related discussion in which I make it extremely clear that there's no meaningful evidence to support this assertion, and that Pluton doesn't change the status quo compared to basically every system being sold today having a TPM anyway.
You can try to further the industry narrative as much as you want, but it's clear that people are waking up to the disinformation campaign as the walls start closing in. Stallman already warned us over two decades ago, and plenty of dystopian sci-fi before that showed what this "treacherous computing" future could be like if we don't fight back.
And, "no meaningful evidence"? Really? You sound like the WHO talking about COVID not being humanly transmissible ~2y ago, or that there was no evidence masks did anything; and we all know how that went...
But do feel free to continue helping them build better nooses to put around our necks --- including your own --- while continuing to preach about how good they are.
I think there's also a change in the server market, where there are going to be (already are?) two pillars: the cloud vendors on one side and traditional OEMs like Dell/HP on the other.
So cloud vendors can pressure Intel/AMD into exclusive features, or into turning off features they don't like, etc., whereas Dell/HP/others increasingly lose that kind of leverage.
Basically, less of that vendor leverage trickles down to normal end-user owners of hardware.
It's not a memory-training blob. It is an entire CPU, with its own firmware, sitting between the CPU and the DRAM chips. It has the bandwidth to examine every single memory access, which is something the Intel ME and PSP never had.
I am extremely impressed with Tim's integrity. His decision to stand up to this nonsense has cost, and will continue to cost, his business dearly, but it has won him the kind of credibility that can't be bought at any price. People remember stuff like this for a long, long time.
I'd never buy a POWER11 system given that IBM barely supported our POWER9 cluster, and we ended up compiling everything ourselves for it, with many, many errors and problems. Any difference in cost is more than offset by the amount of staff time spent trying to fix bizarre compiler bugs.
Some of the Linux distros (like Debian) already include that sort of stuff. For example, the Debian Science Team has packaged a lot of different types of scientific software. The Debian Python Modules Team packages a lot of different Python modules too.
Edit: of course a lot of more modern science and ML stuff is notoriously difficult to package, with pre-generated files, vendored code, lock files and so on.
Yeah, I am aware of that, but Linux distros package very old versions, and we need to be able to support multiple versions for users. Realistically, academic researchers like to use 'bleeding edge' type things. Generally a big HPC cluster comes with RHEL and a support contract, because you need the drivers for the hardware (things like InfiniBand networking, etc.), so switching distro isn't really an option.
Power11 is good, but doesn't it also use a novel serial RAM interface with closed proprietary firmware? This is less in the way than proprietary mobo firmware, but for purists, or those with extraordinary needs, it may still be an issue.
The blobs on POWER10, as I remember, were firmware for the memory sticks themselves, since it used OMI to talk to the DIMMs and IBM had to get third-party OMI<->DDRx controllers. I can see why they could have reason to believe that situation might change.
So really it's just one guy on Phoronix who made that claim. Probably just wishful thinking. But if it is true, everybody with first-hand knowledge of it would be under NDA. So we'll probably never know.
1. Today we have FSP blobs but in the future we're going to have (menacing voice) Scalable FSP blobs. I consider Coreboot people to be unreliable narrators since Coreboot is feature-poor and isn't free from blobs anyway.
2. Pluton is going to be required for Windows so Intel, AMD, and Qualcomm will all have it. Resistance is futile. (It is really dumb that MS is FUDding themselves by not documenting Pluton.)
3. AFAIK Power10 was mostly finished before the pandemic. The switch from GloFo to Samsung and IBM's lack of experience with the Samsung process is more likely the cause of the third-party IP. Even if it was fully open the price/performance would still suck like all Power generations.
Since Windows is going to be irrelevant anyway (run everything in a browser, stream everything 'heavy'; the local OS is just a 'viewer'), and most 'connected' users on the planet are using a phone, with whole generations growing up only ever using a phone for media/games/productivity and never touching a laptop/PC, all this will do is make Windows a niche.
Coreboot might not be perfect, but until there is powerful enough hardware (like ARM and RISC-V that isn't just multi-core but also has enough single-threaded performance), it's not like you're going to be able to run free and libre firmware on reasonable hardware (and no, pre-CSME and pre-FSP hardware is not reasonable; it's just old at this point).
I hope POWER and ARM, and later on RISC-V, get powerful enough that we don't have to trust a CSME or FSP in the long run, but I doubt it. Even just having PAVP DRM is something pushed so hard by various corporate legal departments that all the hardware that ends up in consumer land is going to be hobbled one way or another.
I'm afraid we'll end up in a world where you can get that libre and free stack on high performance hardware, but won't get access to all the IP, including IP cores for embedded FPGAs and IP like streaming media including audio, video and games.
>I'm afraid we'll end up in a world where you can get that libre and free stack on high performance hardware, but won't get access to all the IP, including IP cores for embedded FPGAs and IP like streaming media including audio, video and games.
Exactly. Also what worries me is that all it takes is one significant terrorist event to be organized over a libre software stack to get all this treachery in computing to become law too.