Android and ChromiumOS are likely the most trustable computing platforms out there; doubly so for Android running on Pixels. If you don't prefer the ROM Google ships with, you can flash GrapheneOS or CalyxOS and relock the bootloader.
Pixels have several protections in place:
- Hardware root of trust: This is the anchor on which the entire TCB (trusted computing base) is built.
- Cryptographic verification (verified boot) of all the bootloaders (IPL, SPL), the kernels (Linux and LittleKernel), and the device tree.
- Integrity verification (dm-verity) of the contents of the ROM (/system partition which contains privileged OEM software).
- File-based Encryption (fscrypt) of user data (/data partition where installed apps and data go) and adopted external storage (/sdcard); decrypted only with user credentials.
- Blobs that traditionally ran at higher exception levels (like ARM EL2) now run in a restricted, mutually untrusted VM.
- Continued modularization of core ROM components so that they can be updated just like any other Android app, i.e. without having to update the entire OS.
- Heavily sandboxed userspace, where each app has a very limited view of the rest of the system, typically gated by Android-enforced permissions, seccomp filters, selinux policies, posix ACLs, and linux capabilities (see the seccomp sketch after this list).
- Private Compute Core for PII (personally identifiable information) workloads. And the Trusty TEE (trusted execution environment) for high-trust workloads.
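To make the sandboxing bullet concrete, here is a minimal sketch of the seccomp part: a process installs a syscall filter on itself before running less-trusted code. This uses libseccomp on plain Linux and is only an illustration; Android's real filters are much larger and are applied by the zygote, not hand-written like this.

    /* Minimal seccomp illustration (not Android's actual filter).
     * Build on Linux: cc seccomp_demo.c -lseccomp */
    #include <errno.h>
    #include <seccomp.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Default action: allow anything not explicitly listed. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
        if (!ctx)
            return 1;

        /* Make a few example syscalls fail with EPERM from now on. */
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(ptrace), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(kexec_load), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(add_key), 0);

        if (seccomp_load(ctx) < 0) {  /* install the BPF filter in the kernel */
            perror("seccomp_load");
            return 1;
        }
        seccomp_release(ctx);

        /* The listed syscalls are now denied for this process and its children. */
        printf("filter installed in pid %d\n", getpid());
        return 0;
    }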
This is not to say Android is without exploits, but it seems to be the furthest ahead of the mainstream OSes. This is not a particularly high bar because of closed-source firmware and baseband, but this ties in generally with the need to trust the hardware vendors themselves (see point #1).
IMO we have to step back and be honest that the Linux kernel is simply not equipped to run trusted code and untrusted code in the same memory. New bugs are found every few weeks.
If history is any guide, Android and ChromiumOS likely still have many critical bugs the public does not know about yet.
Sadly the only choice is to burn extra RAM to give every security context a dedicated kernel and virtual machine. Hypervisors are the best sandbox that exists, anchored down to a hardware IOMMU.
QubesOS being VM based is thus the best effort secure workstation OS that exists atm. SpectrumOS looks promising as a potential next gen too.
I've never used Qubes. Rather I heavily segment with manually configured VMs. The ones that run proprietary software (eg webbrowsing, MSWin, etc) generally run on a different machine than my main desktop. It's quite convenient as I can go from my office to the couch, and I just open up the same VMs there and continue doing what I was doing.
I define the network access for each VM in a spreadsheet (local services and Internet horizon), which then gets translated into firewall rules. I can simultaneously display multiple web browsers, each with a different network nym (casual browsing, commercial VPN'd, Tor, etc).
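To give a flavour of the translation step, a sketch along these lines would do: read exported rows of source, destination and verdict, and print nft(8) commands. The column layout and the inet/filter/forward table names below are illustrative, not my actual setup.

    /* Hypothetical spreadsheet-export-to-nftables translator.
     * Input lines: vm_ip,destination,verdict   e.g. 10.0.3.2,10.0.0.53,accept
     * Usage: ./genrules < export.csv | sh
     * (assumes an existing "inet filter" table with a "forward" chain) */
    #include <stdio.h>

    int main(void)
    {
        char line[256];
        while (fgets(line, sizeof line, stdin)) {
            char vm[64], dst[64], verdict[16];
            if (sscanf(line, "%63[^,],%63[^,],%15s", vm, dst, verdict) != 3)
                continue;  /* skip the header row and malformed lines */
            printf("nft add rule inet filter forward "
                   "ip saddr %s ip daddr %s %s\n", vm, dst, verdict);
        }
        return 0;
    }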
The downsides include needing an ethernet cable on my laptop (latency), and that this setup isn't great at going mobile. Eventually I'll get around to setting up a medium-trust laptop that runs a web browser and whatnot directly (while not having access to any keys to the kingdom), one of these days real soon now.
Which brings me to the real downside: the work required to administer it - you already have to be in the self-hosting game. This is where an out-of-the-box solution could excel. Having recently become a NixOS convert, SpectrumOS looks very interesting!
Thinking of my kids' future has also made me much more energy-conscious. Meaning I've stopped running my VM host 24/7 like I was, because neither ESX nor Proxmox is really set up for saving energy easily (automated suspending and waking, etc). Which is a shame, since I'm actually finding that with gigabit fiber at home, even on mobile connections I can work pretty decently on homelab VMs.
Running something like it on a laptop directly makes sense, but I worry about bringing some workloads back to my laptop that I really prefer to keep off it. In terms of raw performance my laptop isn't even close, especially with heavy graphic workloads. And then there's heat, etc.
I feel like this is the all too common pattern of individuals taking environmental responsibility to absurd levels, while corporations dgaf. How much electricity is burned in datacenters, especially doing zero-sum surveillance tasks?
My Ryzen 5700G ("router") idles around 20-25W, which seems like a small price to pay to not be at the mercy of the cloud. That's around 60 miles of driving per month (gas or electric), which seems quite easy to waste other ways.
My Libreboot KGPE ("desktop/server") burns about 160W. This is much higher than a contemporary computer should be, but that's the price of freedom. I could replace it with a Talos II (~65W from quick research), but the payback for electricity saved would take several decades.
To cut back on the environmental impact, I do plan to install solar panels with battery storage, which will also replace the need for UPSes. I've got another KGPE board that it's interesting to think about setting up as a parallel build host, running only during sunny days rather than contributing to the electricity storage requirements.
I totally agree, but I do what I can. And while setup takes me some extra effort, I can now run my 24/7 workloads on a small box and run the big box only when needed.
I used to leverage VMs more (and still do in certain cases) but I've moved to disposable/containerized by leveraging Kasm [0]. There's other ways to stream environments, but it's another option. Definitely check it out if you're looking for other options.
i think tanenbaum will be vindicated in the end. monolithic kernels are like 90s computer networks with perimeter security. if i were to guess, i'd guess that the future is microkernels with some sort of hardware accelerated secure message passing facility. zero-trust at kernel design scale.
Obligatory reminder of the human angle here: we need also a way for untrusted code to not be able to nag the user into granting them extra permissions.
That's not capability based security, though everyone seems to think it is. (Perhaps it's just survivor bias affecting replies?)
You never get nagged to take $5 out of your wallet to hand to an untrusted person, so why should you get nagged to drop a file into an application you don't trust?
"You never get nagged to take $5 out of your wallet to hand to an untrusted person"
I suppose you've never been to a significantly poorer country (outside of the protected tourist areas)?
Or, well, spent time with kids who really want something.
Begging can get very intense.
And about file permissions, well - are you aware what kind of permissions the standard free app on the Google Play store will ask of people? And yes, I won't use those. But I do use WhatsApp. I did not want to give it permission to read contacts, or wide file access. But denying it makes the app almost unusable, so I also eventually gave in...
The operating system should allow you to make the choice, then enforce it. Open file X, save file Y.... the user should make those choices (via the OS) and the OS should enforce those decisions... the way applications are currently run, that's not true.
The application still needs to communicate the things it needs, the things on which the OS/the user should make choices. And if the application can communicate this, it can communicate it again. And again. And again. Or flat out refuse to work with "incorrect" choices, and bully the users into compliance.
You'd think that would be really rude of the app. That may have mattered 20-30 years ago. Today, most consumer-facing tech companies - big corporation and small startups alike - adopted "being a rude, obnoxious asshole" as a business model.
Note that this includes all the major commercial OS vendors too - i.e. Apple, Google and Microsoft. This creates a new challenge: how do we design secure systems when neither the apps nor the OS itself are trusted parties? How do we develop this security framework, when untrusted parties are the ones gatekeeping adoption, and also most likely to be developing it?
In other words: how do we maintain security for hens, when the foxes are guarding the hen house?
To be fair, Pixels (and all modern Android phones by my understanding) use some kind of trusted execution environment. So if you have a Pixel 6 or later you're using Trusty to perform some trusted actions, which is not Linux and gets some of its own SoC die space. That doesn't mean you can't get kernel pwned and lose private info.
Linux is a security shit show, but it is at least publicly auditable, which is a prerequisite to forming reasonable confidence in the security of software, or to rapidly correcting mistakes found.
OpenBSD by contrast has dual auditing and a stellar security reputation, but development is much slower and compatibility is very low.
seL4 as an extreme is a micro-kernel with mathematically provable security by design, but no workstation software runs on it yet.
MacOS, iOS, and Windows are proprietary so they are dramatically worse off than Linux in security out of the gate. No one should use these that desires to maximize freedom, security and privacy.
In the case of seL4, don't confuse formal verification with security. The code matches the spec, and security properties can be extracted very precisely, but the spec might contain oversights/bugs which would allow an attacker to perform unexpected behaviors.
If you define security as a "lack of exploitable bugs", then security can never be proven, because it's impossible to prove a negative. Also, many formally verified systems have had critical bugs discovered, like the KRACK attacks against WPA2. The formal verification wasn't wrong, just incomplete, because modeling complex systems is inherently an intractable problem.
The fact that seL4 doesn't even offer bug bounties should be a huge red flag that this is still very much an academic exercise, and should not be used in places where security actually matters.
Besides spec bugs, the seL4 threat model is focused on making sure components are kept isolated. It does not deal with most of what we understand as attacks on a workstation at all.
In fact, in a seL4 system, most of the vulnerabilities we find on Linux wouldn't even be on the kernel, and their verification logic can't test something that isn't there.
That said, the seL4 model does probably lead to much better security than the Linux one. It's just not as good as the OP's one-liner implies.
> The formal verification wasn't wrong, just incomplete, because modeling complex systems is inherently an intractable problem.
I’m not involved in this kind of research or low level auditing, but I have some mathematical training and fascination with the idea of formal verification.
I ran across this thing called “K-Framework” that seems to have invested a lot in making formal semantics approachable (to the extent that’s possible). It’s striving to bridge that gap between academia and practicality, and the creator seems to really “get” both the academic and practical challenges of something like that.
The clarity of Grigore’s explanations and the quality of what I’ve found here: https://kframework.org/ makes me think K has a lot of potential, but again, this is not my direct area of expertise, and I haven’t been able to justify a deep dive to judge beyond impressions.
You’re correct in pointing out that complex systems are inevitably difficult to verify, but I think stuff like K could help provably minimize surface area a lot.
It sure makes it a lot easier to audit that the code conforms to an expected design, which is where most security bugs live. This is a fantastic design choice for a security focused kernel.
I will grant that proving something was implemented as designed does not rule out design flaws so, fair enough.
I find it hard to believe that the Linux codebase being auditable makes Linux more secure by default than MacOS, iOS, and Windows. I doubt it is humanly feasible to fully read and grok the several million LOC running within Linux.
I would, however, trust a default MacOS/iOS/Windows system over a default Linux system. The Linux community has a track record of being hostile to the security community - for their own good reasons. Whereas Apple and Microsoft pay teams to secure their OS by default.
If you install something like grsecurity or use SELinux policies, I could buy the argument. I have yet to see these used in production though.
Also, seL4 is not mathematically proven to be secure; it is formally verified, which means it does what it says it does. Parts of that spec may still permit exploitable behavior.
I regret my poor description of seL4. It has proofs for how it functions, and how code execution is isolated, etc. That is not -every- security issue by any means but reviewing a spec is easier than reviewing code, and a small code footprint that forces things out of the kernel that do not need to be there is a major win. I hope more projects follow their lead.
As for Linux, piles of companies pay for Linux kernel security, though many bugs are found by academics and unpaid independent security researchers. Linux is one of the best examples of many-eyes security. None of those brilliant and motivated researchers are allowed to look at the inner workings of MacOS or Windows, though Darwin is at least partly open source, so I put it way ahead of Windows here.
On system-call firewalling tactics like SELinux, it is true very few use these in practice, as most devs have no idea what a system call is, let alone how to restrict it. That said, kernel namespacing features have come into very wide mainstream use thanks to Docker and similar containerization frameworks, which cover much of the scope of things like SELinux while being much easier to use.
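As a rough illustration of what those namespacing features boil down to underneath Docker, the sketch below uses the raw syscalls directly: the child ends up with its own PID space, hostname and (empty) network stack. It needs root (or a user namespace) and skips most error handling; it is not how Docker itself is written.

    /* Bare-bones Linux namespace sandbox, illustrative only.
     * Build: cc ns_demo.c   Run as root. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* New mount, PID, network, UTS and IPC namespaces for our children. */
        if (unshare(CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWNET |
                    CLONE_NEWUTS | CLONE_NEWIPC) != 0) {
            perror("unshare");
            return 1;
        }

        pid_t child = fork();  /* first child becomes PID 1 in the new PID namespace */
        if (child == 0) {
            sethostname("sandbox", 7);  /* affects only the new UTS namespace */
            execlp("sh", "sh", "-c", "hostname; echo pid=$$; ip link", (char *)NULL);
            perror("execlp");
            _exit(127);
        }
        waitpid(child, NULL, 0);
        return 0;
    }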
As for most Linux /distributions/, I sadly must agree they favor compatibility and ease of use over security basically always. I will grant that Windows/MacOS enable basic sandboxing features that, while proprietary, are likely superior to nothing at all like most Linux distros. Other choices exist though.
QubesOS is the Linux distro for those that want high security. It is what I run on all my workstations and I would trust it out of the box over anything else that exists today as hardware access and application workflows are isolated by virtual machines and the base OS is offline.
> I would, however, trust a default MacOS/iOS/Windows system over a default Linux system. The Linux community has a track record of being hostile to the security community - for their own good reasons. Whereas Apple and Microsoft pay teams to secure their OS by default.
I think we can have the best of both worlds here: OS distributions that are being maintained by paid teams of security experts, and that can be audited by anybody.
What are the major ones? Android, Chromium OS, RedHat (Fedora, CentOS), and SUSE.
seL4 actually makes proofs for some core isolation promises, like realtime-ness and data flow adhering to capabilities (though with neglect of side channels for that aspect, which can be corrected for by also verifying the code that runs on top to not do shady stuff to probe side channels).
> MacOS, iOS, and Windows are proprietary so they are dramatically worse off than Linux in security out of the gate. No one should use these that desires to maximize freedom, security and privacy.
Not sure how fair this is, even though I agree with you with regards to Linux being auditable and the others not.
Windows and macOS these days have vendor-provided code signing authorities that can be leveraged (and are by default), which provides at least some protection against malware at the macro level (in that the certificates can be revoked if something nefarious is identified). This doesn't exist at all in Linux, although third party products are in the early stages.
Windows 11 and macOS have hardware-backed root-of-trust. In Windows the root of trust is the TPM, on macOS it's the T2 (Intel) or the chip package (ARM).
Any of these features could be compromised without your knowing, but at least where you have control authorities for these systems you can draw some comfort in knowing that once new malware has been identified spreading on pretty much any machine, it can be stopped quite rapidly on all machines by revocation until the bug can be patched.
> Windows and macOS these days have vendor-provided code signing authorities that can be leveraged (and are by default), which provides at least some protection against malware at the macro level
The code-signing is relatively trivial to workaround, you just have to get users to run xattr(1) to remove quarantine status.
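(For the curious, that quarantine flag is just an extended attribute; the xattr(1) invocation boils down to roughly the call below. Sketch only, macOS specific, and the attacker still has to talk the user into running it or an equivalent.)

    /* Equivalent of `xattr -d com.apple.quarantine <path>` on macOS. */
    #include <stdio.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 2;
        }
        /* macOS removexattr(2) takes an extra options argument; 0 = defaults. */
        if (removexattr(argv[1], "com.apple.quarantine", 0) != 0) {
            perror("removexattr");
            return 1;
        }
        return 0;
    }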
Because even for FOSS stuff, unless you are an expert on all levels of the stack you will not be able to assert there are no hidden exploits disguised as perfectly safe code.
And if you have to rely on third parties to assert that for you, then you have to trust their honesty and technical skills to be able to make such assertions.
So there is only hope that all players are experts, don't make any mistakes, keep being honest, and exercise their certification for every new release of all products that make up a standard installation.
It's... complicated. Linux is just the kernel, but good modern OS security requires the kernel, the userspace, and the kernel/userspace boundary to all be hardened a significant amount. This means defense in depth, exploit mitigation, careful security and API boundaries put in place to separate components, etc.
Until pretty recently (~3-4 years) Linux the kernel was actually pretty far behind in most respects versus competitors, including Windows and mac/iOS. I say this as someone who used to write a bunch of exploits as a hobby (mainly for Windows based systems and windows apps). But there's been a big increase in the amount of mitigations going into the kernel these days though. Most of the state of the art stuff was pioneered elsewhere from upstream but Linux does adopt more and more stuff these days.
The userspace story is more of a mixed bag. Like, in reality, mobile platforms are far ahead here because they tend to enforce rigorous sandboxing far beyond the typical access control model in Unix or Windows. This is really important when you're running code under the same user. For example just because you run a browser and SSH as $USER doesn't mean your browser should access your SSH keys! But the unix model isn't very flexible for use cases like this unless you segregate every application into its own user namespace, which can come with other awkward consequences. In something like iOS for example, when an application needs a file and asks the user to pick one, the operating system will actually open a privileged file picker with elevated permissions, which can see all files, then only delegate those files the user selects to the app. Otherwise they simply can't see them. So there is a permission model here, and a delegation of permissions, that requires a significant amount of userspace plumbing. Things like FlatPak are improving the situation here (e.g XDG Portal APIs for file pickers, etc.) Userspace on general desktop platforms is moving very, very slowly here.
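The delegation pattern described above (a trusted picker hands the app only the chosen file) ultimately rests on passing file descriptors between processes. The sketch below strips it down to SCM_RIGHTS over a socketpair on Linux: the "picker" opens the file, the "app" receives an fd and nothing else. It is an illustration of the idea, not code from any portal or iOS implementation.

    /* Broker/"picker" hands a sandboxed child exactly one open fd. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Send an open fd across a Unix socket. */
    static void send_fd(int sock, int fd)
    {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof fd)];
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = ctrl, .msg_controllen = sizeof ctrl };
        struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
        c->cmsg_level = SOL_SOCKET;
        c->cmsg_type = SCM_RIGHTS;
        c->cmsg_len = CMSG_LEN(sizeof fd);
        memcpy(CMSG_DATA(c), &fd, sizeof fd);
        sendmsg(sock, &msg, 0);
    }

    /* Receive an fd sent with send_fd(). */
    static int recv_fd(int sock)
    {
        char byte;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = ctrl, .msg_controllen = sizeof ctrl };
        int fd = -1;
        if (recvmsg(sock, &msg, 0) > 0) {
            struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
            if (c && c->cmsg_type == SCM_RIGHTS)
                memcpy(&fd, CMSG_DATA(c), sizeof fd);
        }
        return fd;
    }

    int main(void)
    {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        if (fork() == 0) {              /* "sandboxed app": never sees a path */
            close(sv[0]);
            int fd = recv_fd(sv[1]);    /* gets exactly one file, already open */
            char buf[64];
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) { buf[n] = 0; printf("app read: %s\n", buf); }
            return 0;
        }

        /* "trusted picker": pretend the user chose /etc/hostname in its UI. */
        close(sv[1]);
        int fd = open("/etc/hostname", O_RDONLY);
        send_fd(sv[0], fd);
        wait(NULL);
        return 0;
    }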
If you want my honest opinion as someone who did security work and wrote exploits a lot: pretty much all of the modern systems are fundamentally flawed at the design level. They are composed of millions of lines of unsafe code that is incredibly difficult to audit and fix. Linux, the kernel, might actually be the worst offender in this case because while systems like iOS continue to move things out of the kernel (e.g. the iOS WiFi stack is now in userspace as of iOS 16 and the modem is behind an IOMMU) Linux doesn't really seem to be moving in this direction, and it increases in scope and features rapidly, so you need to be careful what you expose. It might actually be that the Linux kernel is possibly the weakest part of Android security these days for those reasons (just my speculation.) I mean you can basically just throw shit at the system call interface and find crashes, this is not a joke. Windows seems to be middle of the pack in this regard, but they do invest a lot in exploit mitigation and security, in no small part due to the notoriety of Windows insecurity in the XP days. Userspace is improving on all systems, in my experience, but it's a shitload of work to introduce new secure APIs and migrate things to use them, etc.
Mobile platforms, both Android and iOS, are in general significantly further ahead here in terms of "What kind of blast radius can some application have if it is compromised", largely because the userspace was co-designed along with the security model. ChromeOS also qualifies IMO. So just pick your poison, and it's probably a step up over the average today. But they are still composed of the same fundamental building blocks built on lots of unsafe code and dated APIs and assumptions. So there's an upper limit here on what you can do, I think. But we can still do a lot better even today.
If you want something more open in the mobile sector, then probably the only one I would actually trust is probably GrapheneOS since its author (Daniel Micay) actually knows what he's doing when it comes to security mitigation and secure design. The FOSS world has a big problem IMO where people just think "security" means enabling some compiler flags and here's a dump of the source code, when that's barely the starting point -- and outside of some of the most scrutinized projects in the entire world, I would say FOSS security is often very very bad, and in my experience there's no indication FOSS actually generally improves security outside of those exceptional cases, but people hate hearing it. I suspect Daniel would agree with my assessment most of the fundamentals today are fatally flawed (including Linux) but, it is what it is.
I keep thinking about this too: everything should be containerised, so you can control access to shared resources like the clipboard or file system with absolute certainty, even stopping certain apps from accessing anything at all.
I can’t discuss my former role in too much detail, but it has convinced me that all the above is insufficient in a number of very realistic threat models.
One issue is that software has vulnerabilities and bugs. I’m not talking about the software that users run in sandboxed environments. I’m talking about the sandbox environments themselves. I’m talking about cryptography implementations. I’m talking about the firmware running in the “trusted” hardware.
The other major issue is as you alluded to: the need to trust vendors and hardware. Without protection and monitoring at the physical level, the user has no way to verify the operation of the giant stack of technology designed to “protect them”. Without the ability to verify operations, how is the user to trust anything? Why do companies tell users to “trust them” without any proof they are trustworthy?
This may seem like a minor point, but this is really the crux of the issue. Building this giant house of cards on top of a (potentially) untrustworthy hardware root of trust does not buy anyone anything. Certainly it does not buy “security”.
Large companies and nation states are the most likely adversaries one wants to be wary of these days (e.g. journalists, whistleblowers, etc.). What good does the technology do them if the supply chain is compromised or vendors are coerced to insert backdoors? These are the threats that actually face people concerned about security, not whether or not their executables are run in a sandboxed VM or not. Great, you’ve stopped the adversary from inserting malicious code into your device after purchase. Good thing for them, they did it prior to or during manufacture.
The technology you alluded to above is mainly useful for protecting company IP from end users, IMO. That’s how I’ve mainly seen it used, and the marketing of “security for the user” is a gimmick to justify the process.
EDIT: I forgot to mention this entire class of security issue since I am used to working on air gapped systems. I don’t care if you are operating in a sandboxed VM with a randomized MAC over a VPN over Tor. If you’re communicating with any other device over the Internet, you have to trust every single other machine along the way. And you shouldn’t.
> Why do companies tell users to “trust them” without any proof they are trustworthy?
You know the answer here, they are not to be trusted.
Samsung phones, for example, have a gpsd, which phones home at random times. This runs as root, ignores vpn settings (so no netguard for you!), and if it is just getting updated agps info, it sure seems to send a lot of data for that.
So no, they don't want a legitimately auditable device. Too many questions, you see.
It is also worth mentioning, since I didn’t realize it until I worked in depth in the space: your CPU is not the only place to execute code, or the only place with access to hardware.
> The other major issue is as you alluded to: the need to trust vendors and hardware. Without protection and monitoring at the physical level, the user has no way to verify the operation of the giant stack of technology designed to “protect them”. Without the ability to verify operations, how is the user to trust anything? Why do companies tell users to “trust them” without any proof they are trustworthy?
At the end of the day you need to trust Qualcomm or MediaTek. The Oracle of the hardware world, with more lawyers than engineers, or... MediaTek.
AFAIK there are no phones on the market with open-source baseband firmware, so you have to trust one of Qualcomm, Broadcom et al with access to all cellular communication. Do you have a best of breed supplier you’ve vetted?
And even if you _could_ trust the baseband on your device, there’s the problem that the cell tower is running software you have no visibility of.
If I were the NSA, that’s where I’d be focussing at least some of the attention of the “exploit people’s phones” department. If you want cellular connectivity, the cellular provider needs a real-time way to identify your device and its location (at least down to nearest-few-cell-towers accuracy).
(And once I had some capability there, the people running the “most secure” basebands/devices would be the ones I kept the closest eye on. I’ve heard my local intelligence service are very interested in phone switch on/off events, because they are a signal that someone might be attempting to evade surveillance, and the rarity of “normal people” switching their phone off (or disconnecting from the cellular network) makes it worthwhile collecting all that “metadata” so they can search it for the “interesting” cases.)
I don't think you can trust any commercial baseband, period. They all have to adhere to complex radio standards, nobody wants to implement them because their design-by-committee stuff is boring/lame/hard/painful, so what you get is a few stacks that pass the tests and everyone builds on top of that. Same goes for any other RTOS-style firmware, it's really hard to get right, and because most of them are built by/for the 'device' world, they often have very long release cycles, slow development etc. just like say, head units in cars or painfully slow touch screens on devices that should just have buttons (like office style coffee machines, ATMs etc).
WiFi is in a similar position, but at least the diversity is a bit better causing integration tests to fail better when too many bad implementations try to talk to each other.
That leaves all the other chips, which I think are best trusted in a divide and conquer setup where they all have to independently verify their blobs, and not be allowed to mess with each others memory/internal state. It can make them more expensive, but it also compartmentalises them in a way that side-channel attacks within CPU cores are completely mitigated.
Only a very, very small number of companies in the world will have the capital, expertise and manpower to make devices where enough of the stack can be trusted, and out of all of those, only some actually seem to try:
- Google (mostly for UX, but also to make fat stacks of cash via ads)
- Apple (mostly for UX, but also to make fat stacks of cash via ecosystem)
- Microsoft (console, ARM windows, mostly for DRM, but also UX)
- Sony (console, mostly for DRM, but also UX)
- Nintendo (console, mostly for DRM)
Besides the vertical integration they could make, there is the problem of their 'personalities' usually not being a good fit for people that want to go off the deep end in terms of security, privacy, control, feelings etc. But anyone and everyone else simply cannot do to silicon what needs to be done, even if just because of the lack of IP.
The more a company does _not_ want to get burned on their security/privacy positions and keeps iterating making better designs, the more you _could_ trust them. The only realistic alternative is going back to 80's computing, and nobody has time for that.
I agree with your point about how the personality of (technology) companies doesn’t fit the security minded person well. As a software engineer, I feel I am in the small minority whenever I mention security or privacy concerns at work. I feel that most technology is evil at this point because of the immense ability of it to violate people’s privacy. It is sad to me that this is the state of the modern tech ecosystem.
It's just back to the worldview of the 1960's. The only computers were mainframes and were seen as the tools of "The Man", conspiring in the shadows against a free humanity. PCs were a liberating force, making computing accessible to the common man. Modern total reliance on global networks reversed the power balance back to central instances.
There are open source LTE stacks, they just suffer from lack of power efficiency. Not too bad in a gaming laptop form factor, but quite bad in a smartphone.
While respecting your “I can’t go into details” comment, I’m curious to hear whatever you _can_ comment on about what sort of adversary has the capabilities you describe, and do you have an opinion on whether they use those in tightly targeted attacks only, or do they compromise the entire hardware/software supply chain in a way that they can do “full take surveillance” using it?
If I’m not a terrorist/sex-trafficker/investigative-journalist, can I reasonably ignore those threats even if I, say, occasionally buy personal use quantities of illegal drugs or down/upload copyright material? (With, I guess, the caveat that I’d need to assume the dealer/torrent site at the other end of those connections isn’t under active surveillance…)
All of this is public knowledge and has nothing to do with my role:
Nation states, especially the US, should be suspected of having compromised everything. Look at all the things Edward Snowden released. Look at the way the NSA has corrupted cryptographic standards in the past (e.g. Dual EC DRBG). There are countless instances of similar situations.
All that looks good on paper, but a lot of apps require full disk access and can easily run in the background, so how "trustable" can that really be in practice?
With iOS at least I know that apps really are sandboxed and cannot access anything unless I grant permission. No app can ever attempt to access my photos unless I explicitly pick a photo or grant partial/total access. Even then it's read-only or "write with confirmation, every time"
Well, both of your complaints were already addressed. Android introduced the scoped storage system to remove and fix abuse of "full" disk access, and it also added the foreground notification system, which forces a system notification to be displayed if any app is doing work in the background, so that you know about it.
Right, but if the average real-world Android experience lags behind say iOS in terms of security, then the point, even if outdated, still serves to disprove the parent’s premise that AOSP is the most secure.
On GrapheneOS you can choose specific storage scopes, even if the app is requesting full user storage access.
And you can deny the file access permission like any normal permission; most modern apps request music or videos and photos, and rarely does an app request full file access.
Hasn’t this all been long true for iOS as well? There are reasons to hate it, but a walled garden is safer in many ways (as long as you trust Apple). You mention baseband - Android hardware comes with max 3 years of baseband support, compared to 7-9 years on iPhone. The story is similar when it comes to stock OS support. So from my pov, iPhones can be a comparable value (security and otherwise) to the best Android has to offer, specifically because of their (usable to me and the next guy to own my phone) 7-9 years of life, compared to 3 years max with a Pixel. What am I missing here?
Accuracy. You're missing "making accurate statements" here.
And context - neither Apple's proprietary practices nor "trust" of them are applicable solutions to the problem domain here.
A 7-9 year old iOS device is crippled today, with non-user-replaceable parts (both software and hardware). A 3-year-old Pixel is newer than the one I use, running sshd and accepting connections only from the bearer of my private key.
The market for old iPhones drops sharply, and is just for folks with an iCloud account. The market for old Pixels allows a Pixel 6 to be sold for >$1000 less than half a year ago (with a more trustworthy third party OS installed), ready for the user to choose whether this trust model of a pre-installed OS is even sufficient.
> A 7-9 year old iOS device is crippled today, with non-user-replaceable parts (both software and hardware).
A 7-9 year old iPhone has user replaceable parts (hardware), as does the equivalent Pixel. It also runs perfectly well.
> A 3-year-old Pixel is newer than the one I use, running sshd and accepting connections only from the bearer of my private key.
That's a lovely thought. The baseband manufacturers with their own blobs disagree though.
> The market for old iPhones drops sharply, and is just for folks with an iCloud account. The market for old Pixels allows a Pixel 6 to be sold for >$1000 less than half a year ago (with a more trustworthy third party OS installed), ready for the user to choose whether this trust model of a pre-installed OS is even sufficient.
This is simply untrue. The Pixel 6 is available for $900 AUD new and about $500-600 used. The iPhone 13 (same year) starts at $1200 AUD new and about $800-900 used. The value held is extremely similar, and in fact drops less for older Apple phones than Android ones likely - as the GP mentioned - due to more than double the years of OS support from the vendor.
It's fine for you to decide that the fruit company is less trustworthy than the advertising company, but don't spread nonsense to try and sell your point.
My impression is that Android is compromised by design because it so easily leaks data that any defense against it already became futile. Apps might not have permissions to do anything, but the same is true for users.
Not that a desktop with Windows would be any better. But I don't trust the OS itself, no matter how many layers of virtualization you put between it and the code I choose to execute. The weak link is already provided by design. This isn't a technical criticism of Android, but of the whole platform as being opaque and paternalistic.
Not a fan of trusted computing because I doubt it will ever be used in the interest of users. It will be a requirement for some services, which I don't want to further support in their endeavours.
> Android and ChromiumOS are likely the most trustable computing platforms out there
I talked to a security researcher specializing on Android at a conference and he didn't sound like he'd agree.
While I personally think ChromiumOS does a good job, I think a huge problem is how liberally complexity gets added. And complexity is typically where security issues lurk. This has been seen again and again.
It's also why I think projects such as OpenBSD do such a great job. Their main focus seems to be reducing complexity (which they are sometimes criticized for). A lot of the security seems to come from the reduced attack surface you get. And then the security mechanisms, which are typically more easily implemented because of said simplicity, build the next layer.
And I think OpenBSD has reached a sweet spot there, where it's not some obscure research OS, but an OS that you can install on your server or desktop, run actual workloads on, heck, even play Stardew Valley or a shooter on, but have all the benefits in terms of security and simplicity that you get from research OSes, Plan 9, etc.
So maybe not mainstream, but mainstream enough to actually work with it. There are sadly many projects that completely ignore the reality around them, also because their goal is simply to be research projects and nothing more. Then we have those papers that hardly anyone ever looks at, on how in a perfect world all those big security topics could be solved. Unless some big company comes along and puts it into some milestone.
With ChromiumOS, the limitations you get are similar, probably even more severe, compared to OpenBSD's flexibility. That's something many de-google projects struggle with as well. At the same time the complexity remains a whole lot bigger. Of course goals and target groups are hugely different.
I think both Android and ChromiumOS used to put more emphasis on simplicity, but gave it up at some point. I am not sure why, but would assume that many decisions are simply company decisions. After all the eventual goal is economic growth.
That locking down on mobile devices is not just to increase security, but has the beneficial side effect of controlling the platform. This might not even be directly intended by the security focused developers, but it is a side effect.
So "most trustable" in that scenario comes with "most gate-keeping", "least ownership", etc., which we are kind of used to on smartphones, tablets and Chromebooks. So I think comparing it with other kinds of mainstream OSs isn't really leading to much.
This page has stuck with me since I read it regarding openbsd. It's a bit mean spirited, but I think openbsd mostly benefits from its own obscurity.
https://isopenbsdsecu.re/
But the nice parts of ChromeOS, as far as security properties go, are the way it can be "power washed" between usages, along with a desktop Linux system that has fewer binaries installed at its base than most. And things that are built in are typically built atop Chrome's sandbox.
I used to joke with my friends who ran TAILS Linux that my grandma with her Chromebook had the same threat model.
No other general-purpose OS that runs on my laptop has the track record of OpenBSD: only 2 remotely exploitable security holes in the default installation since ~1996. And then the other mitigations let you control carefully what more attack surface to expose--those mitigations dramatically reduce it. I appreciate the general lack of privilege escalation 0-day exploits, as seen over time.
Serious question: How big is OpenBSD as a target for malware, exploits, viruses, etc. ?
OpenBSD's track record is impressive but is it a significant target compared to Windows, MacOS, and Linux?
It is easy to say "only two bullets have ever penetrated my armor" when hardly anyone is shooting at you. I do not know if this is the case because I have never used OpenBSD and I do not know how widely it is used (headless servers, embedded devices, etc.).
If that were true you might also expect FreeBSD and NetBSD to have similarly low levels of exploits, and my general impression (not research, just reading around) is that they have had more exploits in that timeframe (privilege escalation bugs, whatever).
You can read more about OBSD's security by going to https://openbsd.org then clicking the "Security" link near the top left.
ps: FreeBSD seems to have more users than OpenBSD, and NetBSD seems to have fewer users than OpenBSD.
On OpenBSD do you even need to compromise the kernel? Can't your normal user account install a backdoored browser, steal your ssh keys, DDOS people, start a VNC server, keylog, etc.
I guess that is possible if you try to do so -- compile your own stuff, or download and run it. But if you act normally instead, installing just what you actually need, from the package repository (where most things are "pledged" and "unveiled" which adds some impressive protections), I think the chances are much lower than with other OSes.
Also I separate things by user account, so I don't do my general browsing as the same user that does my programming, which is again separate from bank access, which is separate from .... So the kernel is providing a lot of protections.
(And I usually browse without images and javascript, which is not OS-specific but a suggestion. Many things I use don't require it, and I can flip it on or configure it specifically for those that do.)
There is a package repository where many of the packages have been "pledged" and "unveiled", meaning that they execute with fewer unneeded privileges. And a general lack of privilege escalation exploits in base. And privilege separation by running things as distinct users. So jails might be less needed or less helpful by comparison, overall. There are chroot jails though, not sure why anyone would think they are not available in OBSD.
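For anyone who hasn't seen pledge(2)/unveil(2), the sketch below shows the shape of it: the program declares up front the only directory and the only syscall classes it will ever need, and the kernel enforces that for the rest of its life. The paths and promise strings are just examples (OpenBSD only).

    /* Minimal pledge/unveil sketch. Build on OpenBSD: cc pledged.c */
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Expose only one directory, read-only; all other paths vanish. */
        if (unveil("/var/www/htdocs", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)   /* lock the unveil list */
            err(1, "unveil lock");

        /* Restrict future syscalls to basic I/O and read-only path access. */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        FILE *f = fopen("/var/www/htdocs/index.html", "r"); /* still allowed */
        if (f)
            fclose(f);
        /* fopen("/etc/passwd", "r") would now fail with ENOENT, and
         * something like socket() would kill the process outright. */
        return 0;
    }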
OpenBSD doesn't have jails. Jails take effort to setup. It's much easier to just run the malware instead of going through the effort of making a jail for it.
PRISM revelations showed that the state worked together with large companies like Google and Apple, MS, Facebook etc to gather information. The CIA also had backdoors into popular mobile and desktop OSes. I'm afraid we can't trust any device right now.
Law enforcement has never had problems with iPhones. iPhones in the default configuration back up all data to iCloud with Apple keys, allowing Apple and the FBI to read all of the photos and messages on a device at any time, without the device.
The "Apple vs FBI" thing was a coordinated PR campaign following the Snowden leaks to salvage Apple's reputation.
The price of an Android zero day has been slightly higher than the price of an iOS zero day since late 2019. Make of that what you like
https://zerodium.com/program.html
I suspect remarkably few people are qualified to objectively say which is more secure. If you're an expert on one, you're unlikely to be an expert on the other.
Security is multidimensional. It's unlikely there will be a platform that's more secure from every possible angle. What's secure for you might not be secure for a less technical person, or a world traveler.
In terms of privacy, Android is "compromised" by default, i.e. Google collects and stores a ton of private information about you. I believe Apple used to be much better, and still is, but getting worse.
> In terms of privacy, Android is "compromised" by default, i.e. Google collects and stores a ton of private information about you.
Apple also does the same. Also, this only applies if you're running an Android device "out-of-the-box". Fortunately, there exist AOSP forks that mitigate this type of intrusion (e.g. GrapheneOS).
iOS is a proprietary OS making security research unreasonably difficult with new setbacks on every new version. It can only be regarded as reasonably private and secure if you trust the Apple marketing team.
My understanding is that Apple has gotten a lot better about this with their bug bounty payouts and providing debug hardware to researchers, and it’s not like there’s not a ton of proprietary code running on most consumer android devices.
I would also assume the fact that their vertical integration all the way down to silicon is an advantage here as well.
In Android you at least have the choice to run a fully open source OS and open source apps, albeit with some driver blobs.
With the exception of the blobs, everything on Android is auditable.
Meanwhile very little of MacOS or iOS is auditable.
Personally I do not use or trust any of the above, but if forced to choose Android is worlds ahead of iOS in terms of publicly auditable privacy and security.
You cannot form reasonable confidence something is secure unless it can be readily audited by yourself or capable unbiased third parties of your choosing. This means source code availability is a hard requirement for any security claims. Even if you had teams de-compile everything you could never keep up with updates.
Not all open source code is secure, but all secure code is open source.
The bug bounty is pretty hard to actually get access to, there’s still no source outside of the kernel, and the Security Research Devices are really hard to get access to. You have to be someone they’ve heard of, in a country they approve of, you can’t move the device around, and you have to sign your life away to get it for 12* months.
What @Irvick is talking about is the fact that you have more freedom to test the security in an Android than in iOS, such as being able to flash other systems.
Isn’t most of the value here in not allowing sideloading? In iOS your grandma/child cannot be tricked into clicking “allow apps from untrusted sources”, which is how most breaches happen.
If the apps are sandboxed, how can installing an app cause breaches? As far as I know, iOS apps are sandboxed.
Either the sandbox is very weak and Apple instead relies on App Store audits, or they disallow users installing apps outside the app store to protect their 30% tax that makes them a LOT of money.
The malicious app, even signed off the App Store, could also exploit unpublished vulnerabilities to gain elevated access and not need to ask for permission, even or especially if it's not a full sandbox escape.
The problem is that third-party OEMs don’t have to run AOSP, they can easily replace any and all code with malicious call-home backdoors, and still pass CTS tests.
As far as I can tell, there is no meaningful protection in place to prevent OEMs from poisoning the Android well (and the Android brand), even without considering the black box firmware running on wifi/BT/LTE/5G modems.
Most of your bullet points are reinventions of standard technology or incidental complexity.
Cryptographic verification of the boot chain with Hardware root of trust are real. Heavily sandboxed userspace is real. Everything else would seem to be a reimplementation of common best practices (disk encryption), or a mitigation of a self-created problem (there shouldn't be binary driver blobs running on the main CPU to begin with).
And from what I remember, a plain AOSP install seemed to still phone home to Google to check for Internet connectivity and whatnot. It's awfully hard to put my faith in an operating system primarily developed by a surveillance company, as the working assumptions are a drastic departure from individualist computing. And trying to question those assumptions with independent devs is often dismissed (for a particularly striking example, see LineageOS/"Safetynet").
> Everything else would seem to be a reimplementation of common best practices...
True, but those protections are enabled by default (on Pixels at least). Users don't have to do anything here.
> And from what I remember, a plain AOSP install seemed to still phone home to Google to check for Internet connectivity and whatnot.
You're not wrong, but GrapheneOS and CalyxOS are valid options, if you don't trust the ROM Pixel ships with. Even with a custom ROM you're left trusting the OEM. It'd be nice if we could have an open hardware / open firmware Android, but it hasn't happened, yet.
Sure, but full disk encryption was also enabled on my Mom's Ubuntu laptop 15 years ago, because I chose the correct options when I set it up. What commercial vendors offer out of the box has never been a good yardstick for talking about security features, and it's only gotten worse with the rise of the surveillance economy.
My fundamental problem with Graphene/Calyx is that I don't trust the devs have enough bandwidth and resources to catch all the vulnerabilities created upstream, especially with the moving target created by rapid version churn. For example, Android is finally getting the ability to grant apps scoped capabilities rather than blanket full access permissions, which is actually coming from upstream - the Libre forks should have had these features a decade ago, but for their limited resources.
Concretely, what discourages me from going Pixel is the Qualcomm integrated baseband/application chipsets. I've heard that Qualcomm has worked on segmenting the two with memory isolation and whatnot, but their history plus the closed design doesn't instill confidence. Yet again it's the difference between the corporate perspective of providing top-down relativist "security" rather than the individualist stance of hardline securing the AP against attacks from the BB.
Pragmatically, I know I should get over that and stop letting the perfect be the enemy of the good (I'm currently using a proprietary trash-Android my carrier sent me. The early 4G shutdown obsoleted my previous Lineage/microG). But every time I look at Pixels it seems there's so damn many "current" models, none stand out as the best but rather it's a continuum of expensive versus older ones (destined to become e-waste even sooner due to the shameless software churn). And so I punt.
> What commercial vendors offer out of the box has never been a good yardstick for talking about security features
It absolutely is. Default settings matter a lot!
It's great to have extra security features too. But even experienced users won't change defaults if they have too much cost. If things are turned on by default then those costs diminish because other software has to work within them.
My point was that when talking about commercial security offerings, the security models generally end up relying on "trust the company", which has never worked out well. So corporate offerings finally coming around to having full disk encryption is more catching up with something they lacked, rather than advancing the state of the art. (Contrast with Android's process sandboxing, which seems like a genuine advancement and could be worthwhile to port to desktop Linux.)
In the context of talking about individual actions one can take to trust their personal machine, it's reasonable to assume this involves appropriately configuring your software environment. If this was instead a thread about what products would be good to recommend to your parents, then what was commercially available off the shelf would be relevant.
These are all part of the operating system and unrelated to the biggest attack surface: apps.
Users install apps and grant them full access all the time. Android phones are wide open no matter how secure the operating system itself is, because the security model for apps is so weak from a user experience point of view.
Windows does all of those, in addition to fine grained access controls. I would go so far as to say that the Chromium sandbox implementation is better than on Android because of the ability to completely de-privilege processes.
> Windows does all of those, in addition to fine grained access controls.
Aren't those still user-based? So, in the typical case of a single-user computer, a rogue app that I run with my account would be able to access my data.
There's the protected folders thing (not sure about the name) since a few versions ago, which attempts to block random apps from accessing random folders.
But it's opt-in (folders are not protected by default, you have to add them one by one). It's also all or nothing: either a given app is "untrusted" and it can't access any of the protected folders, or it's "trusted" and it can access all of them.
I can't say I trust Photoshop to only touch my pictures folder, but not my .ssh folder.
Windows struggles with feature adoption though. Win11 helped with the TPM requirement and features on by default, but MSIX apps are still underrepresented so userspace sandboxing is weaker. Windows virtualization-based security is great though, imo it's a significant advantage over Android
MSIX doesn't implement sandboxing. Apps can opt in to being sandboxed via that tech, but you can also write totally unsandboxed apps. There's also a very light weight app container mode called (internally) Helium which just redirects some filesystem and registry stuff, but the goal is to make uninstalls clean, not security.
The Windows kernel does offer an impressive number of options to lock down processes. Look at the Chrome sandbox code some time. The Windows API is huge but you can really lock it down a lot. The macOS sandbox architecture is, however, the best. The Linux approach is sadly in third place.
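As a taste of that lockdown, one of the building blocks the Chrome sandbox uses on Windows is a restricted token for the child process. The sketch below shows just that step, from the public API; Chrome layers job objects, low/untrusted integrity levels, alternate desktops and win32k lockdown on top, so treat this as a hedged illustration rather than the real thing.

    /* Launch a child with a privilege-stripped copy of our own token.
     * Build: cl restricted.c advapi32.lib */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE self = NULL, restricted = NULL;

        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_DUPLICATE | TOKEN_ASSIGN_PRIMARY | TOKEN_QUERY,
                              &self)) {
            fprintf(stderr, "OpenProcessToken failed: %lu\n", GetLastError());
            return 1;
        }

        /* Drop every privilege the token holds except SeChangeNotify. */
        if (!CreateRestrictedToken(self, DISABLE_MAX_PRIVILEGE,
                                   0, NULL, 0, NULL, 0, NULL, &restricted)) {
            fprintf(stderr, "CreateRestrictedToken failed: %lu\n", GetLastError());
            return 1;
        }

        STARTUPINFOA si = { sizeof si };
        PROCESS_INFORMATION pi;
        char cmd[] = "notepad.exe";   /* stand-in for a renderer-style child */
        if (!CreateProcessAsUserA(restricted, NULL, cmd, NULL, NULL, FALSE,
                                  0, NULL, NULL, &si, &pi)) {
            fprintf(stderr, "CreateProcessAsUser failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }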
App silos are pretty neat, I also find it kind of shameful that server silos are locked down to server SKUs. I would figure there could be some significant benefit to using them in browser renderer processes.
I think they recently added a way to opt out of it entirely. That said, it does very little and because it's not a sandbox it's easy to 'escape':
• Copy an EXE to %TEMP% and run it from there.
• Use a Win32 API flag to start a process (of any path) outside the app container.
There are only a few cases where Helium is an issue and they're all easy to work around. One example is writing a log file to your app private UserData directory and then opening Notepad on it. If you do it the 'naive' way then Notepad won't be able to find it, because you'll pass a redirected path which it can't see. The fix is simply to resolve the redirect before passing the argument to Notepad using a Win32 API call.
I know we talked about Conveyor a few days ago so I'll note here that if you package a JVM app with it then the %LOCALAPPDATA% and %APPDATA% environment variables are rewritten to their target locations automatically, so as long as you use them to decide where to write app private files then their paths will be visible to other apps, avoiding the Notepad issue. This doesn't apply to native or Electron apps though, at least not at this time.
Does LineageOS have a place alongside GrapheneOS and CalyxOS?
Asking because I base hardware purchases on whether LineageOS is available for the device and wondering whether I should restrict further to GrapheneOS or CalyxOS.
There's overlap between CalyxOS and LineageOS. CalyxOS (privacy-focused ROM, currently Fairphone and Pixel-only) has different goals to LineageOS (Android ROMs for as many phones as possible); while GrapheneOS is a security-focused distribution. DivestOS is another credible alternative.
I didn't know that Pixels allowed custom ROMs to re-lock the bootloader with custom keys. Digging into it, I found that even OnePlus and Fairphone might have that feature.
What do you want to verify exactly? Do you think Apple is lying about what lockdown mode does? Why would they do that?
Could you at least say what your opinion is based on?
But it is possible to verify what it does, the same way you would for an android phone (I.e. not just look at the source code and hope that it matches what’s running on your device).
https://youtu.be/8mQAYeozl5I
At 26:42 he talks about lockdown mode. Would be a bit weird if he lied about the impact lockdown mode has.
What if he’s wrong? Computers do things their programmers don’t expect them to literally all the time. Security bugs generally come from a mistaken assumption about how something behaves.
He doesn’t have to be a liar to be telling you untruths about how it works.
My android comment was taking yours, turning it around and taking it to the extreme to illustrate a point.
And no, I never asked why we would need to verify the security researcher’s claims (but sure, you should).
1. Dma54rhs says Apple’s (!) claims supposedly can’t be verified and that you need to take Apple’s word for it
2. I ask why not, provide a link to a talk about iOS security by a renowned security researcher, as both an example of how to verify Apple’s claims (reverse engineer iOS) and to lend some credence to the point that they are likely to be true
3. You talk about the researcher and/or programmers being wrong by replying with an “orthogonal” comment containing “whataboutism”.
Edit: Could we please talk about the actual topic? Do you or someone else know about instances where Apple lied about mitigations like lockdown mode before? Maybe there’s a long history of it and I just don’t know. Or is there some other flaw in my logic?
There is always the argument about hidden bugdoors, backdoored compilers or whatnot. But that’s not practical, by then you might as well stop using technology.
If Apple can’t be trusted then why can you trust google? Or Qualcomm?
You’re the one derailing from the actual topic, which was broadly can we trust our devices and specifically can we trust iOS, by muddying the water with what-about-android. The question wasn’t which we can trust more, the question was whether and how much we can trust Apple.
You can’t verify that iOS is doing what Apple says that it’s doing, because you can’t read the code. You can’t trust that Apple perfectly understands their product, because it’s extremely complicated, and therefore you can’t just take their word for it. I’ll state that the check here, although it’s painfully obvious, is that exploits happen. Researcher opinions are fine, but facts are better.
None of this is in any way contentious or new, it’s the exact debate about open-vs-closed that we’ve been having since the beginning of software.
Considering the security UI layers control device access, I don't think AOSP or Android in general (with its many vendor customisations) is the most trustable.
Knowing the PUK-code for a SIM card you own (and can insert/hotswap) was all you needed to unlock practically any Android phone until recently. Granted, this got reported and then fixed, but it doesn't matter how good the TCB is if the front door is wide open.
I'd say that actual trust is hard to come by because you cannot trust what you see, since what you see is merely what is 'presented'. If a device says something like "the only Root CA I trust to sign my stage 1 boot loader is X", I still don't know if it is lying or not. I also can't do something like replace a SoC BROM since it's fused RO or simply a ROM (and not EPROM or Flash), or because the sources for that are owned by the SoC manufacturer, which isn't AOSP or Google, and I cannot inspect, build and run them. Hell, we can make this worse: even if I could flash it, who's to say that the memory I flashed is also the memory that is read when the SoC comes out of reset? What if there is a separate area on the die that has a different ROM that nobody told us about.
So trust isn't going to be purely based on "because this is the design we present you", but has to be based on non-technical factors and independent research. The former is mainly based on soft factors, and the latter is hard to come by and often just based on individual devices, not even an entire SKU release.
Architecturally, it seems to me that Apple with their own SoCs, bootrom, RTKit, iBoot etc. has a stronger platform trust case because they actually own the stack all the way with nobody else having a say about it. Especially with the spreading around of individually signed and verified hardware ROMs that don't even run on the same chips in the same device, a compromise would be very limited in scope. The only other hardware/software combination that would come close is the aforementioned Pixel devices since Google has almost all of the stack there as well.
On the x86 side it's a mess that will never be resolved; nearly every technology that was supposed to make it more trustworthy has been used to reduce trust and install persistent access outside the view of the OS. AGESA, IME, TXT, SGX, even the SMM implementations before any of those came along had problems that essentially circumvent any trust that was built up by other means. Even the hardcoded certificate signature hashes in the CPUs are coming in range of easy brute forcing (SHA1 mostly), which means that entire decades of systems can now think they are running trusted software from the reset vector all the way to the OS, just because a signature hash was using a crappy algo that was never intended to be used that way.
Windows is probably only ever going to be boot-trustable (but not OS-trustable) on ARM, just like macOS root-of-trust is pointless on anything before the T2 chip (and M1 later on). For Linux, it's about as trustworthy as you want to make it, but putting in the work is a PITA, so unless a distro or derivative (Qubes, ChromeOS etc.) does it for you, most users leave it as-is (untrusted).
> Architecturally, it seems to me that Apple with their own SoCs, bootrom, RTKit, iBoot etc. has a stronger platform trust case because they actually own the stack all the way with nobody else having a say about it.
That's why I specifically and only mention Pixel in my comment. Google's been doing their own hardware since Pixel 6. (iirc) Daniel Micay, creator of GrapheneOS, once said that Google shares firmware / proprietary code for the Pixel hardware "if you ask nicely enough".
Like you point out, eventually one is left trusting a BigCo or, worse, assuming a flawed implementation is secure. Though it isn't for want of trying. Or, to put it another way, "the best among the rest".