Take any other praxis that's reached the 'appliance' stage and that you use in your daily life: washing machines, ovens, coffee makers, cars, smartphones, flip phones, televisions, toilets, vacuums, microwaves, refrigerators, ranges, etc.
It takes ~30 years to optimize the UX to make it "appliance-worthy"; everything afterwards consists of edge-case features, personalization, or regulatory compliance.
All of the other examples you gave are products constrained by physical reality, with a small, countable set of use-cases. I don't think computer operating systems are simply mature, appliance-like products that have been optimized down to their current design. I think there is a lot of potential that hasn't been realized, because the very few players in the operating system space have been hill-climbing toward a local maximum set by path dependence 40 years ago.
To be precise, we're talking about "Desktop Computers" and not the more generic "information appliances".
For example, we're not remotely close to having a standardized "watch form-factor" appliance interface.
Physical reality is always a constraint. In this case, keyboard+display+speaker+mouse+arms-length-proximity+stationary. If you add/remove/alter _any_ of those 6 constraints, then there's plenty of room for innovation, but those constraints _define_ a desktop computer.
That's just the thing: desktop computers have always been, in an important way, the antithesis of a specialized appliance, a materialization of Turing's dream of the Universal Machine. It's only in recent years that this universality has come under threat, in the name of safety.
I wouldn't say the driver is "safety". What's happened is that a few highly specialized symbolic-manipulation tasks now have enough market value that they can demand highly specialized UX to optimize task performance.
You can also see this from the reverse direction (analog -> digital) in the evolution of hospital vital-sign monitors and the classic "six pack" of gauges used in both aviation and automobiles.
I meant the universality (openness) of desktop computers comes under threat, as the "walled garden" model seeks to make the jump from mobile to desktop.
Ah yes, I agree. I run macOS as my daily driver, but otherwise barely touch the Apple ecosystem. Apple laptops were just the best hardware to run a Unix-ish OS (BSD) on.
Now with performant hypervisors, I just run a bunch of Linux VMs locally to minimize splash-zone and do cloud for performance computing.
I'll likely migrate fully to a Framework laptop next year, but I don't have time (atm) to do it. Ah, the good ol' glory days of native Linux on ThinkPads.
I can think of two big improvements to desktop GUIs:
1. Incremental narrowing for all selection tasks like the Helm [0] extension for Emacs.
Whenever there is a list of choices, all choices should be displayed, and this list should be filterable in real time by typing. This should go further than what Helm provides; e.g., you should be able to filter a partially filtered list in a different way. No matter how complex your filtering, all results should appear within 10 ms or so. This should include things like full-text search of all local documents on the machine. That will probably require extensive indexing, so it needs to be tightly integrated with all software so the indexes stay in sync with the data. (See the first sketch after this list.)
2. Pervasive support for mouse gestures.
This effectively increases the number of mouse buttons. Some tasks are fastest with the keyboard and some are fastest with the mouse, but switching between the two costs time. Increasing the effective number of buttons increases the number of tasks that are fastest with the mouse and reduces the need for switching. (See the second sketch below.)
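To make the narrowing idea concrete, here's a minimal Python sketch of stackable, re-filterable narrowing. The `Narrower` class and its API are hypothetical; the point is just that a partially filtered list can be narrowed again by a different predicate, then widened back:

```python
from dataclasses import dataclass, field
from typing import Callable

Predicate = Callable[[str], bool]

@dataclass
class Narrower:
    candidates: list[str]
    stack: list[list[str]] = field(default_factory=list)  # narrowing history

    def narrow(self, pred: Predicate) -> list[str]:
        """Push the current view, narrowed by a new predicate."""
        current = self.stack[-1] if self.stack else self.candidates
        self.stack.append([c for c in current if pred(c)])
        return self.stack[-1]

    def widen(self) -> list[str]:
        """Pop the last narrowing step (like deleting a filter)."""
        if self.stack:
            self.stack.pop()
        return self.stack[-1] if self.stack else self.candidates

# Usage: substring narrowing first, then a different filter on top of it.
n = Narrower(["report.pdf", "report_draft.txt", "photo.png", "notes.txt"])
n.narrow(lambda c: "report" in c)        # type-to-filter
n.narrow(lambda c: c.endswith(".txt"))   # re-filter the filtered list
print(n.stack[-1])                       # ['report_draft.txt']
```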
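And a minimal sketch of classic mouse-gesture recognition (the kind Opera popularized): quantize pointer movement into the four cardinal directions, collapse repeats, and look the resulting stroke string up in a user-configurable table. The gesture bindings here are made-up examples:

```python
import math

GESTURES = {              # stroke string -> action name (illustrative only)
    "L": "back",
    "R": "forward",
    "DR": "close-window",
    "UD": "reload",
}

def quantize(dx: float, dy: float) -> str:
    """Map a movement vector to U/D/L/R by its dominant axis."""
    if abs(dx) >= abs(dy):
        return "R" if dx > 0 else "L"
    return "D" if dy > 0 else "U"   # screen y grows downward

def recognize(points: list[tuple[float, float]], min_step: float = 16.0) -> str | None:
    """Turn a pointer trail into a stroke string and look it up."""
    strokes: list[str] = []
    px, py = points[0]
    for x, y in points[1:]:
        dx, dy = x - px, y - py
        if math.hypot(dx, dy) < min_step:    # ignore jitter
            continue
        d = quantize(dx, dy)
        if not strokes or strokes[-1] != d:  # collapse repeated directions
            strokes.append(d)
        px, py = x, y
    return GESTURES.get("".join(strokes))

# Usage: a drag down then right maps to "close-window".
trail = [(0, 0), (0, 30), (0, 60), (30, 60), (60, 60)]
print(recognize(trail))  # close-window
```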
I use Emacs as my daily driver, so point well taken w.r.t. incremental drill-down, though I'd argue that's not just a "desktop thing": you see it in the Contacts manager of every smartphone.
I see "mouse gestures" as merely an incremental evolution for desktops.
Low latency capacitive touch-screens with gesture controls were, however, revolutionary for mobile devices and dashboards in vehicles.
Computers are not a kitchen. They are clay, they are Lego, they are jungles of imagination. It's true that our current trajectory devolves their universality to a set of discrete isolated applications == multiplexing a set of appliances on one device.
That's an economic win, sure, but it's tragic if we fail to unlock more of their flexibility for "end users"! IMHO that's one of the biggest unsolved problems of computer science: it takes so much professional learning to unlock the real potential, and even that fails us programmers much of the time. (How frequently do you say "I'll solve it myself" and the time invested actually turns out to be worth it? In the words of Steve Krouse, we've not quite solved "End-programmer Programming" yet.)
I do want FOSS to follow through and offer live inspectors for everything, but no, I no longer believe "learn to code" is the salvation for users. We're nowhere near it being worth their time; we actually went downhill on that :-( Conversational AI and "vibe using" will play a role, but that's more like "adversarial interoperability"; it doesn't cover what I mean either.
I want cross-app interoperability to be designed in, as something everyone understands users want. I want agency over app state: snapshot, fork, rewind, diff, etc. (see the sketch below). I want files back (https://jenson.org/files/), and more¹. I want things like versioning and collaboration to work cross-app. I want ideas and metaphors we haven't found yet (I mean it when I call this an unsolved problem of CS!) that would unlock more flexible workflows for users, with less learning curve. The URL was one such idea: it unlocked so much coordination by letting you share place[+state] in any medium.
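A minimal sketch of that agency-over-app-state idea, assuming a hypothetical design where apps keep state as an append-only event log; snapshot/rewind/fork/diff then fall out almost for free (the diff here is naive, assuming one log is a prefix of the other):

```python
from dataclasses import dataclass, field

@dataclass
class AppState:
    log: list[str] = field(default_factory=list)  # events, oldest first

    def apply(self, event: str) -> None:
        self.log.append(event)

    def snapshot(self) -> int:
        """A snapshot is just a position in the log."""
        return len(self.log)

    def rewind(self, snap: int) -> None:
        self.log = self.log[:snap]

    def fork(self) -> "AppState":
        return AppState(log=list(self.log))

    def diff(self, other: "AppState") -> list[str]:
        """Naive diff: assumes self's log is the shared prefix."""
        return other.log[len(self.log):]

# Usage: branch a session, try something, compare, roll back.
s = AppState()
s.apply("open report.txt")
snap = s.snapshot()
experiment = s.fork()
experiment.apply("delete paragraph 3")
print(s.diff(experiment))  # ['delete paragraph 3']
s.rewind(snap)             # original session untouched
```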
I want software to be malleable (https://malleable.systems/) in ways meaningful to users. I want all apps to expose their command set for OS-level configurability of input devices and keyboard shortcuts (Steam Input on steroids; a sketch follows below). I want breakthroughs in separating "business logic" from all the shit we piled on top, so users can "view source" and intervene on the important stuff, like they can in spreadsheets. (I want orders-of-magnitude smaller/simpler shit, too.) I want the equivalent of Unix pipes' combinatorial freedom² in GUI apps. I want universal metaphors for automation, a future not unlike what Yahoo Pipes promised us (https://retool.com/pipes), though I don't know in what shape.

I want previewable "vector actions": less record-macro-and-pray-the-loop-works, more like multiple-cursor editing. I want more apps to expose UX like Photoshop layers, where users are more productive manipulating a recipe than they'd be directly manipulating the final result. (https://graphite.art/ looks promising, but that's again for visual stuff; we need more universal metaphors for "reactive" editing. I want a spreadsheet-like interface to any directory+Makefile. I want the ability to include "formulas" everywhere³.) I want various ideas from Subtext (https://www.subtext-lang.org/retrospective.html).
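A minimal sketch of the "expose your command set" idea; the registry API and event names are entirely hypothetical. The point is that the app declares *what* it can do while OS-level config owns *which input* triggers it:

```python
from typing import Callable

class CommandRegistry:
    """Per-app command table the OS could enumerate and invoke."""
    def __init__(self) -> None:
        self.commands: dict[str, Callable[[], None]] = {}

    def register(self, name: str, fn: Callable[[], None]) -> None:
        self.commands[name] = fn

    def invoke(self, name: str) -> None:
        self.commands[name]()

# The app declares what it can do...
app = CommandRegistry()
app.register("document.save", lambda: print("saved"))
app.register("view.zoom-in", lambda: print("zoomed"))

# ...and the user's OS-level config decides which input triggers it.
user_bindings = {"Ctrl+S": "document.save", "MouseGesture:U": "view.zoom-in"}

def on_input(event: str) -> None:
    if (cmd := user_bindings.get(event)) and cmd in app.commands:
        app.invoke(cmd)

on_input("Ctrl+S")          # saved
on_input("MouseGesture:U")  # zoomed
```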
I want user access to the fruits of 100% reproducibility, including full control of software versions (presently reserved to VM/Docker/Nix masters). I want universal visibility into app behavior: what it accessed, what it changed on your computer/in the world, everything it logged. Ad blockers actually offer two ways to inspect/intervene, starting from the UI ("I don't want to see this") and starting from network behavior ("I don't want it to contact this"), and both give users meaningful agency!
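As a toy illustration of those two intervention points (the rule formats here are hypothetical; real ad blockers use far richer matching):

```python
# UI-level rules: "I don't want to see this" (hide matching elements).
ui_rules = {"#sidebar-ads", ".sponsored"}

# Network-level rules: "I don't want it to contact this" (block matching hosts).
net_rules = ["tracker.example.com", "ads.example"]

def should_render(element_selector: str) -> bool:
    return element_selector not in ui_rules

def should_connect(host: str) -> bool:
    return not any(pattern in host for pattern in net_rules)

print(should_render(".sponsored"))            # False: intervened at the UI layer
print(should_connect("tracker.example.com"))  # False: intervened at the network layer
```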
Desktop Computers are no exception to that ~30-year appliance curve.