> Last time I saw stats on Linux desktop marketshare, somebody said it was up to 6%. That's astonishing.
I wouldn't get too excited about that. It might just be that people are moving off of desktops entirely and now only own mobile devices, a market where Linux may as well not exist (excluding Android). The number goes up because the people who run Linux desktops tend to be hobbyists/enthusiasts, and so are less likely than everyone else to pivot to using only a phone.
I have mixed feelings about it. On the one hand, it's personally astonishing to me that it's only 6%. I do buy the corporate explanation. I even buy the gaming explanation (despite only heavily online games being 'better' on Windows -- from a developer's perspective). But everyone else? I can only assume it has to do with how little personal computing is done today outside of smartphones.
One possible advantage of this approach that no one here has mentioned yet: it would allow us to put RAM on the CPU die (taking advantage of the greater memory bandwidth) while still allowing for upgradable RAM.
GPU RAM is high speed and power hungry, so there tends to not be very much of it on the GPU card. That's part of why we keep increasing bus bandwidth: so the CPU can touch that GPU RAM at the highest possible speed.
It makes me wonder, though, whether a NUMA model for the GPU is a better idea. Add more lower-power, lower-speed RAM to the GPU card, and let the CPU preload as much data as possible onto it. Then, instead of pushing textures from the CPU over the PCI bus into the GPU, why not just send a DMA request to the GPU and ask it to move the data from its low-speed memory to its high-speed memory?
It's a whole new architecture but it seems to get at the actual problems we have in the space.
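To make the idea concrete, here's a toy Python sketch of the proposed flow. Everything here is hypothetical -- `GpuCard`, `preload`, and `dma_request` are made-up names modeling the control flow, not any real driver API:

```python
# Toy model of the proposed two-tier GPU memory: the CPU preloads assets
# into a large pool of slow on-card RAM up front, then later asks the card
# to DMA them into fast RAM on demand -- no second trip over the PCI bus.

class GpuCard:
    def __init__(self, slow_size, fast_size):
        self.slow = bytearray(slow_size)  # big, low-power, low-bandwidth pool
        self.fast = bytearray(fast_size)  # small, high-bandwidth pool

    def preload(self, offset, data):
        """CPU-side upload over the bus into the slow pool (done once, at load time)."""
        self.slow[offset:offset + len(data)] = data

    def dma_request(self, src, dst, length):
        """On-card copy, slow -> fast; the CPU only sends this small command."""
        self.fast[dst:dst + length] = self.slow[src:src + length]

card = GpuCard(slow_size=1 << 20, fast_size=1 << 16)
texture = bytes(range(256))
card.preload(0, texture)        # happens once, when the asset is loaded
card.dma_request(0, 0, 256)     # happens when the texture is actually needed
print(bytes(card.fast[:256]) == texture)  # True
```

The point of the sketch is the asymmetry: the expensive bus transfer happens once into cheap memory, and the latency-sensitive move into fast memory is a tiny on-card command rather than another full transfer.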
Normally, I'd agree with you -- defaults are a very, _very_ powerful thing.
But if you're using Google's web tools, they make it (too) easy to download their apps and push you in that direction in a million little ways. For example, GMail's native iOS app will either open a link in a WKWebView or Chrome (if it's not installed, it'll prompt you to install it), but you have to jump through some hoops if you want to open a link in the system's default browser. Similarly, if you're searching for something via google.com, they'll put up a prompt to download their search app, with the default "Continue" option taking you to the store rather than continuing with your current task (and then click-jack the back button).
Firefox's "answer" to profiles is to run essentially two (or more) copies of the browser rather than only copying the profile-specific parts of each profile. This leads to a lot of wasted CPU cycles and RAM and is a very suboptimal solution compared to what Chromium and Safari do these days, not to mention that the ability to create and switch profiles is not included in the UI by default and requires an extension to access.
I think you may be mixing up profiles and containers.
Profiles do have a built-in UI at about:profiles or by launching Firefox with -P, neither of which requires an extension. Admittedly this UI is a bit basic, but a better version is being rolled out (https://support.mozilla.org/en-US/kb/profile-management). Running multiple profiles side by side does indeed involve running multiple instances of the browser.
Containers are an internal API and need an extension like Multi-Account Containers to provide a GUI (though that is an official extension by Mozilla); however, they don't require running multiple copies of the browser.
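For reference, the built-in profile manager can also be reached from the command line (`work` below is a stand-in for whatever profile name you've created):

```shell
# Open Firefox's profile manager window
firefox -P

# Launch a named profile alongside an already-running instance
firefox -P work --no-remote
```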
Just tried it out - definitely an improvement UX-wise, but it still essentially runs two copies of Firefox rather than only isolating profile-specific features.
>This feature is separate from the about:profiles experience, and we currently have no plans to change how about:profiles works. You may continue to use about:profiles if that is better for your workflow.
Same here - actually, my PC broke in early 2024 and I still haven't fixed it. I quickly found out that without gaming, I no longer have any use for my PC, so now I just do everything on my MacBook.
Centralia, PA has a mine that has been on fire since 1962 and will be on fire for at least another 250 years [0] - the town had to be evacuated in the 80s because the fire caused sinkholes to open up randomly and suddenly under people's feet. Scary.
Just drove through there a couple weeks ago. There are basically crevices in the ground exhausting hot humid drafts. Almost uncomfortably hot for a hand, and will fog up glasses.
30 years ago, I would have said the same thing. But right now solar is seeing technological advances at an exponential rate, such that by the time we build a nuclear power plant, get it approved, and get it running, solar will be both cheaper and safer while using less space.
So you claim, along with that one "paper" from 2019 that calculates a worst case using 2019 battery prices. The problem is that battery prices are falling through the floor, and six years makes all the difference.