No, dealing with tables was like trying to build a house out of tempered glass.
With CSS Grid, I can tell each element which area or column+row to occupy.
If I add or remove a random element, the rest of the elements stay in the correct place.
But do that with a table and you end up trying to glue your house back together shard by shard, whilst trying not to cut yourself or break things further.
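For anyone who hasn't used it, the explicit placement described above looks roughly like this (a minimal sketch; the class names and areas are made up):

```css
/* Each child is pinned to a named area, so adding or removing a
   sibling never shifts the others (hypothetical selectors). */
.page {
  display: grid;
  grid-template-areas:
    "header header"
    "nav    main"
    "footer footer";
  grid-template-columns: 12rem 1fr;
}
.page > header { grid-area: header; }
.page > nav    { grid-area: nav; }
.page > main   { grid-area: main; }
.page > footer { grid-area: footer; }
```

Delete the `<nav>` and the header, main, and footer stay exactly where they were; a table's cells would all reflow.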
> If I add or remove a random element, the rest of the elements stay in the correct place.
This complaint highlights how absurdly not fit-for-purpose HTML+CSS actually is. Okay, you may want to do "responsive" design, but the semantic layout is fixed, so you try to contort a styling engine into pretending to be a layout engine when in reality it is three stylesheets in a trenchcoat.
> Okay, you may want to do "responsive" design, but the semantic layout is fixed, so you try to contort a styling engine into pretending to be a layout engine when in reality it is three stylesheets in a trenchcoat.
I need to write this up properly, but one of my bugbears with responsive design is that it became normalised to push the sidebar down below the content on small screens. And if you didn't have a sidebar, to interweave everything in the content no matter what screen size you were viewing on.
What I want is a way to interleave content and asides on small screens, and pull them out into 1+ other regions on larger screens. Reordering the content on larger screens would be the icing on the cake but for now I'll take just doing it.
Using named grid-template-areas stacks the items you move to the sidebar on top of each other, so you only see one of them.
'Good' old floats get most of the way, but put the item in the sidebar exactly where it falls. Plus they're a pain to work with overall: https://codepen.io/pbowyer/pen/jEqdJgP
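A rough sketch of the "interleave on small screens, pull out on large" behaviour using auto-placement instead of named areas (selectors are illustrative, untested): giving each aside a fixed column but an auto row puts the asides in separate rows rather than stacking them the way a single named area does.

```css
/* Small screens: one column, asides interleave in source order. */
.article {
  display: grid;
  grid-template-columns: 1fr;
}

/* Large screens: every aside moves to a second column. An auto
   row plus a fixed column means each aside lands in its own row,
   avoiding the overlap you get from one named sidebar area.     */
@media (min-width: 60rem) {
  .article {
    grid-template-columns: 2fr 1fr;
    column-gap: 2rem;
  }
  .article > aside       { grid-column: 2; }
  .article > :not(aside) { grid-column: 1; }
}
```

Rows still track the content, so you can get gaps next to tall asides; closing those needs row spans or `grid-auto-flow: dense`, which is where this approach stops being simple.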
>This complaint highlights how absurdly not fit-for-purpose HTML+CSS actually is. Okay, you may want to do "responsive" design, but the semantic layout is fixed,
This "not fit for purpose" may in fact be a set of historically superseded usages that are still baked in, a casualty of the relatively rapid change of the various platforms that must interact with and use these technologies: the specification version of "technical debt".
That is to say, some subsets of the numerous technologies can be used to construct something fit for the purpose you are describing; but as a general rule, anything constructed as a solution will probably also be using other subsets not fit for that particular purpose, though maybe fit for some other purpose.
Yes, it's harder and takes longer to read code. Hopefully this reality will propagate to leadership.
But there's something more: the benefit of a first-look perspective. With these tools, that's lost.
Also, it seems like we've entered a phase two of sorts, in the sense that we solve LLM #1's issues by running its output by #2. So writing and reviewing. I wonder what #3 will be. Maybe confirming.
That was my first thought as well. I've spent time on those cars on the Coast Line. They used to indicate the next stop, but it broke at some point. I don't ride much anymore. I'm not surprised what's pictured is NJ TRANSIT, the fallback. Would be nice to have faster trains someday. Until then, crack a beer and enjoy the ride.
Not sure Microsoft realizes the damage they're doing to the Windows brand. My first experience with Windows 11 was figuring out some dumb workaround to use a local account.
When I think back to Windows 7, the good feeling isn't nostalgia. It was the last user-focused Windows.
Maybe someone will develop a new user-focused OS that's somehow compatible with Windows programs. Or better yet, maybe Microsoft will realize very important parts of Windows are going downhill and remember what made Windows great.
I'm not convinced Microsoft cares about the Windows market share in consumer PCs or the small amount of money they make from selling Windows licenses to regular consumers.
If they did, Windows wouldn't be so usable unactivated, and the Massgrave activation stuff would have been patched already.
They built up their almost-monopoly when it mattered in the 90s and the 2000s, and now their market position is basically secured.
For Microsoft's purposes the main way of making money from Windows is from business and enterprise sales, and those sales will exist pretty much indefinitely.
The reason they don't meaningfully enforce their copyright on consumer PCs is precisely because they do care about their market share. If you buy a computer with Windows (or get it installed) in what I suspect is the overwhelming majority of the world, it's an 'illegitimate' copy and it works 100% fine, including operating with Microsoft's servers.
As you mentioned, they could trivially stop this if they wanted to, but they don't. Because if this were not possible, there'd be billions more PCs out there running what would most likely be Linux instead. Enabling people to use Windows without paying is a key component of their strategy of maintaining market dominance, especially on a global level.
I think the biggest 'threat' to Windows for general users has been mobile; besides that, it seems to be mostly running on momentum from the ecosystem of decades ago. The challenge is that most migrations for established users of any system take effort, and right now running activation/account-requirement bypasses is low effort compared to switching to and learning a new OS.
The framing that works for me: there doesn't seem to be much reason to move to Windows. If you were starting computing with a blank slate and could pick anything, why would you pick Windows? Most people need a mobile anyway, which serves a lot of consumer needs. Gaming is a big one if you're not happy with mobile/console, but there's the Wine/Proton on Linux route, although a subset of games won't work or has compatibility issues (from minor paper cuts to major). And then there are those who need specific Windows-only software with no alternative elsewhere.
Also note this strategy is in its fourth (or fifth?) decade and is also very successfully deployed by Adobe et al. It's also why Linux won on the headless server, though why FreeBSD didn't I'm not sure; GPL marketing at the right time, perhaps.
The same reason why Ubuntu won the server market (for a while): by capturing the home desktop/laptop market first, and then worming its way into employer environments by way of familiarity. Linux had broader driver coverage for consumer hardware; there was a time when running *BSD on fragmented consumer hardware was a crapshoot.
I said Linux won for the same reason as Ubuntu (winning the distro wars), I did not say Linux won because of Ubuntu. Ubuntu:Linux Distros::Linux:server OSes
To an extent sure, but when people that grew up as home consumers not using Windows become business leaders they won't have the brand loyalty to Microsoft that the current aging out generation does.
If Google doesn't characteristically fumble the bag, their dominance with ChromeOS in schools has the potential to pay major dividends in 10-15 years.
Windows centric software development is pretty much completely driven by business leaders 50+ years old on the young end.
A striking amount of business software runs on Windows because Microsoft was dominant during the peak PC era (e.g. 1990-2010). The companies running that stuff aren't doing so because old guys think Windows is good, they're running it because it's been built already and there's no real reason to change.
The next generation of business leaders already didn't build their companies on Windows or any other PC operating system because web apps replaced desktop apps and mobile devices overtook PCs in market share.
But it doesn't really matter to Microsoft. Microsoft isn't really the "Windows Company" anymore and hasn't been for some time. Azure, Office365, Sharepoint, etc. revenue dwarfs what Windows brings in and wouldn't be affected by Windows losing market share because everything is a web/electron client for a cloud service now.
In some ways, I suspect Microsoft views the Windows market share as more of a liability than an asset these days, because it makes them responsible for bad press events like BlueKeep and WannaCry. Business customers frequently buy support contracts with their licenses, whereas private consumers expect indefinite updates for a one time $120 fee. Given that, I wouldn't be surprised if they were intentionally letting consumer Windows slowly fade away.
Hmm, how much of the success of Azure is due to enterprise customers already being in the Windows ecosystem? And what happens when the next enterprises are not?
Around 60% of Azure VMs are Linux. Between that and WSL it sometimes seems like Microsoft is putting more effort into being a Linux company than a Windows one.
Who could have predicted that back in the Slashdot days!
I realise that a good portion of the references to the product on that page are just "Microsoft 365", but other parts seem to include "Copilot" in the product name for Microsoft's office suite.
Macs were at a bit over 10% market share in Q4 2024 [1], but it's also worth noting that the PC market is shrinking as a whole. Windows still has most of the pie, but the pie itself is getting smaller, since many find phones to be a better (and cheaper) experience than Windows, and I can't say that I blame them.
I'm curious how inflated the numbers are from business sales, since the default option there is still Windows, even if you don't actually use any software that needs it (i.e. you just need a web browser). Consumer sales of PCs are probably only going to trend downwards, and they only got a small spike from people buying PCs for COVID.
>many find phones to be a better (and cheaper) experience than Windows, and I can't say that I blame them.
With that said at least using native apps on phones is becoming more and more of a risk. If you can get away with a browser that's fine. But if you need native phone features you are at the risk of Apple/Google cutting off your entire business for some hidden reason and nearly zero recourse. On that note people have been getting more worried about Apple starting to treat their desktop OS like a phone and locking it down more.
The risk of that happening, while real and problematic, is not a real concern for most people, as the likelihood of anything happening is really low. Same goes for the OS being more closed off. The normies, to put it that way, are happy with the normal app store experience and won't notice the difference before and after a complete lockdown, since they've never gone outside the normal bounds to begin with.
Even OEMs that have the option to select Linux, e.g. Dell, Lenovo, have "works best with Windows" all over the place, one needs to be rather persistent to track down the Linux as pre-installed OS options.
> If Google doesn't characteristically fumble the bag, their dominance with ChromeOS in schools has the potential to pay major dividends in 10-15 years.
There will be no ChromeOS anymore - just Android - and it will soon be locked down hard so that you need to pay Google or host ads/harvest data for every app.
You just need to make your choice of Tyrant landlord.
The crucial part: these business leaders won't see the ugly consumer side.
Enterprise Windows is completely different, in that most of the crap we complain about will either be disabled at the MDM level, or absent from the start depending on the license. A CEO being issued a Windows laptop isn't barraged with ads, nor do they care if their account is local or not. It will "just work".
I don’t know, I work for a massive (benevolent of course) corporation and it’s still pushy with Lock Screen ads, copilot, etc… and it definitely doesn’t just work. Maybe for the CEO it does though…
It might depend on how much your IT department cares about customizing your setups. The efforts described in TFA, for instance, don't cover auto-install scripts, which are still free to create whatever local account is needed, provided it's done through the fleet-management mechanisms.
Much of the scripts to "debloat" windows also rely on MDM entry points and overriding user preferences with higher privilege.
As you point out it's still a cat and mouse game but I assume they work OK. I tend to go the painful way and do most of it myself following instructions, as I'm not comfortable having these tools run as admin on a system. It's not that bad either.
Do we believe that we’ll be using anything like today’s PCs and operating systems in 10-15 years time? I mean, that’s been the case since the 1980s, but now we have usable (if imperfect) AI.
- Reliability. For anything that needs a deterministic result, not merely a 99.9% chance that the output is generated correctly rather than hallucinated. E.g. health, finance, military, etc. There is no room for "you're absolutely right". For the same input an algo must give the same output.
- Privacy. Until we have powerful local models (we might in 10 years, I don't know), sending everything to some cloud companies, which are already obliged by courts to retain data and have spies and ex-military generals in their boardrooms, sounds a bit crazy if it's not about an apple pie recipe. A web chat interface isolates important data from non-important, but we can't integrate it fully in our lives.
Personally: Yes, I do. Likely, voice assistants and other AI tools will have a bigger market share in a decade, sure. But I doubt an interface like Alexa can replace a PC-like setup for most of the «real work». Instead, I imagine we’ll just continue the trend of laptops and tablets with AI assistants integrated in better ways, and perhaps a wider adoption of AR/VR in some sectors.
The tech that could replace today's PC setup is a neural interface, but I doubt that Neuralink et al will be anywhere near mainstream in a decade.
> But I doubt an interface like Alexa can replace a PC-like setup for most of the «real work».
Most people, and most workers, simply don't do what you call real work that needs a big screen and a keyboard. I think most of the kids at my child's school don't have a computer at home (other than the district-issued Chromebook) and likely won't ever own a personal computer.
People do everything on their phones. Google recently said Chrome OS is going to end next year... I don't know what schools are going to do.
I don’t doubt that a conventional laptop or desktop will be far less common in a decade.
But both iPads and Android tablets have keyboard cases. Even many phones can these days be plugged into USB-C docking stations that enable the use of a big screen and keyboard when needed. I agree that most non-programmers will probably end up using phones or tablets with an external keyboard, and even for programming it is kinda usable.
Those schools will probably just switch to Android netbooks or Android tablets with keyboard cases.
Still, I think that’s very different from AI technologies killing the PC form factor. The hardware and software might change, but I personally think the «screen and keyboard» form factor will remain the default for «work» for the next decade.
> I personally think the «screen and keyboard» form factor will remain the default for «work» for the next decade.
I'm not so sure. What was the interface pre-computer: voice and secretaries. Except the secretaries are now AI, and there is an unlimited supply of them and they don't need a salary or health insurance. Instead of "Ms. Wilson, come here and take a letter" it would be "Hey Google, take a letter"
We're already well on the way. Writing emails with AI is done today. Using AI to take notes in a meeting is possible today. OCR and cameras can handle a lot of "transcribe this printed form to that online form" input tasks today. And it will all be vastly better in 10 years.
I'm sure there will still be a place for screens. We are visually oriented and using paper would be wasteful. I'm not sure the screen + keyboard "workstation" of today will be common in 10 years.
I think mobile tech will be closer to a Star Trek TNG communicator: a small device perhaps worn as jewelry, with an earpiece and some kind of retinal projector for heads-up usage, and less like a rectangular slab of glass in your pocket. Current smart watches are a start; they only need a better way to show more information and they would replace phones for many people.
And of course this all presumes that "office work" as we know it is even a thing. If AI becomes AGI or close to it, what would we need people in offices to even do?
Alternatively it could be people working from home.
Though, with the state of "prompt engineering", I'm now imagining legions wandering down the street, speaking into Bluetooth headsets, desperately entreating an AI to do the task they've been assigned...
(you get better results if you sound like you're about to cry)
If something displaces Windows in the consumer PC market, I wonder how long it is before those new OS consumers start to want to use what they're comfortable with in the business world as well. Windows will start to feel like some weird legacy system. By the time business starts moving away, it will be too late for Microsoft to save it.
I think you're right that they don't care about the money from Windows licenses, but they seem to be pivoting to trying to pull data from consumer desktops for AI training. That's arguably way more valuable and no one besides Apple (or potentially Google) gets that kind of data.
As more and more public accessible areas start becoming so inundated with AI generated material, that makes the walled gardens where generated content is not AI generated that much more valuable for training.
Whether they care about consumer market or not, they know that most of the consumers aren't going to care about this problem. Hardly anyone would bat an eye at using their already existing Microsoft account/email address and internet connection to log on to their PC. They're almost 100% headed to get on the internet to do whatever anyways. These people are connected to the cloud 24/7. In the same way hardly any Apple user cares that they need an Apple account to get into a bunch of things/phone/whatever. This is a nerd/tech-niche problem.
> For Microsoft's purposes the main way of making money from Windows is from business and enterprise sales, and those sales will exist pretty much indefinitely.
Yes, and making corporations and smaller businesses hand over their stuff via an official spyware OS, cloud "services", and "agents" is a perfect opportunity for a spyware creator :) It is hard to blame them for wanting this :) Except that it will probably blow up in their faces...
Small businesses don't like creating Microsoft accounts either. Limit 30 software activations per email address or something like that. And retail Office stops working after 365 days offline.
It being the year of Linux is definitely a meme at this point, but Microsoft's trying their hardest to make it a thing.
Steam's latest survey [1] shows Windows losing 0.19% marketshare. 3/4 of it went to Mac, 1/4 to Linux. 0.19% over a single month is a fairly significant shift, especially because the Steam survey is biased towards Windows gamers to begin with (Windows has 95.4% marketshare on the Steam survey), so it's probably understating the shift.
I’ve had multiple friends who are not tech savvy ask me about steam os. Because they basically only use their gaming PC for gaming, and they are frustrated with windows.
None have actually switched yet, but also 10 is still supported, and SteamOS isn't quite ready from what I understand (Nvidia driver issues?), although I assume that's changing quite quickly. I haven't looked super recently.
Personally I run Bazzite on a machine I've got hooked to a TV. It's basically SteamOS and works great for gaming. I can't speak to the desktop mode, but as long as it's passable, Windows sets the bar pretty low. Main issue is that some multiplayer games intentionally don't support Linux for anti-cheat reasons. :(
PC ownership is NOT a zero-sum game. You assume that lost marketshare must be replaced by something else. I'm confident this is not people replacing their PC for a Mac, this is people who stopped using a PC completely.
Microsoft, by ruining Windows, is not leaving the field open for a replacement OS; they're slowly killing the PC itself.
Mathematical: If this were the case then all competitors would have seen an increase in marketshare proportional to their existing marketshare. This isn't what happened - Mac saw 3x the increase of Linux, even though Linux has greater marketshare on the survey.
Statistical: It's often said that the PC is dead or dying, but that's a misrepresentation of the issue. 25 years ago, a new computer was dated in 3 months and obsolete in a year, so PC sales were huge. Nowadays, a ten-year-old PC is still fine for just about everything, even including relatively high-end gaming. So sales have plummeted, but ownership rates are around historic highs. [1] The main limiting factor is money. More than 96% of households earning $150k+ have a desktop/laptop, while only 56% with income less than $25,000 do. The overall average is 81%.
Pragmatic: PCs are still necessary for many types of games as well as content creation. Mobile devices and tablets (to a lesser degree) are limited by their input mechanisms to a subset of all experiences, and there's a pretty big chunk of people that utilize experiences outside that subset.
I don't worry much about that. It has often been said that PCs were dying. Seems it was mostly marketing. The PC survived consoles, and Xbox is probably dead. I have no illusions though: Microsoft has the same mismanagement in store for Windows, which hasn't had sensible stewardship for years.
I don't think it's dying, what I think has been happening and will continue to happen is that unless you're an enthusiast the PC presence is gradually being shrunk and tidied away in a corner and forgotten by many. For many having a 'home PC' would be a relic, similar to how they don't have anything like a dedicated stereo system for playing audio which might have taken up a significant amount of space (possibly more than a PC) years ago.
This is definitely not the case. PC ownership is near record highs right now. I cited the stats in a peer comment. [1] The only real hurdle is perceived cost. More than 96% of households earning $150k+ have a desktop/laptop, while only 56% with income less than $25,000 do. The overall average is 81%.
Mobile, and to some degree tablets, just offer a generally poor interface for many aspects of computing from gaming to content creation, and I think that's mostly intractable.
Sure, but I guess this depends on what model you have of someone doing media consumption, are they going to fire up their PC to watch/listen to media, or their phone, or (smart) TV, or a smart speaker?
There is no Microsoft in this story. There is the structure of the company, which rolls up to the CEO, and it has one priority: make the shareholders happy.
This has caused incentives to shift throughout the company. No more long-term work. Only short-term stuff, where each change needs to make an impact somewhere.
This is why you see CoPilot in 20 places in Edge. This is why OneDrive shows you nagging screens to upload your data there.
And this is why the OOBE now makes it harder. That change is used by a PM / Developer to justify their existence in the company at review time.
The thing is, Microsoft did plenty of user-hostile stuff back then. Games for Windows Live with its weird DRM and making games unplayable after shutting down, for instance. And the push for using all kinds of "Live" services. Something called a .NET Passport also comes to mind during the mid-XP days. .NET framework applications had their own special kinds of installers, Microsoft Silverlight thrived for a short moment, and the introduction of their (initially mediocre) antivirus program also wasn't well-received by the industry.
They just never shoveled their crap into the OS itself. It was always recommended addons, recommended freebies, and recommended optional features that came along with other products.
When MS started unifying everything into Just Windows, all of the crap they pulled with separate software packages merged into one digital blob, Windows 8/8.1/10/11.
With Windows 8, I can at least appreciate the attempt to unify things so they are easier to use for consumers (if only they hadn't bunged up Windows Phone, repeatedly). I wonder what Windows would be like if they hadn't tried the Windows 8 experiment.
> Something called a .NET Passport also comes to mind during the mid-XP days
That's essentially the Microsoft Account of today, which went through a few rebrandings on the way. In XP it was promoted via Windows Messenger with a popup message which, to less experienced people, would suggest that in order to access the Internet they needed this "passport".
Considering how many sites now offer (still optional) logins with Apple/Meta/Microsoft accounts, I wonder if the goal here is to be the provider of identity for sites and services, and at the same time future-proofing for any digital ID checks governments may introduce.
There was for a few years a South Korean national identity scheme which linked your national ID card to .. an ActiveX control. Making it not only IE-only but effectively tying it to IE6.
To the average consumer, Windows doesn't matter much anymore.
To enterprises, Microsoft has them under lock and key with Office 365, basically forever. LibreOffice is nowhere near a replacement for Excel in an enterprise setting.
Office 365 is absolutely not what you seem to describe. I run a small non-profit and I am banking hard on Office 365 while I use a Mac.
O365 is the Office suite of apps, an Exchange server, OneDrive with a ton of storage, access to unlimited Teams meetings, and tons of doodads and doohickeys we don't need. That my Windows-using colleagues could potentially install Enterprise Windows on their own laptops (we're a BYOD employer) is irrelevant for us. Any fleet of trashy PCs we need for frontline staff already comes with a Windows license.
I agree with your overall point but I'm starting to regularly see older M series MacBooks on sale for around 600 or 700 dollars brand new. Maybe they are using the strategy of selling older hardware for less like they did with the iPhone SE.
A $600 - $700 Dell laptop's CPU does not come anywhere close to an M4 Macbook Air, which you can get right now at Best Buy in a 15" version for $999.
The Mac will also have a faster SSD and (not sure about this) a faster memory bus architecture. And a better GPU and better ability to use Thunderbolt docks / have 3 external 4K displays.
CPU is almost never a limiting factor for workers. RAM is, since they generally need to keep a bunch of browser tabs open alongside memory-hogging things like Outlook/Teams/JIRA, but RAM speed they probably won't notice.
If they have 3 external 4K displays, their company will probably shell out for a MacBook.
The thing is, the storage on Apple devices is so unbelievably fast, you can get by on 8GB just fine. Even with clogging up hundreds of Chrome tabs. Swap is barely noticeable.
Memory management on Windows devices in contrast is utterly painful. The RAM itself is already slower simply due to physics (can't beat the SoC proximity with anything socketed), storage I/O usually has to cross through a lot of chips (same thing, Apple attaches storage directly to the SoC), and then the storage itself that you find on cheap devices is actually SATA under the hood or bottom of the barrel NVMe, no competition at all to Apple. Oh and the storage and RAM are both adequately cooled on Apple devices, so Apple can drive them much much harder unlike the Windows world where often enough the only thing that gets cooled is the CPU and GPU.
Yes, I do think Apple wants far too much money for RAM and SSD storage upgrades, but it's undeniable that even the more expensive ends pack a lot of punch.
I work in a large enterprise and I see more and more people move to macOS every year. We use Office 365. I run the Office apps on my Mac. We backup with OneDrive. We collaborate with SharePoint. We use our AD accounts to login on macOS, use InTune to manage endpoints. My Mac even has Defender on it now.
Microsoft is still getting their money, just slightly less from Windows itself.
I’m willing to bet it’s about the hardware. Windows laptops almost all universally suck in at least a few areas: display, touchpad and wake from sleep at the most inconvenient times. Give me a MacBook which natively boots Windows and I’ll use it, if only because it has WSL2. If it boots Linux, even better. (Naturally, those three usually broken things must work on either.)
It depends on the industry... go to any (non-MS-based) tech company and every developer will want a Mac. Nobody will choose Windows if asked.
Other less developer related companies are moving more towards mac as well.
This is just my anecdote between being in/out of tech for the last 25 years and have gone from: "Here is your windows laptop" to "Do you want windows or macos" to "here is your macbook"
I've got the same experience, just saying that if I was offered 'here's a mac, we can put windows on it' I'd actually pick that option, because I love the hardware, but I'm very not impressed with macos.
> Maybe someone will develop a new user-focused OS that's somehow compatible with Windows programs.
That's either Linux with WINE, or a "custom distro" of Windows from the remaining neighbourly hackers in the modding scene (they can't embed the hostility everywhere and as deep as the kernel, although they are most likely trying.)
WINE it is. I can't see any point in playing cat and mouse with an actively hostile OS. When a new Windows update starts stealing IMAP credentials[1] before the modding community catches on, it's game over for the user. Better to not use anything based on Windows.
I'm not sure if Microsoft knows it, but it doesn't care about or need Windows anymore. Office has native apps and is on the web, Xbox is doing its own things, dotnet has been freed from Windows, and Azure doesn't need Windows. Computing is generally moving away from the personal computing model, so Windows is just less relevant.
I was with you until you listed Xbox - their consoles are dying in the market.
They've adopted a strategy of calling everything "gaming" Xbox, and seem to be going all-in on Gamepass subscription revenue along with making their first-party games available on other platforms. I'll be surprised if there is another flagship console following the Series X.
There's always ReactOS[1], a project for a bug-for-bug compatible Windows clone. It used to mostly aim at Windows 9x compatibility the last time I'd checked, though, but that could probably change. And if anyone wants to create a Win7 clone, at least some of the groundwork has already been made.
"Compatibility with Windows programs" is a massive undertaking in the first place, as evidenced by the huge amount of development effort that has gone into Wine without quite reaching 100% bug-for-bug compatibility. (The level of compatibility they've achieved is truly impressive but it's really difficult to get to 100% for a large existing base of arbitrary applications.)
Reliable real-world compatibility requires not only implementing Windows APIs as documented (or reverse-engineered) but also discovering and conforming to quirks, undocumented features, and permissive interpretations of the specs or even outright bugs in Windows that some applications have either intentionally or unintentionally ended up relying on over the years.
I don't know if modern apps would tend to be better engineered to actually follow the spec and to only build on features as documented but for example older Windows games were sometimes notorious for being quite finicky.
And of course if the goal is a full-scale independent OS rather than a compatibility layer on top of an existing one, there's the whole "operating system" part to implement as well.
> Not sure Microsoft realizes the damage they're doing to the Windows brand.
Microsoft realized after Windows 8 and Windows 10 that literally nobody, outside of niche tech circles, has positive associations with the Windows brand, or views "Windows" as a selling point beyond "runs my old software." As such, it doesn't matter to them anymore.
It's like being the PR department at your local electricity provider or oil refinery. Keep the politicians happy; winning over people on the ground is a pointless endeavor.
I remember when new Windows versions were still an event: you could read about it on the magazines, people would get excited to try them, people would debate about how pretty/ugly the new UI was, etc.
Nowadays new Windows versions are like some unwanted background noise. I don't even know at what point Windows 10 stopped being the new version and 11 came out; it went totally unnoticed by me until I heard a couple of months ago that Windows 10 was close to EOL. And then you start dreading the moment you'll have to migrate and uninstall all the Xbox crap they force on you again, etc.
>I remember when new Windows versions were still an event: you could read about it on the magazines, people would get excited to try them, people would debate about how pretty/ugly the new UI was, etc.
Lol. You can verify your claims in one minute just by googling.
I'm not parent and Windows 11 is my least favourite desktop OS, but there are some things where I prefer Windows to macOS, for example the multi-monitor user experience, the way full-screen windows work (F11), and the ease of maximising windows without having to double-click on the title bar. I also like the way home/end/pgup/pgdown keys work. I much prefer how it renders text on non-hidpi screens. Finally, I like that there is only one taskbar and no top bar, which results in more real estate on small displays.
Some Linux DEs also do these things well BTW. In fact I use Linux for most things at home. (I use Mac at work and my only Win device left is used exclusively for gaming).
> Sincerely curious about why do you think it's the best desktop OS and/or where it excels.
Hey, so I'm a different user, and I wouldn't claim it's the best desktop OS, but split between macOS/Windows for desktop use, there are definitely things about Windows I appreciate. Off the top of my head:
* It has pretty approachable "config as code" built-in - with "winget configure" and some yaml files, you can define the apps you want, the Windows config, the registry settings, etc. without the overhead of MDM or something like Ansible.
* UI scaling took a long time to get good, but it's more flexible than macOS now for pixel-perfect output on displays that aren't multiples of 1440p. (e.g. 4K)
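For reference, a winget configuration file is plain YAML along these lines. The resource names and schema version here are from memory of Microsoft's DSC samples, so treat this as a sketch and check the current winget documentation for the exact schema:

```yaml
# configuration.yaml, applied with: winget configure -f configuration.yaml
properties:
  configurationVersion: 0.2.0
  resources:
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
      settings:
        id: Git.Git
        source: winget
    - resource: Microsoft.Windows.Developer/DeveloperMode
      directives:
        description: Enable Developer Mode
      settings:
        Ensure: Present
```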
> UI scaling took a long time to get good, but it's more flexible than macOS now for pixel-perfect output on displays that aren't multiples of 1440p. (e.g. 4K)
We can't be using the same windows. At work we have 27" 5k displays which I use at 200%, so a perfect multiple of the usual 100% I use everywhere else. The screen is blurry 99% of the time. The only reliable way to get it sharp is to boot the PC with the screen attached. Of course, if I go to the toilet and the screen turns off, when I come back it's just like hot-plugging it: a blurry mess.
Apparently, updating the graphics driver also works, so I suppose it's enough to restart just that instead of the whole OS. Don't know how to do that, though. The resolution is reported as the correct one, changing scaling options doesn't help. 100% looks sharp enough, but it's unusable for me.
And I don't use any old app, it's mostly new outlook and edge. But even the start menu is blurry! There's also the fact that afterwards, tray icons' menus tend to appear in random places, but I understand that apps draw those, so I guess this isn't completely windows' fault.
My work machine dual-boots Linux, which is what I actually daily drive, and these screens have pushed me to switch to Wayland. There are some rough edges there, but high-DPI is handled perfectly (same setup as Windows: everything at 100% except for that one screen at 200%). This is using Sway and mainly Firefox, Chromium and Alacritty. Native GTK apps seem to work fine, too, but I don't use many of those.
edit: not sure about your mac point. I sometimes use a mac and it works at 200% on two separate 4k screens.
> edit: not sure about your mac point. I sometimes use a mac and it works at 200% on two separate 4k screens.
200% scaling works if you only want "looks like 1920x1080", but if you have a 27" 4K display, I'd typically want "looks like 2560x1440" or 150% scaling - if you do that on macOS, the desktop is rendered at 5120x2880 and then downscaled to 3840x2160. So you're getting both higher resource draw from rendering the desktop at a higher resolution and losing pixel-perfect rendering.
It won't be a problem for most people, but it's enough of a problem for me that I won't use macOS with scaled displays.
>The screen is blurry 99% of the time. The only reliable way to get it sharp is to boot the PC with the screen attached.
That sounds like a (graphics driver) bug. It's not something I ever experienced on Windows 10, even when occasionally connecting an additional display set to 150% scaling. I believe you, though, bugs do happen.
>not sure about your mac point. I sometimes use a mac and it works at 200% on two separate 4k screens.
I think his point is that on macOS you pretty much have to use 200%, whereas on Windows it can be any value (though multiples of 25% are recommended).
It wouldn't surprise me, although this is a bog-standard enterprise laptop, a 5-year-old all-Intel affair. No dedicated GPU or anything fancy.
But, for a long time, I had weird issues with display output on Windows. It would refuse to output 4k@60Hz without doing a stupid plug-unplug-replug-just-at-the-right-time dance, even though it worked on Linux. It took a good 3 years for that to work reliably.
And, in the beginning, those 5k screens only worked at 4k for some reason. Again, no issue on Linux.
But when any of the above situations happened, the state was actually correctly reported, as in 4k@30 Hz, or the 5k screen running at 4k. That's not the case now, everything says what it should, but the image is not sharp.
That's the only situation where I use Windows with scaling, so don't have any easy way of figuring which component is broken. All I can say is that the hardware itself seems to work fine.
I like Windows 11 family settings. I can let my kids play Minecraft on old corporate castaway Dells, which I set up from BIOS/PE to do a clean reinstall. Then I can manage screen time limits and content restrictions from an app on my phone. All free.
> Or better yet, maybe Microsoft will realize very important parts of Windows are going downhill and remember what made Windows great.
Microsoft has done 180s in the past. I still hope that at some point they'll see the light and what you say above will suddenly click and become evident to them.
Windows, and DOS before that, did not succeed by holding customers as hostages.
Part of Satya's 2018 reorg moved Windows into a weird leadership structure where it was part of Bing, IIRC. I think they recently finally fixed that org mistake, and hopefully they quickly push an improved Windows 12.
I remembered something weird like this and went looking for coverage last week. I thought it had maybe been divvied up between Azure Services and some ads or online-experience group? I ended up giving up; there was so much noise and I wasn't sure what I was looking for, but I'd love to see some coverage. It's incredible seeing Windows broken up like that and internally sold for parts, just thrown to the MBA wolves to milk some money out of.
I remember hearing it from Paul Thurrott on a podcast, and it wasn't only 2018; it was reorganized several times during Satya's tenure. No wonder it sucks.
Are you installing those tools regularly? I have a couple of invisible helper apps but Time Machine backups and Mac-to-Mac Migration Assistant has made those apps transparent. They're always there.
But you know what, I think I know where you are mentally. I was there 2 years after I first bought a Mac. I wanted a clean Mac. Nothing untoward, nothing that wasn't Apple. I got rid of that feeling and learned to love the Mac as a platform, to love the Mac because of its vibrant third-party developers. That's why I use a Mac even though Apple is often a bad steward of this wonderful bicycle for the mind.
> mac window management is borderline unusable and I'm tired of installing 5 tools to fix it.
There are exactly two you need to get macOS eye-to-eye with Windows: Hyperswitch for an alt-tab that actually works, and SizeUp for window arrangement like Windows' Win+arrow keys.
Further migration pains can be eased with a Windows keyboard layout that puts special characters where muscle memory expects them (though that can and will cause pain with anything Adobe: their apps absolutely do not like non-Apple keyboard layouts and will refuse to load keyboard command presets), and with Karabiner to remap Ctrl+C/V and reduce hand strain.
- remove all this Games & Xbox related stuff?
- remove all the pre-installed but unused stuff? (Internet Explorer legacy?)
- remove all these "fancy" icons & links (Video/Music etc.) in Explorer?
- deselect most of these background services so they're not installed?
And: Does it work for the Windows Server versions as well?
Their reputation is irrelevant, at least whilst they maintain an OS monopoly. Enterprise customers don't care because all the issues you described are not present on Enterprise editions. The vast majority of users want a machine that "just works".
I would never use a machine running Windows 11 S mode whilst a good chunk of the home PC market would likely not notice a difference.
Enterprise edition is as much of a clown show as the others. I actually run one such edition at work, and a few weeks ago I noticed a new tile in the "home" screen of the settings, inviting me to add my Microsoft account to benefit from something or other.
Now, this is a machine I mostly use for goofing out, so it actually has my microsoft account connected to it. It's fully entra id joined: I log into my windows session with my office 365 account, which has a full license (p2 or whatever it's called), I can see the bitlocker key in entra id, the works.
Now, curiosity got the best of me the other day, and I figured I might just as well click that button. Guess what? It didn't work! It apparently doesn't support business accounts!
On my home pc (pro edition, which I use for photoshop and the occasional game), which does have a consumer microsoft account, that tile doesn't show up.
> When I think back to Windows 7, the good feeling isn't nostalgia. It was the last user-focused Windows.
I think Windows 98 was the last user-focused Windows. At least then all the useful settings were a single right-click away, and it just worked without invading your privacy.
(WinME never worked and WinXP was the first in a long series of shareholder-focused Windows.)
> Maybe someone will develop a new user-focused OS that's somehow compatible with Windows programs.
Nothing is as user-focused as Linux, and it's mostly compatible with Windows programs via Wine. Important to note, though, that user-focused is not the same thing as easy to use.
I think perhaps you are conflating user-friendly and user-focused.
Linux, and open source in general, is infinitely more user-focused than anything from Microsoft, since open source is often built for users and by users.
But if you don't have great computer skills already, Linux can be extremely un-friendly the moment you step off the beaten path.
I mean, unless you know the various arcane aspects of Windows, it's pretty hilariously un-friendly when you step off the path, too. After a decade of using Gnome exclusively, whenever a friend asks for help with Windows, all I can do is shrug and suggest reinstalling and/or living with the pain.
It's user-focused in the sense that the user's goals drive the design. The good non-profit distributions, such as Debian and Arch, would never even try to require or push an online account, since that is contrary to the user's interests.
Not disagreeing with you, but your comment brought back memories of Ubuntu One, and the amazon spyware(?) search thing. Ubuntu is kind of the Windows of the GNU/Linux world in that they repeatedly do user-hostile things that test everyone's limits.
Yeah, I would not use Ubuntu if I can help it. I'd still rather use it over Windows. This is why I specifically said "The good non-profit distributions," and not "Linux distributions" or some other broader phrase.
I'm sure that's why they weren't included in the examples of "the good non-profit distributions". It's not like Ubuntu is going to be overlooked. But they are malicious.
The snap disaster really was the final nail in the coffin for me.
That bug report about ~/snap has to be the hottest bug in their bugtracker, and they simply don't seem to give a shit and pretend it's fine.
All the while naive users like my father or colleagues at my workplace shoot themselves in the foot by thinking "what's that folder doing in my home directory? Delete."
I'm not sure if that's still the case, but there was a time when that simply hosed your whole snap installation.
It's also completely ridiculous when you run "docker run ubuntu; apt install whatever" only to find out that "whatever" is now a snap and won't run w/o getting into nested containerization.
For packages that got the snap treatment, window tracking for the Gnome dash was broken for ages if, god forbid, you wanted to create a custom .desktop file to add some parameters. Completely broke the custom launchers I created.
I created bug reports, I tried to work with them. Others did, too. Some of these reports approach 10 years now.
I am purging Ubuntu from all of my employer's systems, replacing it with Rocky Linux.
Only one major application still to go.
Friends and family get Debian, that transition is already completed.
I want to do the same, but there was some heavy discord at the top of the community a year or so ago that left me fearing for the org's future. If there was a satisfactory resolution, I haven't heard about it.
That's concerning to hear. What discord? The number one thing I want from Debian is predictability, dependability. Other than that, it's not even that great of a distro. I don't use it for my own machines.
Nobody forces you to use Ubuntu. Thats the thing. If Ubuntu fucks up, I can switch to another distro at the blink of an eye and nothing of value was lost.
If the user is a Linux nerd, well, yes. For more casual users there are way too many weird annoyances and problems. Maybe not within a single version, but when migrating between versions or at the end of LTS support...
I beg to differ. There is less corporate BS on Linux than any mainstream OS.
The software is largely by users, for users.
Obviously it caters to the power user, but it also works well for extremely novice users. It’s those savvy with Win/Mac that get screwed switching. I’d encourage them to put a bit more into trying.
I don't even mind logging in on a personal laptop, but we have shared computers at work to operate machines. It does not make any sense to log in with your account on one of those.
Developing a new consumer-grade OS is literally not possible. I don't mean it would take a herculean effort like the software ecosystem issue takes to address, I mean actually not possible regardless of how much effort any development team put in. Virtually all hardware on the open market is made for Windows, largely powered by proprietary, closed-source drivers. Linux gets some afterthought from a percentage of vendors, but even for it, hardware support is in an absolutely atrocious state. Hardware vendors will obviously not give the time of day to any uppity new OS. This relegates any attempt to a hobbyist project targeting virtual machines or obsolete hardware. The only way a new player could enter the game is by using Apple-level money to develop their hardware in-house, but any kind of corporation fronting Apple money to do that would certainly not be aiming to produce a user-driven experience.
Drivers are a lot of work. IMHO, do some core stuff, and then build in driver adapters. NDIS wrapper, linuxkpi, etc.
If you want to work hard to make things easy, I bet you could build a hypervisor that does PCI passthrough for each device to a guest that runs a different OS driver and re-exports the device as a virtio device; then the main OS guest can just have virtio drivers for everything. It can't be that hard to take the documentation for writing Windows drivers and use it to build a minimal guest kernel that runs Windows drivers.
That indirection will cost performance and latency, but windows 11 feels like more latency than windows 10 too, so eh. You can also build native drivers for important stuff as needed / over time.
Perhaps the bottleneck is public perception after the accident at Three Mile Island, and then everyone wasting time on alternate (insufficient) renewables. But now it's not about migrating from dirty to clean energy (which nuclear is), it's we need more power and it's time to get serious. Welcome back, nuclear. Microsoft entering an agreement with Three Mile Island nicely concludes a period in energy history. The next one should be most exciting.
Solar is very, very cheap and almost totally worthless without storage. Storage is extremely expensive. Nuke is extremely cheap to generate, once it's built. The cost of nuke energy is not because the technology is complex or because resources are scarce. It's because we have very, very burdensome regulations around nuclear reactors (for good reason!) and each nuke plant is a bespoke effort which gets recertified each time. This is enormously expensive. There is reason to believe that small modular nuke plants will vastly reduce this cost. That means we might have a path to cheap nuke, but there is no immediate path to cheap storage barring a technological revolution (not just incremental improvements) in battery tech.
In the long run solar power will kill fossil fuels, but we desperately need a bridge to get us there and not destroy the carbon balance in the atmosphere. Nuke is that bridge.
"Achieving 97% of the way to 24/365 solar in very sunny regions is now affordable at as low as $104/MWh, cheaper than coal and nuclear and 22% less than a year earlier."
This is right now, July 2025. The costs of batteries continue to fall. How much cheaper will batteries be by the time we start churning out SMRs fast and cheap?
By all means keep beavering away at nuclear. Its time will come one day. But I won't hold my breath for it to solve the climate problem in the next 10 years.
“Very sunny” is doing a lot of work there. The storage required goes up dramatically once you run the numbers for somewhere that has seasons. The long-range HVDC lines between hemispheres idea is cute but probably geopolitically impossible; I don’t think the US will let its ability to literally keep the lights on depend on South America.
Storage could get there, but I don’t think it’s credible that manufacturing scale alone will solve the problem. We probably need some new, qualitatively different chemistries to become viable for solar to be viable for the whole grid. From a technical perspective the nuclear plants we could build in the 1960s could do it, whether we can still build them (no matter if the barrier is regulatory or practical) is another question.
How will you get me with rooftop solar and a home battery to buy your extremely expensive nuclear powered electricity when I have my own imperfect solution almost the entire year?
Scale this up to a society adding onshore and offshore wind and you quickly realize that the nuclear plant will have a capacity factor at 10% or so.
Vogtle with a 20% capacity factor costs somewhere like 85 cents per kWh, or $850 per MWh.
Nuclear power, due to its massive CAPEX, is the worst solution imaginable to fix renewable shortcomings.
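The CAPEX point is just arithmetic: nuclear's cost is almost entirely fixed, so the price per MWh scales inversely with how often the plant actually runs. A sketch with illustrative numbers consistent with the Vogtle figure above (roughly $190/MWh at a ~90% design capacity factor):

```python
# Nearly all nuclear cost is fixed (CAPEX), so $/MWh scales with 1/CF.
# Baseline numbers are illustrative, chosen to match the comment's
# ~$850/MWh at a 20% capacity factor.
def lcoe_at_cf(cf: float, ref_lcoe: float = 190.0, ref_cf: float = 0.9) -> float:
    """Cost per MWh if the plant runs only a fraction `cf` of the time."""
    return ref_lcoe * ref_cf / cf

print(round(lcoe_at_cf(0.9)))  # 190 $/MWh, running flat out
print(round(lcoe_at_cf(0.2)))  # 855 $/MWh, mostly displaced by renewables
```

The same formula applied to solar or wind barely moves, because their CAPEX per MWh is far lower to begin with.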
Take a look at France. They generally export quite large amounts of electricity. But whenever a cold spell hits that export flow is reversed to imports and they have to start up local fossil gas and coal based production.
What they have done is that they have outsourced the management of their grid to their neighbors and rely on 35 GW of fossil based electricity production both inside France and their neighbors grids. Because their nuclear power produces too much when no one wants the electricity and too little when it is actually needed.
Their neighbors have to absorb both the cold spell (which very likely hits them as well) and their own grid's load as the French exports stop, all while starting to export to France.
Electricity self-sufficiency is only realistic when:
- your needs are quite low,
- you own a house,
- you have capital to invest in both panels and storage,
- your heating is not electricity-dependent (so most likely fossil fuel or wood, which isn't better).
Yet most people live in cities, in apartments or shared houses where most of those requirements are simply not feasible. And the trend isn't reversing.
So yes, YOU may have your own individualistic solution, but clearly it's not something that is suitable for most people.
Considering you do not have a real horse in the race, you should quit arguing and enjoy your own egotistical "solution" and let people who want to live collectively decide what's best for them.
I’m sure the French are crying about having much lower energy prices than e.g. Germany, even with the importing. I don’t see why we’d expect they’d pay more if the natural gas plants were in their borders.
You sure wrote a lot here to make one point. Yes, if you're willing to operate your own disconnected microgrid you have enormous advantages. Not every entity can do that or is willing to accept the loss of reliability that comes with.
The additional storage needed when you need to store energy from the summer to feed the grid in the winter (instead of just for day/night and a few cloudy days) is not only orders of magnitude higher in raw capacity, but requires different battery chemistries that can hold charge for that long. 22% cheaper is a drop in the bucket.
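To put rough numbers on that gap, here's a back-of-envelope comparison. Every input is an illustrative assumption, not a grid model:

```python
# Storage sized for a daily cycle vs. storage shifting summer surplus
# into winter. All inputs are assumptions for scale, nothing more.
daily_load_twh = 11.0                 # ~ US average daily electricity use
diurnal = daily_load_twh * 8 / 24     # ride through one 8-hour night

winter_days = 60                      # days partly fed from stored summer energy
deficit_share = 0.3                   # share of those days' load from storage
seasonal = daily_load_twh * winter_days * deficit_share

print(f"diurnal:  {diurnal:.1f} TWh")             # ~3.7 TWh
print(f"seasonal: {seasonal:.0f} TWh")            # ~198 TWh
print(f"ratio:    {seasonal / diurnal:.0f}x")     # ~54x, before self-discharge
```

And that ratio ignores the self-discharge problem entirely: capacity that leaks over a month is of limited use for a six-month shift, whatever it costs per kWh.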
> when you need to store energy from the summer to feed the grid in the winter
Surely you don't need to power 100% of winter hours with summer sunshine. Electricity isn't grain to be stored in a silo.
Most places humans live in also get sunshine in the winter. Less sunshine admittedly, but that's where overbuilding panels and interconnecting grids comes in. And even dark, cold places get windy.
Are you so sure that storage is so expensive? It's been coming down the cost curve extremely quickly, such that opinions formed even a year ago are severely outdated, and it's now solar+storage that's being favorably compared to replacing nat gas plants, not just solar itself.
Storage that is good enough to replace peaker plants, and storage that is good enough to handle seasonal variations in insolation are completely different ballgames. The lithium battery chemistry in your phone will self-discharge on the order of a month - there are alternate chemistries but they have other problems right now.
Yes, if you have a magic planet spanning transmission system capable of handling the power flows over building solves the problem. Unfortunately that's orders of magnitude more expensive than storage, which we already can't afford.
Gonna have to see more numbers for "storage is more expensive than nuclear". And not the unit cost of SMRs with the assumption that mass manufacturing is solved, certified, and permitted. You have to account for those costs too. And time, of course. The climate crisis is here now. We can't wait 10 years for cheap SMRs to be ready (though we'll gladly take them when they are).
I’ve never seen anybody give an estimate for the cost of storage required to fully convert the grid of e.g. the US that wasn’t obviously astronomical and not something the utilities could afford the capital for. If you’ve seen different please share.
Battery storage ranges from $150 to $300 per kWh of capacity. The entire grid would need something like 5 TWh of capacity for an 8-hour ride-through. I want you to carefully consider those prefixes and the vast, vast gulf of space between them.
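Multiplying the comment's own figures out (taking the $150-$300/kWh and 5 TWh numbers at face value):

```python
# 1 TWh = 1e9 kWh, so 5 TWh of batteries at $150-$300/kWh:
capacity_kwh = 5 * 1e9
low = capacity_kwh * 150     # $750 billion
high = capacity_kwh * 300    # $1.5 trillion
print(f"${low / 1e12:.2f}T to ${high / 1e12:.1f}T")  # $0.75T to $1.5T
```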
Nah, you can buy retail packs for less than $300/kWh now; I installed some recently. Commercial installs in China are reportedly hitting something like $60/kWh installed.
Also, 4 hours is the target Jigar Shah talks about for getting solar to a load factor roughly equal to most thermal plants.
Also, I believe that’s 4 hours on the nameplate of the variable generation, not 4 hours on the entire grid load. People generally aren’t advocating for going fully variable generation.
Also, please give a source for why the US grid would need 5 TWh of battery storage, so we know it is not simply a number you invented out of thin air to say "impossible!!!!"
> Solar is very, very cheap and almost totally worthless without storage.
For say an AI training-oriented data center, you could scale down the power usage when supply is limited. You could change power limits on the CPU/GPUs, put the machines in sleep mode or powered off entirely. So the required storage would just be a slightly bigger UPS.
Not sure if the economics works out, but at least technically it's possible as it's more flexible than user-based loads.
AI based training is an almost ideal match with this kind of supply. You could even imagine migrating long running training jobs to different parts of the world based on energy availability to optimise costs.
So the model is to buy some of the world's most expensive hardware and let it sit idle half the time? If I want the same throughput I need to buy at least twice the hardware!
Load throttling is one of those ideas that seems great as long as someone else is doing it.
Ha, that's a great point. I guess if you have some latency sensitive inference workload the capacity will effectively be dynamic, but that is likely uncorrelated with local energy prices I imagine.
You don't want to waste your GPU capex by not running those suckers at 100%. (Other datacenter workloads it makes some sense to demand-regulate, but not AI.)
Solar benefits from storage yes but it's not at all worthless even without it.
If your solar panels generate 10 TWh per year, you have 10 TWh unused hydro, gas, even oil and coal that is stored instead of spent. You have saved the planet from megatons of CO2 emissions even if you have no new green storage.
Solar is already adding the equivalent of several nuclear power plants worth of new electricity every few months. Getting another month's worth of electricity delivered 10 years from now is not much of a bridge.
I think solar and storage just needs every other worse idea to stay out of the way and things will be fine.
> The cost of nuke energy is not because the technology is complex or because resources are scarce. It's because we have very, very burdensome regulations around nuclear reactors (for good reason!)
So it's easy, at least if it weren't for all that burdensome regulation. But also the burdensome regulations are actually good, presumably because it's hard to get right.
This sounds like nonsense to me. If the regulation is good, that would usually be because a thing is hard to make work in a liberal society, usually for some misaligned incentive reasons. In that case the regulation isn't "burdensome" but necessary to counteract the failure of the market.
You're approaching this with the nuance of a child. Yes, nuke regulation is burdensome, and yes it is necessary because nuke can have quite severe failures when failures occur. The solution is not to dogmatically suppose that one of those two basic facts is false. It's to engineer around the problem by making many exact copies of the same design, reducing the amount of regulation that needs to be applied on a per unit basis. That's what small modular reactors are. Certify once, build many.
You can bury the casks in my (literal) backyard if you'd like (please put the grass back). It's an overhyped issue much less impactful than the pollution we've had waiting for an idealized answer to arrive.
> than the pollution we've had waiting for an idealized answer to arrive.
As I'm fond of saying, environmentalists didn't kill nuclear. I'm not denying they had motive. But they lacked means. They can't stop anything else they've set their minds to: fossil fuels, automobiles, deforestation, industrial livestock farming. Even whaling is alive ffs.
No, there was another party with both motive (competition) and means (lots of cash and political influence) to do the deed: the fossil fuel industry. And nuclear didn't help itself with accidents (and ensuing costly clean ups, one of which helped take down the Soviet Union), and budget overruns even when things went smoothly. Both found a convenient fall guy: the green movement.
Tl;dr nuclear hasn't grown because of money. It cost too much, and the competition had the cash to slander its reputation.
God I hate this argument. Casks. The answer is casks. The short term solution turns out to be a fantastic long term solution. If that isn't good enough, demand it be reprocessed with thorium or something.
There. No more silly anti nuke gotcha. You can give up on that one permanently.
I am steelmanning this, and assuming you are making a hilarious joke at the expense of anti nuke activists. Instead of defending the storage issue, this is just a pivot to another unrelated and already well resolved issue. That's exactly what the silly anti nuke folk get up to. Well played, solid joke, 10/10.
The difference is this isn't some legacy system that still exists a decade later. It's brand new with the tag still on. And it wasn't designed by a conscious being but by probability.
I've seen from beautiful to crazy legacy systems in various domains. But when I encounter something off, there appears to always be a story. Not so much with LLMs.
A team unfamiliar with a code base demoed asking questions to an LLM about it. The answers genuinely excited some. But anyone who had spent a short time in the code base knew the answers were wrong. Oh well.
That is one anecdote, but it doesn't really have any information in it. To debug the process we'd need to know which LLM, the developer's backgrounds, what prompts they used etc.
I've used a variety of LLMs to ask questions about probably dozens of unfamiliar code bases many of which are very complicated technically.
At this point I think LLMs are indispensable to understanding unfamiliar code bases. Nearly always much better than documentation and search engines combined.