Cross compilation is a pain to set up, especially if you're relying on system libraries for anything. Even dynamically linking against glibc is a pain when cross compiling.
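To make it concrete, even the "easy" path has sharp edges (a minimal sketch on a Debian/Ubuntu x86-64 host; package and triple names differ per distro):

    # install a cross toolchain for 64-bit ARM
    sudo apt install gcc-aarch64-linux-gnu

    # cross-compile; this links against the toolchain's glibc, which
    # may be newer than the glibc on your actual target device
    aarch64-linux-gnu-gcc -o hello hello.c

    # the real pain starts once you need libraries beyond glibc: you
    # need arm64 builds of each one collected into a sysroot, e.g.
    # aarch64-linux-gnu-gcc --sysroot=/path/to/arm64-rootfs ...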
That doesn't mean it's easy to use an ARM device the way I'd want to (i.e. as a trouble-free laptop or desktop with complete upstream kernel support).
We do have ARM CI pipelines now, but I can only imagine what a nightmare they would have been to set up without any ability to locally debug bits that were broken for architectural reasons.
I guess you must be doing trickier things than I ever have. I've found docker's emulation via qemu pretty reliable, and I'd be pretty surprised if there was a corner case that wouldn't show on it but would show on a native system.
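For reference, the setup I mean is just binfmt registration plus --platform (a sketch; tonistiigi/binfmt is one common image for registering the qemu handlers):

    # register qemu-user-static binfmt handlers (one-time per host boot)
    docker run --privileged --rm tonistiigi/binfmt --install arm64

    # now arm64 images run on an x86-64 host through qemu
    docker run --rm --platform linux/arm64 alpine uname -m
    # prints: aarch64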
Not really trickier, but a different stack - we’re a .NET shop with a pile of linters, analyzers, tests, etc. No emulation; everything runs natively on both x86-64 and ARM64. (But prior to actually running/debugging it on arm64, we had various hang-ups.)
Native is also much faster than qemu emulation - I have a personal (non-.NET) project where I moved the CI from docker/qemu for x86+arm builds to separate x86+arm runners, and it cut the runtime from 10 minutes in total to 2 minutes per runner.
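For anyone doing the same on GitHub Actions, the split looks roughly like this (a sketch; ubuntu-24.04-arm is the hosted arm64 runner label as I write this, so check the current docs):

    # .github/workflows/ci.yml
    name: ci
    on: push
    jobs:
      build:
        strategy:
          matrix:
            # one native runner per architecture, no emulation
            runner: [ubuntu-24.04, ubuntu-24.04-arm]
        runs-on: ${{ matrix.runner }}
        steps:
          - uses: actions/checkout@v4
          # build and test run natively on each architecture
          - run: make test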
It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.
Outside the embedded space, cross-compilation really is a fool's errand: either your software is not portable (which means it's not future-proof), or you are targeting an architecture that is not commercially viable.
> It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.
This is what we largely do - my entire team other than me is on x86, but setting up the ARM pipelines (on GitHub Actions runners) would have been a real pain without being able to debug issues locally.
If this is like the RPi4 with a single global model, I'm wondering how it passes FCC regs, in that presumably the Wi-Fi is flexible enough to transmit on channels that are illegal in the US (i.e. Ch 12-14 on 2.4 GHz).
The usual way to handle it is to use the same regdomain as the current access point, with a fallback to a default country code of '00' (which allows only the minimal set of frequencies that are acceptable globally).
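On OpenWrt, for instance, the country is just a config option on the radio (a rough sketch; 'radio0' is whatever your radio section is called):

    # show the currently active regulatory domain
    iw reg get

    # pin a radio to a country code ('00' is the minimal world default)
    uci set wireless.radio0.country='US'
    uci commit wireless
    wifi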
As an Arch user I've always been a bit confused by the joke. I have more shit go wrong on my MacBook. Sure, on Arch I might get a bad Nvidia driver update and have to roll back the driver, the kernel, or both[0], but these are at least easily fixable. You can easily determine the problem, fix it, and you've learned how to avoid it or resolve it in 5 minutes if it happens again (thanks Nvidia ;). Other than that, things only break when I'm fucking around, and well... that seems like my own damn fault lol.

But several MacBooks I've had will go to sleep, and if I try to wake them too fast I get a black screen that can't be recovered until I reboot. And I could go on about how weird and infuriating some shit is, and how I can't even implement a fix myself and just give up because I don't want to waste time fighting Apple and playing that cat and mouse game with no good documentation. I've just come to understand that "just works" means "not as buggy as Winblows".
AFAIAA the joke comes from Arch's purported superiority (rolling release, close to upstream, bleeding edge, KISS) as compared to "bloated/slow" e.g. Ubuntu. It's kinda old now and existed even before the controversial switch to systemd from SysVinit/rc.conf.
Tbh I thought it was because people are scared of the terminal; somehow that even includes people who have used Linux for a decade. That, and for Arch you're basically forced to use the terminal and actually read the instructions.
So kinda like how people joke about Ikea furniture being difficult to build. I mean... it's not, it's quite intuitive. But even if it weren't, it's purely an exercise in the ability to read instructions.
On virtual routers there is no content in /rom. What you can easily do is install another copy of the same image on another VM or container and run it through firstboot. Make a backup and compare it with a backup from your running system. Even better is to make a backup just after firstboot, then use the system and compare your current backup with the first one.
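Concretely, that comparison can be done with sysupgrade backups (a sketch; paths are arbitrary):

    # on the freshly firstbooted instance
    sysupgrade -b /tmp/backup-first.tar.gz

    # later, on the configured system
    sysupgrade -b /tmp/backup-now.tar.gz

    # unpack both and diff to see what you actually changed
    mkdir first now
    tar -xzf backup-first.tar.gz -C first
    tar -xzf backup-now.tar.gz -C now
    diff -ru first now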
Do keep in mind that the OpenWRT backup does not contain information about which extra packages were installed after firstboot. I solved this by adding a cron job that runs opkg list-installed > /etc/opkg_installed.txt, and adding that file's path to /etc/sysupgrade.conf so it gets included in backups.
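i.e. something along these lines (a sketch; schedule and path are whatever you prefer):

    # /etc/crontabs/root -- refresh the package list nightly
    0 4 * * * opkg list-installed > /etc/opkg_installed.txt

    # make sure the list is included in sysupgrade backups
    echo '/etc/opkg_installed.txt' >> /etc/sysupgrade.conf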
Without looking I'm going to assume that the Surface Go is not an open source hardware platform -- not to mention the fact that it's produced by a company that's been famously anti-open source for most of its existence.
I have used it on and off on a Surface Go, but I would prefer a dedicated device built with Linux and open source in mind. The Star Labs StarLite is the first tablet that is a true daily driver with Linux (the PineTab is not a daily driver in my eyes), and I am pleased to see more in the works.
https://openwrt.org/docs/techref/ubus
https://openwrt.org/docs/techref/ubus#what_s_the_difference_...
(no endorsement implied; in particular, ubus doesn't have much of a security model, though OpenWrt has an excuse for that)
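If you want a quick feel for it, poke at the bus with the ubus CLI on any stock OpenWrt box (output is JSON; the set of objects varies with installed packages):

    # list every object registered on the bus
    ubus list

    # call a method: object name, method name, optional JSON args
    ubus call system board
    ubus call network.interface.lan status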