The problem with articles like this is that they read a little like justifying a decision that has already been made. I've a feeling that if it was written in C++/Rust/Go/whatever, it would also be possible to justify that decision with similar reasoning.
I think you're not wrong, but it's not necessarily a problem? I might argue that's actually part of the main point the article is making. The software is already written in C, well understood and optimized in that language, and quite stable, so they'd need very compelling reasons for a rewrite.
I think the "lingua franca" argument for C and the points at the end about what they'd need from Rust to switch do go beyond merely justifying a decision that's already been made, though.
I'm interested in why SSDs would struggle with condensation. What aspect of the design is prone to issues? I routinely repair old computer boards, replace leaky capacitors, that sort of thing, and have cleaned boards with IPA and rinsed them in tap water for many years without any issues.
I think that's an over-simplification. There was pressure on the language to ensure that its data structures were compatible with C structs, so avoiding the vtable for simple classes was a win for moving data between the two languages.
Of course, these days with LTO the whole performance space is somewhat blurred, since devirtualisation can happen across whole applications at link time, so the presumed performance cost can disappear (even if it wasn't actually a performance issue in reality). It's tough to create hard and fast rules in this case.
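A minimal sketch of the layout point (the types here are my own invention, not from any real codebase): adding a single virtual function embeds a vtable pointer, growing the object and breaking C-struct compatibility.

```cpp
#include <cstdio>
#include <type_traits>

// Plain class with no virtual functions: same layout as a C struct.
struct Point {
    double x, y;
};

// One virtual function forces the compiler to embed a vtable pointer,
// changing the size and breaking standard layout.
struct VirtualPoint {
    double x, y;
    virtual ~VirtualPoint() {}
};

int main() {
    std::printf("Point:        %zu bytes, standard layout: %d\n",
                sizeof(Point), (int)std::is_standard_layout<Point>::value);
    std::printf("VirtualPoint: %zu bytes, standard layout: %d\n",
                sizeof(VirtualPoint), (int)std::is_standard_layout<VirtualPoint>::value);
    // Typical 64-bit output:
    //   Point:        16 bytes, standard layout: 1
    //   VirtualPoint: 24 bytes, standard layout: 0
    return 0;
}
```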
I don't see the weight reduction being very significant.
If we take a Tesla Model 3, I believe it weighs 1611kg, and the motor shows up at 80kg if you google it (no idea if this is correct). This YASA motor by comparison weighs 14kg. Swapping it in would drop the vehicle weight by 66kg out of 1611kg, roughly a 4% saving.
This motor is well over twice as powerful as the Model 3 motor, so it could eliminate the entire weight of the second motor in the higher performance models. That's 146kg, the weight of two adults, about a 9% reduction (146/1611).
`C2PA Authenticity: Integrated support for the C2PA standard for photo authenticity verification – initially available exclusively for registered news agencies.`
Sounds like it's limited to some users for now, I guess this will change in the future.
Going too far won't really help, since the scene being photographed can be manipulated or staged, which seems a more likely concern than the hardware being hacked.
The UK has 1/5th of the population of the US, and 12x the population of Minnesota.
The last school shooting in the UK was the Dunblane Massacre in 1996, which led to changes in gun law that removed the right to own handguns and various semi-automatic weapons.
I'm not sure how many occurred before then, but the total number of mass shootings in the UK is low. Check out https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_... to get a feel for how rare this sort of attack is in any setting here in the UK.
So for the last 25 years we've had no school shootings. I believe the US as a whole has had >300 shootings so far this year, with >300 victims.
I think the point is that if the API doesn't specify that the returned integers are positive, or monotonically increasing, then it's fine for the service to return any unique integer.
If a client application makes an assumption about this, then their engineers will accept this as being their bad and will fix it.
I'd defend this as being pragmatic - minimising disruption to clients instead of the more 'correct' solution of changing the API. I'm hoping they managed to roll out the new API alongside the old one and avoid a 'big bang' API change. Sometimes this isn't possible, but it's great when that works out.
I'm far more likely to assume that an integer ID I get from an API is non-negative, or even positive, than to assume it's always smaller than 2^31. And I'd be far more likely to blame the API provider for violating the former assumption.
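A quick sketch of the 2^31 failure mode (the ID value is made up): a client that parses IDs into a 32-bit int breaks the moment the service crosses INT32_MAX, while a 64-bit type (or treating the ID as an opaque string) avoids baking the assumption in at all.

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>

int main() {
    // Hypothetical ID the service might return once it passes 2^31.
    std::string id = "2147483648";  // INT32_MAX + 1

    // A client that assumed 32-bit IDs fails here: std::stoi throws
    // once the value no longer fits in int (32 bits on mainstream platforms).
    try {
        int32_t narrow = std::stoi(id);
        std::cout << "parsed as int32: " << narrow << "\n";
    } catch (const std::out_of_range&) {
        std::cout << id << " does not fit in a 32-bit int\n";
    }

    // Parsing into a 64-bit type sidesteps the 2^31 assumption entirely.
    int64_t wide = std::stoll(id);
    std::cout << "parsed as int64: " << wide << "\n";
    return 0;
}
```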
Back in the 80s, as the home computer revolution got going, computers were typically wired up to small, cheap, portable TVs as a display device. These TVs used shadow masks, and the computer's video output was typically modulated onto a TV signal, with the TV 'tuned' to the computer. All of this added large amounts of blur and distortion even before the signal was displayed on the TV.
By the mid 80s, it was maybe more typical to buy a dedicated CRT monitor, and the computer connected via composite, or maybe even an RGB feed to the monitor, allowing higher resolution and much improved quality.
For the well heeled, this route also led to the holy grail, a Trinitron tube!
With each of these changes, the aesthetic of the display technology changed, but the best memories probably come from the original blurry stuff, since that was the magical moment of actually getting something out of a home computer.
For a long time my only "monitor" when I was a kid was a 12" B+W TV for my ZX Spectrum. On my birthday, the day I got it, I was allowed to hook it up to the family 14" color TV, but after that it was back to the B+W for the next couple of years!
(funnily enough, when I finally got a PC years later, the only monitor I could afford was a Philips monochrome VGA -- I guess they now sell for multiples of the original retail price? https://www.ebay.com/itm/176945464730)
The Christmas lectures are probably the most famous thing they do, and these have definitely moved in a more 'child'-focussed direction. If you were attending the Christmas lectures in the 1850s, however, the audience would have been middle-class Victorians, and you'd have had Michael Faraday telling you about electricity, forces, chemistry, etc.
I would recommend attending one of their lectures if you happen to find yourself in London, just to be in the building, and to sit in the lecture theatre!