I was on my 3rd Synology over the last 12 years or so. They were solid and something I didn't have to pay attention to.
But the drive lock-in tipped me over the edge when I needed to expand storage. I'm getting burned out by all of these corporate products gradually taking away your control (Synology, Sonos, Harmony, etc.).
Even though it takes more time and attention, I ended up building my own NAS with ZFS on Debian, using the old desktops I use for VMs. I did enjoy the learning experience, but I can't say it was that reasonable a use of time.
It doesn't seem like you're talking about the same thing the article is. Graham doesn't say "you must be a good writer to be a good thinker".
> This is only true of writing that's used to develop ideas, though. It doesn't apply when you have ideas in some other way and then write about them afterward — for example, if you build something, or conduct an experiment, and then write a paper about it. In such cases the ideas often live more in the work than the writing, so the writing can be bad even though the ideas are good.
Writers who have trouble expressing thoughts in a non-native language are not actually developing the idea in that language. That doesn't mean they are producing bad ideas, but it _might_ mean they won't produce good writing (in that non-native language).
I took the essay to be saying that if you use writing as a tool for thinking, clunky writing will tend to flag the places where your ideas themselves aren't clear or correct yet. The iterative process of refining the writing to "sound good" helps shape the ideas.
This idea seems to come up in other forms too. For example, when thinking through ideas in code, the process of making the code more "beautiful" can also result in a clearer expression of more correct ideas.
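To make that concrete, here's a hypothetical before/after sketch (the function names and the missing-value handling are invented for illustration): the first draft works, but cleaning it up forces a decision the draft had quietly glossed over.

```python
# First draft, written while still working out the idea.
def f(xs):
    t = 0
    n = 0
    for x in xs:
        if x is not None:
            t += x
            n += 1
    if n == 0:
        return 0  # why 0? the draft never really decided
    return t / n

# After making it "sound good": the rewrite surfaces the buried question
# of what the average of zero observations should even be.
def mean_of_present(values):
    present = [v for v in values if v is not None]
    if not present:
        raise ValueError("no observations to average")
    return sum(present) / len(present)
```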
> The problem stems from the fact that Unicode encodes characters rather than "glyphs," which are the visual representations of the characters. There are four basic traditions for East Asian character shapes: traditional Chinese, simplified Chinese, Japanese, and Korean. While the Han root character may be the same for CJK languages, the glyphs in common use for the same characters may not be. For example, the traditional Chinese glyph for "grass" uses four strokes for the "grass" radical [⺿], whereas the simplified Chinese, Japanese, and Korean glyphs [⺾] use three. But there is only one Unicode point for the grass character (U+8349) [草] regardless of writing system. Another example is the ideograph for "one," which is different in Chinese, Japanese, and Korean. Many people think that the three versions should be encoded differently.
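The unification is easy to see from code; a quick Python check (the stroke-count notes come from the quote above). Which glyph you actually get for U+8349 is then decided by font selection and language tagging (e.g., HTML's `lang` attribute), not by the code point:

```python
# Han unification in a nutshell: one code point for the full character,
# regardless of which regional glyph a font draws for it.
print(f"U+{ord('草'):04X}")  # U+8349, "grass", shared by zh-Hans/zh-Hant/ja/ko

# The radical variants mentioned in the quote are encoded separately,
# because they were disunified as radicals in their own right:
print(f"U+{ord('⺾'):04X}")  # U+2EBE, CJK RADICAL GRASS ONE (three strokes)
print(f"U+{ord('⺿'):04X}")  # U+2EBF, CJK RADICAL GRASS TWO (four strokes)
```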
That's not what happened. Downstream was building from source; that source just had malicious code in it.

One part was binary: the test file (binary test fixtures are pretty common), and that part was checked into the repo. The other part was in the build config/script, which shipped in the source tarball but was not in the repo.
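For what it's worth, here's a rough sketch of the kind of check that surfaces tarball-only files: list the files in a release tarball and diff that against what git tracks at the matching tag (the paths and tag below are hypothetical). Autotools tarballs legitimately contain generated files that aren't in the repo (configure, Makefile.in, etc.), so the output still needs human review; in the xz case, the modified build-to-host.m4 was one of the files that existed only in the tarball.

```python
import subprocess
import tarfile

def tarball_files(path):
    """File paths inside a release tarball, with the top-level
    'project-x.y.z/' prefix stripped."""
    with tarfile.open(path) as tar:
        names = [m.name for m in tar.getmembers() if m.isfile()]
    return {n.split("/", 1)[1] for n in names if "/" in n}

def repo_files(repo_dir, tag):
    """Files tracked by git at the given tag."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "ls-tree", "-r", "--name-only", tag],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.splitlines())

# Hypothetical paths/tag for illustration:
only_in_tarball = tarball_files("xz-5.6.1.tar.gz") - repo_files("xz", "v5.6.1")
for name in sorted(only_in_tarball):
    print(name)
```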
Git would make you merge or rebase, but yes, there wouldn't be a conflict. They're saying Pijul would let you push directly without having to deal with the diverging histories.
Which, tbh, is a bad thing. Just because change A doesn't textually touch change B doesn't mean they don't interact.
Unless your VCS is handling CI for integrating changes on push, you really need to pull down the upstream changes first and test them combined with your code before blindly pushing.
> Which, tbh, is a bad thing. Just because change A doesn't textually touch change B doesn't mean they don't interact.
A good example of this is code that grabs several locks: different functions have to acquire them in the same order, or a deadlock will result. That's a lot of interaction, even though the changes may land on completely different lines.
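A minimal sketch of that failure mode (names invented for illustration): patch 1 adds one lock order in one function, patch 2 adds the opposite order in another, the two patches touch completely different lines, and the combined result can deadlock.

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def update_accounts():      # patch 1 only touches this function: A, then B
    with lock_a:
        time.sleep(0.1)     # widen the race window so the demo is reliable
        with lock_b:
            pass

def update_ledger():        # patch 2 only touches this function: B, then A
    with lock_b:
        time.sleep(0.1)
        with lock_a:
            pass

t1 = threading.Thread(target=update_accounts, daemon=True)
t2 = threading.Thread(target=update_ledger, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
print("deadlocked" if t1.is_alive() or t2.is_alive() else "finished cleanly")
```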
And I think that's generally true for complex software. Of course it's great if the compiler can prove there are no data races, but there will always be abstract invariants that the changed code has to meet. In very complex code, it is essential to be able to bisect, and I think that only works if you have a defined linear order of changes in your artefact. Looking at the graph of changes can only help you understand why some breakage happened; it cannot prevent it.
I have clarified the comment above. A pull is needed before pushing, but that pull does not require a merge or a rebase as git does, because the order of commits A and B does not matter (iff they are commutative). This gets a lot more useful when there are more than two patches to consider.
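A toy illustration of what commutativity means here (this is just line edits on disjoint lines, not Pijul's actual patch model): the two patches produce the same file in either order, so there is no merge decision to record.

```python
def apply_patch(lines, patch):
    """patch: dict mapping line index -> replacement text."""
    return [patch.get(i, line) for i, line in enumerate(lines)]

base = ["a", "b", "c", "d"]
patch_a = {0: "a-changed"}  # touches line 0 only
patch_b = {3: "d-changed"}  # touches line 3 only

ab = apply_patch(apply_patch(base, patch_a), patch_b)
ba = apply_patch(apply_patch(base, patch_b), patch_a)
assert ab == ba  # order doesn't matter, so no merge/rebase is needed
```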
It already does: you need a valid license to use DNF. You can even get the RHEL install ISO, but you won't get updates unless you activate a license.
But some cloud providers offer VMs with the license already in place; they pay for the license and embed the cost in the VM price, so it's the cloud provider that is bound by the EULA.