
It's not even the same ballpark.


I was on my 3rd Synology over the last 12 years or so. They were solid and something I didn't have to pay attention to.

But the drive lock-in tipped me over the edge when I needed to expand space. I'm getting burned out with all of these corporate products gradually taking away your control (Synology, Sonos, Harmony, etc.).

Even though it takes more time and attention, I ended up building my own NAS with ZFS on Debian using the old desktops I use for VMs. I did enjoy the learning experience, but I can't see it's that reasonable a use of time.
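
For what it's worth, most of the ongoing attention is monitoring. Here's a small Python sketch of the kind of check I'd run from cron, assuming the standard zfsutils tooling (`zpool status -x` prints "all pools are healthy" when nothing is wrong):

    import subprocess

    # Ask ZFS to report only unhealthy pools.
    result = subprocess.run(
        ["zpool", "status", "-x"],
        capture_output=True, text=True,
    )
    if result.stdout.strip() != "all pools are healthy":
        # Hook your own alerting in here (email, ntfy, etc.).
        print("ZFS pool problem:\n" + result.stdout + result.stderr)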


> can't see it's that reasonable a use of time

That totally depends on whether you enjoyed yourself and maybe learned something. Totally up to you!


Definitely. Should have phrased it as "can't see it's that reasonable a use of time _for most people_".


It doesn't seem like you're talking about the same thing the article is. Graham doesn't say "you must be a good writer to be a good thinker".

> This is only true of writing that's used to develop ideas, though. It doesn't apply when you have ideas in some other way and then write about them afterward — for example, if you build something, or conduct an experiment, and then write a paper about it. In such cases the ideas often live more in the work than the writing, so the writing can be bad even though the ideas are good.

Writers who have trouble expressing thoughts in a non-native language are not actually developing the idea in that language. That doesn't mean they are producing bad ideas, but it _might_ mean they won't produce good writing (in that non-native language).

I took the essay to be highlighting that if you use writing as a tool for thinking, clunky writing is likely to highlight places where your ideas themselves aren't clear or correct yet. The iterative process of refining the writing to "sound good" will help shape the ideas.

This seems to be a commonly expressed idea in other forms. For example, when thinking through ideas in code, the process of making the code more "beautiful" can also result in a clearer expression of more correct ideas.


Nest thermostats have a combined heat and cool mode where you have a setpoint for each. On older ones there was a limit on how close together the two could be set.
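
In effect it's a deadband rule. A hypothetical Python sketch of the idea (the 3°F minimum is my assumption, not Nest's documented value):

    MIN_DEADBAND_F = 3.0  # assumed minimum separation; not Nest's documented number

    def clamp_setpoints(heat_f, cool_f):
        """Push the cool setpoint up if the two are set too close together."""
        if cool_f - heat_f < MIN_DEADBAND_F:
            cool_f = heat_f + MIN_DEADBAND_F
        return heat_f, cool_f

    print(clamp_setpoints(68.0, 69.0))  # -> (68.0, 71.0)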


> The problem stems from the fact that Unicode encodes characters rather than "glyphs," which are the visual representations of the characters. There are four basic traditions for East Asian character shapes: traditional Chinese, simplified Chinese, Japanese, and Korean. While the Han root character may be the same for CJK languages, the glyphs in common use for the same characters may not be. For example, the traditional Chinese glyph for "grass" uses four strokes for the "grass" radical [⺿], whereas the simplified Chinese, Japanese, and Korean glyphs [⺾] use three. But there is only one Unicode point for the grass character (U+8349) [草] regardless of writing system. Another example is the ideograph for "one," which is different in Chinese, Japanese, and Korean. Many people think that the three versions should be encoded differently.

https://en.m.wikipedia.org/wiki/Han_unification

Seems like Wikipedia has a good overview of the issue.
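
For concreteness, you can check in Python that "grass" really is a single code point no matter which language the text is in; which regional glyph you actually see is decided by font selection and language tagging (e.g. an HTML lang attribute), not by Unicode itself:

    import unicodedata

    ch = "\u8349"  # the "grass" character from the quote
    print(ch, hex(ord(ch)), unicodedata.name(ch))
    # 草 0x8349 CJK UNIFIED IDEOGRAPH-8349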


That's not what happened. Downstream was building from source, that source just had malicious code in it.

One part was binary, the test file (pretty common for test data), and it was checked into the repo. The other part was in the build config/script, which was in the source tarball but not in the repo.
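
One way to catch that second class of discrepancy is to diff the release tarball's file list against a checkout of the matching git tag. A rough Python sketch (paths are hypothetical, and note that autotools tarballs legitimately ship generated files that aren't in the repo, so any hits still need human review):

    import tarfile
    from pathlib import Path

    TARBALL = "xz-5.6.0.tar.gz"   # hypothetical local copy of the release tarball
    CHECKOUT = Path("xz-git")     # hypothetical checkout of the matching git tag

    with tarfile.open(TARBALL) as tar:
        # Drop the leading "xz-5.6.0/" component from each member name.
        tar_files = {m.name.split("/", 1)[1]
                     for m in tar.getmembers() if m.isfile() and "/" in m.name}

    repo_files = {str(p.relative_to(CHECKOUT))
                  for p in CHECKOUT.rglob("*") if p.is_file()}

    for extra in sorted(tar_files - repo_files):
        print("only in tarball:", extra)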


Git would make you merge or rebase, but yes, there wouldn't be a conflict. They're saying Pijul would let you push directly without having to deal with the diverging histories.


Which tbh is a bad thing. Just because change A doesn't textually touch change B doesn't mean they don't interact.

Unless your VCS is handling CI for integrating changes on push, you really need to pull down the upstream changes first and test them combined with your code before blindly pushing.


> Which tbh is a bad thing. Just because change A doesn't textually touch change B doesn't mean they don't interact.

A good example of this is code that grabs several locks: different functions have to acquire them in the same order, or a deadlock will result. That's a lot of interaction, even though the changes might land on completely different lines.

And I think that's generally true for complex software. Of course it is great if the compiler can prove that there are no data races, but there will always be abstract invariants which have to be met by the changed code. In very complex code, it is essential to be able to bisect, and I think that only works if you have a defined linear order of changes in your artefact. Looking at the graph of changes can only help you understand why some breakage happened; it cannot prevent it.
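
For concreteness, a minimal Python sketch of that lock-ordering hazard. Each function looks fine on its own, and two patches introducing them wouldn't have to touch a single common line:

    import threading

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def worker_1():
        with lock_a:
            with lock_b:      # acquires a, then b
                pass

    def worker_2():
        with lock_b:
            with lock_a:      # acquires b, then a -- violates the ordering invariant
                pass

    t1 = threading.Thread(target=worker_1)
    t2 = threading.Thread(target=worker_2)
    t1.start(); t2.start()
    t1.join(); t2.join()      # may hang forever, depending on scheduling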


I have clarified the comment above. A pull is needed before pushing. That pull does not need a merge or a rebase the way git does, because the order of commits A and B does not matter (iff they are commutative). This gets a lot more useful when there are more than two patches to consider.
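
A toy Python illustration of what commuting means here (grossly oversimplified: real patch commutation also has to adjust offsets when insertions shift later lines):

    base = ["a", "b", "c", "d"]

    def patch_a(lines):        # edits line 0
        out = list(lines); out[0] = "A"; return out

    def patch_b(lines):        # edits line 3
        out = list(lines); out[3] = "D"; return out

    # The patches touch disjoint lines, so the order of application
    # doesn't matter -- they commute.
    assert patch_a(patch_b(base)) == patch_b(patch_a(base))
    print(patch_b(patch_a(base)))  # ['A', 'b', 'c', 'D']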


That seems like a very important point; how does pijul deal with such "effects at a distance"?


By not being a CI tool, nor claiming to solve such Turing-complete problems.

Pijul has a theory of textual changes, but indeed doesn't care at all about what you write in your files: that's your problem!


As with other versions of Datomic, this is not open source (only binaries are Apache licensed).

EDIT: Rephrased to sound less snarky. Not inherently a bad thing, but it's a limiting factor for many, including myself.

EDIT 2: Typo


In the replies they link to this blog post with the details.

Using the UBI images and public cloud instances does seem like a clever way to handle that.

https://rockylinux.org/news/keeping-open-source-open/


I started giggling when I read the part about using cloud instances.

Here's hoping that Red Hat doesn't modify dnf to require some kind of key to get source code from the default repo, as stupid as that would be.


It already does: you need a valid license to use dnf. You can even get the RHEL install ISO, but you will not get updates unless you activate a license.

But some cloud providers offer VMs with the license already in place; they pay for the license and embed the cost in the VM price, so it's the cloud provider that is bound by the EULA.


VS Code brought the Language Server Protocol, so I don't think the "vision" balance is as one-sided as you imply.
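
For anyone unfamiliar: LSP is JSON-RPC 2.0 framed with a Content-Length header, and that small, editor-neutral wire format is a big part of the vision. A rough Python sketch of the first request in the handshake (the "initialize" method is from the LSP spec; the params here are the bare minimum):

    import json

    def frame(msg):
        """Wrap a JSON-RPC message in LSP's Content-Length framing."""
        body = json.dumps(msg).encode("utf-8")
        return b"Content-Length: %d\r\n\r\n" % len(body) + body

    initialize = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {"processId": None, "rootUri": None, "capabilities": {}},
    }
    print(frame(initialize).decode("utf-8"))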


And the Debug Adapter Protocol, IIRC.

