It's interesting how this idea is both nebulous and concrete, and how definitions and implementations vary widely. I was having the _exact_ same questions, since I've recently had to lean more into the data engineering space at my current gig, so it's helpful to hear these takeaways from folks experienced in the trenches (not just the newbs).


I agree with your parent post but why would a breakup not equally apply to Apple?


It should! But the parent was referring to Android manufacturers.


I think other commenters are overstating the change from staggered to columnar. I type just fine (100+ WPM) going between my Moonlander (a split keyboard from ZSA) and my Lenovo/MacBook (typical staggered layouts).

In hindsight, the biggest issue I ran into switching keyboards was that I was too ambitious playing around with the key configuration. The configurability is a big draw but I took for granted that I had already built up years of natural tendency for certain things - which thumb I use for space, preferences for Ctrl/Alt/Command/Option, for Shift, etc.

The defaults for these keyboards probably don't 100% align with what you're used to, so you should directly map what you do currently onto the keyboard's keymap, and then you can fiddle with making it yours over time.

I will say that if you're not already a touch typist, then a split keyboard is not going to help, and it will be more difficult to get used to.

edit: also, if anything, going columnar helped me actually consistently hit number keys!


Could you elaborate?


There are some hilariously ugly designs for even trivial features like edits and replies:

Edits get a `"m.relates_to": {"event_id": ..., "rel_type": "m.replace"}` field in the body, plus `"m.new_content": {...}` containing the plain-text and HTML versions of the message, while also keeping a copy of both (typically with a prepended "*") outside "m.new_content" for backwards compatibility. Yes, that's 4 (four) copies of the message text in an edit; for a while Element generated up to 10 (ten) copies IIRC, due to some proposed extension(s), but that seems to be gone, thankfully.
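
For illustration, here's roughly what an edit event body looks like, sketched as a Python dict (event IDs and message text are made up):

    # Sketch of a Matrix edit event body (hypothetical values).
    # Note the four copies of the message text: plain + HTML
    # fallbacks outside m.new_content, plain + HTML inside it.
    edit_body = {
        "msgtype": "m.text",
        "body": "* hello world",                    # fallback, prepended "*"
        "format": "org.matrix.custom.html",
        "formatted_body": "* <b>hello world</b>",   # fallback, HTML
        "m.new_content": {
            "msgtype": "m.text",
            "body": "hello world",                  # actual new text
            "format": "org.matrix.custom.html",
            "formatted_body": "<b>hello world</b>", # actual new HTML
        },
        "m.relates_to": {
            "rel_type": "m.replace",
            "event_id": "$original_event_id",       # hypothetical ID
        },
    }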

Reply messages get `"m.relates_to": {"m.in_reply_to": {"event_id": ...}}` - an annoyingly different format from the edits. It might look like that allows an edit to change which message is being replied to, but nope, last I checked that's not supported. Oh, and for backwards compatibility an <mx-reply> HTML element is to be prepended, containing a copy of the replied-to message and its info (and yes, that means the replied-to message effectively can't be deleted, as the reply will still contain its text; luckily clients can choose not to generate such <mx-reply>, but Element still does). And if you want to actually get proper, reliable info about the replied-to event, you have to make an API request for every single one you want to display (unless you happen to already have a cached copy, which luckily for replies is reasonably common).
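
And a sketch of a reply body for contrast (again as a Python dict with made-up values) - note the differently shaped "m.relates_to", and the <mx-reply> fallback carrying a copy of the replied-to message:

    # Sketch of a Matrix reply event body (hypothetical values).
    reply_body = {
        "msgtype": "m.text",
        "body": "> <@alice:example.org> original text\n\nmy reply",
        "format": "org.matrix.custom.html",
        "formatted_body": (
            "<mx-reply><blockquote>"
            "In reply to @alice:example.org<br>original text"
            "</blockquote></mx-reply>"
            "my reply"
        ),
        "m.relates_to": {
            # No rel_type here, unlike edits.
            "m.in_reply_to": {"event_id": "$replied_to_event"},
        },
    }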

And then there are threads - again, in the name of crappy backwards compatibility, there's a mess: within-thread replies imitate a reply to the last in-thread message, along with `"m.relates_to": {"event_id": ..., "rel_type": "m.thread", "is_falling_back": true, ...}`, that "is_falling_back" indicating that this isn't actually a reply (it's false when you want an actual reply). And clients are "supposed" to also handle replies to in-thread messages without the "m.thread" relation (which would come from clients not supporting threads), but as far as I can tell there's no way to do that while paginating without making an API request for every single message (and yes, Element behaves quite buggily here).
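
Sketched out (made-up values again), an in-thread message carries both shapes at once:

    # Sketch of an in-thread message body (hypothetical values).
    # Python's True corresponds to JSON true.
    thread_body = {
        "msgtype": "m.text",
        "body": "message in a thread",
        "m.relates_to": {
            "rel_type": "m.thread",
            "event_id": "$thread_root",
            "is_falling_back": True,  # False for an actual in-thread reply
            # Fake reply pointing at the latest in-thread event, so that
            # thread-unaware clients render something sensible.
            "m.in_reply_to": {"event_id": "$latest_in_thread"},
        },
    }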

And then there are some things that can't be reasonably handled - the context/message-listing/pagination APIs don't give any reaction info (besides the reaction events themselves, in the chronological position of their addition), so reaction presence/counts in the history view must be calculated by clients, and thus can't show reactions that were posted a while after the messages. (There used to be some aggregation provided, but it's since been removed!!!) I think the only way to do this properly is truly just making a "list reactions" API request for every single message the client wants to show.
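
A sketch of what that per-message fetching could look like, assuming the spec's relations endpoint and the Python requests library (the homeserver URL and IDs are illustrative, and pagination is elided):

    import requests

    def fetch_reaction_counts(homeserver, token, room_id, event_id):
        """Fetch reaction events for one message via the relations API;
        a client rendering history would call this per displayed message."""
        # (IDs should be URL-encoded in practice.)
        url = (f"{homeserver}/_matrix/client/v1/rooms/{room_id}"
               f"/relations/{event_id}/m.annotation/m.reaction")
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        counts = {}
        for ev in resp.json().get("chunk", []):
            key = ev["content"]["m.relates_to"].get("key", "")  # e.g. a "+1" emoji
            counts[key] = counts.get(key, 0) + 1
        return counts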

This may make it seem like Matrix has an extremely firm stance on backwards compatibility, but nope - recently they deprecated & made non-functional the unauthenticated media-retrieval API, making it impossible for clients not supporting video/audio/image display to just open those in the browser, instead forcing them to have custom file downloading & saving code, and also making it impossible to link to media within a message. There was a window of 6 (or fewer?) months between the new API being finalized and the old one being removed.
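
Roughly the shape of the change, as I understand it (the server name and media ID here are made up):

    # Before: a plain, unauthenticated GET - a URL you could paste into
    # a message or open directly in a browser.
    OLD = "https://hs.example/_matrix/media/v3/download/hs.example/MEDIAID"

    # After: the client-API endpoint, which requires an Authorization
    # header - so bare links no longer work, and clients need their own
    # download-and-save code.
    NEW = "https://hs.example/_matrix/client/v1/media/download/hs.example/MEDIAID"
    # e.g. GET NEW with headers={"Authorization": "Bearer <access_token>"}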


While you have a point about the ugliness of threads and replies, I do not like your example with the authenticated media. The reason for the very short deprecation cycle was that it was deemed to be almost a security fix. The Matrix team did not like how their and others' servers were used as CDNs.


Right; it's not unreasonable to want to transfer away from it quickly, but it's still a rather short deprecation cycle compared to everything else I've had to touch, which has never been deprecated. And it's not an "example", it's a real thing that actually affects my usage of Matrix: I'm now unable to post images inline in a message (granted, including fixed-URL links wasn't the prettiest thing, but it did work), and I have to fully download videos before viewing them from my client that doesn't include video playback, whereas before I could immediately open one in a browser and view it while it streamed in. Granted, then again, it's the only such occurrence, and probably there's nothing else that could match it in the future.

However many years ago I started work on my Matrix client, I was rather surprised it ever allowed direct links to media in the first place; but then again, Discord had made the same mistake. At least Discord's solution is more sensible, providing temporary links.


Many of these problems stem from the fact that Matrix is federated. And it's a protocol - not really even for messaging, but for a federated graph database. In federation some messages may be missing and arrive out of order. And different clients have different capabilities. There's a really simple example client implemented in a couple of lines of bash.


No, most of those are just plain and simple bad outcomes from not having thought out extremely basic things ahead of time. And even if there are problems with actually no simple fix, that in no way means the problem ceases to be a problem.

Replies to threads from thread-unsupporting clients are the one potentially hard thing federation-wise, as doing it properly would require the server to trace back the reply and, from that, handle it as in-thread - but that's not far from what servers already have to do with edits. Or you could just not require/suggest that behavior, having replies without thread metadata always be outside of threads; that would probably save on confusion too, as the illusion breaks anyway when someone on a client without thread support doesn't reply to some message after having used replies on others.

There might be some graph core to it, but it's still primarily a messaging service. Opening the client spec will show you tons of messaging-specific APIs.

I utterly disagree that saving a half-dozen lines in largely useless toy clients is worth making more feature-complete clients more complex and doubling the size of every edit event.


> In federation some messages may be missing and arrive out-of-order.

Doesn't "some messages are missing" trait defeat the point of a reliable communication protocol?


Missing at any given moment. There's eventual consistency. But sometimes connectivity isn't 100%.



All that really shows is "the project has an active community". Have you seen how many XMPP has? Or Python[1]? This is just normal open development metadata.

[1]: https://peps.python.org/pep-0000/#


> Have you seen how many XMPP has? Or Python[1]?

Python has had 664 PEPs in 23 years (29 per year). XMPP has received 495 XEPs in 23 years (21½ per year).

Matrix has received about 650 in 8 years (>81 per year). Four times the change rate is quite annoying when writing code against the spec. Plus, most XMPP clients only support a fraction of the full spec, so by that comparison the impact of the rate of change is even worse. Furthermore, most XEPs and PEPs are mere (optional) extensions, whereas a lot of MSCs are alterations of existing APIs. Any JSON parser used for Matrix needs to anticipate fields changing or being added all over the place, because random fields can show up anywhere in the input data after a spec change.
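
In practice that means parsing defensively - something like this sketch (the field names are just the common message fields):

    import json

    def parse_event(raw: bytes) -> dict:
        """Leniently parse a Matrix event: take the fields we understand,
        tolerate extras, and don't crash on missing ones."""
        ev = json.loads(raw)
        content = ev.get("content", {})
        return {
            "sender": ev.get("sender"),
            "body": content.get("body", ""),
            # Unknown/new fields from whatever MSC landed this week are
            # simply ignored rather than treated as errors.
        }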

The way the Matrix spec is developed feels a lot more like a proprietary company spec that happens to be published on Github than the IETF/XMPP/Python spec process. The rate of change is high and almost all changes are done to serve new features for the two or three major players that bought into the Matrix ecosystem.

One recent change that comes to mind is the move from unauthenticated media URLs (secret but publicly accessible) to authenticated URLs. The setting to force that changeover won't apply everywhere for a while, but it'll completely break every media-supporting client written before the spec change.

Nothing wrong with extending the spec to improve the product, but with how fast the protocol is growing, I wouldn't want to be tasked with maintaining a Matrix client and I don't have much faith in the forward compatibility of the few Matrix bots I've written either.


Aaaargh, this comment is a nightmare.

It is a GOOD THING for people to open MSCs and try to evolve Matrix, and the number of open proposals shows the enthusiasm in the ecosystem for doing so and proposing ways to evolve the protocol.

Meanwhile, the number of actually accepted merged MSCs is way lower - 226 merged in 8 years: https://github.com/matrix-org/matrix-spec-proposals/issues?q... - so 28 per year. Same as Python.

> I wouldn't want to be tasked with maintaining a Matrix client and I don't have much faith in the forward compatibility of the few Matrix bots I've written either.

Authenticated media is literally the first time we've made a significant breaking change on the CS API in 10 years - and was effectively a security fix, to stop people abusing Matrix as a CDN. Bots I wrote 10 years ago still work today without changes (other than auth media).

> The way the Matrix spec is developed feels a lot more like a proprietary company spec that happens to be published on Github than the IETF/XMPP/Python spec process.

Seriously, read the proposal mechanism (https://spec.matrix.org/proposals/) and look at a MSC like https://github.com/matrix-org/matrix-spec-proposals/pull/177... with >500 comments from across the wider community (so big that it crashed GitHub at the time).


Could you elaborate on how code review after merge is fine or better?

Some major downsides to review after merge:

- cost of context switching (the author has moved on to something new, which now has to be paused to "go back", so there's no agility benefit to just merging once it works)

- increased risk of unnecessary conflicts (what happens when someone merges something, you have feedback, and then someone else merges on top? A PR helps distinguish code that's done from code that could be improved, because it forces a moment of communication between the authors)

- tooling (a PR or diff is well supported; how do you discuss feedback when everyone can just merge on top without review? I'm assuming there's no point to a PR if everyone can just merge)

- decreased shared learning and understanding (I might think the code follows our standards, but there may still be feedback from my team that could help improve it. Why put it in the main branch before getting that feedback? It seems like it would be hard to keep track.)

I can't imagine my team performing well under those circumstances, and I think we have a very healthy code review/quality culture. If I'm not giving or receiving feedback, that sounds more like code slinging than thinking with humility. Even the most experienced architects I've worked with welcome feedback, so it's not a matter of trust.


I haven't used this workflow but I imagine your concerns could be addressed.

> addressing merge conflicts

I actually think that, from the POV of the change author, this workflow is better at this. Other code changes have to resolve conflicts with yours, not the other way around. The follow-up changes from review feedback can begin with conflicts already addressed for you.

> tooling

I don't know how other platforms handle this, but on GitHub at least there is nothing stopping you from reviewing a merged PR. You can prevent pushing straight to the trunk while still allowing authors to merge their own PRs at will.

I do think your other points are clear drawbacks but on its face the practice doesn't seem without merit. Seems like the "show" point on the ship/show/ask spectrum.


The biggest benefits are:

- Breaking the framing that a review should only address code that has changed in a PR, and encouraging a more holistic view of the codebase. You don't have to review every merge every time, just review code at a granularity that makes sense when it makes sense to do it.

- Removing a delaying step between code being functionally complete and being delivered, where value invested is sitting on the shelf waiting for a reviewer.

- Forcing a high test pipeline confidence. If you're relying on code reviews to catch functional problems rather than stylistic or structural ones, yeah, you won't have the maturity to do this. Saying "we'll review after merge" is saying "we have enough confidence in our automated quality gates not to rely on a human before then."

To address your worries:

- Cost of context switching is removed, not made worse. If you insist on a review before merge, the author has to wait for a review, and unless you have a culture of reviewers context switching to do reviews immediately when they are requested, they will probably pick up something else. Now when the review comes back, what do they do? Do they leave the review waiting, increasing the likelihood of a merge conflict as it gathers dust? Or do they context switch to deal with it and realize the value invested? Contrast this to review-after-merge: the value is delivered, and there's no merge conflict potential; the reviewer can get to it when convenient; and the result of the review can just be another set of tickets on the board to be picked up in the normal sequence. No context switch required.

- Disconnect the review from the PR and the problem of conflicts goes away. Instead review a module at a time (where "module" can be whatever scope makes sense: file, import, header, whatever), so that you're always looking at the whole context rather than the few lines that happen to have changed. That also minimises the human tendency to find as many issues in a 50 line diff as in a 2000 line diff.

- The tooling issue is to some extent an availability bias. Just because tooling for a specific process is well-known does not make that process good. It can nail on harmful practices and make them hard to change or even to see how harmful they are. And yes, if you're doing something close to trunk-based development then either PRs get very small or they go away entirely - branch management is orthogonal to when reviews happen, but you can't usefully move in that direction without addressing the delays inherent in review-before-merge.

- Learning and understanding would only decrease if reviews stopped entirely, rather than moving when they happen. The reviewer always has `git blame` to direct feedback to the right place, and by expanding the scope of a review from just-the-diff to all the code around it, the reviewer has more of a learning opportunity, not less.

It's possible you do have a healthy review culture, but the question I would have is this: how long do PRs sit on the shelf waiting for a review, or for review feedback to be addressed? Do you track that number? And are you relying on humans to catch problems that should be embedded in automated quality gates? Moving the review out of the commit delivery cycle removes a potential process bottleneck, and it's a very easy one to be completely blind to.


No amount of automated tooling can account for subtle security issues, wrong understanding of the spec (something that happens easily especially to people new to the project, but also every time you work with an area of the code you're not familiar with), gotchas previously encountered, etc. There are things that humans (especially ones with a lot of experience) are good at catching that machines aren't.

If you do "review after merge" and you deploy after every merge, I think that's highly irresponsible. If you don't, then your first point still applies - there is a delay between the function being "complete" and being delivered.


That's just blame diffusion though. No review is guaranteed to catch issues like that, all it does is say "well we followed The Process, guess it sucks that one got through" when something bad inevitably makes it to production because the reviewer didn't have the specific knowledge to catch the specific problem. That's more likely to happen when the scope of a review is limited to a diff - either implicitly because that's all the tooling shows you, or explicitly because of "why should I fix code I didn't write on this ticket" reactions to review feedback.

That's not to say blame diffusion is without value! You might need it to avoid toxicity around the team. But that's a different problem.


It's not "blame diffusion", it's risk management. Two sets of eyes are more likely to catch issues than one, and if I specifically know that some part of the code is tricky and person X is familiar with that part of the code, I might even ask that specific person to review.

Honestly, it sounds like you have a very cavalier attitude towards breaking production. That might work in some settings, but definitely not all of them.


I appreciate more takes in this space and what elim could be. Best of luck!

That said, I find Fabriq to be more in line with what I (no affiliation) think of as an app to help cultivate relationships: https://ourfabriq.com/fabriq


> Our Linux product is currently focused on big budget post facilities which often use Linux.

What's the value prop for a studio to run something on Linux vs Mac / Windows?

This is interesting to hear, as the trope is that Linux isn't used for audio/video work (and I couldn't find much online about real-world professional usage without pulling up "best video editor" style review results).

Also, congratulations on the release - rewrites are a huge undertaking.


Linux is the main OS at quite a few 2D and 3D animation studios. (I suspect because the industry started out on SGI machines running Unix.)


Are you just saying that Windows users no longer need to use PuTTY or similar?

That sounds convenient, but it leaves me wondering what in particular about that quality-of-life improvement changes how you view Windows.


Back when I had to use Windows for work (around 2018) it boggled my mind how there was no terminal app that I liked. CMD: Sucks. Powershell: Sucks. Powershell ISE: Sucks. cmder: Not great but it's the best thing I found.

If I was starting out today I could just use Windows Terminal and worry about other things, making me view Windows not quite so unfavorably.


Yeah, PuTTY worked great but was a bit unorthodox compared to OpenSSH.

Another interesting change is Unix-style \n line endings working everywhere, even Notepad.

It's also combined with other changes, like browsers hosting most apps and VS Code being used on Linux/Mac while still being first-class on Windows.

Still, many Unix tools don't work well, which is why WSL is handy. Docker is handy too.


PuTTY is used for SSH or Telnet, right? Windows 10 has had SSH built in for some time.


I'm biased to appreciate that F-Droid even exists, but providing a default list is not cognitive dissonance, and what you've said suggests you may have missed the author's point.

You can connect whatever "collection of apps" source you desire, but you're correct that you do get the "F-Droid List" out of the box.

It may feel odd to show no apps when initially using F-Droid, but one improvement might be to have the user explicitly choose a source list, with perhaps the "F-Droid List" first among the options.

Going in that direction would also help expose that particular feature: I wasn't even aware that you could add/change sources for F-Droid, which, if easy enough to do, sounds like an easier avenue for self-publishing apps than the major app stores.


I agree that censorship is not the best angle for the article to conclude on, but I don't think "user choice, decentralization, and community-controlled curation" are abstract ideas. The article is concrete about what this looks like in practice:

> This means F-Droid gives you selected apps by default without bans or censorship. When you install the F-Droid app, it automatically connects to the collection on f-droid.org that is maintained by this community. F-Droid also makes it easy for anyone to publish their own repository, with their own curation rules.

i.e., yes, you do get the "F-Droid List" by default, but you are welcome to connect to a different list or publish your own "list" of apps that has its own curation rules.

Imagine if you could view Apple or Google's app store with an "awesome app" list curated by a list of experts you follow, without all the junk of suggested apps or ads. That would go in the direction of "meta-curation", akin to what /u/hinkley refers to in another comment [1].

[1]: https://news.ycombinator.com/item?id=33776244


Steam already has curators/curation lists exactly like you describe. They are usually not particularly interesting.

Anyone can make a web page with apps they think are great, and links to those apps that will go straight to store pages. Very few people do.

These are done rarely because there's little or no money in it. Give the curators a significant cut, and now you have a lot of curators and a lot of gaming of the system.

Now you need to curate the curators, which is still a significant problem.

Throughout all of that, you'll have those claiming that curation is censorship. They don't matter because you can never satisfy them.

Who ensures security in decentralized app stores? Curators, independently? Or can they "inherit" that from the major app stores?

None of this is anywhere near simple.


With free software and reproducible builds, it is possible for small scale curators to inherit the security of the large scale curators. That is why they are key pieces of the f-droid.org collection.


> Imagine if you could view Apple or Google's app store with an "awesome app" list curated by a list of experts you follow without all the junk of suggested apps or ads.

My highschool history teacher once said the answer to nearly everything is money.

Why don't we have mobile app stores that operate in a manner similar to package managers? Money. Why can't I install what I want on my iPhone? Money.


This distinction is very minor. From my perspective there is almost no difference whether I am viewing a curated app list as a web page or as an alternative App Store. This has almost no consumer advantage and a vastly increased risk surface as an obvious downside.


This, from my perspective anyway, seems to be one of the biggest drivers of adoption for closed ecosystems. Users want to feel safe and not vet everything (because it is hard to do well), and it is genuinely hard to argue with that stance from a very pragmatic POV. As my friend once put it: 'I don't want to spend my valuable time left fiddling'. As for the argument you mention, I think I agree, because I still remember getting calls from family members who had installed something and now had constant, unremovable popups everywhere.

That said, Apple seems to be more targeted now, precisely because (compared to non-Apple Linux and Windows) it has more people who are lulled by the sense of security Apple's curation model provides.

edit: I kinda get that the article is mostly about mobile devices, but the app-store concept appears to have moved to the desktop world as well.


I agree - for most users safety is more important than "alternative stores".

This post is very manipulative in my view. It would be really easy for the authors to avoid that - just list the downsides of allowing any app to be installed on an iPhone. What are the consequences of allowing your parent to install "Bank of Amerika" on their phone? Exactly.


How people access apps is not an on/off switch between a walled garden and a dog-eat-dog free-for-all. Decentralized systems need to be designed with safety in mind, just like walled gardens do. Both can be done badly or done well.


Let's be fair - I don't think we're talking about simply swapping lists when you zoom out. At a minimum, any value prop would have to match the existing major app stores on things such as verifying binary sources, rejecting malicious apps, and the like.

I think the main question I see is - do multiple stores benefit the user?

I'm not sure of that answer, but I think we can agree that multiple stores do NOT help the default app store, which in turn could be beneficial to the consumer (multiple stores that have to compete on pricing with deals, self-publishers offering a cheaper price directly, etc. - think grocery stores selling the same stuff vs. farmers market vs. direct from the farm).

I'm no economist but I think we could also agree that having at least a few options is generally A Good Thing.

edit: regardless, even in a world with multiple stores, the point re: attack surface is a good one, and one of your other comments regarding what users actually value, like safety, is an important one - as a business, those are the things you need to weigh to make a profit


I disagree that the main question is: "do multiple stores benefit the user?". The main question is: "Should the user have a choice in their stores?". Apple believes that their users should not have that choice, and Google used that to drive adoption of Android by making it more open. As Google gained market share and power, they locked down Android more and more to gain those monopoly-level profits. Based on data released as part of Oracle v. Google, it looks like they have over 40% profit margins. Plus, notice how Google just cut their fee in half (30% to 15%). That means they were rolling in cash.


There are many alternative stores available for Android. In my experience this only leads to:

1. Less trusted software. Can I trust Russian Yandex app store? Can I trust Amazon app store?

2. Focus on upselling their own / affiliated apps.

3. No actual increase in choice. Some devices just come preinstalled with alternative stores for no other reason than their own monetary benefit.

4. A theoretical benefit that "I have choice" and if someone bans something I _might_ be able to install it from a different store. Of course oppressive regimes don't just ban apps, they often restrict internet in more severe ways.

