Hacker News: rudi-c's comments

Framework is unfortunately a term that's both ill-defined and quite overloaded. Electron is a framework in a very different sense than the "JS frameworks" op is asking about. The latter is about libraries with APIs and mental models for producing the UI & UX of web applications.

Electron is just a way of running Chrome without the Chrome UI around it, plus a few APIs for accessing native OS capabilities. You wouldn't say that Chrome/Firefox/Safari are frameworks just because they execute JS code and have APIs. In the context of this discussion, it is fair to say that Obsidian was built without a framework.


> Electron is a framework in a very different sense than the "JS frameworks" op is asking about.

The OP doesn't have a good understanding of what they're asking about, and that's okay. That's why they asked the question.

The linked thread is titled "What framework did the developer use to create Obsidian desktop application?". It's not asking about a web application and specifically referencing a desktop framework with: "He must be using some sort of desktop framework in order to push this out simultaneously right?".

> The latter is about libraries with APIs and mental models for producing the UI & UX of web applications.

Obsidian is not a web application. It is a desktop and mobile application. There is presently no Obsidian web application. So it would be odd to be asking about web frameworks for a non-web desktop and mobile application.

> Electron is just a way of running Chrome without the Chrome UI around it, + a few APIs for accessing native OS APIs.

No, Electron is a complete JavaScript-based application framework. It does so much more[1] than that.

1. https://www.electronjs.org/docs/latest/api/app


Obsidian is built using web technologies, but it's not using any front-end JS framework. Electron is the runtime framework.


I mean, you're correct that Obsidian doesn't run in the browser. But it's built on web technologies. As a result, I would argue that the skillset and work needed to build an app like Obsidian overlap more with those of most web applications than of most desktop and mobile applications.

You're also correct that Electron provides APIs beyond those available in the browser, such as access to the native filesystem. The way I see it, those are mostly lower-level details; it wouldn't be that hard to run Obsidian in the browser, it's just a product choice not to (specifically, it would imply creating a file hosting service). As the Obsidian mobile app demonstrates, Electron is swappable and not needed for Obsidian's core functionality. In contrast, had Obsidian been built on React, it would be rather difficult to simply "run without React" without rewriting the entire application.

How to build a large front-end app on non-web technologies (Swift, C++/Qt, C#, etc.) is also an interesting question, but I didn't understand that as being the topic of this conversation.


I developed my note-taking app Daino Notes[1], which has a block editor (like Notion) and is written in C++ and QML using Qt. I wrote extensively about its development in my blog: https://rubymamistvalove.com/block-editor

[1] get-notes.com


You can just admit you were wrong instead of continuing to move the goalposts. :)


Everyone else in this thread is talking about (React/Angular/Vue/jQuery/etc.) vs. (plain JS/direct DOM manipulation/etc.). Running that code on top of Electron or not is entirely orthogonal. So I admit I'm confused about why you're fixated on bringing Electron into the conversation. The OP's question appears to me to reference the last part of the linked thread: "I’d like to know what JavaScript framework (e.g. Vue, React) Obsidian desktop application is using for creating the user interface?"

Since we seem to be talking past each other, what do you think the conversation is about?


I thought we were talking about this (pasted from your comment above):

> "I’d like to know what JavaScript framework (e.g. Vue, React) Obsidian desktop application is using for creating the user interface?

And the answer to that question is: Electron.

Is that not the question?


Electron does not belong in the same category as React & Vue. JavaScript frameworks are commonly understood to mean:

- Third-party libraries, almost always implemented in JS (technically it could be some language compiled to WASM but I'm not aware of any commonly used WASM framework)

- Dynamically loaded from a CDN or bundled with application code at build time

- Provide a high-level API for creating and updating UI

- Whose implementation edits the DOM (a browser's low-level UI representation)

In contrast, writing an app _without a UI framework_ therefore implies writing first-party JS code that interacts with DOM APIs directly, without that level of abstraction in between. This is not a common choice these days, and could be considered an impressive accomplishment, hence this Ask HN.

To create that UI, you use the same low-level DOM APIs in Electron as you would in the browser because well, it is a Chromium browser engine.
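As a concrete illustration (entirely hypothetical code, not Obsidian's actual source), "frameworkless" UI code tends to look like small first-party helpers over the raw DOM, with updates applied by hand rather than through a virtual DOM. The sketch below builds plain HTML strings so it stays runnable outside a browser; in a real app you would call `document.createElement` and set properties on nodes directly.

```javascript
// Hypothetical sketch of frameworkless UI code: first-party helpers,
// no React/Vue-style abstraction in between. Builds HTML strings so it
// can run outside a browser; a real app would use document.createElement.

// Minimal escaping so user data can't inject markup.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Tiny element helper: tag, attributes, children (strings or nested markup).
function el(tag, attrs, ...children) {
  const attrStr = Object.entries(attrs || {})
    .map(([k, v]) => ` ${k}="${escapeHtml(v)}"`)
    .join("");
  return `<${tag}${attrStr}>${children.join("")}</${tag}>`;
}

// "Rendering" is just calling functions; "updating" means re-rendering the
// affected subtree and assigning it back (e.g. container.innerHTML = html).
function noteListView(notes) {
  return el("ul", { class: "notes" },
    ...notes.map(n => el("li", { "data-id": n.id }, escapeHtml(n.title))));
}
```

The point is that every convention here (escaping, helper shape, when to re-render) is a choice the app's authors make and enforce themselves, rather than one a framework makes for them.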

Example of each combination:

- Framework-using apps running in the browser: Airbnb, Figma

- Frameworkless apps running in the browser: HN

- Framework-using apps running in Electron: Figma's desktop app

- Frameworkless apps running in Electron: Obsidian

I wouldn't consider Electron as an answer to the question. It would be best described as a framework for running web apps as a standalone desktop app, but not a framework for creating user interfaces. Just using Electron doesn't make any progress towards having a solution for managing the complexity of writing code that interacts with the DOM.


I'm impressed by your patience. All I want to add is that you're 100% correct.


I can second the other commenter: you are having a different discussion than the rest of the comments and OP.


When you're writing only a "couple lines of code", you can do pretty much anything you want. There's no real tradeoffs to discuss except in a theoretical sense, because the stakes are so small.

If the app being built is "large" (which I understand to mean, has high essential complexity), then those tradeoffs matter a lot. If the app is built by a team instead of an individual, the people problems become significant. Those can very well be turned into a technology problem. The technology (framework in this discussion) can be used, among many other things, to establish a consistent way of solving the problems in the application, which alleviates many people problems.


> When you're writing only a "couple lines of code", you can do pretty much anything you want. There's no real tradeoffs to discuss except in a theoretical sense, because the stakes are so small.

The JavaScript logic in the browser is small compared to the total application. This is even more true when you remove the bloat imposed by a large framework.

Frameworks do not exist to alleviate problems for the developer. They exist to help the employer with candidate selection and training elimination to expedite hiring and firing. I can understand why a developer who is utterly reliant on the framework to do their job might think otherwise, which is a capability/dependence concern.


That you believe frameworks were invented to serve employers is a cynical point of view. I'm sorry for whatever bad experience you've had with the frameworks or people using them that caused you to develop this viewpoint.

A developer choosing to use a framework doesn't mean they are reliant on it, any more than choosing a particular language, library, text editor, tool, etc. It simply means they decided it was a helpful way to accomplish their goal, whether that's to save time, or establish consistency, eliminate categories of problems, avoid re-inventing the wheel, etc.

I don't know if you're aware of this, but you're coming off as incredibly arrogant with your strong claim that frameworks are used by those who don't know better. It's easy on the internet to vaguely gesture at "developers", but most of us are individuals who've built software with real users, among other demonstrated accomplishments. Strong claims require strong evidence; I hope you have the arguments to back it up.


I hear the exact same self-absorbed reasoning on other subjects from my autistic child almost daily. The psychological term is fragile ego.

For example: It’s not that the developer reliant on the framework is less than wonderful. It’s that everyone who differs in opinion is obviously wrong and/or arrogant, because the given developer cannot fathom being less than wonderful.


The important thing in any "large" application is to set consistent patterns for doing common tasks: creating UI components, re-using them, updating their content as a result of data changes (reactivity), etc. This can be done with or without a framework.

A framework establishes a large portion of those patterns upfront. If the framework is a popular one (e.g. React) rather than an in-house one, it makes it easier to quickly ramp up hires and have a lot of documentation/examples online. Popular frameworks also implicitly embody years of feedback from thousands of devs exercising their APIs, so they're typically pretty good at solving common use cases.
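To make "setting your own patterns" concrete, here is a hypothetical sketch of the kind of in-house reactivity convention a frameworkless app might standardize on: a tiny observable store that components subscribe to, re-rendering when the data they care about changes. Everything here (the names `createStore`, `subscribe`, etc.) is illustrative, not any particular app's API.

```javascript
// Hypothetical sketch of an in-house reactivity pattern: a tiny observable
// store. Components subscribe once and re-render when data changes -- the
// kind of convention a framework would otherwise impose for you.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    get: () => state,
    // Replace state immutably and notify every subscriber.
    set(partial) {
      state = { ...state, ...partial };
      listeners.forEach(fn => fn(state));
    },
    // Returns an unsubscribe function.
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}
```

A single developer can keep a convention like this consistent in their head; the value of a popular framework is that it makes the same consistency cheap for a large, changing team.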

Obsidian was initially built by a single developer. One of the best that I have the pleasure of knowing, but when you're one person, it's much easier to create your own pattern and have it work for yourself. They have since hired maybe 2 other devs, with minimal intention of hiring more, so ease of onboarding isn't a significant concern to them the way it would be for most companies (and "large" frontend apps more often than not require larger teams, by definition).


Thank you for your comment (and replies). This makes me realize that we can also actively choose which patterns of a framework we want to use, and which aspects of a project are better built to work independently.

Also, so cool that you got to know Silver! Their team is small but very talented, I look up to them a lot.


The contents of this article are pretty old, but the static website's design has been revamped (I believe several times) since then. My guess is that the two may have just fallen out of sync in such a way that this particular oddity manifests.


There are definitely others who shared your perspective. A commonly cited reason of early Figma adopters was that they felt it was faster than Sketch.

Of course, the reality is that performance is a super nuanced thing. It's always measured in relation to specific things, but ultimately summarized via a "feeling".

Aspects of performance include:

- Loading a (blank/medium/large) file from (scratch/cache/etc)

- Performance when editing (what?), panning, zooming (small or large doc?)

- Performance with a large number of simple objects, or complex objects (components? variables? nested components? drop shadows/background blurs?)

I haven't personally done any performance comparisons between the two apps since ~2018, but at the time there were definitely things where Figma was noticeably faster than Sketch, a lot of things that were comparable, and some things that were slower. My own very biased feeling was that Figma was faster more often than not, but it's always up to the individual use case, how their file is set up, what they are doing within that file, and how they mentally weigh those different scenarios.

I definitely didn't feel like being on the web was a limiting factor. In some theoretical state, with infinite resources to optimize everything, native could be faster since you have access to lower-level APIs. In practice, that's the same argument as "it could be faster in hand-written assembly". Almost never did we get to the point where we'd use those abilities even if we had them, due to their cost on development and impact on the correctness/maintainability of the code.


There's plenty of server-side components to Figma that are substantially more complex and expensive than that of the typical website.

Multiplayer means that every file that a user loads is loaded onto a compute server and retained in-memory on that server, processing every mutation to the document. Figma documents can get quite large -- they can contain the design system of an entire organization, dozens of variations of a product feature, etc. This means the server cost scales per active session at a given time, a higher number than active requests.

In addition to multiplayer, Figma attempts to keep most other parts of the application real-time. If another user adds a file to a project, you will see it without refreshing the page. That's a simple example, but certain types of queries/mutations can get much more complex.

Figma is an enterprise app, and the data models needed to support complex use cases (permissions, etc) means that DB load is a real challenge.

While the DAU count of Figma might not be as high as other consumer apps, the amount of time those users spend in Figma is often substantially higher.

Those are some of the things that contribute to a high bill. While Figma is most known for the frontend editor, the infra work is plenty interesting too.


> Figma is an enterprise app, and the data models needed to support complex use cases (permissions, etc) means that DB load is a real challenge.

This, “permissions, etc”, isn't just an enterprise-scale problem, any multi-tenant system can and probably will hit it.

Working out who can access what can sound simple enough, but it gets rather less pleasant when the rules can be set with more complex ACLs¹, and because people can move around dynamically it is both potentially resource-heavy to derive and difficult to cache² both safely and³ efficiently. It is natural to think "well, we can simplify the permissions model", but you really can't when selling to different enterprises: many have their own idiosyncratic workflows, or local tweaks if using an "industry standard" workflow, and they will make a noise if your software can't support them without extra tricks on their part.

We are at a much smaller scale⁴, in another industry, but this is an issue we have to be very careful about.

--------

[1] Is this person in a given group? Does that give or remove permission? Can they just access, or edit, add, …? also: different elements on the same screen could have very different ACLs to each other and at different times in a process

[2] missing changes for a time could cause significant issues if users are working on something commercially important or otherwise sensitive

[3] Safe is easy: don't cache at all. Efficient is easy: don't care about a bit of staleness and accept a bit of “eventual consistency”/“eventual correctness”. Achieving both takes a pile of resource even with a great design.

[4] We don't have to worry about consistently spreading data and processing over a set of DCs as our product has natural borders between tenants so splitting off into distinct DBs⁵ is an easy answer to some of the scale & efficiency issues.

[5] rather than needing everything in one "public" system because anyone can potentially want to share access with any other user.
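The safe-vs-efficient caching tension described above can be sketched in a few lines. This is a hypothetical illustration (none of these names come from any real system): permission decisions derived from group membership, memoized with a short TTL, so a membership change is picked up within the TTL rather than immediately ("eventual correctness") while repeated checks stay cheap.

```javascript
// Hypothetical sketch: ACL checks derived from group membership, cached
// with a short TTL. Safe-but-slow would be no cache at all; this trades a
// bounded window of staleness for cheap repeated checks.
function createAclChecker({ isMember, acl, ttlMs, now = Date.now }) {
  const cache = new Map(); // "user:resource" -> { value, expires }
  return function canAccess(userId, resourceId) {
    const key = `${userId}:${resourceId}`;
    const hit = cache.get(key);
    if (hit && hit.expires > now()) return hit.value;
    // Derive from scratch: any allow entry whose group contains the user.
    const entries = acl(resourceId) || [];
    const value = entries.some(e => e.allow && isMember(userId, e.group));
    cache.set(key, { value, expires: now() + ttlMs });
    return value;
  };
}
```

Even this toy version shows the problem: until the TTL expires, a revoked or newly granted membership is invisible, and doing better requires invalidation wired to every path that can change membership or ACLs.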


Font rendering is indeed complex, but the anecdote seems to be misleading readers into thinking Evan wrote obscure code.

I worked extensively in the parts of the Figma codebase where Evan wrote a lot of foundational code, and have also worked directly with him on building the plugin API.

One of Evan's strong points as a CTO is that he was very pragmatic and that was reflected in his code. Things that could be simple were simple, things that needed complexity did not shy away from it.

While all codebases have early decisions that later get changed, I'd say that largely speaking, Figma's editor codebase was built upon foundations that stood the test of time. This includes choices such as the APIs used to build product code, interact with multiplayer, the balance between using OOP vs. more functional or JS-like patterns, the balance between writing code in C++/WASM vs. JS, etc. Many of these choices can be credited to Evan directly or to his influence.

Subsequent engineers that joined were able to build upon the product without causing a spiraling mess of complexity thanks to those good decisions made early on. In my opinion, this is why Figma was able to ship complex new features without being bogged down by exploding tech debt, and a major contributing factor to the business's success as a whole.


+1 from another Figma engineer who happened to work on the text engine back in the day.

I think that Evan generally wrote code that was as simple as possible — there was no unnecessary complexity. In this case there indeed is some inherent, unavoidable complexity due to the math involved and the performance requirements, but otherwise I found our text rendering pipeline very understandable.

Evan actually wrote about it if you're curious to learn more: https://medium.com/@evanwallace/easy-scalable-text-rendering...


It’s a clever trick. But can it render textured text? Transparent text, gradient fills? Maybe it can, I don’t know. But why not just triangulate the glyph shapes and represent each glyph as a set of triangles? This triangulation can be done offline, making rendering very lightweight.


The linked post was about Evan's side project, but within Figma, all of that is indeed possible. The glyphs are transformed into vector networks[0], which feed into a fill pipeline that supports transparency, gradients, images, blending, masking, etc.

[0]: https://www.figma.com/blog/introducing-vector-networks/


It is incredible how easily this stuff spirals out of control, and it's why I'm not too worried about AI yet.

Every now and then I write a PoC or greenfield project that I put down for 6 months. And sometimes when I pick it up to extend it, it so rapidly feels like it's getting out of control (I'm actually listening to the Chemical Brothers song of the same name at the moment!). I can at least usually fix that with some refactoring, but why didn't I get it right at the time? I don't know.

And it's often hard to figure out why, what architectural decision you made to cause this. Pointing at the particular interface or pattern or method call chain that is the cause of a ton of complexity that could be fixed is much, much harder than most jobs in programming. And beyond the ability of 90% of developers.

The -2000 story popped up here again recently (https://news.ycombinator.com/item?id=44381252), and it's in the same vein: why had no one else done that? Because it takes extreme skill to simplify existing code. It's beyond most developers.

I think it's why we as an industry often obsess about things like spaces vs. tabs, semicolons or not, etc. They're obvious improvements to a codebase, and everyone can join in. But really, they're a small improvement, not a massive one.

And then you get the Evans of the world who just do it, almost effortlessly. I've worked with an Evan, and sometimes you'd look at his code and think "why?", but any attempt to change or improve it invariably made it worse. He'd picked that pattern or structure or method call chain and it was always the right choice. And after a day of poking at it, exploring the edges, trying to change it, you'd realize why.

And yes, sometimes the code was so complex other developers couldn't get it. And then they'd call me over to help because I could get it. And I'd look at it and realize it was complicated because it had to be like that. He'd actually done it in the simplest way possible.

And years later I still make the wrong choice sometimes, and I always think of Simon and wish I had his magic touch.

Even if he used to name functions Thing() and DoStuff() and forget to update them.


This looks like interesting technology! Congrats on the launch, it's great to have people exploring the space of realtime data storage & sync. Some thoughts, that I hope you can find constructive.

The landing page draws comparison to Figma, Linear and Notion. But they are vastly different use cases.

Figma is document-centric, which means that:

- All data is tied to a single document, limiting their size.

- Requires that a single backend session holds a document in-memory and deals with split-brain issues.

- Operational complexity arises around deployment.

- User interactions are often continuous (one event / frame), imposing tighter latency requirements.

- Generally requires that the document stays loaded in memory in a stateful backend session.

- Technologies like Jamsocket are targeting such use cases.

Notion & Linear, on the other hand, have collaborative editing but not to the same degree of realtimeness.

- Data is not tied to a single document.

- Data can grow unbounded.

- Data is typically more relational.

Presenting both these use cases adds confusion. By using SQLite (emphasis: database and "SQL"), it makes the technology appear at first glance more suited for the Linear/Notion use case. However, the opposite appears to be true after reading https://docs.livestore.dev/evaluation/when-livestore/.

- "All the client app data should fit into a in-memory SQLite database".

- "Reasons when not to use LiveStore" -> "Your app data is highly connected across users".

The Figma-like document use case does seem like something that LiveStore could support. When it comes to designing a data model for collaborative apps (CRDT or CRDT-like), the simplest and most flexible solution is to store every object as a map of [Object ID] -> [Property Name] -> [Property Value]. Assuming that the property names & values come from a fixed set and are typed (this will generally be true unless you allow for arbitrary user-defined fields), that looks like a database row! So why not just store it in a database, indeed.
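The map-of-maps model above can be sketched with last-writer-wins resolution at the property level. This is a generic illustration of the technique, not LiveStore's (or Figma's) actual implementation; the timestamp/client-ID tiebreak is one common assumption for making replicas converge regardless of delivery order.

```javascript
// Hypothetical sketch of the [Object ID] -> [Property] -> [Value] model with
// last-writer-wins conflict resolution per property. Each write carries a
// (ts, client) pair; ties break on client ID so all replicas converge to
// the same state no matter the order writes arrive in.
function createObjectStore() {
  const objects = new Map(); // id -> Map(prop -> { value, ts, client })
  function wins(a, b) {
    return a.ts !== b.ts ? a.ts > b.ts : a.client > b.client;
  }
  return {
    // Apply a property write; ignored if the existing write wins.
    apply({ id, prop, value, ts, client }) {
      if (!objects.has(id)) objects.set(id, new Map());
      const props = objects.get(id);
      const prev = props.get(prop);
      const next = { value, ts, client };
      if (!prev || wins(next, prev)) props.set(prop, next);
    },
    get(id, prop) {
      const entry = objects.get(id) && objects.get(id).get(prop);
      return entry ? entry.value : undefined;
    },
  };
}
```

Because resolution is per property, concurrent edits to different properties of the same object merge cleanly; it's cross-object invariants (relationships, deletions) where this simple model stops helping you.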

However, among databases, there are object stores and relational databases. The latter is useful, as applications often want to represent relationships between different objects. But if we support multiplayer-like use cases, which implies the absence of server-side transactions, how should conflicting events involving the creation/deletion of objects, or the relationships between them, be handled? Is it entirely on the application to think about it? This is an interesting topic that I believe LiveStore is well-positioned to innovate in.

I also find the "local-first" emphasis to be at odds with all of Figma/Notion/Linear. Local-first software tends to be designed with a limited number of concurrent users accessing the data in mind. SaaS applications that are truly targeted at collaboration tend to have a lot of different requirements. On the other hand, those applications do tend to have relatively limited offline support, which a local-first application tends to emphasize more.

The Figma-like use case also has additional requirements. But I understand this is a beta and look forward to seeing further development!


I worked with Rudi at Figma and of course support his comment - Figma seems to be mentioned for marketing, not for the actual technical comparison.

For others looking for more details on how Figma's sync engines differ and why 2 sync engines emerged, I had a long thread about it here:

https://x.com/imslavko/status/1890482196697186309


There's a similar situation in Go, where some positions utterly confuse bots trained mostly on normal games. There was an interesting research blog post on training a bot specifically to become good at solving one of these weird problems (Igo Hatsuyoron 120, the "hardest go problem ever"):

https://blog.janestreet.com/deep-learning-the-hardest-go-pro...


It is worth noting that that problem is only understood by humans (as well as it is) after _centuries_ of study by many professional players. My understanding is that even very strong human players must study that problem for days/months/years to really understand it well.

So KataGo being unable to handle it without special training doesn't seem _quite_ as blind a blind spot as the chess examples from the article seem to be (I suck at chess and I was able to understand one or two).

I'm not trying to undermine you mentioning this, in case it comes off like that, on the contrary I think the comparison is quite interesting. I'm curious if this is just a difference in go vs chess, or in the relative abilities of specific kinds of AIs to handle these, or maybe just differences in human ability to craft and/or understand different problem difficulties between the games.


Almost always, when I see a statement of the form "people in group X are doing two contradictory things", it turns out that different people in group X said those things. You're creating strawmen.

In this case, even if the same person criticized Brian Armstrong then later congratulated him, I don't see the issue. Humans sometimes achieve things, sometimes do controversial things, it's as mundane as it gets.

