The downside to that approach is that it limits you to C calling conventions (most language communities will want to write their tooling in their own language) and means that the process will only be as stable as its least stable plugin. Given how many bug reports have been filed against every editor that has ever gone that route, it's easy to see the appeal of containment to the people working on an editor.
The other side of that is that we're not in the 90s with single-core processors, and there's been a lot of hardware & software optimization over time. Most people can run something like VSCode and a language server and still have multiple cores left over — and since the events that trigger language server interactions are generally made by a human with a keyboard, it's not like you need to be in the microsecond range to keep up.
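To put rough numbers on that latency budget, here's a minimal sketch (Python, with an in-process socket pair standing in for the editor/server boundary and a made-up completion-style payload — not any real language server protocol exchange). It times a local round-trip and compares it to the roughly 100 ms gap between keystrokes for a fast typist; on a typical machine the round-trip should land well under a millisecond.

```python
import json
import socket
import time

def measure_local_roundtrip(iterations=10_000):
    # In-process stand-in for the editor <-> language server boundary.
    server, client = socket.socketpair()
    request = json.dumps({
        "jsonrpc": "2.0", "id": 1,
        "method": "textDocument/completion",
        "params": {"position": {"line": 0, "character": 5}},
    }).encode()
    start = time.perf_counter()
    for _ in range(iterations):
        client.sendall(request)   # "editor" sends a request...
        server.recv(4096)         # ..."server" reads it...
        server.sendall(request)   # ...and echoes a same-sized reply
        client.recv(4096)
    elapsed = time.perf_counter() - start
    server.close()
    client.close()
    return elapsed / iterations

if __name__ == "__main__":
    per_roundtrip = measure_local_roundtrip()
    keystroke_gap = 0.1  # ~100 ms between keystrokes for a fast typist
    print(f"local round-trip: {per_roundtrip * 1e6:.1f} microseconds")
    print(f"keystroke gap:    {keystroke_gap * 1e3:.0f} ms "
          f"(~{keystroke_gap / per_roundtrip:.0f}x the round-trip cost)")
```

This isn't a profile of any real editor, just a sanity check on the scale of local message passing relative to human input.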
That’s a lot of fearmongering with no evidence. Do you have profiler data showing that the microseconds needed to pass a message between processes are a significant limiting factor in a program that is rate-limited by human text entry?
I mean, taking your argument seriously would mean everything should be hand-tuned assembly. Obviously we figured out that other factors like time to write, security, portability, flexibility, etc. matter as well, and engineering is all about finding acceptable balances between them. Microsoft has been writing developer tools since the 1970s, and in the absence of actual evidence I’m going to assume they made a well-reasoned decision.
It seems like the person was suggesting that if all processes on your PC used a client/server model with message passing/RPC instead of the existing API model, the idle cores you speak of would not be idle.
While you're right that productivity versus performance is a trade-off, and an editor is not necessarily a high-performance application, it's not clear to me whether future optimizations would reduce the gap as much as optimizing compilers did vis-à-vis C and assembly.
In any case, that aside, whether LSP actually delivers on its core promise of stability remains to be seen.
> Do you have profiler data showing that the microseconds needed to pass a message between processes are a significant limiting factor
What? I think you didn't get my point. Let me try again.
You can look at a single operation and say "oh, that's nothing, it's so cheap, it only takes a millisecond," even though there's a way to do the same thing in far less time.
So this kind of measurement gives you a rationale to do things the "wrong" way, or shall we say the "slow" way, because you deem it insignificant.
Now imagine that everything in the computer is built that way.
Layers upon layers of abstraction.
Each layer made thousands of decisions with the same mindset.
The mindset of sacrificing performance because "well it's easier for me this way".
And it's exactly because of this mindset that you now have a supercomputer that's doing busy work all the time. You'd think every program on your machine would start instantly because the hardware is so advanced, but nothing works that way. Everything is still slow.
This is not really fearmongering; this is basically the state of software today. _Most_ software runs very slowly without actually doing that much.
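To make the compounding concrete, here's a toy back-of-the-envelope sketch. The layer count and per-layer cost are invented for illustration, not measurements of any real stack:

```python
# Toy illustration of how "negligible" per-layer costs compound.
# All numbers below are invented for illustration, not measurements.
layers = 25                   # abstraction layers an interaction passes through
per_layer_overhead_ms = 0.4   # "only" 0.4 ms each, so nobody profiles it
base_cost_ms = 0.005          # the actual work: a few microseconds

total_ms = base_cost_ms + layers * per_layer_overhead_ms
print(f"{total_ms:.1f} ms total, of which {base_cost_ms} ms is real work")
```

Each individual 0.4 ms looks free in isolation; it's the stack of them that turns a microsecond-scale operation into a 10 ms one.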
I don't think this is true. Also, any solution that involves humans just "trying harder" is doomed to failure. History has demonstrated that over and over again.
The technologies that win are the ones that account for that.