
I think the key problem is that a large number of startups are shipping software in containers, and dotnet requiring a CLR is not particularly well-suited for containerization. It's like the old school Java JVM model. You have to ship a copy of the runtime with every container, and if you're doing proper microservices it's an awful lot of overhead.

Yes I'm aware MS makes it easy to build containers and even single executables, but languages that compile down to an ELF are pretty much a requirement once your deployments are over the 10k containers mark.



> dotnet requiring a CLR is not particularly well-suited for containerization

Why? I routinely put compiled .NET programs into containers.

It's also easy (easier than Rust even) to build on Mac targeting a Linux image.


Create a hello world dotnet container, then do the same in a modern language. Then compare image size and resource consumption. Then imagine you're running tens of thousands of containers in a proper SaaS microservices model, and it'll make sense :)


Enterprise doesn’t spawn 10,000 containers to perform a simple “hello world” operation. That’s not how it operates. You’d be amazed at how many concurrent requests a single service can handle. This capacity must align with the actual requirements of the companies involved, not some unrealistic scenario like “we need to emulate Google.”


> Create a hello world dotnet container

The container image is 10.9 MiB. The binary is 1.2 MiB.
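For reference, an image in that size range can be built along these lines with Native AOT (a sketch: the project name, version tags, and base images are placeholder assumptions, and the final size depends heavily on the base image you pick):

```dockerfile
# Stage 1: publish a self-contained Native AOT binary.
# Native AOT needs a native toolchain, which the SDK image doesn't ship.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
RUN apt-get update && apt-get install -y clang zlib1g-dev
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -r linux-x64 -p:PublishAot=true -o /app

# Stage 2: copy the binary into runtime-deps, which contains only the
# native libraries an AOT binary needs (no .NET runtime). The chiseled
# variants of this image shrink it further.
FROM mcr.microsoft.com/dotnet/runtime-deps:8.0
WORKDIR /app
COPY --from=build /app/HelloWorld .
ENTRYPOINT ["./HelloWorld"]
```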


While that is small for a container and a modern binary, I recall C hello worlds being 17 KiB -- if only AOT/Spans/interop were used more to drive those file sizes down further.


Sounds like a problem with the "proper SaaS microservices model" more than .NET


If you have that many containers, you should be having the revenue to pay for that.


Just say you don't want to use .NET. It's fine, but how many startups ever get to over 10k containers? You can use AOT to further reduce the footprint. It's totally fine to hate Microsoft, but this is as weak an argument as I've ever seen.


> once your deployments are over the 10k containers mark.

Stack Exchange is famously a dotnet application that runs on a handful of fairly (but not unreasonably) large computers. 10k containers means either "you are Facebook" or you're wasting a lot of capacity some other way.


Nowadays it's very common to have .NET apps being containerized and running them on K8s or whatever you like in production -- I think you are relying on outdated information.

It's also well-suited for that. Of course, you won't end up with a tiny Go docker image, but this doesn't matter.


A million different JavaScript and Python services delivered as Docker images would like a chance to disagree with you.


You’re making the classic logical error of “your thing doesn’t have the workaround needed for an issue that only happens with my thing”.

You need 10K containers for Node and Python apps because they use a single threaded runtime! The best way to scale these is to deploy many small containers.

The .NET runtime is fully multithreaded and asynchronous and supports overlapped I/O. It scales to dozens of cores, maybe hundreds in a single process. The built in Kestrel server is full featured including HTTP/3 and TLS 1.3! You don’t even need NGINX in front of it.

Not to mention that unlike most Linux-centric programming languages, it deploys reliably and consistently without needing the crutch of containers. You can simply copy the files to a folder on the web server(s), and you’re done. I’ve actually never seen anyone bother with containers for an ASP.NET web app. There is very little actual benefit, unlike with other languages where it’s essentially the only way to avoid madness.

PS: Every Node app I’ve ever deployed has been many times slower to build and deploy than any ASP.NET app I’ve ever seen by an order of magnitude, containerised or not. Go is comparable to C# but is notably slower at runtime and a terrible language designed for beginners too inexperienced to grok how exceptions work.


If you use the same base image, is it really as bad as you're making it out to be?

I understand that you're getting a roughly 100 MB dist directory for a .NET web app, and that it uses quite a bit of RAM... but people also use Node and Java, which have similar issues.

Don't get me wrong on this, I'd like to use Rust+Axum a lot more and C# a bit less... but I don't dislike C#.


The runtime alone is a bit over 200mb, and that doesn't include additional packages you'll most likely need.

That being said, I'd much prefer to deploy a C# application over Node or Java, no argument there. But saying "I wish more startups were using C#" makes me wince. C# seems well-suited for the monolith-architected VM-image-deployed strategy of the early 2000s, but it's pretty close to being the exact opposite of modern best practices. And unfortunately it's kinda unfixable in a language that depends on a VM execution environment.

I'm sure all this is short-lived however -- I'm relatively confident we'll see deployment best practices converge down to "use whatever language you want but you must compile to WASM" in the next decade, so the warts of devs' chosen language aren't an ops problem anymore.


But the runtime is in the base image, so it's shared across all deployed services on a single host system. So it's much less of an issue; also, the entire runtime isn't loaded into RAM for every application. From Task Manager in Windows, I'm running a local app in debug/dev mode and it's taking 226 MB of RAM, which is a lot compared to IIRC well under 20 MB for the last Rust+Axum project I wrote.

That said, you get a lot of functionality in the box for that extra resource usage, and it doesn't really grow by much under load.

Beyond that, there's nothing particularly wrong about having a mostly monolithic backend for a lot of things, I would say most applications are better served starting with a more monolithic backend in a mono-repo with the FE.


C#/.NET has ahead-of-time (AOT) compilation that works very well with containerization. Obviously there are still overheads for the AOT runtime, but it is pruned.


AOT would solve a lot of these problems if it didn't have show-stopping restrictions like "you can't use reflection" and "you can't use native sessions".

https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...

https://learn.microsoft.com/en-us/aspnet/core/fundamentals/n...


Reflection doesn't work with AOT because there is nothing left to reflect on... it's compiled away. You can't use reflection with C, C++ or Rust either, doesn't mean you can't use them for useful things.


> dotnet requiring a CLR is not particularly well-suited for containerization

This is a solved problem: dotnet publish can already produce OCI containers straight from the csproj. I even have a csproj override that magically adds it to every console project in the solution.
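A sketch of what such an override can look like, assuming the SDK's built-in container publishing (.NET 7+); the repository naming is a placeholder, and it goes in Directory.Build.targets (imported after each project file, so OutputType is already set):

```xml
<!-- Directory.Build.targets: applies to every project in the solution -->
<Project>
  <PropertyGroup Condition="'$(OutputType)' == 'Exe'">
    <!-- Opt console apps into the SDK's OCI image publishing -->
    <EnableSdkContainerSupport>true</EnableSdkContainerSupport>
    <!-- Placeholder registry/repository naming convention -->
    <ContainerRepository>myorg/$(MSBuildProjectName)</ContainerRepository>
  </PropertyGroup>
</Project>
```

With that in place, dotnet publish /t:PublishContainer builds an image per project without any Dockerfile.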

The biggest problem IMO is that JIT-generated code can't be saved, so it is always regenerated on the fly; compound that with a not-so-state-of-the-art GC (wish we had ZGC someday), and it very easily creates brief moments of latency, making the timing fat-tailed.

NativeAOT and ReadyToRun remedy this problem by compiling ahead of time, but you trade space for time.


> ...languages that compile down to an ELF are pretty much a requirement once your deployments are over the 10k containers mark.

Why?


Exactly this point.

Go and Rust produce native binaries; I wish C# had an official native compiler without the big runtime needs of .NET.


You might want to read https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...

Publishing your app as Native AOT produces an app that's self-contained and that has been ahead-of-time (AOT) compiled to native code. Native AOT apps have faster startup time and smaller memory footprints. These apps can run on machines that don't have the .NET runtime installed.
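Opting in is mostly a single project property plus a runtime-specific publish; a sketch (the extra size-trimming properties are optional assumptions from the docs, not required):

```xml
<!-- In the .csproj -->
<PropertyGroup>
  <PublishAot>true</PublishAot>
  <!-- Optional: favor binary size over throughput -->
  <OptimizationPreference>Size</OptimizationPreference>
  <!-- Optional: drop ICU globalization data for smaller output -->
  <InvariantGlobalization>true</InvariantGlobalization>
</PropertyGroup>
<!-- Then publish for a concrete runtime identifier:
     dotnet publish -c Release -r linux-x64 -->
```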


There are quite a few gotchas for this, especially for web apps. This is understandable because it was added after the fact, vs. being a first-party design requirement. It's cool and might work for you, but taking a non-trivial .NET codebase to Native AOT can be tough, and if you're starting greenfield, why go .NET?


FWIW, the .net folks seem to have put a lot of effort into the native AOT pipeline in the last few releases. We have a large, non-trivial application with a fair amount of legacy code, and getting it AOT’d in .net 10 (targeting wasm, even!) was not an insane lift.


How is the WASM target looking nowadays? I tried it in 2023 and early 2024, and it was far from complete (JS interop, poor documentation scattered across GitHub issues, and so on). I still can't find a single trustworthy source of documentation on how to proceed. C# would look great at the edge (Cloudflare Workers).


Sure, legacy applications won't be easy to move over, but Microsoft has been quite consistent in working towards making microservice applications easy to build and run with AOT, by moving more and more components over to source generators and promoting minimal APIs.

Their target is probably not entirely greenfield projects (although I wouldn't mind it myself), but rather those with existing investments that start new projects that still want to share some parts.


And this sounds great until you get to the laundry list of restrictions. For us the showstopper was you can't use reflection.


You can't use reflection with AOT compilation. That's what AOT compilation is. Java has the same limitation for AOT compilation, for example.


Most reflection usage is for JSON (de)serialization and for that you can use source generators, which also offer better performance.

https://learn.microsoft.com/en-us/dotnet/standard/serializat...
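A rough sketch of what that looks like with System.Text.Json's source generator (the type and context names here are illustrative, not from the thread):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public record WeatherForecast(DateTime Date, int TemperatureC);

// The generator emits serialization metadata for the listed types at
// compile time, so no runtime reflection is needed -- AOT-friendly.
[JsonSerializable(typeof(WeatherForecast))]
internal partial class AppJsonContext : JsonSerializerContext { }

// Usage: pass the generated metadata instead of letting the serializer
// reflect over the type:
//   var json = JsonSerializer.Serialize(
//       forecast, AppJsonContext.Default.WeatherForecast);
```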


These same restrictions exist for Go, the Go team just decided that it was easier to never support these features to begin with which has its pros and cons.


Such as? For the ones I've actually needed from the C# AOT limitations list, you can use reflection and dynamic loading just fine in Golang, with static single-binary compilation and all.


Golang's reflection is severely limited by design, so that there is nothing to restrict during compilation. On the other hand, you lose out on powerful tools such as C#'s System.Reflection.Emit. To note, the biggest library limited by AOT compilation, ASP.NET (which does still work with it if you design around the limitations), is being updated to work better with it (and source generators play a big part of that).


They're self contained and native, but they're still massive.

There's been some work on CoreRT and a general thrust to remove all dependencies on any reflection (so that all metadata can be stripped) and to get tree-shaking working (e.g. in Blazor WASM).

It seems like in general they're going in this direction.


Smaller is better, of course, but I've never found the size of .NET binaries to be an issue.

What problems does this cause?


If you're trying to pack hundreds of microservices into a cluster, having containers use 80 MB of RAM minimum instead of 500 KB can become a big deal.


Not every library is capable of building with Native AOT, which means any app that depends on those libraries runs into the same problem. If the library or app uses reflection, it likely isn't capable of Native AOT compilation.


Just an FYI, Go still bundles a runtime in its native binaries. C#'s AOT has restrictions on what works (largely reflection), but these same restrictions apply to Go (although Go applies these restrictions into how it's designed for the entire thing).


Rust and Go are only good at single binaries. When you need a few, their sizes add up quickly, because they don't really do shared libs.


Meh. Probably 100x more startups use Python and JavaScript than anything else combined.



