
I’m a proponent of that. Just rewriting old Java or Python monsters in an efficient language like Rust would easily give us an order of magnitude better efficiency.

A special class of theorem provers could be developed, proving that a program runs below a certain level of space and time complexity.
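At toy scale, that already looks like what proof assistants do: fix a cost model, then prove a bound against it. A minimal Lean 4 sketch, where the cost function `steps` is entirely made up rather than derived from a real program:

    -- Toy sketch only: an assumed cost model and a proof that it is
    -- linearly bounded. A real prover would derive `steps` from the
    -- program's semantics instead of assuming it.
    def steps (n : Nat) : Nat := 3 * n + 7

    theorem steps_linear (n : Nat) : steps n ≤ 10 * (n + 1) := by
      unfold steps
      omega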

This would entail a great increase in energy efficiency.

I also endorse holy information warfare against inefficient proof of work cryptocurrencies and a transition to efficient proof of stake.



I like Rust, but gosh it produces the second-biggest bloated binaries I've ever seen. (Yes, it's mostly people using it wrong, though apparently I'm one of them.) The only thing worse is C++. (Again, probably people using that wrong, but that doesn't mean it doesn't happen.) Java and Python, by comparison, are tolerable, even when people use them wrong; when Java programs are huge, that's usually because of a mass of hideous “business logic” rather than a billion dependencies.


> I like Rust, but gosh it produces the second-biggest bloated binaries I've ever seen.

Are you building in "release" mode? The debug builds can be 10x bigger.

    cargo build --release
You still get stack backtraces and subscript checking.


It's not releasing that's the problem; it's developing. Development is much harder with the release profile, because stuff like overflow checks is disabled.

But yes, I have tried my own custom debug profile that turns on the optimisations to try to get the size down. The final binary is smaller, but `cargo build` still regularly leaves me with just kilobytes of space remaining, and then fails outright until I `cargo clean` and try again (which I think is build script related).
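For reference, such a profile is only a few lines of Cargo.toml; custom profiles have been stable Cargo for a while now, and the name `dev-opt` here is arbitrary:

    [profile.dev-opt]
    inherits = "dev"
    opt-level = 2            # optimise roughly like release
    debug-assertions = true  # keep debug_assert!
    overflow-checks = true   # keep the checks release turns off

Then build with `cargo build --profile dev-opt`. It doesn't fix the disk-space problem, but it keeps development semantics with release-ish codegen.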


But it doesn't depend on a runtime. The Java program may be smaller, but it depends on hundreds of MB of JVM.

If you're building a Docker image, for example, Rust is going to be smaller.
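A rough sketch of what that looks like with a multi-stage build; the binary name `myapp` is a placeholder:

    # Build stage has the whole Rust toolchain...
    FROM rust:1-alpine AS build
    WORKDIR /app
    COPY . .
    RUN cargo build --release

    # ...but the final image carries only the compiled binary.
    FROM alpine
    COPY --from=build /app/target/release/myapp /usr/local/bin/myapp
    CMD ["myapp"]

No JVM, no interpreter, just the executable plus a small base image.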


The JVM is 114MiB on my machine. A near-minimal ggez program in debug mode is about 100MiB,¹ and ggez is small for a Rust application library. When you start getting into the 300s of dependencies (i.e. every time I've ever got beyond a trivial desktop application), you're lucky if your release build is less than 100MiB.

Sure, I could probably halve that by forking every dependency so they aren't duplicating versions, but that's a lot of work. (It's a shame Rust doesn't let you do conditional compilation based on dependency versions, or this would be a lot easier. As it is, we have to resort to the Semver trick: https://github.com/dtolnay/semver-trick/ — not that many people do that, so it's functionally useless.)
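For those who haven't seen it, the trick is roughly this: after a breaking 0.2 release, you republish 0.1.x so that it depends on 0.2 and re-exports the types that didn't change. Crate name `foo` and `SharedType` below are placeholders:

    // foo 0.1.x's lib.rs after foo 0.2 exists. Its Cargo.toml adds
    //
    //     [dependencies]
    //     foo = "0.2"
    //
    // (a crate may depend on another version of itself), then:
    pub use foo::SharedType;
    // Code using foo 0.1 and code using foo 0.2 now agree on one
    // SharedType, instead of compiling two incompatible copies.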

Take GanttProject as an example. It's 20.6MiB of files, plus the JVM. I challenge any of you to make a Rust version (with accessibility support in the GUI) that can open (something resembling) its XML files and draw some (vague graphical Proof of Concept) representation on the screen (with editable text fields), in less than 114+21=135 MiB of binary. And then tell me how, because I've been trying to do that kind of thing for over a year.

¹: I can get it down to around 8MiB with release mode, lto etc., but that significantly increases the build time and only about halves the weight of the intermediate build files.
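The settings in question, for anyone who wants to try; these are all real Cargo options, though the savings vary a lot by project:

    [profile.release]
    opt-level = "z"    # optimise for size rather than speed
    lto = true         # link-time optimisation across crates
    codegen-units = 1  # slower builds, smaller code
    panic = "abort"    # drop the unwinding machinery
    strip = true       # strip symbols (Cargo 1.59+)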


You haven't seen any Haskell binaries yet :-) The Haskell LSP client is around 150MB :-)


But it is Rust! Every problem in every HN thread can be solved with it!

How dare you claim a larger binary...


> the second-biggest bloated binaries I've ever seen

Let me guess: Go?


I've never managed to get a Go program to compile, so I couldn't tell you. I was referring to C++ – though in fairness to the compiler, I had to brute-force myself through that source code too.


For binary size, you could use my hypothetical theorem prover to ensure the binary size stays below a certain point.

That would come at some kind of compile-time or efficiency cost, since you could no longer optimise for those, but I’m sure that’s something one could opt for.

I don’t think binary size matters though. Storage is very cheap these days.


People keep saying that. I have 1.8 GiB available for all my build files, and that's only because I keep deleting the build files of my other projects (meaning they have to be recompiled whenever I go back to them). Storage matters for me.

At least the prices are almost normal again, now that Chia's over.


Couldn’t you get 4x that much from a $3 USB stick?

You could also have self-modifying code such that the size of the binary automatically changes as needed.

If you ship a Lisp interpreter instead of Rust, you can have the interpreter recode itself and any Lisp files to a smaller size. You’d just implement a compression algorithm that preserves the functionality of the compressed code.

I think you could do that with Rust too, with a self-compiling binary. It’ll require some real technical skill, but if you hire a real hacker you can pull it off.

My cousin’s solution for the competitive programming contest had something like that.


I doubt a $3 USB stick would perform well enough not to become a bottleneck in Rust compile and link times.


There are no bottlenecks in, or even remotely related to, Rust.


Do people still know about upx these days? That said, I think executable compression is beside the point of the OP. Around 18 years ago, I was going through the codebase of a program I maintained as a Debian package, and as a udeb for the installer. Back then, I was trying to make it small enough to fit on the first floppy. I learnt about unnecessarily large datatypes in structs, about packing, padding and alignment, and that adding "static" to module-local functions and data can really do things to the binary size. Those times are over. Nobody cares about binary sizes anymore; the main argument against doing so is "we can't be bothered, we need to innovate."
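The struct-layout lesson still demonstrates nicely today; here's a small Rust equivalent of the C exercise (the field types are arbitrary, chosen to force padding):

    // Field order changes the size of a #[repr(C)] struct, because
    // each field must sit at an offset aligned to its type.
    #[repr(C)]
    struct Padded {
        a: u8,  // 1 byte, then 7 bytes of padding before `b`
        b: u64, // 8 bytes, must be 8-aligned
        c: u8,  // 1 byte, then 7 bytes of tail padding
    }

    #[repr(C)]
    struct Reordered {
        b: u64, // 8 bytes
        a: u8,  // 1 byte
        c: u8,  // 1 byte, then 6 bytes of tail padding
    }

    fn main() {
        assert_eq!(std::mem::size_of::<Padded>(), 24);
        assert_eq!(std::mem::size_of::<Reordered>(), 16);
    }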


> It’ll require some real technical skill but if you hire a real hacker you can pull it off.

I do have a project a bit like that, but I'm not using Rust for it. I was trying to make my own language (like Rust, but more powerful and also smaller), but I'm probably just going to use a modified (safer) C.

If I were to write it in Rust, I'd have to compile it in the first place… and if I could do that easily, I would simply use Rust.


You have a project involving self-resizing code? That’s awesome! Can you explain it more?


A ramdisk wouldn't help there?


Not with 4GB of RAM. I already have to close Firefox to compile non-trivial programs.


Ooops. 48GB here, regretting that I didn't get 64GB. (My excuse is that I needed it for osm2pgrouting which ate up to 90GB of paged memory on a country-sized input file. That hurt a lot.)


I have never been gladder that I abandoned my OSM data-processing project before I got that far. (I was planning on processing multiple country-sized input files for their road networks – on a friend's computer, but it only had 16GB, so that would be a lot of paging.)


I then jumped ship to OSRM, which did the same job in ~3GB of memory, AND several times faster, AND didn't return spurious routing results afterwards. (I was getting some routes with "hyperspace jumps" between points in the road network that weren't connected, and due to previous issues with memory I just decided to stop trying, so I didn't investigate the cause of those jumps.) So if you'll ever decide to do that again, just use OSRM. It "just works" (at least in my experience).


Storage is indeed very cheap, but so are CPU cycles, for the same reason.


This practice of injecting Rust into every conversation, no matter the topic, is getting tiresome.


But for good reason, no?

I would personally prefer a language model designed to emit x86 assembly, akin to GitHub Copilot, but I understand that to be a minority viewpoint.


Why x86 assembly? It seems like the world is moving away from a single dominant instruction set.


True, let's talk about Zig instead!


Java is remarkably efficient already. I’d bet you won’t get an order of magnitude improvement.


"Hypothetically", JIT can outperform native code given the right circumstances. Why not put more effort into improving JITs instead?


And who's gonna pay for that, Pushkin?


> I’m a proponent of that. Just rewriting old Java or Python monsters in an efficient language like Rust would easily give us an order of magnitude better efficiency.

Do not rewrite software in Rust, because Rust is not an efficient language. It's a memory and RAM hog, and it's unstable, with major breakage every release. Switching to Rust is not a thing for a profit-seeking enterprise.

In practice, it's an n-fold downgrade from good C on performance.


> with major breakage every release

Source? I was under the impression that Rust took an aggressive approach to back compat.

And I'm also not a proponent of rewriting in Rust for its own sake, but the other commenter was suggesting it in place of Python, which would probably be a huge net gain in efficiency.


Possibly with the caveat that the unstable versions of Rust are going to be more prone to breakage (I mean, obviously, but it is a way to hit issues). It was my impression that most people didn't need unstable anymore, though.


If you can do that in C, all the better.

I know there exists a formally verified secure C compiler. Ideally you would have a similar compiler which guarantees low energy use.

In my experience Rust already does a great job there, but I’m happy to behold whatever evidence exists for C being more energy efficient.



