Rust 2021 Roadmap (github.com/rust-lang)
178 points by praveenperera on Dec 18, 2020 | 81 comments


This feels like "rust 2021 governance roadmap", which is good and all but not particularly interesting to those of us only peripherally involved.

I'd be much more interested to see "rust 2021 language/development/... roadmap" - I sort of expect I will be seeing this soon.


What you're seeing is the project's growth. We're big enough now that we can't talk meaningfully about everything the project does in one roadmap, and given that it's set by the core team, we're more formally delegating most of the specifics to the individual teams.

Beyond that, this meta-work is really needed, for the project overall. We've been bumping into limits here for a while. Rust is now large enough that if it were a startup, some people would argue that it's no longer a startup.

(I should also note the roadmaps have been trending this way for a while. See the change over time for yourself:

* 2020: https://github.com/rust-lang/rfcs/blob/master/text/2857-road...

* 2019: https://github.com/rust-lang/rfcs/blob/master/text/2657-road...

* 2018: https://github.com/rust-lang/rfcs/blob/master/text/2314-road...

* 2017: https://github.com/rust-lang/rfcs/blob/master/text/1774-road...

We didn't do this before 2017.)


Is it fair to say that Mozilla's apparent step back from Rust is a major driver to needing to decide on future governance?


Pretty much nothing to do with it, in the broad scheme of things. Mozilla was not directly involved in governance, and only employed a small number of people involved in project leadership, and those people didn't leave their positions when they left Mozilla.

The closest thing to this is that the situation with Mozilla finally jumpstarted the actual creation of the Rust Foundation, and deciding its relationship to the project is a governance matter. But that's not in scope for this roadmap, and so has nothing directly to do with it.


The impression I got is that they're taking a year to focus on governance instead of any dramatic technical efforts, in the wake of the Mozilla shake-up and the formation of the Foundation (and continuing project growth). I could be wrong.

Edit: This was just speculation and people with better information have weighed in elsewhere


The core team is, yes, but that doesn't mean that improvements to other things won't happen. It's not like the compiler team will stop their work. But the core team is focusing on governance this year.


Frankly, I hope you're right... Rust is an impressive language as it is... if it needs any improvement it's outside the language itself IMHO... all I can think of to complain about is compiler speed, really! The rest is really good stuff.


I'm into rust 50% for language innovations and 50% for their work on open source governance models.

Super important stuff.


This is partly explained in the chapter on unified project tracking [0]. In short: there are too many teams/projects and currently no easy way to track them and their progress. It is something they want to address this year.

[0] https://github.com/Mark-Simulacrum/rfcs/blob/roadmap-2021/te...


> This feels like "rust 2021 governance roadmap",

Please don't get me wrong -- governance is important. But concentrating on governance for the entirety of 2021, leaving technical roadmaps to the teams, is surprising. What problems is this roadmap trying to solve?

1. Is the existing governance model inadequate and in need of urgent fixes?

2. Is Rust becoming a "designed by committee" language? This model does not seem to work despite the best intentions.

3. It would be very useful to see not only "what" is in the roadmap but also "why".

EDIT: formatting...


Yeah, it looks like the project has grown to the point where they're splitting the roadmaps out by team.


> ### How does this group make decisions?

In the early days of the Apache Software Foundation, individual projects were expected to draft their own bylaws. This had the advantage of getting project members to think hard about the content of the bylaws. However, it had the downside of producing a lot of sloppy bylaws with ambiguous or otherwise unworkable rules. For instance, some bylaws were written to require 3/4 of all eligible voting members to change them, rather than 3/4 of those who vote — which can become a problem as members with voting privileges go inactive and become unreachable.

Documenting processes by which conflicts are resolved is hard. If you're lucky, most of the time the rules don't matter, as people reach consensus without having to go through exacting, difficult negotiations. But when you really need a formal process to resolve conflict, then it matters if your process is broken and pathological.

The best high-level design is to have the rules for resolving conflicts written by a few specialists who really care about precise drafting, and for most other entities within the organization to inherit those rules whenever possible.


The situation here is more about getting it written down at all than it is having conflicting systems; teams almost exclusively operate by full consensus among members, but that's through the sort of inheritance you refer to, rather than having it actually written down somewhere.


Most of the time, you want to bias for action and not require interacting with the rules to move forward. But "full consensus" has a problem with scale — the more eligible voting members you have, the more likely it is that you get a holdout on some issue.

There needs to be a process by which the concerns of minority factions are heard, which gives the majority a chance to accommodate those concerns rather than steamroll, but then eventually to move on after establishing workable consensus (as opposed to full consensus) despite the holdout. How do you get there — a majority-rule vote, a super-majority rule vote of x% of those voting? How long does the vote run? What's the escalation process? What entity has final say on technical disputes? And so on.

Rust is big enough and has been around long enough that it's certain many bitter disputes have arisen and been dealt with — there's surely an abundance of institutional expertise on resolving contentious issues. Nevertheless, it's in everyone's interest to avoid Balkanized, idiosyncratic, possibly suboptimal formal governance rules for the subgroups, so the process of "writing down the rules" should have the goal of producing one central set of canonical rules, possibly customizable, rather than many sets of potentially incompatible and superficially distinct rules.


> Most of the time, you want to bias for action

Not everyone agrees with this. One of Rust's core values is stability, and that involves making sure to not move fast and break things.

That said, there is always a balance. This is also why I had "almost" in there. There are a few exceptions in certain cases.

This is a significant difference between the Apache and Rust outlooks on things.

> But "full consensus" has a problem with scale — the more eligible voting members you have, the more likely it is that you get a holdout on some issue.

This is true, but you can solve this problem in different ways. Rust takes a federated approach. In the early days, there was only the core team. Eventually, we ran into scale issues. So we broke things out into multiple, smaller teams with full autonomy. This process has continued over time.

A lot of the procedural issues you cite only come into effect once you've rejected full consensus. If there's no voting, there's no need for complex supermajority overrule processes.

That being said, I fully agree that a common set of rules is a good thing. There's always tension here, because there are pros and cons to uniform rules. We have very common rules for making decisions, but complete balkanization of chat platforms, for example.


> > Most of the time, you want to bias for action

> Not everyone agrees with this. One of Rust's core values is stability, and that involves making sure to not move fast and break things.

Haha, fair point. As a user I appreciate that priority very much. It's hard to resist the pressure to add features.

Let me recalibrate and restate: even if you have a formal process to adjudicate disputes, to the extent possible it should not impact operations when things are going well, infecting everyday routines with needless bureaucratic formalities.


Can we change this to link to the rendered copy?

https://github.com/Mark-Simulacrum/rfcs/blob/roadmap-2021/te...


Or how about just waiting until it gets merged and officially adopted?


Project health is a good goal. To help toward that, I suggest making it especially easy for people to donate money to Rust and to crate authors.

This year, Mozilla laid off Rust developers. Other languages have massive corporate backing e.g. Google Go, Oracle Java, Microsoft C#, etc.

I believe strongly in Rust, and also in professional programmers and companies funding excellent software.


This would be the job of the foundation, not the project itself. Here's the current state of that: https://github.com/rust-lang/foundation-faq-2020/blob/main/F...


I would like to see greater clarity in this document on limits on sponsor influence with regards to technical governance. For example, I'd like to know if technical governance is considered outside the scope of the Board of Directors.

This is especially important since you have opted for a 501(c)(6), making it easy to be responsive to donors.


Please open this as an issue on that repository, as that's the process there.


AWS hired a whole bunch of core developers: https://aws.amazon.com/blogs/opensource/why-aws-loves-rust-a...


To the compiler folks: Can a language with the memory safety and speed of Rust be made with the compilation speed of Go? I'm just curious; I have no idea about this.


This question inappropriately assumes that Rust's compiler design and performance are fixed to what they are today, and in particular, that they are entirely inherent to the language.

There are a few interesting reasons why Rust compilation time is slow and Go's is fast. Here's a summary of the former: https://pingcap.com/blog/reasons-rust-compiles-slowly.

In theory, with enough time and resources, Rust could get a much faster compiler, but this can't be predicted, as it depends on the project's future success, or, to put it more concretely, on the backing it will have.

In practice, the Rust Cranelift backend could provide enough speed (for debug builds) that compilation times won't be slow anymore.

Something to consider is also that compilation is a tradeoff between compile speed and runtime speed (Go favoring compile speed), so one can't have both ends of the spectrum at the same time.


> and in particular, that they are entirely inherent to the language.

But... this is true. The design of the language is biased toward slow compilation across the full pipeline.

It's possible to get into a shouting contest about this :), so just look at what Pascal (Delphi) does and how absurdly fast it is, despite still being a "batch compiler".

So given Pascal (which is even more complex than Go, and really close to C/C++ in capabilities), it's clear that Rust is MADE to be slow.

And to the point: I've already used more than 12 languages, and super popular guides like this:

https://endler.dev/2020/rust-compile-times/

do NOT exist for Delphi. The way to "make Delphi compile fast" is to just use Delphi. It will compile fast.


I think the most accurate way to say it is that, while Rust compile times are not inherently slow, in most cases where a desirable feature was at odds with compile times, Rust decided to sacrifice compile times. So while it is not inherently slow to compile, it was not designed to be easy to compile quickly either.
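One example of such a tradeoff (my own illustration, not an exhaustive account): monomorphized generics give fast runtime code, but every concrete instantiation is compiled and optimized separately, whereas dynamic dispatch gives the compiler less to do at the cost of an indirect call:

    use std::fmt::Display;

    // Monomorphized: a separate copy of this function is generated (and
    // run through LLVM) for every concrete T it is used with. Fast at
    // runtime, but more code for the compiler to churn through.
    fn print_all_generic<T: Display>(items: &[T]) {
        for item in items {
            println!("{}", item);
        }
    }

    // Dynamic dispatch: one compiled copy, less work for the compiler,
    // at the cost of an indirect call per item.
    fn print_all_dyn(items: &[&dyn Display]) {
        for item in items {
            println!("{}", item);
        }
    }

    fn main() {
        print_all_generic(&[1, 2, 3]);  // instantiates for i32
        print_all_generic(&["a", "b"]); // instantiates again for &str
        print_all_dyn(&[&1, &"a"]);     // one instantiation, type-erased
    }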


I've profiled the compilation of a small but not trivially written crate (~5k lines, with macros, many UTs etc.), and the vast majority of the time (78%+) was actually spent on the LLVM side (see numbers in a post above).

In other words, the Rust language design (as opposed to the compiler implementation) was not the bottleneck in compilation time; the factors were well-known issues of the Rust compiler implementation: 1. it relies on LLVM, which is slow; 2. it generates IR that is slow to compile.


This is correct, and part of "the design of Rust makes it slow". The use of LLVM is part of the design, and how it is used is too.

It's like the people working on C-like languages (with the exception of Go) can't resist the temptation to make languages slow to use (yet spend absurd amounts of time on producing good binaries).

It's not that I'm not grateful for how well Rust/LLVM-compiled binaries perform; in fact, that's one of the big reasons I use Rust! But it's kind of weird(?)/funny(?) that the languages themselves are so slow.

I think it's cultural. People on that side of the fence are so used to sub-optimal tooling that they don't mind too much using another tool with the same feel. And if it provides a "10%" increase in performance, they're happy, because the comparison is against the old, not against the best.


> The use of LLVM is part of the design, and how use it is too.

This is false. There is the Rust Cranelift backend in development, which is actually usable already (depending on the project). Having a fast compiler for debug builds and a slow one for release ones is a setup that I can imagine being the future of Rust.

I've just built a project of mine and it took exactly half of the time to compile, when compared to the standard compiler.

It's actually crazy that they're even developing a JIT (although it requires dynamic libraries as dependencies, so its usage is and possibly will be limited).


Delphi compile times are very slow on a reasonable 10-man-year project. It doesn't help that the IDE (and compiler?) was still 32-bit last year (even though it can compile 64-bit apps). Apart from some other toxic problems!


This toy project https://github.com/pjmlp/gwc-rs takes 16 minutes to compile from scratch on a Core 2 machine with 8 GB of RAM and an SSD.

The original one in Gtkmm takes around 5 minutes in release mode, if that much.


Pro tip: use incremental compilation rather than full builds, especially if you're at an airport.


Pro tip: it only works if the crates are already compiled.

Another pro tip: switch to the LLVM linker.

Here is the thing with pro tips: maybe, just maybe, I am an avid reader of Nicholas's blog posts and know them all already.


Hmm, that looks slow. Can Rust support binary patching like Zig, or is that impossible with the current design?


The parent poster is referring to a full build, which is not really representative of a daily workflow. Incremental builds, in particular for a project where a large part of the full build is spent on external crates (which don't need to be recompiled), are much faster.

I can't test on an old machine, but modifying the project code and performing an incremental (standard) build takes around 5 seconds on a modern machine.


> The parent poster is referring to a full build,

Yes

> which are not really representative of a daily workflow.

Only in Rust :). This is a "hack" that must be done to make Rust viable. In Delphi, a full compile is so fast that it's pointless to make it incremental.


The biggest issue is the lack of support for binary crates in cargo, so instead of pkg-config, you need to build the whole world when starting a new project.

So imagine being in the waiting lounge of an airport, reading about a cool Rust project and then trying to check it out on your laptop. With luck, it finishes compiling before you're called for boarding.


> This question inappropriately assumes that Rust's compiler design and performance are fixed to what they are today, and in particular, that they are entirely inherent to the language.

Either that, or: it asks exactly that question rather than assuming that slow compilation is intrinsic to memory safety etc. But yes, there might be an implicit premise about some part of the Rust design contributing to slow compilation times.


Well, I've just profiled the incremental compilation of a 4.8k-line Rust project (using the `self-profile` option); top three offenders:

- LLVM_module_codegen_emit_obj: 47% of the time

- LLVM_passes: 16%

- codegen_module: 15%

Sample of Rust-specific passes:

- MIR_borrow_checking: 0.2%

- resolve_lifetimes: 0.0%

The two passes above are arbitrarily picked, but anything other than the top three takes cumulatively 22% of the time.

Based on this, the Rust IR generation and LLVM are the bottleneck. I'd be interested in the opinion of somebody knowledgeable, since there may be non-obvious mistakes in the above reasoning.


This is a pretty common shape of the measurement, yes.


Hello Steve, I hope you're well and baking more breads for Christmas!


Thanks! I am working on two loaves right now, haha!


I'm not inappropriately assuming that the design is fixed. I just wanted to know if it's possible. I'm not complaining about Rust, because I don't know much beyond the basics, and I also haven't worked with statically compiled languages.


For a language to reach the compilation speed of Go, it must have been designed with that goal in mind from the start.

The best Rust can hope for is to make it largely irrelevant during dev by doing fine grained incremental compilation.


Safety doesn't cost that much in compilation time. `cargo check` performs full type-checking and borrow checking, and it's reasonably fast already.

Rust's performance depends very much on optimizations, mainly LLVM. It could be faster if you were willing to accept less optimized code, or change coding style to use fewer libraries and fewer abstractions. But if you want the same speed without any compromises, then I don't see any way around that.
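As a rough illustration of the "fewer abstractions" point (my own example, not tied to any particular crate): both of the functions below compile to similar optimized machine code, but the iterator version hands LLVM a pile of small generic closures and adapters to inline and collapse, and that work happens at compile time:

    // Two ways to sum the squares of even numbers.
    fn sum_iter(v: &[u64]) -> u64 {
        v.iter().filter(|x| *x % 2 == 0).map(|x| x * x).sum()
    }

    fn sum_loop(v: &[u64]) -> u64 {
        let mut total = 0;
        for &x in v {
            if x % 2 == 0 {
                total += x * x;
            }
        }
        total
    }

    fn main() {
        let v: [u64; 4] = [1, 2, 3, 4];
        assert_eq!(sum_iter(&v), sum_loop(&v));
    }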


Yes, of course it's possible. This is a question with open definitions: there are dynamic languages with memory safety that can be as fast as Rust (or native code in general) in some cases. Also, you could say that Go itself is mostly memory safe and can be as fast as Rust.


Yes, for example Ada/SPARK, Eiffel, Delphi, OCaml, Haskell, D.

It is a matter of tooling, and where to spend development resources.


Isn't Haskell (GHC in particular) extremely slow to compile? In fact, I think GHC is the slowest compiler I've used.


Except that when one is developing, GHCi is the way to go.


This is a better page to read (although linking to the PR is appropriate for HN):

https://github.com/rust-lang/rfcs/blob/256d1969753f23f1182d4...



I'm not sure what you mean by posting just this link. But I'll repeat what I've been saying for a while. The Rust ecosystem is making significant advances toward building what I believe will be a world class GUI toolkit, but we do not yet have a mature, usable solution, and getting there will take time. There are no shortcuts either, though it is possible to get something working by lashing together existing pieces.

I obviously consider Druid to be one of the most promising approaches, but even that aside, there are other projects that could become a solid GUI toolkit, including Iced. I also encourage people to look at Makepad, as it does a lot of innovative things and emphasizes priorities such as small binary size and fast compile time.


I would argue it’s not possible for Rust to have a “world-class” toolkit, because it seems implied that it’d be cross-platform. This means it’ll always lag behind UIKit, AppKit, GTK/Qt, and... whatever Windows has these days.

The Iced project mentioned as an example has a custom renderer, for instance... does it support VoiceOver? Does it support every standard Cocoa keyboard shortcut?

Rust wrappers for these existing libraries (some exist) that maintain all their functionality are what would allow "world-class" apps to be built in pure Rust, IMO.


I agree that support for VoiceOver (and other assistive technology) and full support for keyboard shortcuts is essential for a production GUI toolkit. We certainly don't do those yet, but I do think we can get there.

Rust makes it easier to bind platform capabilities at a low level than most other languages. As an example, we do cut'n'paste cross-platform, supporting complex media types, not just plain text, so we can round-trip a vector glyph from Runebender to other font editors. I think that's a taste of being able to take on these more ambitious challenges.

Also, "native" UI toolkits are a lot less actually native than they used to be. On Windows, the old HWND-per-control and GDI drawing model (long considered the standard for "native" Windows UI) is long obsolete, and there are actually a whole bunch of toolkits that people use, with UWP the most actively supported. Apple has a stronger story, but even there you have AppKit + SwiftUI, and technically Catalyst is officially supported, though nobody would argue it's actually a good experience.

And the fact is, a lot of people are using Electron, because it actually solves the business problem of delivering good-enough UI. I think we have a good shot at competing against that niche, and also that building it on Rust is a more solid foundation than any other language ecosystem.


You are also missing the ecosystem part; having just bare-bones GUI support isn't enough.

A successful Rust GUI also needs the likes of Telerik, ComponentOne and similar third-party vendors with GUI component libraries.


Of course. And once the basics are in place (again, they're getting there but not yet), why wouldn't that ecosystem develop on top of it?


For sure, it depends pretty much on what the framework will look like.

I still can't think of a way to make the borrow checker deal with a GUI designer and drag-and-drop of components (which should be able to be plugged anywhere on the visual tree), or even something like SwiftUI, without forcing Rc<RefCell<>> everywhere.
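To make that concrete, here is roughly the shape a retained widget tree ends up with once any node can be re-parented or held by an event handler (a made-up minimal sketch, not any real framework's API):

    use std::cell::RefCell;
    use std::rc::Rc;

    // Made-up minimal widget trait, just for illustration.
    trait Widget {
        fn draw(&self);
    }

    struct Button { label: String }
    impl Widget for Button {
        fn draw(&self) { println!("[{}]", self.label); }
    }

    // Shared, mutable ownership: the designer, the parent container and
    // any event handler may all need to hold and mutate the same node,
    // so the tree ends up as Rc<RefCell<...>> rather than plain owned
    // children.
    type Node = Rc<RefCell<dyn Widget>>;

    struct Container { children: Vec<Node> }
    impl Widget for Container {
        fn draw(&self) {
            for child in &self.children {
                child.borrow().draw();
            }
        }
    }

    fn main() {
        let button: Node = Rc::new(RefCell::new(Button { label: "OK".into() }));
        // The same widget can be plugged into the tree and kept elsewhere
        // (e.g. by a callback) at the same time.
        let root = Container { children: vec![Rc::clone(&button)] };
        root.draw();
    }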

I am looking forward to seeing how Rust/WinRT will look with WinUI, but again, COM means AddRef/Release everywhere.


> something like SwiftUI

One thing I'm having a hard time wrapping my mind around: if SwiftUI depends on (please correct me if I'm wrong) the Swift compiler itself and static type information emitted by that compiler, how can Rust build an interface to that easily?

It seems like you'd have to make some kind of wrapper lib that exposes extern "C" interfaces that take and return `AnyView`, thereby hurting performance...?
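Something like this is the shape I have in mind; a very rough sketch, not a real SwiftUI binding, and the `shim_*` functions are hypothetical things that would live on the Swift side:

    use std::os::raw::c_void;

    // Everything crosses the boundary as an opaque, type-erased pointer,
    // which is where the `AnyView`-style performance worry comes from.
    type OpaqueView = c_void;

    extern "C" {
        fn shim_make_text(utf8: *const u8, len: usize) -> *mut OpaqueView;
        fn shim_release_view(view: *mut OpaqueView);
    }

    /// Safe Rust wrapper owning one type-erased view handle.
    pub struct View(*mut OpaqueView);

    impl View {
        pub fn text(label: &str) -> View {
            unsafe { View(shim_make_text(label.as_ptr(), label.len())) }
        }
    }

    impl Drop for View {
        fn drop(&mut self) {
            unsafe { shim_release_view(self.0) }
        }
    }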


By having macros that produce the required information, similar to how Qt and wxWidgets do it, and as Rust/WinRT is in the process of doing.

https://github.com/microsoft/winrt-rs

But that is the easy part; the hard part is having the ability to plug a widget anywhere on the tree without triggering a bunch of lifetime errors. WinRT gets around this because it is built on COM, so you get reference counting no matter what.


Interesting... though since WinRT is C/C++ based, it's probably easier for Rust to interface with, I'm guessing...

But AFAIK with Swift there is static type information encoded at the binary level that SwiftUI (e.g. `some View`) utilizes to do type-based tree diffing, so I wondered how Rust can generate that, even with macros...

Maybe I'm mistaken though!

(Thanks for the reply, btw.)


React (vanilla, Native, and Electron) is the one to beat here, not platform native toolkits. For a toolkit to be world class today it needs to be cross platform across web, Linux, Windows, Mac, iOS, and Android.

Companies want to reduce duplicated effort per platform, in order to make frontends faster and/or cheaper. That's the number one feature a toolkit can have.


wxRust needs to be resurrected. wxWidgets is a wrapper around native controls.


Eh, there have been "promising approaches" for years now. It's clear that GUI is not one of the priorities of Rust developers, and that's OK. Its real innovations are more important for lower-level stuff anyway.


As someone not at all involved in Rust but interested in its progress, I think this is an important milestone to hit.

In most orgs that I've been a part of, no language tends to meet the standard for consideration until it has been the core of at least one very large public software project.

The lack of a functioning GUI toolkit is likely a major barrier to that happening for Rust.

Until then, we know it exists but doesn't pass inspection for inclusion. So far the largest things we know of built with it are Servo's CSS engine and the NPM authentication service.


> In most orgs that I've been a part of, no language tends to meet the standard for consideration until it has been the core of at least one very large public software project.

> So far the largest things we know of built with it are Servo's CSS engine and the NPM authentication service.

Here are some others, for your consideration:

* Dropbox's storage layer: https://dropbox.tech/infrastructure/extending-magic-pocket-i...

* CrosVM is an important part of ChromeOS https://chromium.googlesource.com/chromiumos/platform/crosvm...

* Firecracker is the underlying tech of AWS Lambda and Fargate https://firecracker-microvm.github.io/

* More coming from Amazon that I don't have the ability to easily cite just yet; I hope the re:Invent recordings go up somewhere sometime soon

* "EdenSCM is the primary source control system used at Facebook,": https://github.com/facebookexperimental/eden

There's a bunch of other stuff too, of course. Apple has been hiring, but I don't think their usage is public yet. Microsoft is hiring rustc hackers, and has some other stuff going on. And tons of other stuff that may count as "large" depending on how you define it.


What about Cloudflare? I thought they were really using a lot of Rust, but maybe that was just because you joined for a time!


You know, I was like "I know I am forgetting some major stuff" but couldn't put my finger on it, and yes, that is absolutely one of the things I've been forgetting.

They are using quite a bit of Rust:

* The 1.1.1.1 app's core is in Rust (this is one of the most prominent, if not most prominent, usages of Rust on iOS/Android)

* Their HTTP/3 implementation is in Rust (I am very excited about this one, personally)

* Workers has multiple parts; the CLI, HTMLRewriter... (This was related to the team I worked on, so I have a soft spot for it)

... and probably more that I'm forgetting.

Honestly, it's really nice that there's so much Rust these days that it's easy to forget all of the cool stuff. And there's a lot that I'd consider meaningful that may or may not be "large," so I was shooting for only the largest. 1Password and Discord are huge for me, but maybe others don't think so.


Also Wirefilter for their Firewall Rules

Source: https://blog.cloudflare.com/how-we-made-firewall-rules/


Don't forget Fuchsia. It has over 800k lines of Rust (not counting dependencies; counting them, it has >2M): https://i.imgur.com/gknVmYk.png


Its kernel is still predominantly C++ & Assembly.


... but since Fuchsia is totally-not-a-microkernel, a lot of stuff that would be in the kernel in, say, Linux, is not in the kernel in Fuchsia.


True, but outside of the kernel the official SDK languages are C++, Go and Dart.


The "kernel" of fuchsia is a microkernel, but when one talks about the Rust-free core of fuchsia, people usually mean the "zircon" component that encompasses both the microkernel as well as core utilities.

Around this zircon component they built a set of higher level components. They'd still be considered important parts of an operating system. E.g. things like a bluetooth stack.

The SDK you mention is for end-user programs targeting the OS.


You're conflating two different things; outside the kernel, but inside of the OS, you can use Rust. They are using an increasing amount of it, as the other comment shows.

What you're talking about is the SDK for people writing applications to run on top of Fuchsia.


I'm not sure that this is true.

The documentation on fuchsia.dev heavily references Rust, and there is even documentation for usage with FIDL. I don't see anything explicitly relegating Rust usage to the kernel or even drivers. Seems like anything is fair game and "official."

Is there another source you are referencing?



The summary of this page is essentially:

For the kernel: C, C++

For the rest of the OS: C, C++, Dart, Rust

For the "end-developers" (=regular app developers): C, C++, Dart


The kernel is not what makes fuchsia special. There are plenty of kernels out there. The code around the kernel is far more mission critical to the success of the fuchsia product.

In fact, if you remove the kernel, Rust is the main language of fuchsia already.


Honestly, Rust is a bad match for most GUI work; you're going to end up ARCing so much stuff that you might as well just start with Swift.


That's true for traditional object-oriented GUI, but there are other ways to do it. Druid uses almost no interior mutability, and Arc is used sparingly, as immutable data structures are very handy for incremental computation. The Crochet prototype is even more imgui-like from the perspective of the application, and relies on Arc (and Clone bounds) even less.
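As a rough sketch of the idea (not Druid's actual API; the trait and field names here are invented): when app data is shared behind Arc, "did anything change?" can be a cheap pointer comparison instead of a deep traversal, which is what makes immutable data structures handy for incremental computation:

    use std::sync::Arc;

    // Invented "same-ness" trait: a cheap check used to decide whether
    // a widget subtree needs to be rebuilt or repainted.
    trait Same {
        fn same(&self, other: &Self) -> bool;
    }

    impl<T> Same for Arc<T> {
        fn same(&self, other: &Self) -> bool {
            // O(1) pointer comparison, no deep traversal of the data.
            Arc::ptr_eq(self, other)
        }
    }

    #[derive(Clone)]
    struct AppState {
        // Large immutable document shared by many widgets.
        document: Arc<String>,
        counter: u64,
    }

    impl Same for AppState {
        fn same(&self, other: &Self) -> bool {
            self.document.same(&other.document) && self.counter == other.counter
        }
    }

    fn main() {
        let old = AppState { document: Arc::new("big doc".into()), counter: 0 };
        let new = AppState { counter: 1, ..old.clone() };
        assert!(old.document.same(&new.document)); // shared, so "same"
        assert!(!old.same(&new));                  // counter changed
    }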

I think doing this right will require some imagination, not just playing greatest hits from previous decades, only in a new language.




