I agree that Rust can look pretty weird to an untrained developer when lifetimes get involved. But, in Rust's defence, I haven't seen any other language write down lifetimes more concisely.
The underscore could've been a name if the name mattered, which would be required in many languages. Rewriting it to <'something> may help readability (but risks introducing bugs later by reusing `something`).
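For example, a minimal sketch (the names here are made up): `'_` just says "there is a borrow here, let the compiler infer it", and a named lifetime is only needed when the same lifetime has to be referred to more than once.

    struct Parser<'a> {
        input: &'a str,
    }

    // The anonymous lifetime: "this type borrows something, infer it from the argument".
    fn make_parser(input: &str) -> Parser<'_> {
        Parser { input }
    }

    fn main() {
        let text = String::from("hello");
        let parser = make_parser(&text);
        println!("{}", parser.input);
    }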
Many C-derived languages are full of symbol soup. A group like <?,?>[] can happen all over Java, for instance. Many of these languages mix * and & all over the place, and C++ has both . and -> for some reason, making for some pretty unreadable soup. The biggest additions I think Rust made to the mix were ' for lifetimes (a concept missing from most languages, unfortunately), ! for a macro call (macro invocations in many other languages aren't marked at all, leaving the dev to figure out if println is a method or a macro), and ? to bubble up errors. The last one could've been a keyword (like try in Zig) but I'm not sure if it makes the code much more readable that way.
If you know other programming languages, the symbols themselves fall into place quite quickly. I know what <'_> does in Rust for the same reason I know what <T, R> T does in Java, while a beginner or someone who hasn't learned past Java 6 may struggle to read the code. Out of all the hurdles a beginning Rust programmer will face, the symbols are probably the least of their concerns.
As for books, the Rust book on the Rust website is kept up to date pretty well. There are books for programmers coming from various other languages as well.
The language itself hasn't changed much these past few years. The standard library gets extended with new features, but a book a few years old will teach you Rust just fine.
In many cases, changes to the language have been things like "the compiler no longer treats this as broken (because it isn't)" and "the compiler no longer requires you to write out this long definition because it can figure that stuff out itself". I'd recommend running a tool called "clippy" in your IDE or on the command line; if you can leverage a modern language feature for better legibility, clippy will usually suggest it.
> I agree that Rust can look pretty weird to an untrained developer when lifetimes get involved. But, in Rust's defence, I haven't seen any other language write down lifetimes more concisely.
Can you do a lot better? I don't think so and it wouldn't help that much.
The truth is that most of the time we want to rely on some inferred lifetime annotations, but will obviously need an escape hatch from time to time.
Rust doesn't waste a lot of typing around the annotations, but if you were to improve Rust, you'd improve the implicit inference, not the syntax for being explicit.
> Can you do a lot better? I don't think so and it wouldn't help that much.
I think Rust could do a lot better at inferring lifetimes if the compiler were allowed to peek into called functions instead of stopping at the function signature - e.g. if it had a complete picture of the control flow of the entire code base (maybe even up to the point that manual lifetime annotations could be completely eliminated?).
IMHO it's not unrealistic to treat the entire codebase as a single compilation unit; Zig does this, for instance - it just doesn't do much so far with the additional information that could be gained.
It's a dangerous option: Rust already has long compile times, and expanding the space it has to analyze would only increase them. Not to mention you'd be much more dependent on the implementation details of a given function, and it'd become very messy. The fact that lifetimes have a specifiable interface is probably one of the main things that makes Rust's approach work at all.
Rust has similar rules about type inference (of which lifetimes are a subset) at the function level as well. I think this was a lesson learned the hard way by Haskell, which does allow whole-program type inference; programmers working in it quickly learned that you really want to specify the types at the function level anyway.
> Not to mention you'd be much more dependent on the implementation details of a given function
Hmm, but wouldn't that already be the case, since the manual lifetime annotation must match what the function actually does? E.g. I would expect compiler errors if the 'internal' lifetime details of a function no longer match its manual lifetime annotations (is it actually possible to create incorrect lifetime annotations in Rust without the compiler noticing?)
Higher compile times would be bad of course, but I wonder how much it would add in practice. It's a similar problem to LTO, just earlier in the compile process. E.g. maybe some time-consuming tasks can be moved around instead of added on top.
> is it actually possible to create incorrect lifetime annotations in Rust without the compiler noticing?
In safe Rust, no.
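For example, a minimal sketch (hypothetical names): the signature promises that the returned borrow comes from `a`, and the compiler checks the body against that promise.

    fn first<'a, 'b>(a: &'a str, _b: &'b str) -> &'a str {
        a
        // Returning `_b` here instead would be rejected with a lifetime error,
        // because the body would no longer match the signature.
    }

    fn main() {
        let owned = String::from("hello");
        println!("{}", first(&owned, "world"));
    }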
Full inference is one of those things that seems like a no-brainer, but there are a number of other more subtle tradeoffs that make it a not great idea. Speed was already mentioned, but it's really downstream from tractability, IMHO. That is, lifetime checking is effectively instantaneous today, and that's because you only need to confirm that the body matches the signature, which is a very small and local problem. Once you allow inference, you end up needing to check not just the body, but also the bodies of every function called in your body, recursively, since you no longer know their signatures up front. We tend to think of compiler passes as "speed" in the sense of it's nice to have fast compile times, but it also matters in the sense of what can practically be checked in a reasonable time. The cheaper a check, the more checks we can do. Furthermore, remember that Rust supports separate compilation, which is a major hindrance to full program analysis, which is what you need to truly infer lifetimes.
Beyond complexity arguments, there are also more practical ones: error messages would get way worse. More valid programs would be rejected if the inference can't figure out an answer. Semver is harder to maintain, because a change in the body now changes the signature, and you may break your callers in ways you don't realize at first.
I would kill for Rust to spend some time figuring out what the ownership rule should be when I get the ownership wrong - compile cycles are cheap compared to me sitting and trying different approaches, or running an LLM to try to help me figure it out (hint: they largely fail miserably and cause me to waste more time). I was fighting one function in my codebase and couldn't figure out how to get the compiler to be happy despite me seemingly having a correct definition, so I just broke down and got past the impasse by using unsafe, which isn't what I wanted to do. I know the compiler sometimes recommends fixes, but not in all cases and not in this particular case.
Another thing I'll point out is that TypeScript does full program inference, and while type checking performance is a huge problem, it does a pretty good job. That obviously doesn't necessarily map to Rust and the problem domain it's solving (& maybe TS codebases are naturally smaller than Rust ones), but just putting that out there. Rust has made certain opinionated choices, but that doesn't mean that other choices weren't equally valid and available. SemVer is easily solvable - don't allow inference for pub APIs exported from the crate, which also largely solves the locality issue.
Did you check your unsafe with Miri? It's possible you were trying to do something that isn't actually possible, locally speaking.
> Another thing I'll point out is that TypeScript does full program inference
Do you have a citation for this? I don't believe this is the case, though I could be wrong. I actually spent some time trying to find a definitive answer here and couldn't. That said,
> Rust has made certain opinionated choices but that doesn’t mean that other choices weren’t equally valid and available.
This is true for sure; for example, TypeScript is deliberately unsound, and that's a great choice for it, but does not make sense for Rust.
> SemVer is easily solvable - don't allow inference for pub APIs exported from the crate, which also largely solves the locality issue.
It helps with locality but doesn't solve it, as it's still a non-local analysis. The same problems fundamentally remain, even if the scope is a bit reduced.
Have you ever had success with Miri on non-trivial programs? Here's a reduced test case which does show it's safe under Miri but for the life of me I can't figure out how to get rid of the unsafe: https://play.rust-lang.org/?version=stable&mode=debug&editio...
> Do you have a citation for this? I don't believe this is the case, though I could be wrong. I actually spent some time trying to find a definitive answer here and couldn't. That said,
No, and thinking about it more I'm not sure about the specific requirements that constitute full program inference, so it's possible it's not. However, I do know that it infers the return type signatures of functions from their bodies.
> This is true for sure; for example, TypeScript is deliberately unsound, and that's a great choice for it, but does not make sense for Rust
Sure, but I think we can agree that the deliberate unsoundness is for ergonomic and pragmatic compatibility with JS, not because of the choice of inference.
I'm not arguing Rust should change its inference strategy. Of all the things, I'd rate this quite low on my wishlist of "what would I change about how Rust works if I could wave a magic wand".
Note that by creating the reference to a local and passing it up through the callback, you are using a fresh region that can't possibly outlive any one of the ones you are generic over. Fundamentally, that callback could stash the reference you pass it into state somewhere, and now the pointer has escaped, invalidated as soon as that iteration of the loop ends.
See that the definition of Group is tying those together. Instead, you can split them apart and maybe use HRTB to ensure the closure _must_ be able to treat the lifetime as fresh? But then you’ll probably have other issues…
… which can largely be circumvented simply by pinning, in your reduced example, which probably doesn’t retain enough detail.
But why does pinning solve the issue? Fundamentally the lifetime of the future is unchanged as far as the compiler is concerned so in theory the callback should be capable of doing the same stashing, no?
The lifetime _is_ changed; this lets you use the lifetime from the HRTB instead of the function generics. It's not so much the pinning itself that does it for the type system, but rather that using the trait object lets you refer to that HRTB and require a truly lifetime-generic callback (and then pinning comes along for the ride).
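As a hedged sketch of the HRTB idea (the names are made up, and this is much simpler than the playground example): the `for<'a>` bound forces the callback to accept a borrow of any lifetime, so it cannot stash the reference anywhere that outlives the call.

    fn with_temp<F>(f: F) -> usize
    where
        F: for<'a> Fn(&'a str) -> usize,
    {
        let local = String::from("temporary");
        f(&local) // the borrow of `local` ends before `with_temp` returns
    }

    fn main() {
        println!("{}", with_temp(|s: &str| s.len()));
    }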
> Have you ever had success with Miri on non-trivial programs?
The key is to isolate the unsafe code and test it directly, so you're not really doing it with whole programs. At least that's what I try to do. Anyway, was just curious!
(I don't have anything to say about the specific code here that cmr didn't already say)
> Sure, but I think we can agree that the deliberate unsoundness is for ergonomic and pragmatic compatibility with JS,
Oh absolutely, all I meant was that because they're starting from different goals, they can make different choices.
(not OP)
I love Rust, but I just think that using ' for lifetimes was a huge mistake, and using <> for templates (rather than something like []) was a medium mistake.
There is something about how the brain is wired that makes using ' for lifetimes trigger the wrong immediate response.
Something like [$_] would look so much nicer IMHO, compared to <'_>.
> Many C-derived languages are full of symbol soup.
C is not full of symbol soup though.
It is more full of symbol soup than Pascal or Modula 2, and back in the day when C was taking over other such languages, there were lots of complaints about C's syntax being like "line noise" and whatnot.
Yeah, I wonder what people are referring to when they say C is full of symbol soup. I mentioned in my other comment that C is not, just like how Common Lisp is not merely because of the parentheses; its syntax is pretty simple.
re: try keyword in Rust - this is actually a thing on nightly, although instead of bubbling up errors directly, it creates a scope (within which ? is usable) that evaluates to a Result.
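A minimal sketch of what that looks like (nightly-only, so the details may still change):

    #![feature(try_blocks)]

    fn main() {
        // The `?`s inside the block bubble up to the block itself, not the enclosing function.
        let result: Result<i32, std::num::ParseIntError> = try {
            "1".parse::<i32>()? + "2".parse::<i32>()?
        };
        println!("{result:?}"); // Ok(3)
    }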
Symbols are just other letters in the alphabet. Something like <'_> is as natural for me to read at this point as any of the other words in this sentence.
Math is also symbol soup. But those symbols mean things and they’ve usually been designed to compose nicely. Mathematicians using symbols—just like writers using alphabets—are able to use those symbols to concisely and precisely convey complicated concepts to one another.
I guess my point is that symbols shouldn’t be looked at as inherently a positive or negative thing. Are they clear and unambiguous in their use? Do they map clearly onto coherent concepts? When you need to compose them, is it straightforward and obvious to do so?
> Math is also symbol soup. But those symbols mean things and they’ve usually been designed to compose nicely. Mathematicians using symbols—just like writers using alphabets—are able to use those symbols to concisely and precisely convey complicated concepts to one another.
I just don't understand why one would take maths of all things as a positive example of something readable, when it's widely known to be utterly inscrutable to most humans on earth, and even so, many papers have differing conventions, using the same symbol for sometimes widely different or sometimes barely different things.
I think many programming languages could benefit if we had an easy way to have both custom symbols and a convenient way to input them without extra friction. Take APL for example: once you know the language it's incredibly expressive, but the overhead of typing it is so high that many use custom keyboards/keycaps.
Uiua (https://www.uiua.org/), broadly in the APL lineage, solves this problem nicely.
Like APL, it has a set of well-chosen symbols, but each symbol has an english name you can type just as you would a function name in another language, and it's automatically converted to the symbol when you run it.
To be fair, the basic ASCII keyboard is also the default in the US/Britain. And most people assume that's all they get.
I have always used the "international" version of the US English keyboard on Linux.
And I can enter all common symbols by pressing AltGr or AltGr-Shift. I also use right Ctrl as a compose key for more. I would be hard pressed to remember what combo to press; after years it's just muscle memory.
But how do you find out what layout and compose key does what? Good luck. It's as well documented as gestures and hidden menus on iOS and macOS. Sigh.
Even something like foo::<'_, [T]>() is just not that hard to follow. Again, the symbols involved all compose nicely and are unambiguous. And frankly, you just don't need something like that all that often (and when you do, there are usually other alternatives if you're really put off by the symbols).
I prefer ("if a, err := bar() {"), the same things you said applies here, too. I write a lot of Go and I can glance through it quickly, there is no cognitive overhead for me.
Edit: actually, it was someone else who said this: "Human brain has a funny way of learning how to turn off the noise and focus on what really matters.".
The difference is, there is no room for bugs with ?. Zero. None.
I have fixed (and frankly, caused) many bugs in golang code where people’s brains “turned off the noise” and filtered out the copypasta’d error handling, which overwrote the wrong variable name or didn’t actually bubble up the error or actually had subtly wrong logic in the conditional part which was obscured by the noise.
Frankly, learning to ignore that 80% of your code is redundant noise feels to me like a symptom of Stockholm syndrome more than anything else.
One symbol to replace three lines of identical boilerplate is no less explicit and dramatically clearer.
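For concreteness, a minimal sketch (the file name is hypothetical) of the one-symbol version next to roughly what it replaces; the real desugaring of `?` also goes through `From` for error conversion:

    use std::{fs, io};

    // With `?`: the error is propagated in one symbol.
    fn read_config() -> Result<String, io::Error> {
        let text = fs::read_to_string("config.toml")?;
        Ok(text)
    }

    // Roughly the boilerplate that `?` replaces.
    fn read_config_verbose() -> Result<String, io::Error> {
        let text = match fs::read_to_string("config.toml") {
            Ok(t) => t,
            Err(e) => return Err(e),
        };
        Ok(text)
    }

    fn main() {
        println!("{}", read_config().is_ok());
        println!("{}", read_config_verbose().is_ok());
    }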
Some things are. Some things aren’t. At one point, you could write
nil, err
without the return and it would happily compile. It’s also tragically easy for actual logic bugs to be obscured by all the boilerplate.
It's not like three lines of error handling copypasta is some optimal amount. If golang required ten lines of boilerplate error handling, you'd still have just as many people arguing in favor of it because they "like it to be explicit", when in reality it's verbose and the real underlying argument is that it's what they've grown accustomed to. `?` is no less explicit, but it is less unnecessarily verbose.
What I can say here is that I am not one of these people who would argue in favor of Go's error handling were it 10 lines. shudders. I would definitely not use it, just like how I do not use Java for quite many reasons (unless I get paid for it, but would rather not). :P
It's a curly-brace language with some solid decisions (e.g. default immutability) that produces static binaries and without a need for a virtual machine, while making some guarantees that eliminate a swathe of possible bug types at compile time.
As others note, the symbol soup is something you learn to read fluently and isn't worth getting hung up on.
Basically it occupies something of a sweet spot in the power/usability/safety space and got a decent PR shove by coming out of Mozilla back when they were the cool kids. I like it a lot. YMMV.
"Curly-brace language" is a good way to put it. Rust does an excellent job of cribbing features that aren't mainstream and giving them a more intuitive name and design.
Most people will conk out if you start talking about how your language has "algebraic data types." But if you rephrase that as "we let you put payloads in your enum," well, that piques people's interest. It certainly worked on me.
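A minimal sketch of what "payloads in your enum" means (the names are made up): each variant carries its own data, and `match` forces every case to be handled.

    enum Shape {
        Circle { radius: f64 },
        Rect { width: f64, height: f64 },
    }

    fn area(shape: &Shape) -> f64 {
        match shape {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { width, height } => width * height,
        }
    }

    fn main() {
        let shapes = [
            Shape::Circle { radius: 1.0 },
            Shape::Rect { width: 2.0, height: 3.0 },
        ];
        for s in &shapes {
            println!("{}", area(s));
        }
    }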
<'_> is one of the most basic symbols in Rust. Reading that is almost like reading the letter 'a after just some very modest amount of time with the language.
> How come it is in demand?
Because there's a lot more to the language than just those not-really-unfamiliar symbols
Rust is designed with the mentality that if it compiles, it is good enough, leaving less room for unexpected runtime issues, as dictated by type and memory safety. So it requires more type info (unless you use unidiomatic unsafe code) and talking with the borrow checker. But once you internalize its type system and borrow checker, it pays off if you care about compiler-driven development (instead of dealing with errors at runtime).
Because it's a complicated language for building extremely low level things, when you have no other choice. IMO it's not the right tool for high level stuff (even though it does have some stuff which higher level languages should probably borrow).
The only other language that directly competes with Rust IMO is C++, which is equally full of symbol soup.
> IMO it's not the right tool for high level stuff
I thought that for a long time. But as time passes and I spend more time in languages like Typescript (Semi-Type Script more accurately) and Swift the more I yearn for Rust.
Yeah, I feel that - not the entire language, but many of its choices, like error handling and sum types (with exhaustive enum matching), especially when writing in Python.
That's your opinion and I respect it. Especially the bit about complaining about syntax. There's no other language directly competing with Rust that has less syntax.
My opinion is that in Rust you have to make decisions on certain things which are taken for you by the garbage collector in other languages.
Should you store a reference or value in your struct? You can't just change it without modifying other places. I understand that this gives you the control to get the final 20% of performance in certain places but it's still lower level than other languages.
You could say just spam Arc everywhere and forget about references, but that itself is a low level decision that you make.
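To illustrate the decision being described, a minimal sketch (names are made up) of the three usual options - borrow, own, or share - each of which you have to pick explicitly:

    use std::sync::Arc;

    struct Borrowed<'a> { name: &'a str } // borrows: needs a lifetime parameter
    struct Owned { name: String }         // owns: moves and clones are explicit
    struct Shared { name: Arc<str> }      // shared ownership: clones are cheap

    fn main() {
        let source = String::from("config");
        let b = Borrowed { name: &source };
        let o = Owned { name: source.clone() };
        let s = Shared { name: Arc::from(source.as_str()) };
        println!("{} {} {}", b.name, o.name, s.name);
    }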
Lifetimes and annotations only look like symbol soup initially (when you have little to no experience in Rust). The more proficient you become in Rust, the more you end up ignoring it completely. Sort of like ads you see (or don't) in Search. Human brain has a funny way of learning how to turn off the noise and focus on what really matters.
Split ←((⊢-˜+`׬)∘=⊔⊢)
input2←' 'Split¨•file.Lines "../2020/2.txt" # change string to your file location
Day2←{
f←⊑{(⊑)+↕1+|-´}‿{-1} # Select the [I]ndex generator [F]unction
I←{F •BQN¨ '-' Split ⊑} # [I]ndices used to determine if the
C←{⊑1⊑} # [C]haracter appears in the
P←{⊑⌽} # [P]assword either
Part1←(I∊˜·+´C= P)¨ # a given number of times
Part2←(1= ·+´C=I⊏P)¨ # or at one of a pair of indices
⊑+´◶Part1‿Part2
}
•Show { Day2 input2}¨↕2
Fwiw I don't think your parent is lying, but I also don't feel it's really accurate. If you read https://graydon2.dreamwidth.org/307291.html for example, there are some references that imply this, but it's not really that "less symbols" was a goal so much as it is a secondary effect of other choices. Graydon wanted a simpler language and that implies simpler syntax, not the other way around. Even the grammar bit isn't really about "symbol soup."
Early Rust had other sorts of things that a lot of folks would consider readability problems unrelated to symbols too: no keyword was allowed to be over five characters, so return was ret, continue was cont, etc.
Ha! Well maybe my opinion has changed over time. To be honest, I struggle to call Rust “symbol soup” now or then; other than lifetimes, which is just one symbol, I don’t think Rust is a particularly symbol heavy language, or at least, not much more than any other curly brace and semicolon language.
Well, if you can have (and projects do have) >5 consecutive symbols, then it is symbol heavy. I am pretty sure I made this comment a long time ago with an example but paging on HN is dreadful and time-consuming. I will try to look for it. It was on GitHub. I came across it when I was interested in Rust and checked somewhat popular Rust codebases.
I think it also depends on how you think of symbols; I see "::" as a single operator, not two symbols. Do () and <> count as individual symbols? I believe you do, given that you have an example upthread.
If those are the case, well, I can construct something, but it's not something I've used directly. Four isn't unheard of if you're going by those rules, but five is a bit extra.
Yes, I consider "::" as two symbols, also yeah I am against ")?)?" but I have seen "worse" in the wild. I think I will have to look for what I saw before we continue. I might not be able to reply to this comment, however.
Off the bat: I like Rust. I'm still very much a novice with it, but I enjoy it.
But almost the entirety of Computer Science is based on abstractions because they're helpful to "dumb down" some details that aren't super-important for our day-to-day work. E.g., writing TCP protocols directly in Assembly would be too fine-grained a level of detail for most people's usual work, and using some existing abstraction is "good enough" virtually all of the time (even though we might be able to optimize things for our use-cases if we did drop down to that level).
There exists programming work where fiddling with lifetimes is just too fiddly to be worthwhile (e.g., web development is probably more than fine using a good ol' garbage-collected language). This isn't about "dumbing down" anything, it's about refocusing on what's important for the job you're doing.
I am against dumbing things down, too (although I do not see its relevance to my comment), but for example I have no issues with OCaml, C, Factor, Ada, Common Lisp, etc. It is just a personal preference anyways.
What do you think about Rust for Rustaceans? I read it and there is a lot of very niche and useful information in it about Rust that I didn't see anywhere else. It's a solid book, but for a book about programming there are so few real code examples that it can come off as dry. I just bought Rust Atomics and Locks and it seems exercise-based, so I'm excited to finish it. The first chapter seems promising.
You are right about it not being a beginner friendly book. Hence why I placed it lower in the order of books to study.
Yeah, Rust Atomics and Locks is essential if you truly want to understand low-level concurrency. But you might also have to refer to the C++ std::atomic reference [1] to get a complete picture. It took me a while to grasp those concepts.
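A minimal sketch of the kind of acquire/release pairing the book covers (the example itself is made up): the Release store on the flag pairs with the Acquire load, so the earlier write to DATA is guaranteed to be visible once the flag is seen.

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        let producer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            READY.store(true, Ordering::Release); // pairs with the Acquire load below
        });
        while !READY.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
        println!("{}", DATA.load(Ordering::Relaxed)); // guaranteed to print 42
        producer.join().unwrap();
    }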
"Programming Rust" by Jim Blandy et al was the book that really helped me to understand why many of the design decisions behind the implementation of Rust were made.
I found it more approachable than some of the other Rust books and highly recommend it as a first Rust book.
Unfortunately, I haven't read Programming Rust. The list includes just the books I used to learn Rust. But will definitely give Blandy's book a read. Thanks for the recommendation!
The Rust Programming Language does a great job imho. It got me up to speed by reading it before bed for a month. I’d never written C/C++ before, just a lot of Python. It starts out really simply by explaining the type system and the borrow checker. Take it from there and do a couple of side projects, I’d say.
Why do people say Rust follows the tradition of C++? Rust makes very different design decisions than C++: a different approach to backwards compatibility, not tacking one feature on top of another, memory safety, etc. If you are just comparing the size of the language, there are other complex languages out there like D, Ada, etc.
> Why do people say Rust follows the tradition of C++?
They mean the domain that Rust is in.
Before Rust there was only C or C++ for real-time programming. C++ was an experiment (wildly successful, IMO, when I left it in 2001) trying to address the shortcomings of C. It turned out that C++ had too much of everything: long compile times, a manual several inches thick, huge executables. Some experiments turned out not to be a good idea (exceptions, multiple inheritance, inheritance from concrete classes...).
Rust is a successor in that sense. It draws on the lessons of C++ and functional programming.
I hope I live long enough to see the next language in this sequence that learns from the mistakes of Rust (there are a few, and it will take some more years to find them all).
Some of C++'s warts are still present in Rust, though, such as long compile times. Additionally, it encourages using a lot of dependencies, just like npm does.
Anyways, I dislike C++, it is too bloated and I would rather just use C.
It was no experiment at all; it was Bjarne Stroustrup's way of never again repeating his downgrade experience from Simula to BCPL, after he started working at Bell Labs and was originally going to have to write distributed systems infrastructure in C.
Also there have been alternatives to C and C++, even if they tend to be ignored by most folks.
The one big (and IMHO most problematic) thing that Rust and C++ have in common is the desire to implement important core features via the stdlib instead of new language syntax. Also both C++ and Rust use RAII for 'garbage collection' and the 'zero-cost-abstraction promise' is the same, with the same downsides (low debug-mode runtime performance and high release-mode build times).
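As a minimal sketch of the RAII point (the type is made up): cleanup is tied to scope via `Drop`, with no garbage collector involved.

    struct Connection;

    impl Drop for Connection {
        fn drop(&mut self) {
            println!("connection closed"); // runs automatically when the value goes out of scope
        }
    }

    fn main() {
        let _conn = Connection;
        println!("using connection");
    } // `_conn` is dropped here, RAII-style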
While I don’t disagree that there’s a similar desire regarding libraries vs syntax, Rust is also more willing to make things first class language features if there’s a benefit. Enums vs std::variant, for example.
And it's a balance act, both approaches to language design have merit.
That being said, I can't work with std::variant, and God knows I tried to like it. Rust's enums look a lot nicer by comparison, haven't had enough experience to run into potential rough edges which I'm sure are there.
For me the defining feature of C++ are its move semantics. It permeates every corner of your C++ code and affects every decision you make as a C++ developer.
Rust's defining feature is its borrow checker, which solves a similar problem as move semantics, but is more powerful and has saner defaults.
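A minimal sketch of those defaults (the example is made up): in Rust, moves are the default and borrows are checked, rather than moves being an opt-in library feature as in C++.

    fn main() {
        let s = String::from("hello");
        let t = s;  // moved by default; `s` can no longer be used
        let r = &t; // borrow instead of moving; checked by the borrow checker
        println!("{r}");
    }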
Zig doesn't promise language or stdlib stability yet, but in reality the changes are quite manageable. And it's already good enough for some high-profile real-world projects like Bun (https://bun.sh/), Tigerbeetle (https://tigerbeetle.com/) or Ghostty (https://ghostty.org/).
In the end, language stability isn't as important as it used to be, people are quite used to fixing their code when upgrading dependencies to a new major version for instance.
It remains to be seen if any of those projects will be around in a couple of years.
I haven't yet seen something that would make me have to consider Zig, regardless of my personal opinion, like other languages that have grown to become unavoidable.
It was designed to be used with syntax highlighting and LSPs. The highlighting makes it pretty easy to read for me. Although there are some arcane generics with lifetimes that can be indecipherable in some libraries.
How come it is in demand?
Cool book though.