Hacker News

Rust's thread safety only applies to the special case of threads accessing in-process memory.

Rust's type system can do very little to help when those threads are accessing the same database record without transactions, doing OS IPC over shared memory, manipulating files without locks, handling hardware signals, handling duplicate RPC calls, and so on.
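As a concrete sketch of the file case: the following compiles as ordinary safe Rust, yet contains a classic check-then-use window. The `in_window` closure and file name are illustrative stand-ins for whatever another process might do between the check and the read.

```rust
use std::fs;

// The type system is satisfied, but between the `metadata` check and the
// `read_to_string`, any other process can delete or replace the file.
fn read_if_present<F: FnOnce()>(path: &str, in_window: F) -> Option<String> {
    if fs::metadata(path).is_ok() {
        in_window(); // stands in for an arbitrary interleaving point
        fs::read_to_string(path).ok()
    } else {
        None
    }
}

fn main() {
    let path = "toctou_demo.txt";
    fs::write(path, "hello").unwrap();
    // Simulate the "other process" acting inside the window:
    let result = read_if_present(path, || {
        fs::remove_file(path).unwrap();
    });
    // Data-race free, yet the race condition remains.
    assert_eq!(result, None);
}
```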

"Yeah, but that ultimately requires an unsafe block" is kind of true, except that no one reads the code of all the crates they depend on, and the direct dependencies might only be safe as far as their direct consumers are concerned.



This is a point that you constantly bring up in these threads. Do you think most developers believe that data race safety should extend beyond the bounds of the process?

One thing that Rust’s type system does allow you to do is define a consistent manner in which to access external systems, and even add types that mimic the same safety. Is it perfect? Will it protect you from a different process working against the DB? Will it enforce things in the other process? No. But will it give you higher-level semantics with which to construct a better model for operating against that external system? Yes.
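A minimal sketch of that idea, using an in-memory map as a stand-in for the external database (all names here are hypothetical, not any real crate's API):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// The store is private, so the only way to read or write is through
// `transact`, which runs the whole closure under one lock. This mimics
// transaction semantics inside the process; it cannot, of course,
// constrain a different process talking to the real database.
struct SeatStore {
    inner: Mutex<HashMap<String, String>>, // seat -> passenger
}

impl SeatStore {
    fn new() -> Self {
        SeatStore { inner: Mutex::new(HashMap::new()) }
    }

    fn transact<R>(&self, f: impl FnOnce(&mut HashMap<String, String>) -> R) -> R {
        let mut guard = self.inner.lock().unwrap();
        f(&mut guard)
    }
}

fn book(store: &SeatStore, seat: &str, passenger: &str) -> bool {
    // The check and the insert happen inside one `transact`, so no other
    // thread in this process can interleave between them.
    store.transact(|seats| {
        if seats.contains_key(seat) {
            false
        } else {
            seats.insert(seat.to_string(), passenger.to_string());
            true
        }
    })
}

fn main() {
    let store = SeatStore::new();
    assert!(book(&store, "12A", "alice"));
    assert!(!book(&store, "12A", "bob")); // second booking is refused
}
```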


Yes, when they care about data consistency in distributed systems.

Maybe many Rust devs don't care.


So what is your point? You’re moving the goalposts on safety, and it’s a pointlessly reductive argument.

Even if we account for everything, solar storms will eventually flip bits unexpectedly. Does that make Rust’s guarantees worthless?


The goalposts stay in the same place. There are paths to data corruption where Rust's fearless concurrency is of no help.


This has been a recurring theme from you, but in the cases you're describing the risk is only a race condition (for which no general solution is possible), not a data race (which safe Rust is able to deny by design). These are categorically different problems.
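To illustrate the data-race side of that distinction: handing several threads a bare `&mut` to one counter is rejected at compile time, and something like the `Arc<Mutex<_>>` version below is what safe Rust forces you to write instead (a toy example, not anyone's real workload):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each thread increments the shared counter under a lock; the compiler
// will not let the counter be shared mutably any other way in safe code.
fn parallel_count(n_threads: usize, per_thread: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // Deterministic despite the interleaving: no data race is possible.
    assert_eq!(parallel_count(4, 1000), 4000);
}
```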


Because it is a recurring theme to ignore the other kinds of race scenarios when promoting Rust's type system.


People have to consider race conditions anyway; they're part of our world. For example, if you use git's ordinary --force to overwrite certain changes, that's subject to a TOCTOU race, which is why --force-with-lease exists. Even in the real world, I once opened a bedroom window to throw a spider out onto the garden below, and a different spider came in through the window in the brief interval while it was open - exploiting the "open window to throw out spider" race opportunity.
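The lease idea is essentially a compare-and-swap: only overwrite if the value is still the one you last observed. A toy sketch, with an atomic integer standing in for a branch tip (names illustrative, nothing to do with git's actual implementation):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Succeeds only if `r` still holds `expected`; otherwise returns the
// value someone else put there, instead of silently clobbering it.
fn force_with_lease(r: &AtomicU64, expected: u64, new: u64) -> Result<(), u64> {
    r.compare_exchange(expected, new, Ordering::SeqCst, Ordering::SeqCst)
        .map(|_| ())
}

fn main() {
    let branch_tip = AtomicU64::new(42); // pretend 42 is the commit we fetched
    // Someone else pushes first:
    branch_tip.store(7, Ordering::SeqCst);
    // Our lease is stale, so the overwrite is refused rather than clobbering theirs.
    assert_eq!(force_with_lease(&branch_tip, 42, 99), Err(7));
    // With a fresh lease it succeeds.
    assert_eq!(force_with_lease(&branch_tip, 7, 99), Ok(()));
}
```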

A data race isn't just "oh, it's a race condition", or Rust wouldn't care: data races destroy sequential consistency. And humans need sequential consistency to reason about the behaviour of non-trivial programs, so without it, writing concurrent software is a nightmare. Hence, "fearless concurrency".

You won't destroy sequential consistency by having non-transactional SQL queries. Try it.


I have tried it plenty of times, and seen not-so-happy train travelers holding the same ticket for the same seat on the same train, which is why I bring it up all the time.

It is obviously a subject that is considered irrelevant in the Rust community.

Who needs consistency in distributed systems when multiple threads from the same process are accessing the same external data without coordination.
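For what it's worth, the duplicate-ticket scenario can be sketched deterministically, with the interleaving made explicit: two sessions each run their "SELECT" and then their "INSERT" as separate steps with nothing tying them together. This is plain safe Rust with no data race, just a check-then-act race condition in the logic (names are illustrative):

```rust
use std::collections::HashMap;

// Every session reads first, then writes based on its stale read,
// exactly as two non-transactional DB sessions could interleave.
fn book_without_transaction(
    seats: &mut HashMap<String, Vec<String>>,
    seat: &str,
    sessions: &[&str],
) {
    // Step 1: every session does its "SELECT" first...
    let free: Vec<bool> = sessions
        .iter()
        .map(|_| !seats.contains_key(seat))
        .collect();
    // Step 2: ...and only then its "INSERT", acting on the stale answer.
    for (session, was_free) in sessions.iter().zip(free) {
        if was_free {
            seats.entry(seat.to_string()).or_default().push(session.to_string());
        }
    }
}

fn main() {
    let mut seats = HashMap::new();
    book_without_transaction(&mut seats, "12A", &["alice", "bob"]);
    // Both passengers now hold a ticket for seat 12A.
    assert_eq!(seats["12A"].len(), 2);
}
```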


> Who needs consistency

Programmers do. Programmers are human and so can't reason about the behaviour of non-trivial programs without sequential consistency.

If I was trying to debug software which sometimes mistakenly issues people duplicate tickets, I think I'd want to be able to reason about how the software works, and that's not going to be possible if it doesn't even exhibit sequential consistency.


Your argument amounts to "Rust prevents X, so X is important. Rust cannot prevent Y, therefore Y is less important."


Er, no? Sequential consistency isn't some Rust invention; Leslie Lamport (yes, the LaTeX one) introduced it in his 1979 paper "How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs".

I rather like Lamport's last observation about what happens if you don't have sequential consistency and your programs instead just put up with whatever was cheap/efficient to implement (as will happen by default on a modern multi-core CPU): "verifying their correctness becomes a monumental task".

It was later proved that it's not merely "monumental": this is actually an undecidable problem in general, which explains why humans aren't good at it.

So, this is important in principle to get right, and (safe) Rust does get it right. You are of course welcome to decide you don't care (why aim to write correct programs anyway?), and for now at least it seems many people in our industry agree.


Is Rust really "fearless concurrency"?

Considering the number of deadlock issues encountered by folks using async Rust, I think "fearless concurrency" is misinformation by Rust evangelists.

Deadlocks in production Rust microservice:

https://blog.polybdenum.com/2022/07/24/fixing-the-next-thous...
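To be fair, the shape of such a deadlock is trivial to reproduce in a few lines of safe Rust. A minimal illustration (not the pattern from the linked post), using `try_lock` so the sketch can observe the would-be deadlock instead of actually hanging:

```rust
use std::sync::Mutex;

// Holding the guard while trying to take the same lock again would block
// forever; `try_lock` reports the failure instead of hanging.
fn would_deadlock(m: &Mutex<i32>) -> bool {
    let _guard = m.lock().unwrap(); // first acquisition, still held...
    m.try_lock().is_err()           // ...so a second acquisition must fail
}

fn main() {
    let m = Mutex::new(0);
    assert!(would_deadlock(&m));
    // Once the guard is dropped, the lock is available again.
    assert!(m.try_lock().is_ok());
}
```

Safe Rust compiles this without complaint: freedom from data races is guaranteed, freedom from deadlock is not.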


While deadlock is undesirable, this is still behaviour which you can (and the author did) successfully reason about. You still have your consistent model of the world, in which you are deadlocked and can see exactly why.

"Fearless concurrency" isn't "it's impossible to make mistakes"; it's only "the mistakes are no scarier than they were in your conventional non-concurrent program".



