I am a Rust newbie myself, but I don't understand how this could be considered safe:
strict::with_metadata_of(new_ptr, self.shared.as_ptr() as *mut T)
I always thought pointer math was considered safe because the math itself is safe, which is fair enough: additions and subtractions can't do much beyond overflow. What is unsafe is accessing the memory location using the result of that math, since the result could point anywhere.
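To make the distinction concrete, here is a minimal sketch (the array and offset are made-up values for illustration):

fn main() {
    let x = [1u8, 2, 3];
    let p: *const u8 = x.as_ptr();

    // Pointer arithmetic with wrapping_add is a safe operation: it can
    // produce a dangling pointer, but computing it is not itself UB.
    let q = p.wrapping_add(100);

    // Using the result is what requires unsafe, and dereferencing q here
    // would be undefined behavior because it points outside the array:
    // let v = unsafe { *q };
    println!("{:p} {:p}", p, q);
}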
They don't do that (access memory through the raw pointer) here. But if anything, what does happen is worse: a raw pointer that is considered unsafe to use is cast back to a typed pointer that is considered safe to use, and returned to the unsuspecting caller.
I'm not that deep into Rust, especially unsafe Rust, either.
But I'm not really sure where the strict:: comes from. If it is the same function from the standard library, then I'd put my money on "it's probably meant to be unsafe, but hasn't been marked as such yet; the function is still considered unstable and is only available on the nightly compiler".
> a bug in safe code can easily cause unsound behavior in your unsafe code if you’re not careful.
I thought the whole point of safe code was that bugs in safe code cannot cause unsound behavior, so it's on the unsafe code to maintain soundness even if it's used incorrectly by safe code?
Calling an unsafe function means "I promise to uphold the invariants required", and often that requires writing safe code a certain way.
For example, in WGPU I get the surface of an OS window. This requires an unsafe function call. The invariant I must uphold is to not drop the window before the surface. This requires writing my safe code a certain way. And indeed, a malicious actor could change only safe code and cause UB by dropping the window (I believe this would cause a double free when the surface is dropped). However, the struct holding the window and surface presents a safe API to users of the struct, and no code outside the struct can cause UB.
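Something like the pattern below is how that invariant ends up encoded in the struct. The types and the unsafe constructor are placeholders, not the real winit/wgpu API; the point is only the drop-order guarantee:

// Illustrative only: stand-in types, not the actual winit/wgpu signatures.
struct Window;
struct Surface;

// Stand-in for the real unsafe constructor: the caller promises the
// window will outlive the surface created from it.
unsafe fn create_surface_from(_window: &Window) -> Surface {
    Surface
}

// Fields are dropped in declaration order, so `surface` is always dropped
// before `window`. Code outside this struct only sees a safe API and
// cannot reorder the destruction.
struct Renderer {
    surface: Surface,
    window: Window,
}

impl Renderer {
    fn new(window: Window) -> Self {
        // SAFETY: `window` is moved into the same struct as `surface`
        // and, by field declaration order, outlives it.
        let surface = unsafe { create_surface_from(&window) };
        Renderer { surface, window }
    }
}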
If you're writing unsafe code meant to be used by safe code, the safety boundary is usually at the interface level (i.e. public functions not marked unsafe), not just the unsafe {} block itself. So there's 'safe code' that only uses safe interfaces and should be memory safe whatever it does, there's unsafe code which explicitly opts into the unsafe operations where memory safety issues can occur, and there's code in the middle which is not marked unsafe but still needs to uphold invariants the unsafe code depends on. As a rule of thumb, if you've got unsafe in library code, the entire module (which is the public/private boundary) may have an impact on safety.
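A small, made-up example of that middle layer: the unsafe block lives in one method, but a purely safe method in the same module can break the invariant it relies on.

pub struct Buffer {
    data: Vec<u8>,
    // Invariant relied on by the unsafe code below: len <= data.len()
    len: usize,
}

impl Buffer {
    pub fn new(data: Vec<u8>) -> Self {
        let len = data.len();
        Buffer { data, len }
    }

    pub fn get(&self, i: usize) -> Option<u8> {
        if i < self.len {
            // SAFETY: requires len <= data.len(), an invariant that only
            // code inside this module can uphold or break.
            Some(unsafe { *self.data.get_unchecked(i) })
        } else {
            None
        }
    }

    // No `unsafe` here, but a bug in this method (say, dropping the `min`)
    // would make `get` unsound. Hence the module, not just the block,
    // is what has to be reviewed.
    pub fn truncate(&mut self, new_len: usize) {
        self.len = new_len.min(self.len);
    }
}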
I see, I suppose that kind of makes sense and is good enough. Maybe it would be nice to be able to mark fields/variables with critical invariants as unsafe, so that only unsafe code can change them?
Yes; that's the whole point. There is even a computer-verified proof that (for some version of std), if you only use the standard library and don't use unsafe, then you can't write a program that exhibits unsound behavior.
Of course, if there's one bug in unsafe code, then all bets are off, but that's still a bug in the unsafe code.
I think the claim "It’s not unsound; it’s not even incorrect." is wrong... Clearly, it's unsound: the problem in this post was documented[0] as a safety precondition of the one function they call in that block!
In code I'd consider high-quality, there'd be a comment like this [1]:
// SAFETY: inner is only mutated by foo, bar, and baz, all of which
// ensure the pointer is valid.
[0] has an unsafe annotation, so in order to invoke it, you have to write the token "unsafe", which means all bets are off unless some sort of external proof of soundness exists.
There are quite a few APIs which do not have safe wrappers. Recently I needed to disable IP fragmentation on a socket. I had to invoke the raw C library, which is unsafe.
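For what it's worth, here is roughly what that looks like on Linux with the libc crate. This is a sketch, assuming a UDP socket and the IP_MTU_DISCOVER / IP_PMTUDISC_DO option, which sets the Don't Fragment bit on outgoing packets:

use std::net::UdpSocket;
use std::os::unix::io::AsRawFd;

// Sketch, assuming Linux and the `libc` crate.
fn disable_ip_fragmentation(socket: &UdpSocket) -> std::io::Result<()> {
    let val: libc::c_int = libc::IP_PMTUDISC_DO;
    // SAFETY: the fd comes from a live UdpSocket, and `val` is a valid
    // c_int that outlives the call.
    let ret = unsafe {
        libc::setsockopt(
            socket.as_raw_fd(),
            libc::IPPROTO_IP,
            libc::IP_MTU_DISCOVER,
            &val as *const libc::c_int as *const libc::c_void,
            std::mem::size_of::<libc::c_int>() as libc::socklen_t,
        )
    };
    if ret == 0 { Ok(()) } else { Err(std::io::Error::last_os_error()) }
}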