To be extra clear about this, Rust does support dynamic linking; it's just limited to interfacing via the stable C ABI. The ABI-resilience concerns described in the article are ultimately punted to the users and/or the dev ecosystem, though some tooling (namely bindgen/cbindgen) is provided to ease the task.
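For instance, a minimal sketch of what "interfacing via the stable C ABI" looks like on the exporting side (the type and function here are made up; the crate would be built with crate-type = ["cdylib"]):

```rust
// Pin the layout to the C ABI; without #[repr(C)], field order and
// padding are unspecified and may differ between compiler versions.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

// #[no_mangle] + extern "C" give the symbol a stable name and calling
// convention, so any C-ABI consumer (including Rust code built by a
// different toolchain) can link against it.
#[no_mangle]
pub extern "C" fn point_len(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}
```

cbindgen can then generate the matching C header for consumers, and bindgen covers the reverse direction of consuming C headers from Rust.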
With a big caveat: you can't declare a dependency on a crate that is a dylib/cdylib or even a staticlib in Cargo.toml. Your only way out is a build.rs that invokes cargo to build the dependency itself, which, if you're using a workspace, will deadlock: https://github.com/rust-lang/cargo/issues/8938
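For the curious, a hedged sketch of what such a build.rs workaround can look like (the crate name, paths, and target dir are all hypothetical):

```rust
// build.rs: shell out to cargo to build the cdylib dependency
// ourselves, since Cargo.toml can't express the dependency directly.
use std::process::Command;

fn main() {
    let status = Command::new("cargo")
        .args(["build", "--release", "--manifest-path", "../mylib/Cargo.toml"])
        // Point at a separate target dir; reusing the workspace's
        // target dir is what triggers the deadlock on the build lock.
        .env("CARGO_TARGET_DIR", "target-mylib")
        .status()
        .expect("failed to spawn cargo");
    assert!(status.success(), "building mylib failed");

    // Tell rustc where to find the freshly built shared library.
    println!("cargo:rustc-link-search=native=target-mylib/release");
    println!("cargo:rustc-link-lib=dylib=mylib");
}
```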
Technically speaking, you can dynamically link regardless of whether the ABI is stable; a stable ABI just gives you freedom over which compiler version compiles what. They're orthogonal concepts, even if they're most useful when combined.
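rustc even has a knob for the "same compiler everywhere" flavor of this, -C prefer-dynamic; a minimal sketch:

```rust
// main.rs: nothing special in the source. The dynamic linking comes
// from the invocation:
//
//   rustc -C prefer-dynamic main.rs
//
// which links the binary against the toolchain's libstd-<hash>.so
// instead of statically embedding std. The result runs only with the
// exact compiler version that produced that libstd, since the Rust
// ABI is unstable across versions.
fn main() {
    println!("dynamically linked, same-compiler-only");
}
```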
Yes, the same is true of e.g. GHC, the Haskell compiler. There's nothing stopping you from using dynamic linking with GHC on Haskell code (i.e. your Haskell app links to a Haskell shared object), but because the language ABI isn't stable, you just rebuild all those shared objects with every new version of the compiler anyway. In fact you don't even need a different compiler version: aggressive cross-module inlining also reduces the amount of sharing you can do (potentially even down to zero sharing).
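The same inlining hazard exists on the Rust side, for what it's worth; a minimal sketch (checked_div is a hypothetical library function):

```rust
// In the library crate: #[inline] makes the body available for
// inlining into downstream crates, so each caller bakes in its own
// copy at *its* compile time. Shipping a rebuilt shared object later
// cannot patch those already-inlined call sites.
#[inline]
pub fn checked_div(a: i64, b: i64) -> Option<i64> {
    if b == 0 { None } else { Some(a / b) }
}
```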
So you rarely save disk space, every app ends up loading different shared objects, and even if you use one compiler for everything, it may not matter at all. If anything it's worse, because it can negatively impact startup time. In practice, ~everyone compiles Haskell code statically and only dynamically links libraries that expose a stable ABI. I'm not surprised people do the same for Rust.
It seems like the fundamental issue is the tradeoff between compile-time and run-time efficiency. During development, in almost every compiled language, it would make sense to use minimally optimized, cached shared-object (or shared intermediate) code in most use cases, while compiling and optimizing globally for releases. Invoking every run-time optimization and recompiling everything during development or first package installation makes zero sense; it's a waste of time.
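Cargo's stock profiles already encode roughly that split; a sketch of the relevant Cargo.toml knobs (the keys are real, the values just illustrative):

```toml
# Development: fast, incremental rebuilds with minimal optimization.
[profile.dev]
opt-level = 0
incremental = true

# Release: optimize globally, paying for it in compile time.
[profile.release]
opt-level = 3
lto = "fat"        # cross-crate link-time optimization
codegen-units = 1  # fewer units = better optimization, slower builds
```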
Also, sharing common library units across packages by default in releases shrinks binaries, rather than having zillions of exhaustive, slightly different combinations of similar structures and machine code hoarding space in every binary. I've never understood how 300 MiB binaries could ever be acceptable to anyone, especially after decades of shared libraries reducing binary sizes and memory usage. Deliberate waste is never acceptable because there is no free lunch.
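To illustrate where those slightly-different copies come from in Rust, a minimal sketch contrasting monomorphization with the trait-object alternative that shares one copy (function names are made up):

```rust
use std::fmt::Display;

// Monomorphized: the compiler stamps out a separate machine-code copy
// for every concrete T this is called with (i32, &str, f64, ...).
fn print_generic<T: Display>(value: T) {
    println!("{value}");
}

// Dynamically dispatched: one shared copy of the machine code for all
// callers, at the cost of a vtable indirection.
fn print_dyn(value: &dyn Display) {
    println!("{value}");
}

fn main() {
    print_generic(42);      // instantiates print_generic::<i32>
    print_generic("hello"); // instantiates print_generic::<&str>
    print_dyn(&3.14);       // reuses the single print_dyn
}
```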
Usually when we say a language supports dynamic loading, we mean the language itself, since integration with the OS's dynamic loader is a given for any language worth using.
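On the OS-loader side, a minimal sketch of loading a C-ABI symbol at runtime, using the third-party libloading crate (the library path and symbol name are hypothetical):

```rust
// Requires the `libloading` crate as a dependency.
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    unsafe {
        // dlopen()/LoadLibrary() under the hood.
        let lib = Library::new("./libmylib.so")?;
        // dlsym() the C-ABI entry point; the signature must match
        // what the library actually exports, or this is UB.
        let add: Symbol<unsafe extern "C" fn(i32, i32) -> i32> =
            lib.get(b"add\0")?;
        println!("2 + 3 = {}", add(2, 3));
    }
    Ok(())
}
```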