That's not a particularly interesting description of multithreaded code. The procedural/imperative element may be true of any given thread's execution, but with the right user-space or kernel scheduler, you may or may not care about the precise sequencing of things across threads. This is true whether you're using lightweight user-space threads ("fibers") or kernel threads.
of course, but in that case it's not the base imperative abstraction that's leaky - it's the multi-thread/multi-core abstraction built on top of it. the pitch is almost as simple as "multiple threads happening in parallel" - until it isn't.
the multi-thread abstraction attempts to be: "hey, you know how that imperative model works? now we can have more than one of these models (we'll call them threads!) running in parallel. neat, huh?"
it's things like memory barriers, race conditions, etc. that are the "leaks". but it's the threading/multi-core abstractions that are leaky, not (back to the article) the "blocking" code.
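to make that concrete, here's a minimal sketch (C with pthreads - my choice, nothing in the thread dictates it) of the leak: each thread runs code that's perfectly correct under the naive imperative reading, but the combined result isn't.

    /* build: gcc -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;          /* shared, unprotected */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                /* read-modify-write, not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);     /* naive model says 2000000 */
        return 0;
    }

the "two imperative programs side by side" model predicts 2000000 every time; in practice you'll usually see less, and that gap is exactly what the abstraction has to paper over with atomics, locks, or barriers.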
i can write single-thread "blocking" code all day long that does all kinds of interesting, non-trivial things and never have to worry about leaks w.r.t. blocking, in-order execution, etc. even in the presence of out-of-order processors, optimizing compilers, VMs, multi-user OSes, etc., the effects are observably identical to the naive model.
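for what it's worth, a trivial sketch of that claim: under the "as-if" rule the compiler and cpu are free to inline, reorder, or constant-fold whatever they like, and single-threaded code can never tell.

    #include <stdio.h>

    static int square(int n) { return n * n; }

    int main(void)
    {
        int a = square(3);            /* the optimizer may fold, inline, */
        int b = square(4);            /* or reorder these two calls...   */
        printf("%d %d\n", a, b);      /* ...but this always prints
                                         "9 16", exactly what the naive
                                         top-to-bottom reading predicts */
        return 0;
    }

only another thread (or some other shared-memory observer) could ever notice the difference - which is the point.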
the author didn't do a good job of clearly defining anything, but i bristle at the idea that it's the basic "blocking" code abstraction that's leaky - it's the async, threaded, event-driven models, etc. that are (necessarily) a bit leaky when they break the basic imperative blocking model or require accommodating it.
good points. i'd add, though, that priority inversion is specifically a leakage of the blocking nature of some parts of the imperative code into the thread model. similarly, the implications of locks for real-time code (something i work on) are another example of blocking leaking into a thread model.
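to illustrate that last point, here's a sketch of the classic three-thread priority inversion (assumptions are all mine: linux, SCHED_FIFO, root or CAP_SYS_NICE, and everything pinned to one cpu, e.g. via taskset, so the priorities actually bite):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *low_fn(void *arg)            /* priority 1 */
    {
        pthread_mutex_lock(&lock);
        sleep(2);                             /* holds the lock a while */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *medium_fn(void *arg)         /* priority 2 */
    {
        time_t end = time(NULL) + 5;
        while (time(NULL) < end)
            ;                                 /* burns cpu, starving low */
        return NULL;
    }

    static void *high_fn(void *arg)           /* priority 3 */
    {
        pthread_mutex_lock(&lock);            /* blocks behind low... */
        puts("high finally ran");
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static pthread_t spawn(void *(*fn)(void *), int prio)
    {
        pthread_t t;
        pthread_attr_t a;
        struct sched_param sp = { .sched_priority = prio };
        pthread_attr_init(&a);
        pthread_attr_setinheritsched(&a, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&a, SCHED_FIFO);
        pthread_attr_setschedparam(&a, &sp);
        pthread_create(&t, &a, fn, NULL);
        return t;
    }

    int main(void)
    {
        pthread_t l = spawn(low_fn, 1);       /* low grabs the lock */
        sleep(1);
        pthread_t h = spawn(high_fn, 3);      /* high blocks on it */
        pthread_t m = spawn(medium_fn, 2);    /* medium preempts low, so
                                                 high now waits on a thread
                                                 it shares no data with */
        pthread_join(h, NULL);
        pthread_join(m, NULL);
        pthread_join(l, NULL);
        return 0;
    }

high shares nothing with medium, yet it sits for ~5 seconds behind it. the standard fix is a priority-inheritance mutex (pthread_mutexattr_setprotocol with PTHREAD_PRIO_INHERIT) - i.e. the scheduler has to be explicitly taught about blocking, which is the leak in action.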