Programmers just don't like switching between representations - it's the same reason that "printf debugging" will still be widespread after all of us are gone.
Another thing is the ability to actually have a breakpoint in the program. There are a lot of distributed systems (especially embedded ones) where just stopping the program and inspecting the state will cause the rest of the system to enter some kind of failure state.
I use printf debugging locally as well. Interactive debuggers and I just don't get on, and my test cases are built to start fast enough that it works out for me. Usually I have a 'debug_stderr' function of some sort that serialises + writes its argument and then returns it so I can stick it in the middle of expressions.
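A minimal sketch of what such a helper might look like in Python; the name `debug_stderr` comes from the comment above, but the signature and the optional label are assumptions:

```python
import sys

def debug_stderr(value, label=""):
    """Serialise a value to stderr and return it unchanged,
    so the call can be dropped into the middle of an expression."""
    prefix = f"{label}: " if label else ""
    print(f"{prefix}{value!r}", file=sys.stderr)
    return value

# Usable mid-expression without restructuring the surrounding code:
total = sum(debug_stderr(x * 2, "doubled") for x in [1, 2, 3])
```

The return-the-argument trick is what makes it composable: you can wrap any subexpression without introducing a temporary variable, and deleting the wrapper later is a one-token change.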
(I do make sure that anybody learning from me understands that debuggers are not evil, they're just not my thing, and that shouldn't stop everybody else at least trying to learn to use them)
Sure, that's just one use case. And logs can be useful for history as well as current state. But I think what puts people off is the mode shift to "do I set a breakpoint here, and then inspect the program state" as opposed to just continuing to sling code that prints the state out.
What's funny is that I learned to debug with breakpoints, stepping, and inspectors. I use printf debugging now because the more experience you get, the easier and faster it becomes. There's a point where it's just a faster tool to validate your mental model than watching values churn in memory.
Printf debugging is actually a very primitive case of predicate reasoning. Hopefully in the not-too-distant future, instead of using prints to check our assumptions about the code, we will instead statically check them as SMT-solvable assertions.