There is a world of programming where debuggers don't serve much purpose. Individual microservices are usually trivial, but push complexity into the interactions between services. Debuggers are not much use there; distributed tracing is more relevant. Functional programming, which is a growing part of the industry, really emphasizes code you can easily reason about. That's arguably the whole point of functional programming. Debuggers don't get much use there either.
Agreed. When I commented earlier I was also thinking about mentioning something similar with regard to classes of bugs caused by interactions between systems.
Unless you have a setup where you can easily run one system with a debugger attached while connecting it to everything else, you’re basically restricted to running a debugger for bugs that can be reproduced locally.
I'm not a talented developer, but I did spend my day on a piece of F# code that builds a DataTable from a record type using reflection (and eventually realized it was going to need to be recursive), and I probably would've just quit if I wasn't allowed to use a debugger.
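Roughly the shape of the thing, minus the recursion. A minimal sketch assuming a flat record type; the `Person` type here is just made up for illustration:

```fsharp
open System.Data
open Microsoft.FSharp.Reflection

// Illustrative record type, not the real one from work
type Person = { Name: string; Age: int }

// Build a DataTable whose columns mirror the record's fields.
// Flat records only; nested record types are where the recursion comes in.
let tableOfRecord (recordType: System.Type) =
    let table = new DataTable()
    for field in FSharpType.GetRecordFields(recordType) do
        table.Columns.Add(field.Name, field.PropertyType) |> ignore
    table

let peopleTable = tableOfRecord typeof<Person>
```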
Now maybe if I were better that wouldn't be the case, but even the cleanest "wish I'd thought of that" functional code I've seen still looks like it would be easier to fail fast on with a debugger.
In fairness, though, I will admit I use the debugger a lot less when I'm not screwing with reflection on generic types or whatever, because runtime errors just happen a lot less in functional styles. Usually if it compiles, it runs, because the compiler can sanity-check the code better than you can.
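A toy example (mine, nothing from work) of the kind of sanity check I mean, exhaustive pattern matching:

```fsharp
type Shape =
    | Circle of radius: float
    | Square of side: float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Square s -> s * s
    // Delete a case above, or add a new Shape case without handling it here,
    // and the compiler flags the incomplete match before you ever run it.
```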
It's really not. I'm quite terrible from any industry perspective.
F# isn't hard, it's just different. Hell, in many ways I'd argue it's much, much easier once you get used to it. It doesn't have the support C# does, so you're often stuck with a library that WILL work but has no documentation for doing it in F#, and that can lead to struggles, but that's not really about being a good coder.
Most dotnet developers could probably code circles around me in F# if they knew it existed/gave it a chance.
Personally I stuck with it because it had the low-boilerplate look of Python with strong typing. It took a bit to wrap my head around some functional stuff (basically map/iter = foreach, and if you want to update something on each loop you probably want a fold, or more likely a built-in function), but once I got over that hurdle it was pretty smooth sailing.
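For anyone curious, the translation I mean is roughly this (a toy sketch, nothing more):

```fsharp
let numbers = [1; 2; 3; 4]

// iter/map where you'd otherwise reach for a foreach
numbers |> List.iter (printfn "%d")
let doubled = numbers |> List.map (fun n -> n * 2)

// "update something on each loop" becomes a fold...
let total = numbers |> List.fold (fun acc n -> acc + n) 0

// ...though there's usually a built-in that already does it
let total' = List.sum numbers
```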
The irony is that by far the hardest part is the library thing, which your average dotnet dev would handle WAAAAY better than me.
I've encountered the same thing. I'm a terrible programmer but find F# way clearer than most C#/Java. But I work with tons of great developers who would rather cut off a finger than learn F#. It bothers me because it exposes some fundamental difference between us that I don't like to believe exists.
Not a humblebrag. I’ve noticed I struggle with tracking lots of variables and states compared to many fellow developers. I’ve sort of tried to turn this weakness into a strength, by making code simpler, but sometimes I make it too terse because I like code golfing.
I don't agree, because learning F# is essentially learning two very different languages at the same time: first OCaml, which F# started out as an implementation of, and then C# for all the object/CLR layer.
It's easy for people with functional programming knowledge, or who had OCaml as their first programming experience at university like I did, but for people without that exposure I can understand the difficulty.
Perhaps if you already know C#; otherwise I doubt it. Of course it depends on one's prior experience, but while F# is functional, it still requires you to understand OO in order to interact with the framework.
This was me - my first .NET language was F# (although I'd dabbled a tiny bit in C#); it was hard to learn the .NET standard library while also trying to learn functional idioms...
I think it's harder if you already know C# or some other OO language. I really think it would be easier, or about the same, to teach a raw beginner the basics of F# vs C#.
The reason F# winds up feeling harder isn't so much how F# works; it's simply that the entire dotnet environment was built for C# first, so you do need to know how to handle the C# style. I do think that if you could just pick one language and suddenly have every library support its style, F# would probably be easier overall, because it has a lot of nice features built in that make updating your code so much easier.
F# has two ways to invoke functions, `f a b` and `f(a, b)`, and you need to know when to use which. There is no way this is simpler for beginners than having one consistent way.
C# is not a simple language, but F# has basically all the complexity of C# with OCaml on top.
Either C# or OCaml would be simpler to learn, although the combination is powerful.
I don't know F# but the reference only mentions the `f a b` syntax.[1]
`f(a, b)` looks like a call to a function that takes a single tuple as its argument. So I would expect `f` to have the type signature `A * B -> C` instead of `A -> B -> C`. Is my intuition wrong? If it is, then what does F# use the parenthetical syntax for?
Methods from the .NET framework and other libraries are called with a tuple of arguments, since they are not compatible with the native F# way of calling functions. So your intuition is basically right: from F#'s point of view those calls have tupled signatures (`A * B -> C`) rather than curried ones.
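A small illustration of the two styles (toy definitions, plus one real .NET call):

```fsharp
// Curried, native F# style: int -> int -> int
let addCurried a b = a + b
let r1 = addCurried 1 2
let addOne = addCurried 1              // partial application works here

// Tupled style: (int * int) -> int, called with C#-looking parentheses
let addTupled (a, b) = a + b
let r2 = addTupled (1, 2)

// .NET methods are exposed in the tupled style
let joined = System.String.Join(", ", [| "a"; "b" |])
```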
This to me is a pretty simple thing to explain and deal with.
F# enforces a lot of good-practice code-style stuff (order mattering, for example, pisses off long-time devs who already have their own styles, but it prevents SOOOO much stupid BS from beginners) and basically eliminates runtime/chasing-variable-state errors completely, so long as you can stay within the style. Yes, it'd be nicer if there were only one way to invoke functions, but if I had to take a tradeoff, I think it's a pretty easy one.
It is an issue that, yes, like your example, you're often stuck ALSO learning OO because "oh, you want to use X library? Well, that's OO, so...". Even then you can isolate your mutable/OO areas really well, but this is more of an issue with it being a second-fiddle language. If F# got F#-specific libraries for all the C# stuff out there tomorrow, I think it'd take off and most people would never look back.
If we're talking basic business logic/beginner programmer stuff, then yeah, I think F# offers a lot that makes it flat-out easier to use. And if you want to point out complex issues, I feel the biggest one is that something that's intuitively much easier to understand in OO (create a variable/object, populate it on each iteration of a loop depending on logic) can feel daunting as hell in F# (fold).
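As a made-up example of that mental shift, the "populate it on each iteration depending on logic" pattern comes out as a fold, with the accumulator playing the role of the variable:

```fsharp
// Imperative mental model:
//   let mutable total = 0
//   for order in orders do
//       if order > 100 then total <- total + order

// The fold version: the "variable" is the accumulator threaded through each step
let orders = [50; 120; 80; 300]

let bigOrderTotal =
    orders
    |> List.fold (fun acc order -> if order > 100 then acc + order else acc) 0
// bigOrderTotal = 420
```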
Knowing a good bit of Rust helped me considerably, thanks to the commonalities: being expression-based, pattern matching, and sum types. F# almost feels like a more functional, GC'd Rust.
F# is heavily based on OCaml, and Rust was heavily inspired by the ML family of languages; I've often heard Rust described as an ML without GC, with manual memory management and a C++-style syntax.
It's 100% the latter. I get "ok we don't want to mix codebases" and that's fine, but if you can code in C# you can probably get up and running in F# in a week, maybe a month if you struggle with some of the concepts.
One major issue I do see coming from the C# side is "well, how do I do this then?!", where the answer is often "you don't, because you don't need to". And as for "well, what if it's more performant to do it mutably!", thankfully F# can absolutely do that.
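A trivial sketch of that escape hatch, just to show it exists:

```fsharp
// Opting into mutation explicitly when it's genuinely the right tool
let sumSquares (xs: int[]) =
    let mutable acc = 0
    for x in xs do
        acc <- acc + x * x
    acc

// Or reaching straight for the mutable .NET collections
let counts = System.Collections.Generic.Dictionary<string, int>()
counts.["hits"] <- 1
```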
If you keep an open mind it's really a very clean and simple language, but in an age where half of development is importing 8 well known libraries, not being the main supported language is a major weakness.
I normally write C++, but I've also written Rust, Python, Ruby, and a bunch of other languages. I'd never had trouble with a programming language until Haskell, and had to accept that I'm probably just not smart enough for it.
However, I wrote my first thing in F#, a port of a small C# tool of a couple hundred lines, in about an hour. It's been about a week now, and things are smoothing out considerably.
For something considered niche, the F# tooling has been great. Also, having the .NET libraries available adds a lot of built-in capability.
The distributed tracing point makes sense, but I think debuggers are still quite useful for functional code, though maybe less commonly needed than just the REPL.
Hi! I'm that person! Senior engineer, decade of experience. I've used debuggers in the past, both for running code and looking at core dumps, but I really don't find them to be cost effective for the vast majority of problems. Just write a print statement! So when I switched from C to python and go a couple jobs ago, I never bothered learning how to use the debuggers for those languages. I don't miss them.
I am also this person. I'm a systems programmer (kernel and systems software, often in C, C++, golang, bit of rust, etc)
What I find is that if my code isn't working, I stop what I'm doing. I look at it. I think really hard, I add some print statements and asserts to verify some assumptions, and I iterate a small handful of times to find my faulty assumption and fix it. Many, many times during the 'think hard' and look at the code part, I can fix the bug without any iterations.
This almost always works if I really understand what I'm doing and I'm being thoughtful.
Sometimes though, I don't know what the hell is going on and I'm in deep waters. In those cases I might use a debugger, but I often feel like I've failed. I almost never use them. When I helped undergrads with debuggers it often felt like their time would be more productively spent reasoning about their code instead of watching it.
Your list of programming languages excluded Java. Please ignore this reply if Java is included.
Are you aware of the amazing Java debugger feature "Drop to Frame"? Combined with "hot-injection" (compile new code, then inject it into the JVM being debugged), it is crazy and amazing. (I love C#, but its hot-injection feature is much worse than Java's -- more than 50% of the time the C# compiler rejects my hot-injection, while the JVM accepts it about 80% of the time.) When working on source code where it is very difficult to acquire data for the algorithm, having the ability to inspect in a debugger, make minor changes to the algorithm, re-compile, inject the new class/method defs, drop to frame, and re-exec the same code in the same debug session is incredibly powerful.
Yes, that sounds pretty cool, and it doesn't take a lot of imagination to see the utility in it. I've done a lot of work on lower-level software, often enough on platforms where debuggers are tough to get working well anyway.
The plus side of less capable tooling is it tends to limit how complex software can be--the pain is just too noticeable. I haven't liked java in the past because it seems very difficult without the tooling and I never had to do enough java to learn that stuff. Java's tooling does seem quite excellent once it is mastered.
This part: "I haven't liked java in the past because it seems very difficult without the tooling"
If you are a low-level programmer, I understand your sentiment. A piece of advice: when you need to use Java (or other JVM languages), just submit to all the bloat -- use an IDE, like IntelliJ, that needs 4GB+ of RAM. The increase in programmer productivity is a wild ride coming from embedded and kernel programming. (The same can be said for C#.)
I think there's a sort of horseshoe effect where both beginners and some experienced programmers tend to use print statements a lot, only differently.
When you're extremely "fluent" in reading code and good at mentally modelling code state (understanding exactly what the code does just by looking at it), stepping through it typically doesn't add all that much.
While I do use a debugger sometimes, I'll more often form a hypothesis by just looking at the code, and test it with a print statement. Using a debugger is much too slow.
This varies, but in a lot of environments, using a debugger is much _faster_ than adding a print statement and recompiling (then removing the print statement). Especially when you're trying to look at a complex object/structure/etc where you can't easily print everything.
I think there is a bit of a paradox: debugging can seem heavyweight, but once you've added enough print statements, you've spent more time and created more things to clean up than if you had just taken the time to debug properly -- at that point, well, you should have debugged. But you don't know until you know. The same pattern shows up with "it would've been faster not to try to automate this or code a fuller solution than to just address the thing directly".
In what way is using a debugger "slow"? I find that it speeds up iteration time because if my print statements aren't illustrative I have to add new ones and restart, whereas if I'm already sitting in the debugger when my hypothesis is wrong, I can just keep looking elsewhere.
I find I use the stepping debugger less and less as I get more experienced.
Early on it was a godsend. Start program, hit breakpoint, look at values, step a few lines, see values, make conclusions.
Now I rely on print statements. Most of all though, I just don't write code that requires stepping. If it panics it tells me where and looking at it will remind me I forgot some obvious thing. If it gives the wrong answer I place some print statements or asserts to verify assumptions.
Over time I've also created less and less state in my programs. I don't have a zillion variables anymore, intricately dependent on each other. Less spaghetti, more just a bunch of straight tubes or an assembly line.
I think it's possible that over the years I hit problems that couldn't easily be stepped. They got so complicated that even stepping the code didn't help much, it would take ages to really understand. So later programs got simpler, somehow.
I find I use the stepping debugger more and more as I get more experienced. Watching the live control flow and state changes allows me to notice latent defects and fix them before they ever cause actual problems. Developers ought to step through every line of code that they write, of course.
Or assume that Python debuggers aren't as nice to use, or that Python doesn't lend itself to inspecting weird memory and pointer dereferences, or a bunch of other possibilities.
Most of what I worked with code-wise growing up was either very niche or set up in such a way that debuggers weren't an option, so I never really used them much either. I don't understand their appeal when print statements can give you more context to debug with anyway. I'm definitely no senior, but I'm used to solving things the "hard way", as one developer told me. He wondered how I could even work because of how "bad" my tools were, but I didn't know any better, being self-taught, and certain software just isn't compatible with the tools he mentioned.
It depends what you're doing. Sometimes inserting a print and capturing state works. Sometimes you're not sure what you need to capture, or it's going to take a few iterations. That's where pdb / breakpoint() / more interactive debuggers can be very helpful.
You don’t need breakpoints all the time though. If you’re familiar with the code (or just “talented”), you might have an intuition for what the problem is and it’s faster to just think through it (and maybe write a few quick prints) instead of interrupting your train of thought setting breakpoints, clicking continue, waiting for the IDE to freaking load the debugging session (cough Visual Studio), rerunning the test, etc.
Besides, every IDE has a different way to debug, so they might just not be familiar with the interface. I can’t tell you exactly how to debug in VSCode even though I’ve used it the most. I’ve had to run a debugger only a handful of times in the past couple of years and it’s always for codebases that are more tangled (e.g. .NET where there’s interfaces everywhere).
+1. I just used a debugger at work today for the first time in 4 years, by coincidence. Normally I just throw in a couple of prints and rerun the test, and today I was reminded why: it takes like 8 minutes to run the test in debug mode. There's lots of useful info in there, but usually I can guess where the error is without it.
It was indeed good at pinpointing the SIGSEGV, though.
As an untalented developer, I used to make heavy use of debuggers, and knew them well. Currently, as a still untalented developer, I've fallen out of the habit of using them and don't know how to for my current toolchain.
Neither situation was at all related to my talent (or lack thereof).
Concluding that using printf to debug is superior to using a debugger would be a mistake!
I've been programming in C for nearly 20 years. I primarily used printf for debugging for the first 12-15 years, but have been using debuggers more and more since. I use Emacs, and its gud mode is so nicely integrated into everything that using gdb is truly much, much faster than the alternative. I don't use print debugging at all anymore.
It takes less time and cognitive overhead to just stop the program on the same line you would have inserted your printf, but you can now inspect the entire program state.
Obviously I’m not saying anything revelatory if you use an IDE on a regular basis… I guess this comment is for the folks who eschew IDEs.
This mostly works well, and it's what I use. I usually implement a logging system where I can enable/disable individual components, but I use a debugger to examine functions that aren't behaving as intended, or core dumps, usually in connection with a unit test that fails or started failing because of my changes to the codebase I'm working on.
Logging alone is fine, but it can often be difficult not to drown in information.
I do embedded programming in C mostly.
I don't usually use breakpoints in part because I use neovim and in part because... I seldom truly need them. Who are these people and what are these problems where stepping through method calls etc is actually necessary? I find it hard to believe. I've been successfully programming and problem solving this way for over 15 years.
Meh. I use the right tool for the job, and most of the time, a simple print statement put in the right place beats any debugger. Certainly the debugger has helped me in the past, but maybe one or two times only. Besides, putting a print statement costs nothing, and one knows exactly how to do so. Debuggers vary wildly: terminal, IDEs, etc.
I haven't used debuggers since I switched from C to Rust. By the way, I switched from Emacs to VSCode and I don't know how to debug there. I never used a debugger with Lisp. Debugging is a language-dependent technique.
I haven’t been shocked that fellow engineers don’t know how to use a debugger for at least ten years. Most jobs in the industry can be done adequately without getting into tools that low level.
Maybe it's rare, but you can be very skilled at understanding structure and finding solutions while knowing nothing about the tools. As long as you're at least one of the two, that's fine.