While laudable, this seems significantly harder to implement than banning advertising. Not that either is a particularly feasible policy, but this one seems harder.
https://www.hillelwayne.com/post/graph-types/ gives an interesting take on why we don't see a graph type as a primitive in more programming languages. Essentially it boils down to graphs being very vague: depending on the topology of your graphs, you are going to want different implementations for reasonable efficiency.
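To make the tradeoff concrete, here's a minimal Rust sketch (my own illustration, not from the post): a sparse graph wants an adjacency list, a dense one an adjacency matrix, and neither representation is efficient for the other's topology.

```rust
use std::collections::HashMap;

// Sparse graphs: adjacency list. Memory is O(V + E); edge lookup is O(degree).
type AdjList = HashMap<u32, Vec<u32>>;

// Dense graphs: adjacency matrix. Memory is O(V^2); edge lookup is O(1).
struct AdjMatrix {
    n: usize,
    edges: Vec<bool>, // row-major n*n bitmap
}

impl AdjMatrix {
    fn new(n: usize) -> Self {
        AdjMatrix { n, edges: vec![false; n * n] }
    }
    fn add_edge(&mut self, u: usize, v: usize) {
        self.edges[u * self.n + v] = true;
    }
    fn has_edge(&self, u: usize, v: usize) -> bool {
        self.edges[u * self.n + v]
    }
}

fn main() {
    let mut sparse: AdjList = HashMap::new();
    sparse.entry(0).or_default().push(1);

    let mut dense = AdjMatrix::new(4);
    dense.add_edge(0, 1);
    assert!(dense.has_edge(0, 1));
}
```

A language shipping one of these as "the" graph primitive would be pessimal for half its users, which is roughly the post's argument.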
Interesting! Thanks for the link. I suppose the graph databases just take an opinionated approach. NetworkX is great; I always wished it had a simple backend.
I like this. Testing the browser extension now and pretty happy with it (after tweaking it so returning to a tab has a grace period). I was using StayFocused, which is okay, but it's too tempting to just disable it (and annoying if I need to access a blocked site for work purposes).
Java Streams and Kotlin Sequences provide similar iterator capabilities. Iterators are great for this lazy performance but can sometimes be difficult to debug: especially if you are nesting many iterators, extracting the underlying collection can be complicated, though it's necessary in many workflows.
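The same tension exists in Rust, whose iterators are also lazy; a minimal sketch (my own example) of materializing an intermediate collection so the state becomes inspectable:

```rust
fn main() {
    let data = vec![1, 2, 3, 4, 5];

    // Lazy: nothing runs until the chain is consumed.
    let pipeline = data.iter().map(|x| x * 10).filter(|x| x % 20 == 0);

    // Mid-chain state is hard to see in a debugger; collecting the
    // intermediate results makes it visible, at the cost of allocation.
    let materialized: Vec<i32> = pipeline.collect();
    println!("{materialized:?}"); // [20, 40]
}
```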
Excited to hear about Hardcover! I like StoryGraph but the lack of API frustrates me - I want to be able to sync back to my general notes store (Obsidian). Hopefully Hardcover works better with that.
Which is funny: StoryGraph is a Ruby on Rails app, so exposing an API is a doable thing, which leads me to believe it is either not a priority or a purposeful design decision.
Yeah, Hardcover seems to have a GraphQL API they use for their UI, which they expose. There's not a lot of extra polish for third-party devs; it feels like "this is the API we use, use it or not, things may break". On the other hand, StoryGraph does server-side rendering, so it doesn't have an API already, and adding one would be a decent amount of work.
To be clear, Hardcover also does server-side rendering.
GR ditching their API was one of the primary motivations for Adam Fortuna (Hardcover's founder and Lead Dev) to even think about trying to create a competitor, so when he did create one, having an API available to others was a primary focus.
Note: Hardcover is also working towards open sourcing at some level, hopefully in 2025.
> Debug information tends to be large and linking it slows down linking quite considerably. If you’re like many developers and you generally use println for debugging and rarely or never use an actual debugger, then this is wasted time.
Interesting. Is this true? In my work (Java/Kotlin, primarily app code on a server, occasional Postgres or frontend JS/React stuff), I'm almost always reaching for a debugger as an enormously more powerful tool than println debugging. My tests are essentially the println, and if they fail for any interesting reason I'll want the debugger.
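(If the quoted article is about Rust, as the println spelling suggests, the advice amounts to a one-line profile setting. A minimal Cargo.toml sketch, assuming you're happy to lose backtrace quality and debugger support in dev builds:)

```toml
[profile.dev]
# Skip generating debug info; linking gets noticeably faster,
# at the cost of poor backtraces and no source-level debugging.
debug = false
```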
In my experience Java debuggers are exceptionally powerful, much more so than what I've seen from C/C++/Rust debuggers.
If I'm debugging some complicated TomEE application that might take 2 minutes to start up, then I'm absolutely reaching for the IntelliJ debugger as one of my first tools.
If I'm debugging some small command-line application in Rust that will take 100ms to exhibit the failure mode, there's a very good chance that adding a println debug statement is what I'll try first.
Sure. It's got breakpoints and conditional breakpoints, using the same engine as IntelliJ. It's got evaluate, it's got expression and count conditionals, it's got rewind. It has the standard locals view.
Rust support has improved pretty strongly in 2024 (before this year it just shelled out to lldb); the expression parser and, more importantly, the variable viewer are greatly improved since January.
It can do any single expression, and the results are better than lldb's, but it can't do multiple statements, and not everything in Rust can be written as one expression; you can't use {} here.
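To illustrate the limitation (my own sketch; the evaluator's exact grammar may differ): in Rust source a block is itself a single expression, but a debugger's evaluate box typically accepts only plain expression syntax like items.len() * 2, not the block form.

```rust
fn main() {
    let items = vec![1, 2, 3];

    // In Rust source, a `{ ... }` block is one expression...
    let doubled = { let n = items.len(); n * 2 };

    // ...but an evaluate box that only parses single expressions
    // would reject the block above while accepting `items.len() * 2`.
    assert_eq!(doubled, 6);
}
```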
Hmm, I've only seen that in some new-ish languages (like Zig, currently), which I think is a compiler bug when generating debug info. In C/C++ I see such random stepping only when trying to debug an optimized program.
Depends on the developer. In The Practice of Programming (https://en.m.wikipedia.org/wiki/The_Practice_of_Programming), Brian W. Kernighan and Rob Pike say they use debuggers only to get a stack trace from a core dump, and printf for everything else. You can disagree, but those are famously good programmers.
But what source code debuggers did they have available?
Other than gdb, I can't name any Unix C source code debuggers.
I believe they were working on Unix before GDB was created (Wikipedia says GDB was created in 1986: https://en.wikipedia.org/wiki/Gdb).
Plan 9 has acid, but going by the man page and the English manual, the debugger is closer to a CLI/TUI than a GUI.
Printf debugging excels in environments where there's already good logging. There I just need to pinpoint where in my logs things have already gone wrong and work my way backwards a bit.
You could do the same with a debugger by setting a breakpoint, but the logs better surface the key application-level decisions made on the way to the current bad state.
In a debugger I need to wind back through all the functions, which can get awful: many of them are likely-correct library calls that you have to skip over when going back in time, and they make up a huge portion of the functions called before the breakpoint.
I don't think it's impossible to do with a debugger, but logging sort of bypasses the process of telling the debugger what's relevant so it can hide the rest. The log statements may already be in your codebase, while there are no equivalent annotations there to help the debugger understand what's important.
To me, printf helps surface the relevant application-level process that led to a broken state, and debuggers help untangle hairy situations where things have gone wrong at a lower level, say missing fields or memory corruption; but these days, with safer languages, lower-level issues should be far less frequent.
---
On a side note, it doesn't help debuggers that going back in time was really hard with variable-length instructions. I might be wrong here, but it took a while until `rr` came out.
I do think that complexities like that resulted in spending too much time dealing with hairy details instead of improving the UI for debugging.
I really value debuggers. I have spent probably half my career solving problems that weren’t possible to solve with a debugger. When I can fall back on it, it helps me personally quite a bit.
I find them amazing; it's just that printf is unreasonably good given how cheap it is.
If I had the symbols, the metadata, a powerful debugger engine, and a polished UI, I'd take that over printf every day, but in the average situation printf is just too strong when you're fighting in the mud.
Why not both? For any sufficiently complex app you will have some form of logging either way, and then you can further pinpoint the issue with a debugger.
Also, debuggers can do live evaluation of expressions, set conditional breakpoints, or simply add additional logs themselves. They are a very powerful utility.
Your existing logs will tell you roughly where, and you just insert some more log lines to check the state of the data.
Whether a debugger will be faster or easier depends on how fast your build/run cycle is and how many different processes/threads are involved, but a lot of it just comes down to preference. Most of my time spent debugging, at least, goes into thinking about the probable cause and then choosing what state to look at.
Faster than pressing F9 (to set a breakpoint on the current line) and then F5 (to start into the debugger)?
Printf debugging has its uses, but they are very niche (for instance, when you don't have access to a properly integrated debugger). Logging, on the other hand, is useful, but logs are only one small piece of the puzzle in the overall debugging workflow; usually they're only for debugging problems that slipped into production, when your code runs on a server (as opposed to a user's machine).
It's very interesting. I've tried to observe myself. It seems that if I can set a breakpoint somewhere, examine state, and then see what the problem is, a debugger is great.
If, however, it's something where I need to examine state at multiple points in the execution, I lose track in my mind of the state I've seen before. This is where print debugging shines: I can see how state evolved over time and spot trends.
I'm not against printf at all; my lifetime commit history is evidence of that. Do you also think that, in the case where a core dump doesn't exist, printf is faster? Sincere question. I'm having an internal argument with myself about it at the moment, and some outside perspective would be most welcome.
The only time I use a debugger with Rust is when unsafe code in some library crate messes up. My own code has no "unsafe". I have debug symbols on and a panic catcher that displays a backtrace in a popup window. That covers most cases.
Rust development is mostly fixing compile errors, anyway. Once it compiles, it often works the first time. What matters is compile time for error compiles, which is pretty good.
Incremental compile time for my metaverse client is 1 minute 8 seconds in release mode. That's OK. Takes longer to test a new version.
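A minimal sketch of that kind of panic catcher, using std::panic::set_hook and std::backtrace::Backtrace; the popup part is app-specific, so this version just prints to stderr:

```rust
use std::backtrace::Backtrace;
use std::panic;

fn main() {
    // Install a hook that runs on any panic in the process.
    panic::set_hook(Box::new(|info| {
        // force_capture() collects frames regardless of RUST_BACKTRACE;
        // debug symbols are needed to resolve readable names.
        let bt = Backtrace::force_capture();
        // A GUI client would route this string into a popup window instead.
        eprintln!("panic: {info}\nbacktrace:\n{bt}");
    }));

    // Demo: trigger a panic so the hook fires.
    let empty: Vec<u32> = Vec::new();
    let _ = empty[0]; // index out of bounds
}
```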
Debuggers are not only useful for actual debugging as in "finding and fixing bugs"; they are basically interactive program state explorers. Also, "once it compiles, it works" is true for every programming language unless you're a complete newbie. The interesting bugs usually only manifest after your code is hammered by actual users and/or real-world data.
Rust protects against undefined behavior. This is enough that programs either panic in a well-defined way, or continue to run well enough that logging works.
I use the debugger fairly regularly, though I'm on a stack where friction is minimal. In Go with VS Code, you can just write a test, set your breakpoints, hit "debug test", and you're in there in probably less than 20 seconds.
I am like you, though: I don't typically resort to it immediately if I think I can figure out the problem with a quick log. And at times when I've not had access to a debugger with good UX, this tipping point can get pushed quite far out.
Debugging in Rust is substantially less common for me (and probably not only for me) because it is less often needed and more difficult; many things that are accessible in the interpreted world don't exist in a native binary.
I do care about usable tracebacks in error reports though.
The main challenge with debuggers in Rust is mapping the data correctly onto the complex type system. For this reason I rarely use debuggers, because dbg! is superior in that sense.
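For anyone unfamiliar: dbg! prints the file, line, source expression, and Debug-formatted value, then passes the value through, so it can wrap a subexpression in place. A quick sketch:

```rust
fn main() {
    let xs = vec![1, 2, 3];

    // dbg! returns its argument, so it can wrap part of a chain in place.
    // Prints something like `[src/main.rs:6:38] x * 2 = 2` per element.
    let sum: i32 = xs.iter().map(|x| dbg!(x * 2)).sum();

    dbg!(sum); // e.g. `[src/main.rs:9:5] sum = 12`
}
```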
println debugging is where everyone starts. Some people never graduate to knowing how to use a debugger.
Debugging through log data still has a place, of course. However, trying to do all of your debugging through println is so much harder, even though it feels easier than learning to use a debugger.
I wonder, do you use a separate debugger, or a debugger that's integrated into your IDE? "Reaching for a debugger" is just pressing F5 in an IDE.
E.g. I keep wondering whether the split between people who can't live without debuggers and people who rarely use them is actually the split between people who use IDEs and people who don't.
Data point: I develop in Java and I use IntelliJ. I run everything in debug mode. So it’s really easy for me to enter the debugger.
But I find that if I have to step around more than a handful of times to find the issue, then I forget what happened five steps ago. So I reach for print debugging quite often.
To be fair, if your code is multithreaded and sensitive to pauses, it becomes harder to debug with a debugger.
Ultimately, if you have a good logging setup and kinda know where the issue is, a quick log message could be faster than debugging if all you want to do is look at a variable's value.
Logging can change timing, though. There are too many cases where an added log statement "fixed" a race condition simply by altering the timing or adding some form of synchronization inherent in the logging library.
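A minimal Rust sketch of the effect (my own illustration): a check-then-act race where adding a log line shifts the interleaving, since println! also takes the process-wide stdout lock:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// Check-then-act race: both threads can pass the check before either stores.
static CLAIMED: AtomicBool = AtomicBool::new(false);

fn try_claim(id: u32) {
    if !CLAIMED.load(Ordering::SeqCst) {
        // Uncommenting this line shifts the timing and serializes the
        // threads on the stdout lock, which can mask the race entirely:
        // println!("worker {id} saw unclaimed");
        CLAIMED.store(true, Ordering::SeqCst);
        println!("worker {id} claimed"); // may print twice
    }
}

fn main() {
    let handles: Vec<_> = (0..2u32)
        .map(|id| thread::spawn(move || try_claim(id)))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```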
printf/println debugging works if you wrote the code or have a good idea of where to go.
I frequently find myself debugging large unfamiliar code bases, and typically it’s much easier to stick a breakpoint in and start following where it goes rather than blindly start instrumenting with print statements and hoping that you picked the right code path.
I also don't get it; debuggers as an integral part of the programming workflow are a productivity multiplier. It does seem to be a fairly popular opinion in some programmer circles that step-debugging is useless, but I guess they never really used a properly integrated debugger to begin with (not a surprise, tbh, if all they know is gdb in the terminal).
That is why I found it so great that Carmack's opinion on debuggers is similar to ours; at least there is some hope of educating the crowds that worship Carmack's achievements.
Is that crowd getting bigger or smaller though? When he worked for id Software, he was pretty popular in my circle of friends, because we were playing ioquake3 forks that we kept making mods for and so forth.
I trained in the cout school of debugging. I can use a debugger, and sometimes do, but it's really hard to use a debugger effectively when you're also dealing with concurrency and network clients. Maybe one day, I'll learn how to use one of the time traveling debuggers and then I can record the problem and then step through it to debug it.
In C++, for debugging a mid-sized app, gdb will sometimes take up to 5 minutes to start (assuming no remote symbol cache is used). On fairly powerful hardware: an i7 13000-something with 64 GB of RAM.
I have time to do 15 compile-edit-run cycles adding prints in the span it takes to even reach main(). (And I really tried every optimisation: gdb-index, caches, split DWARF, etc.) It just is absolutely mind-bogglingly slow and will sometimes even crash when reaching a breakpoint. Same for lldb. Those are just not reliable tools. And I'm not even talking about the Visual Studio debugger, which I once timed at 18 minutes from "start debugging" to actually showing a window, with all the symbol server stuff.
The VS story does not match my experience on AAA game projects. First, VS has always been able to start debugging without loading any symbols at all. Second, it can load each module on demand. Third, the local file cache for symbol servers can be very warm (i.e. have most of the needed symbols in RAM). Fourth, if your project is stuck on an old VS version, you can in many cases still debug with the latest version of the debugger; for us there is no limit on how many versions of VS a dev has on their PC. That might only be available if the org has volume deals with MS, though.
Downloading symbols from a network symbol server for the first time takes a while, but it's not part of the debugging cycle, at least after the first run.
I work in data engineering. I tend to do println debugging because the production data sets are not available from my machine. I tend to prefer REPL- or notebook-driven development from a computer that is connected to the production environment.
I came here to write exactly this... if I had been drinking something, I would have spit it everywhere laughing when I read it.
I guess 'many developers' here probably refers to web developers who don't use the debugger, because it's mostly useless/perpetually broken in JS land? I rely heavily on the debugger; I can't imagine how people work without one.
The call stacks are hard to read, and watching variables across context boundaries is difficult. Yeah, you can pause the program with the debugger, but doing so doesn't give much of a picture of how the program is functioning. I've found that seeing the prints from all the 'threads' gives a better sense of what's happening.
From NP++, you could just Ctrl+Shift+F to "find in files" and it'll be quick about it, but I personally would grep from the root of the project. I usually keep a handful of command-line tabs open in Console2: one for git, one for grep, one for build commands, others running local services, etc. Anyway, the reason for this is the mental map / spatial geography of a project: enough repetition cd'ing through folders and seeing file paths while grepping helps me visualize the actual locations of things, which helps me grasp the entire structure of a project.
Meanwhile, in VS Code etc you can just hover over something and click to go directly to it, which is cool, but it's kind of like teleporting instead of actually driving to the destination enough to learn the roads.
I do a similar thing with git PRs. For example, if you build something that follows a bundled pattern (e.g. a component that has frontend, backend, and data-related files, plus naming conventions), having a clean and complete reference PR to revisit when making new components helps me ensure I don't miss anything and stay consistent. I usually view these in-browser, since GitHub/Bitbucket/GitLab all have nice interfaces to see what files you need, where they go, how they're named, etc.
I do the same thing (though with Vim as my editor instead of NP++). Grep is seriously underrated. (well technically I use my own grep replacement[1] instead for a few reasons, but plain old grep can get the job done very well)
What are some of the challenges involved with international hiring in a remote environment? I work at a fully remote startup with ~200 employees. We hire from a couple dozen countries but I know there are fairly significant barriers whenever we add a new one. What are some of those challenges? Are they getting more streamlined?
From an immigration standpoint, there are no issues with U.S. companies employing foreign nationals who are working remotely OUTSIDE the U.S.; U.S. immigration doesn't come into play unless and until the individual will be working IN the U.S. For employees working remotely in the U.S., while this needs to be noted and referenced in any immigration application, it doesn't really change the immigration options and paths.
Ooh this is fun trivia - originally dragons had a hard time with human names, so it became a tradition for dragon riders (particularly males) to adopt the dragon’s pronunciation as an honorific. So “Simon” becomes “S’mon.”
You say it with a bit of a slur, suhMON or fuhLAX, where the first syllable is not only unemphasized but uttered as quickly as possible and then slurred into the next. F'lar really is just "fuhLAR".
Unlikely, since no English speaker would be able to pronounce that cluster. The odds are overwhelming that it represents nothing at all, just like the apostrophe in "don't".
In Finnish, the apostrophe marks a syllable break between instances of the same vowel, and sounds like a very short pause. Maybe they use Finnish spelling rules on this planet.
Example word: vaa'an, the genitive of "vaaka" ("scale").