It’s amazing to me not only how much things haven’t changed (many still use a mouse, joystick, or keyboard) but also how much they have.
A dedicated keyset for frequent functions like this is certainly cool. There was a time when it was cool to have a numpad and function keys, but then most users shifted to modifier keys (Shift, Ctrl, Option, and later Cmd) on their keyboards. We started with joysticks, then mice, then trackballs, haptic joysticks, light pens, touchpads, then haptic touch screens. We’ve had some voice interaction, then some immersive VR and augmented reality, then interaction with AI that can hear and see things, and some movement in brain interfaces over the years.
What is next?
(I apologize for leaving many things out and getting them in the wrong order. Just going on memory.)
Some things that don’t measure whether a developer is “good”:
- # LoC added, changed, removed
- number of points earned in a sprint, when those points aren’t quantitatively indicative of business value (and they never are)
- number of on or off-the-clock hours schmoozing with others to solidify relationships with the business and “play the game”
- number of times they “sound like good developers / intelligent people” in meetings or presentations
- number of weeks they spent on really complex problems when they could have delivered incremental value much earlier and much more quickly
- number of solutions they provided the company quickly while leaving many times more LoC to maintain
- number of hours they spent honing code, formatting, or updating to the latest versions for their own edification and preferences rather than focusing on what would actually help the team and the business
> Some things that don’t measure whether a developer is “good”:
> # LoC added, changed, removed
Everyone loves to say this, and yet at every company I've worked at, the top developers just cranked out code. High quality, performant code.
And at every company I've worked at, the lowest performers would take a week to write 100 lines of basic Python.
"Oh, but khazhoux, those 100 lines were really very complex!" No, not actually.
"But won't people just pad their code with comments to increase their LOC?" 1) I've never seen anyone bother, 2) I'm not saying "managers should count LOC"... I'm saying managers should look to see how much code each developer is actually committing to the codebase. If someone isn't committing much code, but talks like they're writing a lot of code, then you may have a problem.
Honestly, somewhere along the way people seem to forget that software is made from code.
I would never suggest a manager stack-rank their team by LOC, but the notion that the code people write means nothing is preposterous.
You can make massive LoC changes by reformatting everything or just adding superfluous code or changes.
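The inflation is easy to demonstrate. As a sketch (with a hypothetical two-line function), a diff counts a purely cosmetic reformat as every line both removed and added, even though behavior is unchanged:

```python
import difflib

# Hypothetical example: the same function before and after a purely
# cosmetic reformat (whitespace only, no behavior change).
before = [
    "def area(w,h):",
    "    return w*h",
]
after = [
    "def area(w, h):",
    "    return w * h",
]

diff = list(difflib.unified_diff(before, after, lineterm=""))
added = sum(1 for l in diff if l.startswith("+") and not l.startswith("+++"))
removed = sum(1 for l in diff if l.startswith("-") and not l.startswith("---"))

# Every line counts as both removed and added, so a repo-wide reformat
# can register thousands of LoC "changed" with zero functional change.
print(added, removed)  # → 2 2
```

The same effect is why tools like `git log --numstat` report large add/remove counts for formatting-only commits.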
You can rip out a lot of code while rewriting to simplify, and in the process lose a ton of business functionality that was actually used, or force process changes on the business that might not be to its benefit.
You can, on your own or with AI, write many LoC, and maybe they provide business value, which seems to align with being a “good developer” — but someone has to maintain those LoC, so you have to weigh the value to the end user against the cost to the team. Is it OK for the team to have all that extra code to maintain? How often will those changes require further valid changes to get the code working? Will other devs just rewrite it, and are their changes good?
So in the end, no, LoC added/changed/removed is not a good indicator of added overall business value when weighing both the business value to the user and the ongoing maintenance time that ensues, even though “good developers” along with “average developers” and even “bad developers” may have high LoC added/changed/removed counts.
During what some come to think of as their peak years, they still don’t see things this way. But as we get older and more experienced, we realize that sometimes new or old crappy code is OK, that sometimes we should do a better job if it makes sense for the business and team, that many people can contribute in their own way if those are the people and resources we have, that things may change, and that overall there is a limit to what humans can do.
If you base performance on LoC and alter the team accordingly, you may lose great developers. You may also introduce volatility and risk ending up with a codebase that costs the business more.
But embracing change and working fast and loose may also be important, so it depends.
You're arguing a different point. I never said a manager should simply count the number of LOC changes as a productivity metric.
I said that good developers write a lot of code. And I'm not talking about senior developers who now do mostly advising/review/architecture work (and don't code much anymore). And sure, sometimes someone takes a long time for a critical few-line change, but that does not happen every day.
I find it exhausting, frankly, how much pushback this simple concept gets around here. It seems to be a reflection of the gigantic team sizes that are common these days, and the modern tolerance for low-output (but still highly paid!) developers. Maybe the popularization of 2-week sprints 15-20 years ago corrupted everyone into thinking that everything should take that long, minimum.
People are shockingly ok with taking 3 days to add an argparse block to a Python script, or half a week to implement a single HTTP call. It's nuts!
I perused the three posts on this blog and believe an LLM was heavily used, because I use one daily, and this is the way its output reads.
The leisure the author speaks of may be their own, and the research they speak of may today be done by a machine.
I think the intent is good, and if you as the reader get insight from it, then it is still valid, but I cannot read it, because I don’t feel a thread of consciousness helping me experience life with them.
If an author chose to instead pair with an LLM to research on their own and write themselves about it, perhaps it would be different.
Why do these posts keep getting to the front and even to the top of the HN feed? We are no better than machines, I guess.