Compressed JSON is good enough and requires less human communication initially.
Sure, it will blow up in your face when a field goes missing or a value changes type.
People who advocate paying the higher cost ahead of time to perfectly type the entire data structure AND propose a process to perform version updates to sync client/server are going to lose most of the time.
The zero cost of starting with JSON is too compelling even if it has a higher total cost due to production bugs later on.
When judging which alternative will succeed, lower perceived human cost beats lower machine cost every time.
This is why JSON is never going away, at least not until it gets replaced by something with an even lower human communication cost.
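A minimal sketch of both sides in TypeScript (the payload shape and field names are made up for illustration): the untyped read blows up wherever the missing field is finally touched, while paying the cost up front moves the failure to the boundary.

    // Untyped: JSON.parse returns `any`, so nothing checks the shape.
    const resp = JSON.parse('{"user": {"id": 42}}'); // "name" quietly went missing server-side
    try {
      console.log(resp.user.name.toUpperCase());
    } catch (e) {
      // TypeError: Cannot read properties of undefined (reading 'toUpperCase')
      console.error("blew up in production:", e);
    }

    // Paid up front: validate at the boundary so the failure is explicit and early.
    interface User { id: number; name: string }
    function parseUser(raw: unknown): User {
      const u = (raw as any)?.user;
      if (typeof u?.id !== "number" || typeof u?.name !== "string") {
        throw new Error("payload does not match the expected User shape");
      }
      return { id: u.id, name: u.name };
    }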
Print debugging is fine and all, but I find it pays massive dividends to learn how to use a debugger and actually inspect the values in scope rather than guessing which are worth printing. Print debugging is also useless when you need to debug a currently running system and can't change the code.
And since you need to translate it anyway, there's not much benefit in my mind to using something like msgpack, which is more compact and self-describing; you just need a decoder to convert to JSON when you display it.
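A quick sketch of what that decoder step looks like (TypeScript, assuming the @msgpack/msgpack package, though any MessagePack library exposes the same encode/decode pair):

    import { decode, encode } from "@msgpack/msgpack";

    // Pretend this buffer arrived over the wire or was pulled from a log.
    const wire: Uint8Array = encode({ id: 42, tags: ["a", "b"], active: true });

    // One decode call gets you back to plain objects, and from there
    // JSON.stringify gives the familiar human-readable view.
    console.log(JSON.stringify(decode(wire), null, 2));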
I'm not guessing. I'm using my knowledge of the program and the error together to decide what to print. I never find the process laborious and I almost always get the right set of variables in the first debug run.
The only time I use a debugger is when working on someone else's code.
I didn't say it's some arcane skill, just that it's a useful one. I would also agree that _reading the code_ to find a bug is the most useful debugging tool. Debuggers are second. Print debugging third.
And that lines up with the appeals to authority there, both the ones that are good and the ones that are bad (edited to be less toxic).
Even though I'm using the second person, I don't actually care about convincing you in particular. You sound pretty set in your ways and that's perfectly fine. But there are other readers on HN who are already pretty efficient at log debugging, or are developing the required analytical skills, and for those people I wanted to debunk the unsubstantiated and possibly misleading claims in your comments about some superiority of using a debugger.
The logger vs debugger debate is decades old, with no argument establishing the latter as a clear winner; on the contrary. An earlier comment explained the log debugging process: carefully thinking about the code and choosing good spots to log the data structure under analysis. The link I posted confirms it as a valid methodology. Overall code analysis is the general debugging skill you want to sharpen. If you have it and decide to work with a debugger, your sessions will look like log debugging, which is why many skilled programmers choose to revert to just logging after a while. Usage of a debugger then tends to be reserved for situations where the code itself escapes you (e.g. bad code, intricate code, foreign code, etc.).
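Concretely, the kind of thing I mean (TypeScript, with a hypothetical function and fields): you form a hypothesis about where the data goes wrong and log the structure right around that one spot, nothing more.

    // Hypothesis: the totals go wrong inside applyDiscounts,
    // so log the structure immediately before and after that spot.
    function applyDiscounts(order: { items: { price: number }[]; total: number }) {
      console.log("before discounts:", JSON.stringify(order));
      order.total = order.items.reduce((sum, i) => sum + i.price * 0.9, 0);
      console.log("after discounts:", JSON.stringify(order));
      return order;
    }

    applyDiscounts({ items: [{ price: 10 }, { price: 20 }], total: 30 });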
If you're working on your own software and feel that you often need a debugger, maybe your analytical skills are atrophying and you should work on thinking more carefully about the code.
Debuggers are great when you can use them. Where I work (financial/insurance) we are not allowed to debug on production servers. I would guess that's true in a lot of high security environments.
So the skill of knowing how to "println" debug is still very useful.
It is only "human readable" since our tooling is so bad and the lowest-common-denominator tooling we have can dump out a sequence of bytes as ASCII/UTF-8 text somewhat reliably.
One can imagine a world where the lowest-common-denominator format is a richer structured binary format that every system has tooling to work with out of the box, and that would be considered human readable.
> People who advocate paying the higher cost ahead of time to perfectly type the entire data structure AND propose a process to perform version updates to sync client/server are going to lose most of the time.
That's true. But people would also rather argue about security vulnerabilities than get it right from the get-go. Why spend an extra 15 minutes of effort during design when you can spend 3 months revisiting the ensuing problem later?
I've gone the all-JSON route many times, and pretty soon it starts getting annoying enough that I lament not using protos. I'm actually against static types in languages, but the API is one place they really matter (the other is the DB). Google made some unforced mistakes on proto usability/popularity though.
I once converted a fairly large JS codebase to TS and I found about 200 mismatching names/properties all over the place. Tons of properties that had been null suddenly started getting values.
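Roughly the kind of thing the compiler starts catching (illustrative names, not the real codebase):

    interface Order { customerId: string; total: number | null }

    const o: Order = { customerId: "c-1", total: 99 };

    // In the untyped version, a misspelled property like `o.customerID`
    // just evaluated to undefined at runtime and nobody noticed;
    // with the interface above, tsc rejects the typo at compile time.
    console.log(o.customerId, o.total);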
Sounds like this introduced behavior changes. How did you evaluate whether the new behavior was desirable or not? I've definitely run into cases where the missing fields were load-bearing in ways the types would not suggest, so I never take it for granted that a type error in prod code = a bug.
The most terrifying systems to maintain are the ones that work accidentally. If what you describe is actually desired behavior, I hope you have good tests! For my part, I’ll take types that prevent load-bearing absences from arising in the first place, because that sounds like a nightmare.
Although, an esoteric language defined in terms of negative space might be interesting. A completely empty source file implements “hello world” because you didn’t write a main function. All integers are incremented for every statement that doesn’t include them. Your only variables are the ones you don’t declare. That kind of thing.
Makes sense. That sounds like a good reason to do it. Unfortunately I've also seen people try to add TypeScript or various linters without adequate respect for the danger of changing code that seems to be working but looks like a bug, especially when it requires manual testing to verify.
It costs time, distracts some devs, and adds complexity for negligible safety improvement. Especially if/when the types end up being used everywhere because managers like that metric. I get using types if you have no tests, but you really need tests either way. I've done the opposite migration before, TS to JS.
Oh, I forgot to qualify that I'm only talking about high-level code, not things you'd use C or Rust for. But part of the reason those languages have static types is that they need to know sizes on the stack at compile time.