That's a good assessment of the books, and one I put into the introduction. It's weird because what is great for a beginner with zero knowledge is incredibly painful for someone who's experienced. I think that's the key with my book's success and also why a lot of professional programmers seem to completely dislike it. It's also why I'm very honest and upfront about this right in the beginning.
You're welcome. I honestly write the books for people like you, and thankfully the world of programmers and the world of regular people really don't intersect much.
You're conflating internet interactions with real interactions.
I don't drink at all. Never done drugs and have never been drunk or anything. When I go to bars with friends I find that as the night goes on people become complete drunken idiots and are impossible to deal with. There's no polite way to tell a guy to stop talking to you about his chia pet collection or to keep someone from puking on your shoes or trying to start a fight. Best solution is to just not go to bars, which is what I do. If I need to fit in for some reason, I usually just pretend I'm kind of drunk like everyone else, which I haven't done since I was in my early 20s.
Twitter, these HN comments, IRC, the whole internet is like a massive bar full of drunk idiots. Especially toward the end of the day. Something about the internet makes people turn off their rational mind and just spew hate, stupidity, propaganda, and lies. In this case, it's the same interactions and I get tired of it. You can't tell a belligerent drunk to politely stop smashing your car any more than you can tell a belligerent twitter user to stop talking to you.
So, your comment amounts to judging my interactions with people professionally based on my interactions with a bunch of drunk idiots at a bar. Nobody is a saint, and expecting me or anyone else to take abuse and poor behavior like Jesus before you'll think they're a good person is wrong.
And, the fact that you are actually spending your morning (day/night) telling people how I'm a fucked up asshole because of how I deal with fucked up assholes kind of says more about you than me.
I disagree with you about the internet. My bet is we're still learning what proper etiquette on the internet is. You have a great point about people not knowing when to let up online, but I also look around and see both the net etiquette evolving and the commentary on it evolving and to me that is a sign we're getting better.
Why am I spending my time today on this conversation? Because you've become reasonably influential (and certainly for good reason: Mongrel, Learn the Hard Way, etc.), and because of your influence someone might say, hey, if I'm a good programmer it might be reasonable to have a short fuse with people and tell them to go fuck themselves, because Zed Shaw does. And I want to use this space to say, hey, I hope we can be nicer to each other. That's it. I really don't want it to be about anyone in particular; I just hope, since programmers work with each other and write about each other's work on the internet, we can be considerate and try to be nice to each other even when we're being critical.
I am contrarian and rude, but you most likely worship quite a few people in tech who are also contrarian and rude, yet you say nothing. I bet there's a CEO you admire who is even worse than me, and you tout his words like they're gospel.
I personally would love nothing more than to have the industry flush everyone who behaves like that totally out of it, but as long as people like you hold those with no power to different standards than those with power, it'll never change.
No, the relevant fact in RiaG is that DHH had been lying to people for years claiming that "rails scales" when he had to handle 400 restarts a day of his process.
It's interesting that you'd use an anonymous account to sling some slander, but I'll answer you:
Yep, that comment thread is great and people should read it for an explanation of how completely insecure C is. It made me realize that nobody can teach C safely. Not me, not K&R, nobody. The language is completely unsafe by design. If you think K&R could, you should realize that they fixed the code in the book numerous times to make it more secure during its 40+ printings.
Based on that, I killed my darlings. I should have never started this book as a "C book". I rewrote about 50% of it to instead focus on the things a mere mortal like me can teach:
0. How to learn any programming language quickly with some tricks I know.
1. Secure programming and defensive coding skills, which a broken language like C is perfect for teaching.
2. Testing and reliability.
3. Most of the C I've found safe and useful, and how to avoid UB when possible.
4. Algorithms and how to apply them.
5. And finally building projects as small challenges to get better at C.
So everyone was right, and I adapted the book to reflect that. I also started a project, which hopefully I'll find time to do, that is going to catalog all of the UB in C, write a unit test for each one, and then attempt to assess the security failures each one can cause:
I'm currently finishing up a book, but this project interests me because what I've seen is most "professional" C programmers end up pulling out UB whenever they are called out on a secure coding practice they fail at. I think a good catalog of how to cause security failure with C UB would be instructive to everyone.
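For the curious, here's a minimal sketch of what one entry in such a catalog could look like; the harness, names, and the specific UB case (signed integer overflow) are my own illustration, not anything from the actual project:

    /* Hypothetical catalog entry: signed integer overflow is UB.
       Build with and without -fsanitize=undefined to see it reported. */
    #include <assert.h>
    #include <limits.h>
    #include <stdio.h>

    /* Looks like a harmless "would a + b overflow?" check, but if a + b
       actually overflows the behavior is undefined, and the compiler may
       assume it never happens and delete the check entirely. */
    static int overflow_check_broken(int a, int b)
    {
        return a + b < a;               /* UB when a + b overflows */
    }

    /* Defined-behavior version: test before doing the arithmetic. */
    static int overflow_check_safe(int a, int b)
    {
        return b > 0 && a > INT_MAX - b;
    }

    int main(void)
    {
        /* The unit test for this catalog entry. */
        assert(overflow_check_safe(INT_MAX, 1) == 1);
        assert(overflow_check_safe(1, 2) == 0);

        /* The broken version may "pass" or may be optimized into nonsense
           depending on compiler and flags -- which is the point. */
        printf("broken check says: %d\n", overflow_check_broken(INT_MAX, 1));
        return 0;
    }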
And then we can all just stop using C. It's terrible.
Now that you have this new information, hopefully you'll update your slander.
I'm just commenting from the position of an observer who has found that your general attitude undermines the work you do, irrespective of whether you are right or wrong about the issue at hand.
We are all wrong at times. There is no shame in that. But finally admitting fault after months of intransigence (I'm talking about the period from the launch of the book, when people first started criticising it, until that thread) doesn't excuse your behaviour prior to that. With a bit more humility from the beginning none of this would have happened. If you are going to pick a fight with people, C language lawyers are probably about the worst target.
I don't care about "winning" the argument - just about everybody already agreed that C is highly unsafe, and personally I think K&R is quite outdated now, since it doesn't dwell as much as it should on all the dangerous and difficult aspects of C. Much like C itself, it generally assumes superhuman competence on the part of the reader.
Putting that aside, I do think we are lacking in resources that teach people about the many pitfalls of C in one place (if only to scare people away from the idea of using C for anything network facing). Especially in an era when a lot of people learning C are probably already familiar with the basic syntax and control flow, through knowledge of Java or other languages, and will thus probably be tempted to skim through beginner C books. People coming from that direction probably find C deceptively familiar, and aren't aware of a bunch of things like the undefined behaviour of certain integer overflows and shifts, or the strict aliasing rules, or possibly even reading uninitialized variables.
John Regehr has a lot of good blog posts on this topic:
As you can see in some of those examples UB can be very difficult to spot, even for experts like compiler engineers or crypto developers who are intimately familiar with the rules. Also some things are just plain tricky to do correctly and efficiently (e.g. http://blog.regehr.org/archives/1063) and in the past compilers were much less aggressive about optimisations that affected code containing undefined behaviour. So you could get away with it, and incorrect code became the accepted way to do some things. This resulted in a lot of gnashing of teeth and a few well known security vulns when old code started to break with newer compilers.
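To make the "deceptively familiar" point concrete, here's a small illustration (mine, not from Regehr's posts) of the strict aliasing trap: it compiles cleanly and often appears to work, yet the optimizer is allowed to assume the two pointers never refer to the same object:

    #include <stdio.h>
    #include <string.h>

    /* UB: storing through a float* and reading the same bytes back as an
       int violates strict aliasing, so the compiler may assume i and f
       never overlap and legally return the stale value 1. */
    static int pun_broken(int *i, float *f)
    {
        *i = 1;
        *f = 2.0f;      /* if f aliases i, this store may be "invisible" */
        return *i;
    }

    /* Defined alternative: copy the bytes with memcpy (assumes int and
       float are the same size, as they are on common platforms). */
    static int pun_safe(float f)
    {
        int i;
        memcpy(&i, &f, sizeof i);
        return i;
    }

    int main(void)
    {
        int x = 0;
        /* Aiming both pointers at x is the aliasing violation. */
        printf("broken: %d\n", pun_broken(&x, (float *)&x));
        printf("bits of 2.0f: %d\n", pun_safe(2.0f));
        return 0;
    }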
Well, this is fun, folks, but frankly, you all have proven just how conservative programmers are. My chapter advocating something as "revolutionary" as including the lengths of strings when you process them has received more hate mail than anything I've written. And I'm the guy who gets death threats because I don't like Ruby.
If you are reading this and saying, "This guy's wrong about including the lengths of strings!" then I ask: why are you using any modern language? Nearly every language in use today includes the lengths of strings and backing buffers in its string implementation, and you all use that and appreciate it, but when I advocate it, suddenly it is heresy and deserving of vitriol.
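For what it's worth, the idea is no more exotic than what those languages (and C libraries like bstring) do internally: the length travels with the bytes instead of being rediscovered by scanning for '\0'. A rough sketch of the concept (my illustration, not the book's code):

    #include <stdlib.h>
    #include <string.h>

    /* A length-carrying string: the length is stored next to the buffer. */
    struct lstring {
        size_t len;     /* bytes actually in use */
        char  *data;    /* len bytes, plus a '\0' for C interop */
    };

    /* Hypothetical constructor: measure the C string once, then remember. */
    static struct lstring *lstring_from_cstr(const char *s)
    {
        struct lstring *ls = malloc(sizeof *ls);
        if (!ls) return NULL;
        ls->len = strlen(s);
        ls->data = malloc(ls->len + 1);
        if (!ls->data) { free(ls); return NULL; }
        memcpy(ls->data, s, ls->len + 1);
        return ls;
    }

    static void lstring_free(struct lstring *ls)
    {
        if (ls) { free(ls->data); free(ls); }
    }

    int main(void)
    {
        struct lstring *s = lstring_from_cstr("hello");
        /* Every operation from here on can trust s->len instead of scanning. */
        lstring_free(s);
        return 0;
    }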
I personally don't care what you all think, which I know insults your pride at being the smartest people in the room, but until you are willing to admit that your hatred of that chapter is based entirely on nostalgia and not on the merits of my fairly simple claim that K&R (or any) C code is error prone, you are not going to advance the state of the art any further than the 1970s.
This is what makes me sad about computing today. You are all desperately clinging to the notion that you are radical free-thinking futurists while clinging to reverence for the past and resisting any change. If something as simple as a thought experiment that looks objectively at how things are done and tries something better makes you angry, then you are not free-thinking futurists.
Anyway, enjoy your day folks. I'm going to go do some painting.
That'd be up to my publisher (A/W) but I do believe they roll out French versions of my book depending on market demand. How that demand happens I have no idea.
> The worst possible scenario for the safercopy() function is that you are given an erroneous length for one of the strings and that string does not have a '\0' properly, so the function buffer overflows.
Your argument is that the safercopy() function is "safer" in that it is guaranteed to terminate regardless of whether the underlying buffer has a NUL byte in it. While that's true, it's sort of missing the point a bit, I think. The unsafe copy() function wasn't primarily unsafe because there was no guarantee it would terminate -- it was unsafe because it corrupts a bunch of memory, exposing you to a wide range of insecurities (the least of which is that your program might crash). safercopy() is still prone to that behavior if the lengths you pass in aren't accurate. While it is guaranteed to only corrupt n bytes of memory rather than an arbitrarily large number of bytes, the damage might as well already be done by the time you corrupt those n bytes. So to answer your question:
> How do you think an alternative copy function that uses lengths would have buffer overflows?
It's unsafe in exactly the same way that the unsafe copy() function is: if the arguments you pass into the function are incorrect/don't point to the data you think they point to, you'll corrupt memory. Now, you could make the argument that it's much easier to just remember the lengths of all the buffers you allocate than it is to remember to NUL-terminate all your C strings -- I would agree -- but I don't know if the article does a great job of explaining that.
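For readers who haven't seen the chapter, the two functions being argued about have roughly these shapes (my paraphrase of the idea, not the book's exact code):

    /* K&R-style copy: scans `from` until it hits '\0'. If that byte is
       missing, or `to` is too small, it writes past the end of a buffer. */
    void copy(char to[], char from[])
    {
        int i = 0;
        while ((to[i] = from[i]) != '\0')
            i++;
    }

    /* Length-based copy: guaranteed to stop after at most to_len bytes,
       but if the lengths passed in don't describe the real buffers it can
       still stomp on memory it doesn't own -- the parent's point. */
    int safercopy(int from_len, char *from, int to_len, char *to)
    {
        int i;
        if (from_len < 0 || to_len <= 0 || to_len < from_len + 1)
            return -1;                  /* reject obviously bad lengths */
        for (i = 0; i < from_len && from[i] != '\0'; i++)
            to[i] = from[i];
        to[i] = '\0';
        return i;
    }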
> It's unsafe in exactly the same way that the unsafe copy() function is
No, that's the logic error every programmer makes. The copy() function is always wrong, because it can't confirm that the string has the right length without scanning the string itself, and that scan is exactly what causes the error.
With my function I can go to as great a length as I want to confirm that the string is actually as long as I say it is. I can't mitigate every possible error of misuse, but the errors safercopy() can have are much smaller than copy().
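For instance (my own illustration of the kind of check meant here, not code from the book), a caller that also tracks allocation sizes can cross-check a claimed length before any copying happens, which has no equivalent for plain copy():

    #include <string.h>

    /* Hypothetical sanity check: is claimed_len consistent with both the
       allocation size and the terminator? Reading s[claimed_len] is in
       bounds because claimed_len < alloc_len is verified first. */
    int length_is_plausible(const char *s, size_t claimed_len, size_t alloc_len)
    {
        return claimed_len < alloc_len        /* fits inside the allocation */
            && s[claimed_len] == '\0'         /* ends where we said it ends */
            && strlen(s) == claimed_len;      /* and no '\0' hiding earlier */
    }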
Your argument effectively says that because you can exploit either one with a general "UB" error, they have the same size and classification of errors. That's invalid, and I've shown why in my writing.
If the lengths are wrong. You said that the function shouldn't assume that strings are null-terminated correctly -- why should it assume that the lengths are correct?
People are downvoting you, but as the author of this I can say you're right. Not necessarily lisp, but using any language that doesn't have the fatal flaw of totally broken strings is better.
C is still a large historical influence on many languages, and knowing it makes people more capable as programmers. That's why I teach it, and I use it because I'm just old like that. Once someone knows C they can fix all kinds of problems, and then can learn a huge number of languages fairly easily.
But these days, without a solid reason for using it, I'd avoid C and use a better alternative.