
I've always sort of thought of TDD as a bit of a software development methodology cryptid. At best you get shaky camcorder footage (although on closer investigation it sure looks like Uncle Bob in a gorilla suit).

Lots of shops claim to do TDD, but in practice what they mean is that they sometimes write unit tests. I've literally never encountered it outside of toy examples and small academic exercises.

Where is the software successfully developed according to TDD principles? Surely a superior method of software development should produce abundant examples of superior software? TDD has been around for a pretty long time.



In my current company, I'm practicing TDD (not religiously, in a reasonable way). What this means for us (for me, my coworkers and my manager):

1. No bug is ever fixed before we have at least one failing test. The test needs to fail, and then turn green after the bugfix (a sketch of this red-to-green cycle follows after this list). [1]

2. No new code is ever committed without a test specifically testing the behavior expected from the new code. The test needs to fail, and then turn green after the new code.

3. If we're writing a brand new service/product/program etc., we first create a spec in human language, then turn the spec into tests. Formally speaking, this doesn't mean "write tests first, code later," because we write tests and code at the same time. It's just that everything in the spec has to have an accompanying test, and every behavior in the code needs to have a test. This is checked informally.
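For illustration, here is roughly what the red-to-green cycle in points 1 and 2 looks like. This is a minimal JUnit 5 sketch with invented names (PriceCalculator, applyDiscount, and a made-up tax/discount ordering bug); it's the shape of the workflow, not our actual code:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {
        // Red: written first, against the reported bug, and it must fail.
        @Test
        void discountIsAppliedAfterTax() {
            PriceCalculator calc = new PriceCalculator(0.10); // 10% tax
            // 100.00 + 10% tax = 110.00, minus a 10.00 discount = 100.00
            assertEquals(100.00, calc.applyDiscount(100.00, 10.00), 0.001);
        }
    }

    // Green: the production code is changed until the test passes.
    class PriceCalculator {
        private final double taxRate;

        PriceCalculator(double taxRate) {
            this.taxRate = taxRate;
        }

        double applyDiscount(double base, double discount) {
            // The invented bug was subtracting the discount before tax.
            return base * (1 + taxRate) - discount;
        }
    }

The point of watching the test fail first is that it proves the test actually exercises the bug; a test that never went red proves nothing.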

As they say, unit tests are also code, and all code has bugs. In particular, tests have bugs too. So this framework is not bullet-proof either, but I've personally been enjoying working in this flow.

[1] The only exception is if there is a serious prod incident. Then we fix the bug first. When this happens, I, personally, remove the fix, make sure a test fails, then add the fix back.


Of all your tests, what proportion cover exceptional code paths versus the regular flow?


I use TDD as a tool. I find it quite heavy-handed for maintenance of legacy code where I basically know the solution to the task up front. I can either just rely on having enough existing coverage, or create one test for my change and fix it all in one step.

The times I actually use TDD are basically limited to really tricky problems I don't know how to solve or break down, or when I have rough ideas for domain boundaries but don't quite know where I should draw the lines around things. TDD pulls these boundaries out of thin air like magic, and it consistently gets me there in less time than if I just sat there for a week trying different approaches out.


I’ve worked at a place where we did TDD quite a bit. What I discovered was that the important part was knowing what makes code easy to test, not the actual TDD methodology.
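In my experience that mostly comes down to keeping the business rule a pure function of its inputs, so the test needs no database, no clock, and no mocks. A hypothetical sketch in Java (all names invented; JUnit 5 assumed):

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    // Testable shape: the rule depends only on its arguments.
    class InvoiceRules {
        static boolean isOverdue(long dueAtMillis, long nowMillis) {
            return nowMillis > dueAtMillis;
        }
    }

    class InvoiceRulesTest {
        @Test
        void dueDateInThePastIsOverdue() {
            assertTrue(InvoiceRules.isOverdue(1_000L, 2_000L));
            assertFalse(InvoiceRules.isOverdue(2_000L, 1_000L));
        }
    }

    // Contrast: the untestable shape buries the same comparison behind
    // System.currentTimeMillis() and a database read inside the method,
    // forcing every test to set up fakes the rule never actually needed.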


I've worked at three companies that did TDD rigorously. It absolutely does exist.


Was it worth it? In what languages?


I thought it was great. I found that TDD forced me to think through the functionality I was about to add before writing the code: base cases, corner cases, what the API should look like, etc. It was then good at making sure I actually did it properly. It was reasonably useful for making sure nobody broke it later, but not cast-iron.

TDD sometimes doesn't feel as fast as just smashing out code, but I honestly think it produces good-quality code at a faster and more consistent rate.

It was almost all in Java, with a bit of JavaScript. Some people at one company did Ruby, and did TDD in that, but I never did.



