
The big issue I see when people have trouble with TDD is really a cultural one and one around the definition of tests, especially unit tests.

If you're thinking of unit tests as the thing that catches bugs before going to production and proves your code is correct, and want to write a suite of tests before writing code, that is far beyond the capabilities of most software engineers in most orgs, including my own. Some folks can do it, good for them.

But if you think of unit tests as a way to make sure individual little bits of your code work as you're writing them (that is, you're testing "the screws" and "the legs" of the table, not the whole table), then it's quite simple and really does save time, and you certainly don't need full specs or even to know exactly what you're doing.

Write 2-3 simple tests, write a function, write a few more tests, write another function, realize the first function was wrong, replace the tests, write the next function.
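For example (a minimal pytest-style sketch; the table-leg function and its tests are invented for illustration):

    import pytest

    def leg_length(table_height: float, top_thickness: float) -> float:
        """Length each leg needs so the assembled table reaches table_height."""
        if top_thickness >= table_height:
            raise ValueError("top cannot be thicker than the whole table")
        return table_height - top_thickness

    # the 2-3 simple tests, written just before/alongside the function
    def test_leg_length_for_desk_height_table():
        assert leg_length(table_height=75, top_thickness=3) == 72

    def test_leg_length_rejects_impossible_dimensions():
        with pytest.raises(ValueError):
            leg_length(table_height=3, top_thickness=5)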

You need to test your code anyway and type systems only catch so much, so even if you're the most agile place ever and have no idea how the code will work, that approach will work fine.

If you do it right, the tests are trivial to write and are very short and disposable (so you don't feel bad when you have to delete them in the next refactor).

Do you have a useful test suite to do regression testing at the end? Absolutely not! In the analogy, if you have tests for a screw attaching the leg of a table, and you change the type of legs and the screws that hold them on, of course the tests won't work anymore. What you have is a set of disposable but useful specs for every piece of the code though.

You'll still need to write tests to handle regressions and integration, but that's okay.



And I think most people who don't write tests in code work that way anyway, just manually -- they F5 the page, or run the code some other way.

But the end result of writing tests is often that you create a lot of testing tied to what should be implementation details of the code.

E.g., to write "more testable" code, some people advocate making very small functions. But the public API doesn't change. So if you test only the internal functions, you're just making it harder to refactor.
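To make that concrete, here's a hypothetical sketch (all names invented) of testing only the public function so the tiny internal helpers stay free to change:

    def parse_price(text: str) -> int:
        """Public API: '$1,234.50' -> cents. Internals are free to change."""
        return _to_cents(_strip_currency(text))

    def _strip_currency(text: str) -> str:              # internal helper
        return text.strip().lstrip("$").replace(",", "")

    def _to_cents(number: str) -> int:                  # internal helper
        dollars, _, cents = number.partition(".")
        return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

    # tests pin the public behaviour only; _strip_currency and _to_cents
    # can be merged, split, or renamed without touching a single test
    def test_parse_price_handles_symbol_and_commas():
        assert parse_price("$1,234.50") == 123450

    def test_parse_price_without_cents():
        assert parse_price("12") == 1200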


> But the end result of writing tests is often that you create a lot of testing tied to what should be implementation details of the code.

This is the major issue I have with blind obedience to TDD.

It often feels like the question "What SHOULD this be doing?" isn't asked, and instead what you end up with is a test suite that answers the question "What is this currently doing?"

If refactoring code causes you to refactor tests, then your tests are too tightly coupled to implementation.

Perhaps the missing step in TDD is deleting or refactoring the tests at the end of the process so you better capture intent rather than the stream of consciousness.

Example: I've seen code that had different code paths to send in a test "logger" to ensure the logger was called in the right places and logged the right messages. That made it difficult to add new information to the log output or to add new log messages. And for what?
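For comparison, a hedged sketch (assuming Python's stdlib logging and pytest's caplog fixture; the order-placing code is invented) of asserting on log output without a special test-only code path:

    import logging

    log = logging.getLogger("orders")

    def place_order(order_id: str) -> None:
        log.info("placing order %s", order_id)   # adding a second log line here
        ...                                      # shouldn't break any test

    # if logging really must be tested (audit trails, etc.), capture the output
    # instead of injecting a test "logger" through a separate code path:
    def test_order_is_logged(caplog):            # pytest's built-in caplog fixture
        with caplog.at_level(logging.INFO, logger="orders"):
            place_order("o-1")
        assert "o-1" in caplog.text              # asserts the behaviour, not the call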


If your goal is to avoid coupling tests to implementation, then TDD seems like the most obvious strategy. You write the test before the implementation exists, so it's much harder to end up with that coupling than with other strategies.


I often TDD little chunks of code, end up deciding they make more sense inside larger methods, and delete the tests. But that's ok, the test was still useful to help me develop the chunk of code.


Many people have the wrong perception of TDD. The main idea is to break a large, complicated thing into many small ones until there is nothing left, like you said.

You're not supposed to write every single test upfront, you write a tiny test first. Then you add more and refactor your code, repeat until there is nothing left of that large complicated thing you were working on.
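Something like this (a hypothetical sketch; the slugify example is invented): one tiny test, the simplest code that passes, then another test that forces the code to grow:

    def slugify(title: str) -> str:
        # v1 was just `return title.lower()`; the second test below
        # forced the handling of spaces
        return title.lower().replace(" ", "-")

    def test_slugify_lowercases():        # the first tiny test
        assert slugify("Hello") == "hello"

    def test_slugify_replaces_spaces():   # added next, drove the change above
        assert slugify("Hello World") == "hello-world"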

There are also people who test stupid things and 3rd-party code in their tests, and they either get fatigued by it and/or convince themselves their tests are well written.


How do you break that down into tests for a ray tracing algorithm on the GPU?


Probably start with “when code is run make sure GPU is utilised”.
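More seriously, much of a ray tracer is plain math that can be unit tested on the CPU before it ever becomes a kernel. A hypothetical sketch (ray-sphere intersection; names invented):

    import math

    def ray_sphere_t(origin, direction, center, radius):
        """Distance along a unit-length ray to the first sphere hit, or None."""
        oc = [o - c for o, c in zip(origin, center)]
        b = sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - c
        if disc < 0:
            return None
        t = -b - math.sqrt(disc)
        return t if t >= 0 else None

    def test_ray_hits_sphere_straight_ahead():
        # unit sphere 5 units down +z; a ray from the origin along +z hits at t=4
        assert ray_sphere_t((0, 0, 0), (0, 0, 1), (0, 0, 5), 1) == 4

    def test_ray_pointing_away_misses():
        assert ray_sphere_t((0, 0, 0), (0, 0, -1), (0, 0, 5), 1) is None

The GPU kernel itself then only needs coarser checks (e.g. does it match a CPU reference on a tiny scene), which is the same "break it into small pieces" idea.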


Don't write any GPU code without a test....


>If you do it right, the tests are trivial to write and are very short and disposable (so you don't feel bad when you have to delete them in the next refactor).

The raison d'être of TDD is that developers can't be trusted to write tests that pass for the right reason - that they can't be trusted to write code that isn't buggy. Yet it depends on them being able to write tests quickly enough that they're cheap enough to dispose of?


Yep, TDD for little chunks of code is really nice. I think of it as just a more structured way of trying things out in a REPL as you go (and it works for languages without REPLs). Even if you decide never to check the test in because the chunk of code ended up being too simple for a regression test to be useful, if it was helpful for testing assumptions while developing the code, that's great.

But yeah, trying to write all the tests for a whole big component up front, unless it's for something with a stable spec (e.g. I once implemented some portions of the websockets spec in servo, and it was awesome to have an executable spec as the tests), is usually an exercise in frustration.



