Do you think an early startup product might be an exception?

Whole features can be ditched quickly and frequently - sometimes even the complete product, in a pivot.



No excuses. I've done this like 12 times as a startup CTO.

The habit is important. If you don't start, then after three pivots you'll have a huge mountain of untested code for a system nobody understands. Plus all the time wasted on manual "testing".

Tests are critical to the success of the business. It's fiscally irresponsible to skip them.


What about side projects that you are working on alone?


I find them vital for side projects - especially ones that get set aside for a week/month/year. I sometimes lose track of assumptions I've made, or of use cases for APIs, and tests tend to expose both.

Sure, you can encode all of that as comments, but unless you reread each file when you return from a break, you can't always trace those thoughts and see where they lead. On the other hand, if you "find all references" in your IDE, or change some implementation so that a test breaks, past-you can save the day with that extra information about what they intended at the time.
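
For a concrete (if contrived) illustration - a minimal sketch assuming Node's built-in test runner and a hypothetical formatPrice helper, not anything from a real project - a test can pin down an assumption that a comment never quite does:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';
    import { formatPrice } from './pricing'; // hypothetical module

    test('formatPrice takes an integer amount in cents', () => {
      // Past-you's assumption, made executable: 1999 means $19.99, not $1999.00.
      assert.equal(formatPrice(1999, 'USD'), '$19.99');
    });

If future-you changes formatPrice to accept dollars instead, the failing assertion points straight back at what past-you intended.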


I really value tests on projects I work on alone because the work is usually intermittent. I don't have the time or energy to deal with regressions, and I may not remember everything about the system by the next time I work on it.

I also find manually testing to be tedious, so I'd rather spend that time writing code that does it for me.


Well, I don't skip them myself (but then, I started my career in test, back in '95). I do think it would be OK to skip them there.

But if the objective is professional output: test, test, test.


No, I don't think there is any exception. If you intend to maintain a piece of software for any length of time (i.e. it's not just a throwaway demo), you should write tests for it.

Over time you realize that testing truly does not slow down development as much as many people think it does. Maybe devs who just aren't used to testing find it difficult, but after a while it becomes second nature.

The best thing an early startup CTO can do is enforce testing across the board, so people don't just test when they feel like it.


>Over time you realize that testing truly does not slow down development as much as many people think it does.

Not my experience. We just built a new codebase, rewriting an older project with TypeScript and all the modern libraries and conveniences. We spent about 2x more time writing the tests than we did on any of the API code.

Tests can be fiddly and not exactly straightforward. They take a lot of time, but that isn't a reason not to write them. Just don't suggest it's going to take less time, even in the long run, because it won't - you essentially have to maintain two codebases now: one for the actual code and one for the tests. Both are points of failure, and both can be a time sink.


Only if your runway is measured in days rather than weeks or months.

The payback for good testing is very fast, especially once you have set it up for the first feature.


I thought an early start-up was the exception, but my boss didn't. We were still in "stealth mode", and the CTO wanted 100% test coverage on our Node.js-based social website from the very start. Six months in, we didn't have all that much built, because they couldn't really decide what they wanted us to build. So we built the most well-tested email sign-up form that ever existed, plus a bunch of other user-account-related stuff.

Then the company completely pivoted at around the six-month mark, and I somehow found myself doing PHP programming (which I hate), hacking on the code of some ad server and bolting it onto a mobile app (not what we set out to build). By that point the requirement for tests had been forgotten, because the company was desperate to find any viable path forward. It dissolved about three months after that, and now those tests seem pretty pointless.


100% coverage is a vanity metric and a waste of time.

Focus on testing the use cases that are important to users, not on covering every single line.
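
For instance - a minimal sketch, assuming Node's built-in test runner and a hypothetical validateSignup function - a couple of behaviour-level tests usually buy more than chasing the last untested line:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';
    import { validateSignup } from './signup'; // hypothetical module

    // Cover the behaviours users actually depend on...
    test('rejects a malformed email address', () => {
      assert.equal(validateSignup({ email: 'not-an-email' }).ok, false);
    });

    test('accepts a well-formed email address', () => {
      assert.equal(validateSignup({ email: 'a@example.com' }).ok, true);
    });
    // ...rather than adding a test per line just to move the coverage number.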


Any number under 100% is also a vanity metric. Focus on why there are test holes and whether they matter.


For me personally tests have a positive ROI within hours.

Even if I was doing a one day hackathon I'd probably have some sort of test feedback loop.

I've dealt with P1 bugs that cost the company 100k/minute and still took the time to write a test for the fix, because you really don't have time to get the fix wrong and not find out until it's deployed.
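
The pattern is just "ship the regression test with the hotfix" - roughly like this sketch, assuming Node's built-in test runner and a hypothetical applyDiscount function standing in for whatever actually broke:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';
    import { applyDiscount } from './billing'; // hypothetical module

    // Regression test committed alongside the P1 fix: a 100% discount
    // used to produce a negative total and break downstream invoicing.
    test('a full discount yields a total of exactly zero', () => {
      assert.equal(applyDiscount(49.99, 1.0), 0);
    });

The test costs minutes; finding out in production that the fix was wrong costs a lot more.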


Whenever I'm told there isn't time, I remind the PM: yes, but if it's wrong or doesn't work, we will have time to do it over - it will just cost more later.



