Hacker News

As a technical cofounder who just finished the YC W20 batch (https://terusama.com), I can agree with some of what you are saying.

At its core, an early stage startup's only goal is to create business value as ruthlessly as possible. Let's talk about how I apply this principle to my testing strategy.

Do automated test suites help create business value? Absolutely: I no longer have to manually test everything after making a change. Your application is going to be tested either way, by you or by your users.

Does having a well-defined layout of UI, service, and integration tests, à la Martin Fowler, add business value? I would argue it does not. I write mostly integration tests, because you get more 'bang for your buck': more tested code per minute spent writing tests.
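To make the 'bang for your buck' point concrete, here's a hypothetical sketch (none of these names come from the thread): one integration-style test drives all three layers at once, where a Fowler-style pyramid would want separate unit tests per layer.

```python
# Hypothetical signup flow with three layers that a test pyramid
# would cover with separate unit tests per layer.

def parse_signup(raw: dict) -> dict:
    # "parsing" layer: normalize input
    return {"email": raw["email"].strip().lower(), "plan": raw.get("plan", "free")}

def create_account(data: dict, db: dict) -> dict:
    # "service" layer: business rule + persistence
    if data["email"] in db:
        raise ValueError("account exists")
    account = {"email": data["email"], "plan": data["plan"], "active": True}
    db[data["email"]] = account
    return account

def signup_endpoint(raw: dict, db: dict) -> dict:
    # "UI" layer: the piece a user actually hits
    account = create_account(parse_signup(raw), db)
    return {"status": 201, "email": account["email"]}

# One integration test exercises all three layers in a single pass:
def test_signup():
    db = {}
    resp = signup_endpoint({"email": "  Ada@Example.com "}, db)
    assert resp == {"status": 201, "email": "ada@example.com"}
    assert db["ada@example.com"]["plan"] == "free"

test_signup()
```

The tradeoff, of course, is that when this test fails you know *something* in the stack broke, not which layer.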

Does this testing strategy create tech debt? Absolutely. I view this as a good thing. I am causing problems for myself in the future in exchange for expediency in the present. Either my company grows to be successful enough to care about these problems, or we go out of business. If we become successful enough to care about rampant tech debt, hooray! We are successful. If we fail, it does not matter that we leveraged tech debt; we still failed.

Writing good code is an art. There are people out there who are incredibly talented at writing code that is battle-tested and infinitely scalable. Those are often not skills an early-stage startup needs while trying to find product-market fit.



I think I disagree with this. The short-term harm of this kind of tech debt is more substantial than you're letting on. "Causing myself problems for the future" might be true, but that future could be a week away, when you need to pivot because of user testing, a shift in the market, the search for product-market fit, etc.

I think the mistake you're making is conflating "getting code written now" with expediency. Adding and removing features, and shifting when necessary, are "expediency." That's the value of a thorough test suite.


It's not just the test suite that is subject to tradeoffs. One may write good code that fundamentally doesn't scale beyond a small number of customers, e.g. doing everything with Postgres and no batching because it's easy, or building a solution for a demo to an individual customer.
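As a sketch of the "no batching" shortcut (using sqlite3 as a stand-in for Postgres, since the original setup isn't shown): one INSERT per event is the easy thing to write; batching is the thing that scales.

```python
import sqlite3

# sqlite3 stands in for Postgres here; the shape of the shortcut is
# the same. This is an illustrative example, not code from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")

events = [(i % 10, "click") for i in range(1000)]

# Easy version: one statement per row. Fine in a demo, but against a
# real network-attached database this is one round-trip per event.
for user_id, kind in events:
    conn.execute("INSERT INTO events VALUES (?, ?)", (user_id, kind))

# Batched version: the whole list in one call. Same result, and the
# per-row overhead is paid once instead of a thousand times.
conn.execute("DELETE FROM events")
conn.executemany("INSERT INTO events VALUES (?, ?)", events)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
assert count == 1000
```

With an in-memory SQLite database the difference is invisible, which is exactly why this kind of code survives until real customers arrive.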

These solutions will break, and if monitoring is skipped, they will break at 2 AM, right when customers really start using the product.

These situations can be avoided with better product research and a stronger emphasis on design, but those are the approaches of large, established companies that can't afford to lose customer trust and will gladly build a product on a two-year time horizon.

As a startup you need to weigh the risk of failure, the need for direct customer engagement, and limited resources against the risk of losing customer trust. If you're a startup making a new DB, then your product's lifespan is approximately the time until your first high-profile customer failure or poor Jepsen test result. A new consumer startup may simply be able to patch scaling issues as they emerge rather than investing in billion-user infrastructure from the get-go.


I don't understand how pivoting is an example of the value of testing. Wouldn't it instead show that investing in tests didn't pay off, because the codebase got scrapped for another? Your tests pay for themselves each time you adjust code that is tested. But there are many cases where you never end up adjusting the code, such as when a whole service is scrapped, or when it was simple enough that it never got touched again.

The value of tests amplifies with the number of people adjusting the code and with the time span over which that happens. Both of those factors are at their minimum at an early-stage startup.

Now, of course, the caveat is that you need to know how to strike the right balance of tradeoffs as an engineer to get the right results for the right amount of effort. But that's what startup engineering is about.


When I write an MVP I usually don't write unit tests. What I do is write testable code, with dependency injection and whatnot in mind, so that when the product is mostly finalized, I can write unit tests with little or no modification to the original code.
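A minimal sketch of that approach, with illustrative names not taken from the comment: collaborators are passed in rather than constructed inline, so the MVP runs today against real collaborators, and unit tests can substitute fakes later without modifying the class.

```python
import time

class InvoiceService:
    # Dependencies are injected, not hard-coded. Production passes a
    # real DB wrapper; nothing here needs to change to become testable.
    def __init__(self, db, clock=time.time):
        self.db = db          # anything with a .save(record) method
        self.clock = clock    # anything callable returning a timestamp

    def issue(self, customer: str, amount: float) -> dict:
        invoice = {"customer": customer, "amount": amount, "issued_at": self.clock()}
        self.db.save(invoice)
        return invoice

# Months later, a unit test needs no modification to InvoiceService:
class FakeDB:
    def __init__(self):
        self.saved = []
    def save(self, record):
        self.saved.append(record)

db = FakeDB()
svc = InvoiceService(db, clock=lambda: 1234567890)
inv = svc.issue("acme", 42.0)
assert inv["issued_at"] == 1234567890
assert db.saved == [inv]
```

The clock injection is the same trick: anything nondeterministic (time, randomness, network) comes in through the constructor so the eventual tests can pin it down.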


Unit tests are a lot easier to write, though, and they run faster. I think the trick is to assess risk and not try to get 100% coverage for its own sake.


I think that depends heavily on what sort of tooling you're using. E.g. writing unit tests for Django is near impossible, while integration tests are much easier. The ORM invariably has its tendrils throughout the entire codebase, and mocking it out is a project unto itself.


ORMs are an exception in my book. Good point. But DB integration tests are possible: slower than unit tests but still faster than end-to-end. I wrote some recently for a side project to make assertions about full-text search behaviour, so I could swap out Postgres for Elasticsearch (for my sins) and still have coverage.
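One way to sketch that kind of backend-agnostic coverage (the real Postgres and Elasticsearch wrappers aren't shown in the thread, so an in-memory stand-in keeps this self-contained): write the assertions against a minimal index/search contract that any backend must pass.

```python
# The same contract check runs against any object exposing
# index(doc) and search(query). Real versions would wrap Postgres
# full-text search and Elasticsearch; InMemorySearch is a
# hypothetical stand-in for illustration.

class InMemorySearch:
    def __init__(self):
        self.docs = []
    def index(self, doc: str):
        self.docs.append(doc)
    def search(self, query: str):
        terms = query.lower().split()
        return [d for d in self.docs if all(t in d.lower() for t in terms)]

def check_search_contract(backend):
    backend.index("Postgres full text search")
    backend.index("Elasticsearch for my sins")
    assert backend.search("full text") == ["Postgres full text search"]
    assert backend.search("sins") == ["Elasticsearch for my sins"]
    assert backend.search("missing") == []

check_search_contract(InMemorySearch())
```

When the backends diverge on ranking or stemming, the contract check is where that shows up, which is exactly the coverage you want before a swap.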



