
In my experience, "code quality" vs "features" is simply not a real tradeoff. Writing clean code with tests, function documentation, a good level of modularity, automated deployments... etc will save you time in the short term. It's pretty simple:

1. Writing quality code is not substantially slower in the first place, especially when you factor in debugging time. You just have to have the right habits from the get-go. People avoid writing quality code because they don't have and don't want to build these habits, not because it's inherently harder.

2. After the initial push, code quality makes it much easier to make broad changes, try new things and add quick features. This is exactly what you need when iterating on a product! Without it, you'll be wasting time dealing with production issues and bugs.

The only reason people say startups don't fail because of code quality is that code quality is never the proximate cause—you run out of funding because you couldn't find product-market fit. But would you have found product-market fit if you had been able to iterate faster, try more ideas out and didn't spend 50% of your time fighting fires? Almost definitely.

Pulling all-nighters dealing with production issues, spending weeks quashing bugs in a new feature and duct-taping hacks with more hacks is not heroic, it's self-sabotaging. Writing good code makes your own life easier, even on startup timeframes. (Hell, it makes your life easier even on hackathon timeframes!)



As a technical cofounder who just finished the YC W20 batch (https://terusama.com), I can agree with some of what you are saying.

At its core, an early stage startup's only goal is to create business value as ruthlessly as possible. Let's talk about how I apply this principle to my testing strategy.

Do automated test suites help create business value? Absolutely: I no longer have to manually test everything after making a change. Your application is going to be tested, either by you or by your users.

Does having a well-defined layout of UI, service, and integration tests, à la Martin Fowler, add business value? I would argue it does not. I write mostly integration tests, because you get more 'bang for your buck', or 'tested code per minute spent writing tests'.
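To make the 'bang for your buck' point concrete, here is a rough sketch of the kind of test I lean on (this assumes a Flask app with a hypothetical create_app factory and a /signup route; the names are illustrative, not our actual code):

    # One test drives routing, validation, the service layer, and the DB schema at once.
    import pytest
    from app import create_app, db   # hypothetical app factory and SQLAlchemy handle

    @pytest.fixture
    def client():
        app = create_app(testing=True)        # real app, real routes, real ORM
        with app.app_context():
            db.create_all()                   # fresh schema for the test run
            yield app.test_client()
            db.drop_all()

    def test_signup_creates_user(client):
        resp = client.post("/signup", json={"email": "a@example.com", "password": "hunter2"})
        assert resp.status_code == 201
        assert client.get("/users/a@example.com").status_code == 200

A handful of tests like this cover far more code per minute spent than the equivalent stack of mocked-out unit tests.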

Does this testing strategy create tech debt? Absolutely. I view this as a good thing. I am causing problems for myself in the future, in exchange for expediency in the present. Either my company grows to be successful enough to care about these problems, or we go out of business. If we become successful enough to care about rampant tech debt, hooray! we are successful. If we fail, it does not matter that we leveraged tech debt, we still failed.

Writing good code is an art. There are people out there who are incredibly talented at writing good code that will be battle-tested and infinitely scalable. These are often not skills that an early-stage startup needs when trying to find product-market fit.


I think I disagree with this. I think the short-term harm of this kind of tech debt is more substantial than you're letting on. "Causing myself problems for the future" might be true, but that future could be in a week, when you need to pivot because of user testing, a shift in the market, product-market fit, etc.

I think the mistake you're making is conflating "getting code written now" with expediency. Adding/removing features and shifting when necessary are "expediency." That's the value of a thorough test suite.


It's not just the test suite that is the subject of tradeoffs. One may write good code that fundamentally doesn't scale beyond a small number of customers, e.g. doing everything in Postgres with no batching because it's easy. Or building a solution for a demo to an individual customer.

These solutions will break, and if monitoring is skipped, they will break at 2 AM when customers really start using the product.

These situations can be avoided with better product research and a stronger emphasis on design, but those are also the approaches taken by large, established companies that can't afford to lose customer trust and will gladly build a product on a 2-year time horizon.

As a startup you need to weigh the risk of failure, the need for direct customer engagement, and limited resources against the risk of losing customer trust. If you're a startup making a new DB, then your product's lifespan is approximately equal to the time until your first high-profile customer failure or poor Jepsen test. A new consumer startup may simply be able to patch scaling issues as they emerge rather than investing in billion-user infra from the get-go.


I don't understand how pivoting is an example of the value of testing. Wouldn't it instead show how investing in tests didn't pay off, because the codebase got scrapped for another? Your tests pay for themselves each time you adjust code that is tested. But there are many cases where you never end up adjusting the code, such as when the whole service is scrapped, or it was simple enough that it never got touched again.

The value of tests amplifies with the number of people adjusting the code, and with the time range over which that happens. Both of those factors are at their smallest at early-stage startups.

Now, of course, the caveat is that you need to know how to strike the right balances and tradeoffs as an engineer to get the right results for the right amount of effort. But that's what startup engineering is about.


When I write an MVP I usually don't write unit tests. What I do is write testable code with dependency injection and whatnot in mind, so when the product is mostly finalized, I can write unit tests with little or no modification to the original code.
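A minimal sketch of what that looks like in practice (invented names, plain Python): the MVP ships with the real dependency wired in, and unit tests can be added later by injecting a fake without touching the original code.

    from dataclasses import dataclass

    class EmailSender:
        def send(self, to: str, body: str) -> None:
            print(f"sending to {to}: {body}")   # imagine a real SMTP/SES call here

    @dataclass
    class SignupService:
        email_sender: EmailSender               # injected, not constructed inside

        def register(self, email: str) -> str:
            self.email_sender.send(email, "Welcome!")
            return email.lower()

    # Months later, a unit test swaps in a fake without modifying SignupService:
    class FakeSender(EmailSender):
        def __init__(self):
            self.sent = []
        def send(self, to: str, body: str) -> None:
            self.sent.append((to, body))

    def test_register_sends_welcome():
        fake = FakeSender()
        service = SignupService(email_sender=fake)
        assert service.register("A@Example.com") == "a@example.com"
        assert fake.sent == [("A@Example.com", "Welcome!")]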


Unit tests are a lot easier to write, though, and they run faster. I think the trick is to assess risk and not chase 100% coverage for the sake of it.


I think that depends heavily on what sort of tooling you're using. E.g. writing unit tests for Django is near impossible, while integration tests are much easier. The ORM invariably has its tendrils throughout the entire codebase, and mocking it out is a project unto itself.
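For example, something like this is the path of least resistance in Django (a sketch assuming a hypothetical Article model in a blog app and a /search/ view, not anyone's actual project):

    from django.test import TestCase
    from blog.models import Article   # hypothetical model

    class ArticleSearchTests(TestCase):
        # TestCase wraps each test in a transaction against a throwaway test DB,
        # so exercising the ORM directly is cheap. Mocking Article.objects in
        # every place it is used would be a far bigger project than just letting
        # the queries run.
        def test_search_returns_published_articles(self):
            Article.objects.create(title="Postgres tips", published=True)
            Article.objects.create(title="Draft notes", published=False)
            resp = self.client.get("/search/", {"q": "postgres"})
            self.assertEqual(resp.status_code, 200)
            self.assertContains(resp, "Postgres tips")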


ORMs are an exception in my book. Good point. But DB integration tests are possible; slower than unit tests but still faster than end-to-end. I created some recently for a side project to make assertions about full-text search behaviour, so I could swap out psql for Elasticsearch (for my sins) and still have coverage.


While what you said is true, the problems usually don't manifest in the way you described. Most engineers I know understand all these principles and adhere to them. But the codebase still ends up a huge mess.

Code evolves. The technical decision you made when you were serving 30 customers doesn't make sense anymore now that you're serving 30k. The corner case you never thought would happen turns out to be very common. Suddenly your boss decides you should sell an on-prem solution whereas you've been building a cloud offering.

You can make the best decision every time these new requirements come along, yet still end up in a disaster, because a hill-climbing algorithm can get you stuck at a local maximum.

Oh, you say, you should've refactored your code, or rewritten it from scratch! OK, now you need to choose between spending time refactoring your code and delivering features. That's the trade-off!

Hmm ok, you say, you should've refactored along the way, so you don't need one giant refactor! Great, now you've basically asked me to predict the future.

So, shipping new features vs. code quality is definitely a trade-off, and it's our job, as software engineers, to make that call appropriately :)


I strongly agree, and I have been involved in companies where "move fast and break things" was taken too far, and developer-years of effort were burned due to poor engineering investment. There is a time and a place for quickly hacking stuff together, but it has to be done with consideration. It is a short-term gain with a long-term cost. If you are constantly breaking things, then you never get to MOVE ON. Good code is an investment. Invest in it, and let it create value - and build on that value - while you move on to the next thing.


This is the usual response from engineers who love their work :) In my experience there is always a point to NOT doing things "well" the first time round. Read the old article on "programming is terrible": write code that is easy to delete, not easy to extend: https://programmingisterrible.com/post/139222674273/write-co...


This is extremely well said. When I started coding I didn't have these habits and I was convinced it simply wasn't possible to write clean code as fast as I was writing sloppy code. Then, I met the person who is now my Head of Engineering, and he was able to code significantly faster than I was while also writing immaculate, readable, and largely bug-free code.

After spending some time working with him, I was convinced I was simply being lazy and started forcing myself to do all of the "clean code" steps that I had been skipping for the sake of speed. I slowed down for a month, maybe two, but then I was back up to speed and writing code that I could actually feel good about.

I haven't quite caught up to his pace, but I've got the second best thing in having him on my team now.

It's shocking how sure I was that I couldn't write clean code as fast as I do now. I'm lucky to have met someone that was able to teach me by example - I'm not sure I would have ever corrected my habits had I not worked next to someone who had.


Any particular approaches / design philosophies you found useful, either to discard or embrace? E.g. testing, "SOLID" principles, etc.


Sure! I have a ton of thoughts on this, so I'll just touch on a handful of higher level things that I think matter:

- Avoid rules and tools

Until you understand why someone came up with them originally. My biggest blind spots came from reliance on things like React Dev Tools. Every time I had to sort out a bug, I would start poking through React Dev Tools immediately, without thinking, find the issue, and then patch it. The closest thing I can equate this to now is using a GPS to navigate. You'll get to your destination, but you won't learn the roads you drive on every day. Throw out the GPS, and you find pretty quickly that you know the roads, which is a faster and easier way to navigate. You can still use the GPS once you know the roads if you need to get somewhere unusual, but you shouldn't usually need it. Same thing with debugging tools. Big time saver.

Rules are in a similar camp. Strict global rules are always bad. "Don't repeat yourself" is horrible as a strict law in a codebase. I repeat myself all the time intentionally to avoid prematurely abstracting things. That being said, there are times it's absolutely mission critical to abstract something or risk massive difficult-to-unfuck technical debt moving forward. Knowing the "spirit" of these rules (which is difficult to garner via anything other than experience) is incredibly important in using them effectively, which in turn also saves a ton of time.
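A toy example of what I mean (my own illustration, nothing clever): two exporters that happen to look identical today.

    def export_invoices_csv(invoices):
        rows = [["id", "total"]]
        rows += [[inv.id, inv.total] for inv in invoices]
        return rows

    def export_refunds_csv(refunds):
        # Deliberately duplicated: refunds will likely grow a "reason" column and
        # invoices a "tax" column, and a shared export_csv() would couple the two.
        rows = [["id", "total"]]
        rows += [[ref.id, ref.total] for ref in refunds]
        return rows

The duplication is cheap to keep and cheap to delete; the wrong abstraction is neither. The moment a third exporter shows up with the same shape, that's usually the signal to abstract.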

- Use a tool from your toolbox

As much as you can, don't make/use new tools. The law of the instrument[0] can actually work to your benefit in engineering if you use it correctly. The fewer tools in your coding toolbox, the more proficient you'll be at using those tools. The bugs that arise from that limited set of tools are more predictable and become easier to avoid and easier to diagnose. Your code ends up naturally looking more consistent when you're trying to treat every problem as if it's from the same set of problems (and in my experience, very few problems fall outside a very small set of problem types). The fewer unique problems you're solving, the less time you're spending learning how to use new tools. This effect compounds as time goes on, and ends up being incredibly powerful over time.

- Be even more explicit than you think you need to be

Implicit functionality is the beginning of the end of any codebase. If you think comments are necessary, the code is too implicit. If you have to ask the author what the code does, the code is too implicit. Code should make intuitive sense like a great UI makes intuitive sense. Non-engineers with an understanding of your product should be able to traverse your folder structure and find something if they want to (most members of our team are able to locate and modify copy in email/notification/interface files without much trouble). But it's not for them, it's for you. Trying to remember what you did on a feature you built 6 months ago is borderline impossible, so don't try. Write it so that you don't have to, and that'll guarantee other engineers working on it at any time can dive in without talking to you about it first. A lot of time is saved in not having to bring yourself or anyone else up to speed on anything.
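A tiny invented example of the difference:

    # Implicit: the caller has to know what `u` and `t` mean and what the hidden
    # default behaviour is.
    def notify(u, t=0):
        ...

    # Explicit: the signature is the documentation; a non-engineer could guess
    # what this does and where to find it.
    def send_password_reset_email(user_email: str, expires_in_minutes: int = 30) -> None:
        ...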

- "Think slow"[1] about everything

Your brain is going to want to "think fast" all the time. Brains weren't made to code, so they're really bad at knowing when to rely on instinct and when to consider something more thoroughly. Your brain likes to think fast more than it likes to think slow, so you'll end up coding instinctually if you're not intentional about it. That will result in code that looks like the most significant project you worked on before this one, and that code probably doesn't make any sense in whatever you're coding right now. So you have to force yourself to think slow about literally every single little detail of the code you're writing at first. Every styling detail, every filename, every semicolon, literally everything. Consider it, make sure you understand exactly why you're doing it, and make sure it makes sense to you in this specific scenario. Be able to explain why you do everything the way you do.

This is immensely tiring up front; it feels impossibly unsustainable. And it would be if you had to do it every time you wrote code, but you don't. Once you understand exactly why you're doing each thing you're doing, it's incredibly quick to check whether it still makes sense in each following scenario. Once you've gathered most of the reasons for why you write code the way you do, the process starts to fade into the background - you can start to "think fast" again. And even better, you've trained your brain to stop and "think slow" when you don't have an explicit reason for doing something a particular way, thus preventing any "relapse". It's locked in. Engineers who have this figured out can avoid writing unpredictable/difficult-to-debug code with clinical precision and save themselves days of pain per month.

I think this last step is actually the core of "having the right habits from the get-go". I think every great engineer can explain every tiny little nuance of why their code is the way it is. I don't think that comes from being a great engineer, I actually think that's how you become one in the first place.

My pet theory is that this list is part of how one becomes a so-called "10X engineer". You don't have to code 10X faster, you just have to use clever compounding tricks to spend 10X less time on the noise in between.

[0]https://en.wikipedia.org/wiki/Law_of_the_instrument

[1]https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow


Thank you so much for getting back to me. Awesome reply.

I agree on the pragmatic vs dogmatic approach to "Don't repeat yourself". I have been playing a lot with this at my company right now. It's hard not to reach for an abstraction right away, but if the abstraction muddles the code / implementation, I find myself questioning its value.

I also love your comment "Code should make intuitive sense like a great UI makes intuitive sense". This forces the engineer to avoid being too fancy, and allows future devs to work on it without needing the original author (who might be on vacation or have moved on).

Kahneman's work is awesome, and I'd never really thought of it in the context of coding. We have been struggling at work with our Redux setup lately, because the developers haven't stopped to question why we were structuring things a certain way, or how we were dispatching actions and handling async flow. We just kept building things and trying to move quickly. It can be tough when product is asking for things to get released, and you are trying not to bikeshed / overcomplicate things. But a healthy dose of stepping back and rethinking why you are doing things is helpful.


I agree with nearly everything you said except modularity; I don't find it's a metric of code quality. At the early stage I fail to see how to properly modularize: most of the time features and customer requirements evolve so much between customers that I usually find it's more pain than gain for the rare times the initial design holds. In my experience (B2B/high-touch sales) I sometimes write a plain and simple hardcoded « if customer == "XYZ" » until the use case is refined and the market for the feature proven with at least a few paying customers.
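Concretely, that hardcoded special case looks something like this (customer name and terms invented for illustration):

    def invoice_due_days(customer_name: str) -> int:
        # XYZ negotiated net-60 during their pilot; everyone else stays net-30.
        # Deliberately not a configurable "payment terms engine" until a few more
        # paying customers actually need one.
        if customer_name == "XYZ":
            return 60
        return 30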


I think that you’re fast at what you practice - if you usually write clean code, you get better at writing clean code faster. And if you complain that writing clean code takes a long time and don’t usually do it, then yes - it will take time because you never practice doing it.


I agree - it isn't a real trade-off. I always hear people saying this and am puzzled by it. Indeed, writing "good" code - modular, clean, tested - is a matter of having good habits and acting on them. Sure, it technically takes "longer", but it's on the order of several hours more over the course of weeks. It isn't something that I would even bring up with management - whether to do it or not isn't worth debating, because the cost is so low.

What I do understand is that if you hire someone who doesn't have these habits already, or if you yourself don't have them, then it could take much longer than doing what you're used to. But that's on you.


> But would you have found product-market fit if you had been able to iterate faster, try more ideas out and didn't spend 50% of your time fighting fires? Almost definitely.

I wouldn't say definitely, but your chances would be way better.


Haha, yes, that's exactly what I meant to say, but messed up the wording. There are no guarantees, but you can improve your chances by making it easier to iterate.


This isn't a real tradeoff if you are/have good engineering. "You just have to have the right habits from the get-go" is a pretty big given :)


Can you suggest a way to understand and develop the right habits? Or any good methods to improve code quality at the personal and team level? It would be really useful for my case.


I imagine you'll get comments here like "have good code hygiene" and "aim for good test coverage", which are not wrong. However, for me, what really stuck was learning directly from senior developers.

Anecdotally, having at least one senior developer on a team dramatically changes the long-term prospects of a project, even if they are not the one actually leading the project. I would be curious to hear others' experiences, to see if that generalizes.

(with the caveat that not all senior devs had good habits)


Unfortunately, being senior is not a good indicator of someone's ability to design and write good code. Yes, it should be. No, it isn't always, as my experience shows.

Now, if you are senior yourself, you can probably see that. But when you're a junior developer it's very easy to mislead yourself by looking at the senior staff without questioning anything.

So, I'd add this to your advice. Yes, learn from senior developers by observing what they do, and then read more about the topic to see if they are doing the right thing. Also, try to find out what the other approaches and opinions are. Even if you don't agree with them, it's good to diversify your knowledge.


> aim for good test coverage

From the beginning? No. That's completely counterproductive.

The first thing you have to do is be sure that you are testing the correct thing. Only after that do you write your tests.

Specs come before tests, and on most problems you will need to write a lot of code before you get the specs down.


Is the correct thing that doesn't work correctly under some conditions still the correct thing? Users are polite and will not tell you about the annoying whack-a-mole bugs that keep cropping up...


For me, Clean Code was a bit of an eye-opener. I generally found following SOLID principles to be valuable.

It's however important to understand that principles are not unbreakable laws of the universe, but rather things that should guide you but that are also subject to questioning.


They are also principles for the second or third pass. Get it working (rough draft, brainstorm) and then refactor.


I think this is the best answer.


What is good code? I prefer DDD.

But there's no better recipe for quick development and iteration than a simple monolith with one project and one person, if you know what you are doing, of course.

You'll start to feel the disadvantages of this method as soon as someone joins the team.



