
I love the section on tests. It really is exactly what I've come to learn over the years. Integration tests are the sweet spot for finding bugs. Mocks tend to overcomplicate things (I still use them sometimes, but I avoid using them systematically), and unit tests are too brittle in the face of refactoring, whereas integration tests help with refactoring.


Can't agree more. My favorite way to cheat with them is to have integration tests that follow demo scenarios, so you can run them right before the demo (preferably twice).


Strongly disagree. Integration tests work brilliantly until a certain size or complexity is hit and then they become really bad. Unit tests are harder to write and maintain, but they will serve you much better in the long run because when they fail it’s much easier to understand and debug.

The worst sort of tests are integration tests which secretly depend on another integration test having run first, which will be true 99% of the time, until a change you make changes the order.


> The worst sort of tests are integration tests which secretly depend on another integration test having run first

That's an example of bad integration tests. Well-engineered integration tests don't do that.


It's a property of the code under test, not the tests themselves.

If the system is crappy/stateful/implicit, and you somehow manage to write nice/clean/stateless integration tests against it, I'd argue that the tests won't be close enough to the expected running of the system to tell you anything useful about it.


"Nice and clean" in the context of an integration test doesn't mean no state; it just means no state outside the context of that test.

If I set up an integration test that sets up a database from scratch and tears it down and tests only the behavior of the app in that rigidly defined context, then yes, it will be useful. It will tell you how the code behaves in the scenario you've created.

Bad integration tests will share state with each other - e.g. by using the staging DB.
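One way to get that isolation is to build the database from scratch inside each test. A minimal sketch, using an in-memory SQLite database (the schema and the `add_user` application function are hypothetical):

```python
import sqlite3

def fresh_db():
    # Each test gets its own database: full schema, fixed sample
    # data, and nothing left over from any other test.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT PRIMARY KEY)")
    db.execute("INSERT INTO users VALUES ('sample_user')")
    return db

def add_user(db, name):
    # Hypothetical application code under test.
    db.execute("INSERT INTO users VALUES (?)", (name,))

def test_add_user():
    db = fresh_db()  # set up from scratch...
    add_user(db, "alice")
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 2
    db.close()       # ...and torn down; no shared staging DB involved

test_add_user()
```

Because the starting state is created inside the test, running it a hundred times, in any order, against any other tests, gives the same result.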


That's precisely what I meant by my comment. The 'cleaner' the integration test, the less it will behave like the real-world system.

> Bad integration tests will share state with each other

The real-world system shares state.

> If I set up an integration test that sets up a database from scratch and tears it down and tests only the behavior of the app in that rigidly defined context

... constructing a particular set of circumstances which will never occur in the real-world system.


> ... constructing a particular set of circumstances which will never occur in the real-world system.

What do you find unlikely about a scenario where a test uses an app in a realistic way (e.g. with a browser) set up in a realistic context (e.g. with some fixed sample data) to reproduce a realistic scenario (e.g. a bug that already happened)?

I wouldn't say that isolation and realism are completely orthogonal but I find that well engineered integration tests are usually able to reproduce 90% of bugs sourced from production while unit tests can often manage only 10 or 15%. Bug in the SQL? Browser is involved? No can do.


> I wouldn't say that isolation and realism are completely orthogonal

Neither would I. I'm arguing that when you write a test method, you deliberately make the choice to include some kind of 'before-all' method, or not.

The reasons you would choose to include a 'before-all' method will vary from case to case. Let's say you're testing an addUser method. If you choose to isolate its state to avoid 'test flakiness', it is you making the call that addUser is flaky when run against shared state.

What is it about your application code that would make you think that addUser is flaky enough to need a clean slate to run against? Why not change the application code instead?


> The reasons you would choose to include a 'before-all' method will vary from case to case.

Not really. I would always purge anything that would cause tests to share state. I wouldn't do it on a case-by-case basis.

> What is it about your application code that would make you think that addUser is flaky enough to need a clean slate to run against?

The user already existing in the database? The behavior of the app would change in that case. Something has to wipe the DB clean to test that scenario.

That's why tests shouldn't share databases.
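That duplicate-user scenario is exactly the kind of thing a clean slate lets you pin down. A sketch, assuming a hypothetical `add_user` that rejects duplicates:

```python
import sqlite3

def make_db():
    # Fresh, wiped-clean database for every test.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT PRIMARY KEY)")
    return db

def add_user(db, name):
    # Hypothetical app code: report whether the insert succeeded.
    try:
        db.execute("INSERT INTO users VALUES (?)", (name,))
        return True
    except sqlite3.IntegrityError:
        return False

def test_add_new_user():
    db = make_db()
    assert add_user(db, "alice") is True

def test_add_existing_user():
    db = make_db()         # same starting state on every run
    add_user(db, "alice")  # arrange: the user already exists
    assert add_user(db, "alice") is False

test_add_new_user()
test_add_existing_user()
```

Both branches of the app's behavior are testable precisely because each test constructs its own state rather than inheriting whatever a shared database happens to contain.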


Unit tests have lower up-front costs but higher ongoing costs. By their very nature they couple more tightly to implementation details, so a failure doesn't clearly confirm that behavior is broken rather than that an implementation detail changed.

Integration tests can give unclear signals when they are flaky, but when they are engineered well they will give a much clearer signal that things work when they pass and that something is broken when they fail.

It's harder to engineer a good integration test; this includes making tests isolated and independent of, e.g., test ordering or indeed anything else.


For what it's worth, the conclusion that I've come to with respect to tests is:

If a team has simple code, then tests can help a lot. However, if a team does not have simple code (and usually they don't) then it's better to spend time simplifying the code than writing tests.


I think a lot depends on what you consider a "unit". If you use a leaf function as your unit, I think that's often much too low level, but larger modules with relatively stable interfaces can make good units that are productive to test.
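For example (a sketch with a hypothetical `Cart` module): a unit test written against the module's stable public interface leaves the internals free to change under refactoring:

```python
class Cart:
    """Hypothetical module with a stable public interface: add() and total()."""

    def __init__(self):
        self._items = {}  # internal detail: name -> (price, qty)

    def add(self, name, price, qty=1):
        _, old_qty = self._items.get(name, (price, 0))
        self._items[name] = (price, old_qty + qty)

    def total(self):
        # The internal dict could become a list of line items without
        # breaking any test written against add()/total().
        return sum(price * qty for price, qty in self._items.values())

# Unit test at the module boundary, not against _items or any helper:
def test_cart_total():
    cart = Cart()
    cart.add("book", 10.0)
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    assert cart.total() == 22.5

test_cart_total()
```

Tests pinned to the leaf functions (here, the dict bookkeeping) would break on every internal restructuring; tests pinned to the interface only break when the observable behavior does.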



