Make Tests Fail
This is about a simple testing technique that is probably obvious, but I’ll share it anyway.
If you are not following TDD, then when writing tests, make them fail, in order to be sure you are testing the right thing. You can make them fail either by changing some preconditions (the “given” or “when” parts, if you like), or by changing something small in the code. After you make them fail, you revert the failing change and don’t commit it.
Let me try to give an example of why this matters.
Suppose you want to test that a service triggers some calculation only if a certain set of rules is in place.
(using Mockito to mock dependencies and verify whether they are invoked)
@Test
public void testTriggeringFoo() {
    Foo foo = mock(Foo.class);
    StubConfiguration config = new StubConfiguration();
    config.enableFoo();
    Service service = new Service(foo, config);
    service.processOptionallyTriggeringFoo();
    verify(foo).calculate(); // verify the foo calculation is invoked
}
That test passes, and you are happy. But it must fail if you do not call enableFoo(). Comment that line out and run it again – if it passes again, there’s something wrong and you should investigate.
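Concretely, a sketch of that temporary change (to be reverted, never committed) might look like this; the only difference is the commented-out line:

@Test
public void testTriggeringFoo() {
    Foo foo = mock(Foo.class);
    StubConfiguration config = new StubConfiguration();
    //config.enableFoo(); // temporarily commented out to force a failure
    Service service = new Service(foo, config);
    service.processOptionallyTriggeringFoo();
    verify(foo).calculate(); // should now fail with Mockito's "Wanted but not invoked" error
}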
The obvious question here is – shouldn’t you have a negative test case instead (i.e. one that tests the opposite behaviour – that if you don’t enable foo, calculate() is not called)? Sometimes, yes. But sometimes it’s not worth having the negative test case. And sometimes it’s not about the functionality that you are testing.
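If you do decide the negative case is worth having, a minimal sketch using Mockito’s never() verification mode (the test name is made up for illustration) could look like this:

@Test
public void testNotTriggeringFooWhenDisabled() {
    Foo foo = mock(Foo.class);
    StubConfiguration config = new StubConfiguration(); // enableFoo() deliberately not called
    Service service = new Service(foo, config);
    service.processOptionallyTriggeringFoo();
    verify(foo, never()).calculate(); // never() is statically imported from org.mockito.Mockito
}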
Even if your code is working, your mocks and stubs might not be implemented correctly, and you may think you are testing something that you aren’t actually testing. That’s why making a test fail while writing it is not about the code you are testing – it’s about your test code. In the above example, if StubConfiguration is ignoring enableFoo(), but has it set to true by default, then the test won’t fail. But in that case the test is not useful at all – it always passes. And when you refactor your code later, and the condition is no longer met, your test won’t indicate that.
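To illustrate, a hypothetical broken stub like the one below (the isFooEnabled() getter is an assumption about what the service checks) would make the original test pass no matter what:

// hypothetical buggy stub: enableFoo() is silently ignored and the flag defaults to true
public class StubConfiguration {
    private boolean fooEnabled = true; // should have defaulted to false

    public void enableFoo() {
        // bug: forgot to set fooEnabled = true
    }

    public boolean isFooEnabled() {
        return fooEnabled; // always true, so the service always triggers the calculation
    }
}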
So, make sure your test and test infrastructure are actually testing the code the way you intend them to, by making the test fail.
Good point! At the previous company I worked at, we had to do a short demo for QA before committing, and after showing them that all our tests passed, they usually started making random changes to the tests and to our code to make sure we hadn’t written tests that were always passing. It was surprising how often they found always-passing tests…
Congratulations, you just reinvented code coverage.
Well, yes and no. We have 91% code coverage, and still the other day I caught an always-passing test by trying to make it fail. The code coverage percentage doesn’t explicitly indicate what you’ve missed.
@Foo This is not the same as code coverage, which is a placebo. Running code doesn’t mean testing it. You could have a bug in your test.
Foo’s dismissive comment is interesting in that code coverage of a particular kind would achieve this. Normal code coverage ensures that execution can cover all paths. The kind required here would ensure that the test code could cover all paths (pass and fail), given all possible buggy (and correct) implementations of the code under test.