
Key Software Factory Tools in the Java World

Here are some popular free and/or freemium tools for creating a software factory for a Java project:

I’ve written about these build tools before.

Here’s a chart showing the relative popularity of these 3 tools.

There are many more unit test facilitators available for free, and I’ve developed some lesser-known open-source ones myself.

  • Integration test facilitators
    • Arquillian – allows you to run a test on a Java EE server (either a remote server or one configured and launched by your test code)
    • OpenEJB – a lightweight EJB framework implementation that can be launched in a test
    • DBUnit – set up and clean up after tests that use real database connections (see the sketch just below this list)
    • Docker – actually, Docker is a facilitator for just about any sort of automation you want to do, but I put it in the integration test category because it allows you to spin up a database or other server from a test.

Note: do not mix Arquillian and OpenEJB together in the same test suite – jar conflicts can ensue when tests are run manually in Eclipse.
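To make the DBUnit entry above more concrete, here is a minimal JUnit 4 sketch of that set-up/clean-up cycle. The H2 driver, the JDBC URL and the employees-dataset.xml file are illustrative assumptions, not part of the list above:

import static org.junit.Assert.assertEquals;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class EmployeeTableDbTest {

    private IDatabaseTester databaseTester;

    @Before
    public void setUp() throws Exception {
        // illustrative in-memory H2 coordinates - point these at your real test database;
        // DBUnit does not create the schema, so the EMPLOYEE table must already exist
        databaseTester = new JdbcDatabaseTester("org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/employees-dataset.xml"));
        databaseTester.setDataSet(dataSet);
        // CLEAN_INSERT empties the tables and reloads the known rows before each test
        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.onSetup();
    }

    @After
    public void tearDown() throws Exception {
        databaseTester.onTearDown();
    }

    @Test
    public void dataSetRowsAreVisibleThroughSql() throws Exception {
        ITable result = databaseTester.getConnection()
                .createQueryTable("employees", "select * from EMPLOYEE");
        assertEquals(2, result.getRowCount()); // the illustrative data set contains two rows
    }
}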

There are surely other tools I’ve left out.  Feel free to mention your favorites in the comments.


Does TDD “damage” your design?

I recently came across a couple of articles that challenged some of my beliefs about best practices.

In this article, Simon Brown makes the case for components tightly coupling a service with its data access implementation and for testing each component as a unit rather than testing the service with mocked-out data access. Brown also cites David Heinemeier Hansson, the creator of Rails, who has written a couple of incendiary articles discouraging isolated tests and even TDD in general. Heinemeier Hansson goes so far as to suggest that TDD results in “code that is warped out of shape solely to accommodate testing objectives.” Ouch.

These are thought-provoking articles written by smart, accomplished engineers, but I disagree with them.

For those unfamiliar with the (volatile and sometimes confusing and controversial) terminology, isolated tests are tests which mock out dependencies of the unit under test. This is done both for performance reasons (which Heinemeier Hansson calls into question) and for focus on the unit (if a service calls the database and the test fails, is the problem in the service or the SQL or the database tables or the network connection?). There’s also a question of the difficulty of setting up and maintaining tests with database dependencies. There are tools for that, but there’s a learning curve and some set-up required (which hopefully can be Dockerized to make life easier). And there’s one more very important reason which I’ll get to later…

Both Brown and Heinemeier Hansson argue against adding what they consider unnecessary layers of indirection. If your design is test-driven, the need for unit tests will nudge you to de-couple things that Brown and Heinemeier Hansson think should remain coupled. The real dilemma is where to put the inevitable complexity in any design. As an extreme example, to avoid all sorts of “unnecessary” code you could just put all your business logic into stored procedures in the database.

“Gang of Four” member Ralph Johnson described a paradox:

There is no theoretical reason that anything is hard to change about software. If you pick any one aspect of software then you can make it easy to change, but we don’t know how to make everything easy to change. Making something easy to change makes the overall system a little more complex, and making everything easy to change makes the entire system very complex. Complexity is what makes software hard to change. That, and duplication.

TDD, especially the “mockist” variety, nudges us to add layers of indirection to separate responsibilities cleanly. Johnson seems to be implying that doing this systematically can add unnecessary complexity to the system, making it harder to change, paradoxically undermining one of TDD’s goals.

I do not think that lots of loose coupling makes things harder to change. It does increase the number of interfaces, but it makes it easier to swap out implementations or to limit behavior changes to a single class.

And what about the complexity of the test code? Brown and Heinemeier Hansson seem to act as if reducing the complexity of the test code does not matter. Or rather, that you don’t need to write tests for code that’s hard to test because you should just expand the scope of the tests to do verification at the level of whole components.

Here’s where I get back to that other important reason why “isolated” tests are necessary: math. J.B. Rainsberger simply destroys the arguments of the kind that Brown and Heinemeier Hansson make and their emphasis on component-level tests. He points out that there’s an explosive multiplicative effect on the number of tests needed when you test classes in combination. For an oversimplified example, if your service class has 10 execution paths and its calls to your storage class have 10 execution paths on average, then testing them together as a component you may need to write as many as 100 tests to get full coverage of the component. Testing them as separate units, you only need 20 tests to get the same coverage. Imagine your component has 10 interdependent classes like that… Do you have the developer bandwidth to write all those tests? If you do write them all, how easy is it to change something in your component? How many of those tests will break if you make one simple change?

So I reject the idea that TDD “damages” the design. If you think TDD would damage your design, maybe you just don’t know how bad your design is, because most of your code is not really tested.

As for Heinemeier Hansson’s contention that it’s outdated thinking to isolate tests from database access, he may be right about performance issues (not everyone has expensive development machines with fancy SSD drives, but there should be a way to run a modest number of database tests quickly). If a class’s single responsibility is closely linked to the database, I’m in favor of unit-testing it against a real database, but any other test that hits a real database should be considered an integration test. Brown proposes a re-shaped, “architecturally-aligned” testing pyramid with fewer unit tests and more integrated component tests. Because of the aforementioned combinatorial effect of coupling classes in the component, that approach would seem to require either writing (and frequently running) a lot more tests or releasing components which are not exhaustively tested.

A Testing Toolbox for Java 8

A few months ago, I presented interface-it, a Java 8 tool to generate mixin interfaces. In case you don’t know what that means or why mixins could be useful to you, I wrote a short article which explains it.

One of the key motivations for the interface-it tool was to be able to generate mixins for the latest versions of unit-testing libraries like Mockito and AssertJ. Now you no longer have to worry about that, because I’m doing it for you. And more.

I now have several more projects to present to you.

Presenting tdd-mixins-junit4

Working backwards – if you want to have a great test fixture by adding only one dependency in your build configuration (your Maven POM, Ivy XML, Gradle build, or just a fat jar added to your classpath), use tdd-mixins-junit4. It gives you all the basics you need to do mocking and assertions with fluidity, simplicity and power.

Normally, that’s all you should need for your tests. Mockito allows you to set up collaborating objects and verify behavior, and the extensions I added make it even easier to handle cases where, for example, you want to mock behavior based on arguments passed to the mock which are generated by your unit under test. As for verifying returned results, AssertJ and JUnit assertions allow you to verify any data returned by the unit under test.
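To illustrate that kind of argument-based mocking with plain Mockito and AssertJ (the NameRepository and GreetingService classes here are invented for the example – they are not part of the libraries):

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.argThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class GreetingServiceTest {

    // hypothetical collaborator and unit under test, defined inline to keep the sketch self-contained
    interface NameRepository { String findName(String normalizedId); }

    static class GreetingService {
        private final NameRepository repository;
        GreetingService(NameRepository repository) { this.repository = repository; }
        String greet(String rawId) {
            // the argument passed to the mock is computed by the unit under test
            return "Hello, " + repository.findName(rawId.trim().toLowerCase());
        }
    }

    @Test
    public void greetsUsingTheNormalizedId() {
        NameRepository repository = mock(NameRepository.class);
        // match on a property of the argument that the unit under test generates
        when(repository.findName(argThat(id -> id.startsWith("user")))).thenReturn("Alice");

        String greeting = new GreetingService(repository).greet("  USER42 ");

        assertThat(greeting).isEqualTo("Hello, Alice");
    }
}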

Presenting tdd-mixins-core

If you do not want to use JUnit 4 (maybe you want to use TestNG or an early version of JUnit 5), then you can use tdd-mixins-core, which has everything that tdd-mixins-junit4 has, except the mixin for JUnit assertions and JUnit itself.

Presenting extended-mockito

So these tdd-mixins libraries notably give you mixins for the aforementioned libraries Mockito and AssertJ. As for Mockito, they use my extended-mockito library, which not only provides mixins for classes like Mockito and BDDMockito, but also provides extra matcher methods to simplify specifying matching arguments for mocked methods. For example:

when(myMockSpamFilter.isSpam(containsOneOrMoreOf("Viagra", "Cialis", "Payday loan")))
            .thenReturn(Boolean.TRUE);

See the project’s home page or the unit tests for more details.

Presenting template-example

As for AssertJ, it’s already quite extended for general-purpose use, so there is no extended-assertj project, but if you want to take things further, I did create a project called template-example, which shows how, with a little tweaking, you can use a Maven plugin to auto-generate custom assertions for your own JavaBeans and combine them with the AssertJ mixin from tdd-mixins-core. These custom assertions allow you to do smooth, fluent assertions for your own data types, allowing this sort of validation call:

assertThat(employee).employer().address().postalCode().startsWith("11");

With these tools, you can more productively write unit tests with powerful assertions and mocking. They give you a fixture that you can set up in any test class by implementing an interface or two – for example:

public class MyTest implements ExtendedMockito, AllAssertions {

Not Included

What’s missing from these tools? I wanted to keep the toolset light, so there are some excellent but more specialized tools which are not included. For example, I have generated a mixin for Jsoup, which is very useful if you need to validate generated HTML, but unless I hear a clamor for it, I will leave it out of tdd-mixins-core because it adds a dependency that lots of people may not need. The same goes for extensions to AssertJ – I generated mixins for AssertJ-DB and for AssertJ-Guava (UPDATE: also added one for Awaitility), but did not include them in tdd-mixins (you can copy and paste the generated mixins’ source files if you want to use them).

Another library which is useful but which does not lend itself to mixins (because it uses annotations rather than static calls) is Zohhak. It simplifies testing methods which return results that depend on a wide variety of possible input values (such as mathematical calculations or business rules).
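For illustration, a Zohhak-based test looks roughly like this – a sketch from memory, so double-check the annotation and package names against the project’s documentation:

import static org.junit.Assert.assertEquals;

import org.junit.runner.RunWith;

import com.googlecode.zohhak.api.TestWith;
import com.googlecode.zohhak.api.runners.ZohhakRunner;

@RunWith(ZohhakRunner.class)
public class PriceCalculatorTest {

    // each string is coerced into the method's parameters: quantity, unit price, expected total
    @TestWith({
        "1, 10, 10",
        "3, 10, 30",
        "0, 10, 0"
    })
    public void multipliesQuantityByUnitPrice(int quantity, int unitPrice, int expectedTotal) {
        assertEquals(expectedTotal, quantity * unitPrice); // stand-in for a real business calculation
    }
}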

Fear of Streams

There’s an article in the new issue of the French magazine Programmez which got my attention. It chronicles the efforts of some programmers for a B2C site to investigate converting their old for loops into more readable and maintainable Java 8 streams. The article is kind of a mess, actually, but to a software craftsman or craftswoman who works on Java legacy code, the message is a bit frightening. The programmers did performance tests of the methods they changed, but because some of the more complex methods they modified took twice as long to execute as they did before, their employer was not convinced and does not want them to proceed with the refactoring.
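For a concrete idea of the kind of conversion in question (an invented example, not one taken from the article), compare a classic for loop with its stream equivalent:

import java.util.List;

public class OrderTotals {

    // classic pre-Java-8 loop
    static double totalAboveThresholdLoop(List<Double> amounts, double threshold) {
        double total = 0.0;
        for (Double amount : amounts) {
            if (amount > threshold) {
                total += amount;
            }
        }
        return total;
    }

    // the equivalent Java 8 stream pipeline
    static double totalAboveThresholdStream(List<Double> amounts, double threshold) {
        return amounts.stream()
                .filter(amount -> amount > threshold)
                .mapToDouble(Double::doubleValue)
                .sum();
    }
}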

If you thought that it was difficult to sell your product owner on the idea of adding days to the schedule to clean up legacy code, imagine having to reveal that the clean-up also involves the risk of performance issues!

Not all of their changes slowed down execution – some even sped it up a bit.  What troubled me about the article, though, is that I could not predict, looking at the examples they gave of their modifications, which ones would run faster and which ones would run slower than before.

I turned to Stack Overflow to see if somebody knows how to do that in general. Is there any way, in a code review, for example, to see whether a stream will run significantly slower (a constant-order slowdown, but enough to annoy a user or kill a refactoring initiative) just by looking at the code?  Of course, it’s possible to code stupidly in a way that will obviously cause a slowdown.  But, apart from some useful guidance about when to use parallel streams and when not to, the message I got from some smart, informed developers is that you can’t, or else it’s not worth the trouble for what amounts to at worst a constant-order slowdown. Of course you should review code changes to make sure they don’t affect the correctness or the complexity of the algorithm, and maybe you can tell whether going parallel would help or hurt performance, but that’s all you can usefully detect.

I think the biggest mistake that the developers in the Programmez article made was to evaluate the performance based on isolated unit tests (microtests).  Performance of a method is not interesting to the end-user.  A  constant-order decrease in performance of a method will frighten a product owner, but maybe the end-user won’t even notice it.  For example, it may be that doubling the execution time of a loop results in a 0.001% decrease in performance of a web request because it’s a loop to process results of a heavy database query.

If the code you work on is for a financial trading application where every nanosecond counts, then you’re not going to be using Java streams at all – you likely have customized collections that run super-fast with no syntactic frills.  For the rest of us, we need to do automated end-to-end performance tests for the scenarios where performance might be an issue.

If you convert for loops to streams, don’t convert everything on the same day. Space out these kinds of changes enough that end-to-end tests will catch only one performance issue at a time.  Then you can reassure the product owner that there will be no “noticeable” decrease in performance from the refactor, but that the code will be more readable and understandable, and thus easier to change and less bug-prone.  For certain iterations which can be parallelized efficiently, there may even be noticeable performance gains.

Legacy Lexicon? Naming different types of tests

At a recent software craftsmanship meetup I attended, there was an hour-long group discussion of the definitions of the terms “unit test”, “integration test” and “acceptance test”. How can it be that, 2 decades after Kent Beck started developing and using the automated test software that has now become xUnit, the very people who are most interested in pursuing best practices in object-oriented software development have doubts or confusion about the meaning of these fundamental terms?

The confusion is, in fact, widespread. Even J.B. Rainsberger, one of the world’s finest experts on test-driven design, ran into problems with this lexical quicksand.

Unit Test

The term “unit test” comes from a paradigm of scope, but there is some disagreement about what that scope is. Some experts, such as Roy Osherove and Michael Feathers, have tried to impose some precision on that notion of scope. Stack Overflow users seem to agree with them. There remains some ambiguity, however, as Martin Fowler has recognized and explained – particularly as a result of the differences between “mockist” and “classicist” testing approaches. Personally, I prefer Fowler’s more inclusive definition of unit tests (tests focused on a unit which assume that collaborators work – whether you mock them out or integrate them).

Integration Test vs. Integrated Test

Then there’s the term “integration test”. There is a paradigm of scope in this name, as well – the word “integration” implies combining multiple “unit” scopes (or at least combining 1 unit scope with a tool or service external to the code, such as a database). But is every test that involves more than 1 unit an integration test? There’s another paradigm involved, which is the notion of testing the integration of units rather than the logic of the units themselves. This is where Rainsberger misstepped. He now rightly makes a distinction between integration tests – tests which validate the integration of different components as well as databases, the file system, etc. – and integrated tests – tests which attempt to do the work of validating units without fully isolating those units (tests which he finds to be mostly counterproductive over time because they are often too heavy to use for thorough testing of the logic of individual units, and they give a false sense of security).

This is a more finessed approach than Osherove’s; he seems to say that any test which is not a “good” unit test by his definition is an integration test.

Acceptance Test

The third notion of scope is that of acceptance tests. The scope is the whole product (end-to-end), limiting the functionality tested to 1 user story.

Boundaries

So we have notions of 3 different categories of tests: unit, integration and acceptance. These 3 categories are not exhaustive. If we adhere to strict definitions and use Rainsberger’s term of “integrated tests” as a fourth category, there are no longer ambiguities between unit and integration tests, because those 2 categories of test serve different purposes (it’s no longer a question of scope). There remains an ambiguity between the notion of unit testing and integrated testing – over the notion of the boundaries of a unit (for example, testing a root aggregate may involve several non-mocked classes, but I would argue that it’s a unit test because the root aggregate and its aggregate elements form a single unit). Since Rainsberger has labeled integrated tests as often harmful (“a scam” in his words), defining this boundary can give a notion of the quality of testing.

As an aside, I find that sometimes a test can start as a unit test and then become an integrated test because of a useful refactoring. Enforcing SRP can have this effect. You have a classicist-style test for a method that eventually does too much. So you delegate part of the method’s implementation to another class.  Do you also need to rewrite the test to mock out the new delegate, even though the test passes as-is? Probably the answer is to keep the original test unchanged and add additional focused unit tests to the delegate. I suppose Osherove would say in this case that either the original test’s definition of a unit was too broad (so for him, it’s an integration test from the start), or else the refactoring merely made the unit span multiple classes (so it remains a unit test).

More Useful Terms?

Are these terms the most useful ones? Maybe not, especially given the confusion they engender.

Rainsberger prefers to promote the idea of collaboration tests and contract tests. Collaboration tests are mockist-style tests where a test verifies the interactions between the method under test and its collaborators (which are mocked or otherwise replaced by test doubles). Contract tests verify that a method, given certain inputs, produces a certain output (or possibly an internal state change that can be verified). Generally, the contract test applies to an interface or abstract class (that which was mocked out in the collaboration tests) and can be applied to any concrete subtype of the abstract type under test. The 2 categories are complementary: you test collaborations to verify high-level services and then you use contract tests to verify in detail the functionality that was mocked out in the collaboration tests. I believe that some contract tests can (and often should) be integrated tests – for example, focused tests that touch a database to verify low-level data model logic (which goes beyond just checking that the database is integrated). This is a pattern which I came to use spontaneously in my exploration of the hexagonal architecture.
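As a minimal sketch of the two styles (my own illustration using Mockito and AssertJ, with a hypothetical SpamFilter abstraction – not an example taken from Rainsberger):

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class CollaborationVsContractExample {

    // hypothetical abstraction shared by both kinds of test
    interface SpamFilter { boolean isSpam(String message); }

    static class Inbox {
        private final SpamFilter filter;
        Inbox(SpamFilter filter) { this.filter = filter; }
        boolean accept(String message) { return !filter.isSpam(message); }
    }

    // collaboration test: verifies the interaction with a mocked SpamFilter
    @Test
    public void inboxRejectsMessagesTheFilterFlags() {
        SpamFilter filter = mock(SpamFilter.class);
        when(filter.isSpam("Buy now!")).thenReturn(true);

        assertThat(new Inbox(filter).accept("Buy now!")).isFalse();
        verify(filter).isSpam("Buy now!");
    }

    // contract test: verifies that a real implementation honors the same expectation
    // (in practice this would live in an abstract test class reused for each implementation)
    @Test
    public void keywordFilterFlagsKnownSpamPhrases() {
        SpamFilter filter = message -> message.contains("Buy now");
        assertThat(filter.isSpam("Buy now!")).isTrue();
    }
}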

Rainsberger also prefers the term microtests over unit tests. The term does seem to give a clearer notion of scope than “unit”.

In Fowler’s article, we see 2 other interesting categorizations.

There’s Jay Fields’s distinction between “solitary” and “sociable” tests. In solitary tests, there are no collaborators which are not replaced by test doubles. Sociable tests generally involve at least one real collaborator.

Fowler also distinguishes between 2 test suites: a “compile” suite (run on every build) and a “commit” suite (run before a commit, or in continuous integration). This is a useful idea for TDD, and the categorization is less ambiguous: if it runs fast, it’s in “compile”, otherwise it’s in “commit”. The only ambiguity is in the speed threshold separating the 2 categories, and that’s more a question of practicality and personal or team preference – there isn’t a “right answer”.
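One way to realize that kind of split with JUnit 4 – a sketch of my own, not something Fowler prescribes – is to tag the slower tests with a category and exclude that category from the fast suite:

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class OrderRepositoryTest {

    // hypothetical marker interface for tests belonging to the slower "commit" suite
    public interface CommitSuite {}

    @Test
    public void fastInMemoryCheck() {
        // no category: runs in the fast "compile" suite on every build
    }

    @Test
    @Category(CommitSuite.class)
    public void slowDatabaseRoundTrip() {
        // tagged: excluded from the fast suite, run before commit or in continuous integration
    }
}

The build tool then runs the untagged tests on every compile and includes the CommitSuite category only in the pre-commit or CI run.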

Catch-all terms

There are also catch-all terms if you’re not sure which type of test you have:

  •  developer tests – this term is obviously about who is responsible for creating and maintaining the test rather than the purpose or scope of the test.
  • automated tests – this term is obviously about how the tests are performed, so it would include developer tests and, for example, Selenium tests created by Q.A.

A picture’s worth 1000 words

Using Martin Fowler’s more inclusive view of “unit tests”:

[Diagram: TestLexiconFowler – test categories under Fowler’s inclusive definition of “unit test”]

The colored area seems to be a source of some confusion – if you have a collaborating or delegate class that is not mocked out, the test can still be considered a unit test. Feathers does not address this area in his definition. Osherove does: “A unit of work can span a single method, a whole class or multiple classes…”  So a unit test can be an integrated test. Whether it should be is a different question, based on notions of good design and the actual execution speed of the test.

London Mockists to the Rescue?

In “Growing Object-Oriented Software, Guided by Tests”, Steve Freeman and Nat Pryce manage, using simple questions, to define the major test scopes without getting hung up on details of things like unit boundaries:

  • Acceptance: Does the whole system work?
  • Integration: Does our code work against code we can’t change?
  • Unit: Do our objects do the right thing, are they convenient to work with?

That seems like enough of a definition to do some useful tests.

Conclusion

In conclusion, don’t lose sleep over these definitions or let them stop you from doing tests.  There exist terms which are clear and precise enough to describe the good practices you need, and we can live with some gray areas where the most commonly used terms are concerned. Just be careful when you’re discussing tests to make sure that everyone in the discussion has the same understanding of the terms.

 

UPDATE:

I just discovered another interesting approach to this topic, by Simon Brown.