
Dan North’s JGoTesting library

Dan North, who is a pioneer in the field of Behavior-Driven Development (or rather “Behaviour-Driven” – he’s British) and a very entertaining public speaker on agile/extreme development practices, has created a new Java library which brings some ideas from the Go language to JUnit 4. His library JGoTesting is in the early stages, and there are some aspects which only seem useful for retrofitting existing tests. There’s also some competition from JUnit 5 and AssertJ for accomplishing some of the same goals, but it’s worth trying out.

The major goal of JGoTesting is to allow a single test to log multiple failures. In other words, the test continues after the first failure, and at the end it fails with a list of errors. This is also possible with AssertJ SoftAssertions, and it's standard functionality in JUnit 5 with assertAll(). One nice bonus in JGoTesting is that you can also log messages which are ignored if the test succeeds. So for integration tests, or tests involving randomly generated values, you can get extra information about the cause of a failure without polluting the output of tests which pass.
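For comparison, here's what the same multiple-failures idea looks like with AssertJ SoftAssertions; this is a minimal sketch with made-up values, just to show the pattern of collecting failures and reporting them together at the end.

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;

public class SoftAssertionsExampleTest {

    @Test
    public void reportsAllFailuresTogether() {
        // made-up values, just for illustration
        String name = "Jon";
        int age = 17;

        SoftAssertions softly = new SoftAssertions();
        softly.assertThat(name).isEqualTo("Dan");  // fails, but execution continues
        softly.assertThat(age).isGreaterThan(18);  // also fails
        softly.assertAll();  // the test fails here, reporting both errors together
    }
}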

JGoTesting offers a testing DSL which distinguishes between failed checks and more serious errors. Syntactically, validations and logging are called on a single rule object in the test class. Calls can be chained together, and they are lambda-friendly for tests written in Java 8 (JGoTesting itself is compiled with Java 7 for wider compatibility). JGoTesting also offers static delegate calls as drop-in replacements for JUnit 4 assertions, but these only seem useful for converting existing tests. As someone who auto-generates non-static mixin delegate calls for such APIs (in my tdd-mixins-junit4 library), I can't really be a fan of replacing one static call with another, especially when there's already a ready-made object with all the necessary semantics in non-static methods.
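Here's roughly what a test built around the rule looks like. Fair warning: the rule class name and the exact check()/log() signatures below are my reading of the project's early examples, so treat this as a sketch rather than the definitive API.

import static org.hamcrest.Matchers.greaterThan;

import org.jgotesting.rule.JGoTestRule;
import org.junit.Rule;
import org.junit.Test;

public class JGoTestingSketchTest {

    // the single rule object from which checks and log messages are chained
    @Rule
    public final JGoTestRule test = new JGoTestRule();

    @Test
    public void checksAndLogsSeveralThings() {
        int result = 15; // made-up value under test

        // log() output is only reported if the test ends up failing;
        // each check() records a failure but lets the test keep running
        test.log("result computed as " + result)
            .check("result should be even", result % 2 == 0)
            .check("result should be big enough", result, greaterThan(20));
    }
}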

I took JGoTesting for a test drive (code available on GitHub) to see what it can do and whether it plays well with other APIs: I tried it with Zohhak, AssertJ and Mockito (the latter two via my tdd-mixins-junit4 library). Because JGoTesting uses a rule instead of a runner, there's no conflict between it and Zohhak's runner, as the sketch below shows.
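In this sketch, Zohhak's runner and annotations drive the parameterized test, while the JGoTestRule field (same assumed API as above) collects checks and log messages. The names and values are mine.

import com.googlecode.zohhak.api.TestWith;
import com.googlecode.zohhak.api.runners.ZohhakRunner;

import org.jgotesting.rule.JGoTestRule;
import org.junit.Rule;
import org.junit.runner.RunWith;

// Zohhak supplies the runner; JGoTesting only needs a @Rule, so the two coexist
@RunWith(ZohhakRunner.class)
public class ZohhakWithJGoTestingTest {

    @Rule
    public final JGoTestRule test = new JGoTestRule();

    @TestWith({ "2, 3, 5", "10, 5, 15" })
    public void addsTwoNumbers(int a, int b, int expectedSum) {
        test.log("checking " + a + " + " + b)                  // reported only on failure
            .check("sum should match", a + b == expectedSum);  // failure recorded, test continues
    }
}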

I discovered that JGoTest can be used in conjunction with Mockito verify() and with AssertJ assertions and SoftAssertions. Failed JGoTest checks and log messages which are executed before the failing assertion (whether it's a Mockito failure, an AssertJ failure or a JGoTest failure) do indeed appear in the failure output, whether the test is run from Maven or from Eclipse. Maven does get a bit confused about the number of tests actually run when there are JGoTest check failures.
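The Mockito case looked roughly like the sketch below; the mock and values are mine, and the point is that the log() call made before the failing verify() shows up in the failure output, while a passing run would report nothing extra.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.List;

import org.jgotesting.rule.JGoTestRule;
import org.junit.Rule;
import org.junit.Test;

public class MockitoWithJGoTestingTest {

    @Rule
    public final JGoTestRule test = new JGoTestRule();

    @Test
    @SuppressWarnings("unchecked")
    public void logAppearsWhenVerifyFails() {
        List<String> mockList = mock(List.class);
        mockList.add("actual value");

        // reported only because the verify() below fails
        test.log("about to verify the mock");

        verify(mockList).add("expected value"); // Mockito failure ends the test here
    }
}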

My Maven output is available online if you don’t have time to run the test yourself, and here are some screen captures of results when run from Eclipse:

[Screen capture: Eclipse failure output for the Mockito test, showing the JGoTest log message]

[Screen capture: Eclipse failure output for the soft-assertion test]

Conclusion:

I don't write a lot of tests which need a tool like JGoTest. I prefer tests which are focused on one case. For tests like that, there's no need to log extra information or multiple failures: knowing the name of the test and the fact that it failed is enough to detect and quickly resolve the problem. For tests which are more integrated (with longer scenarios), or which test a number of possible parameters in a loop (as opposed to Zohhak-style parameterized tests), JGoTest's log feature could be helpful for pinpointing where the test went wrong. As far as checking multiple failures in a test goes, I like JGoTest's simple syntax, but I prefer the output generated by AssertJ SoftAssertions. These are very early days for JGoTest, so I will keep an eye on it to see how it improves over time.