Dan North’s JGoTesting library

Dan North, who is a pioneer in the field of Behavior-Driven Development (or rather “Behaviour-Driven” – he’s British) and a very entertaining public speaker on agile/extreme development practices, has created a new Java library which brings some ideas from the Go language to JUnit 4. His library JGoTesting is in the early stages, and there are some aspects which only seem useful for retrofitting existing tests. There’s also some competition from JUnit 5 and AssertJ for accomplishing some of the same goals, but it’s worth trying out.

The major goal of JGoTesting is to allow a single test to log multiple failures. In other words, the test goes on after the first failure, and at the end the test fails with a list of errors. This is also possible with AssertJ SoftAssertions, and it’s standard functionality in JUnit 5 with assertAll(). One nice bonus in JGoTesting is that you can also log messages which are ignored if the test succeeds. So for integration tests or tests involving randomly generated values, you can get extra information about the cause of a failure without log pollution in the tests which succeed.
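To make that behavior concrete, here is a minimal sketch in plain Java – no library, and the class and method names are mine – of the collect-failures-then-fail-once idea that JGoTesting, SoftAssertions and assertAll() each implement in their own way:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of multiple-failure reporting: checks record failures instead of
// throwing immediately, log messages are kept aside, and the test only
// fails at the end, listing every problem (and every log) at once.
public class FailureCollector {
    private final List<String> failures = new ArrayList<>();
    private final List<String> logs = new ArrayList<>();

    // Record a failed check, but keep going
    public void checkTrue(boolean condition, String message) {
        if (!condition) {
            failures.add(message);
        }
    }

    // Messages which are only reported if the test ends up failing
    public void log(String message) {
        logs.add(message);
    }

    // Called at the end of the test: throw once, listing everything
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError("Failures: " + failures + " Logs: " + logs);
        }
    }
}
```

A test using this sketch would call checkTrue() several times, log() any diagnostic context, and finish with assertAll(); if everything passed, the logged messages simply never appear.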

JGoTesting offers a testing DSL which distinguishes between failed checks and more serious errors. Syntactically, validations and logging are called on the single rule object for the test class. Calls can be chained together, and they are lambda-friendly for tests in Java 8 (JGoTesting itself is compiled for Java 7, for wider compatibility). JGoTesting also offers static delegate calls to replace calls to JUnit 4 assertions. These only seem useful for converting existing tests. As someone who auto-generates non-static mixin delegate calls for such APIs (TODO : LINK to tdd-mixins-junit4), I can’t really be a fan of replacing one static call with another, especially when there’s already a ready-made object with all the necessary semantics in non-static methods.

I took JGoTesting for a test drive (code available on github) to see what it can do and to see if it plays well with other APIs – I tried it with Zohhak, AssertJ and Mockito (the latter two via my tdd-mixins-junit4 library). Because JGoTesting uses a rule instead of a runner, there’s no conflict between it and Zohhak’s runner.

I discovered that JGoTest can be used in conjunction with Mockito verify() and with AssertJ assertions and SoftAssertions. JGoTest failed checks and logs which are executed before the fatal failure (whether it’s a Mockito failure, an AssertJ failure or a JGoTest error) do indeed appear in the failure logs, whether the test is run from Maven or from Eclipse. Maven gets a bit confused about the number of actual tests when there are JGoTest check failures.

My Maven output is available online if you don’t have time to run the test yourself, and here are some screen captures of results when run from Eclipse:




I don’t write a lot of tests which need a tool like JGoTest. I prefer tests which are focused on one case. For tests like that, there’s no need to log extra information or multiple failures – knowing the name of the test and the fact that it failed is enough to detect and quickly resolve the problem. For tests which are more integrated (with longer scenarios), or which test a number of possible parameters in a loop (as opposed to Zohhak-style parameterized tests), JGoTest’s log feature could be helpful to indicate where the test went wrong. As far as checking multiple failures in a test goes, while I like the simple syntax of JGoTest, I prefer the output generated by AssertJ SoftAssertions. These are very early days for JGoTest, so I will keep an eye on it to see how it improves over time.


Java build tools: Ant, Maven or Gradle?

When you create a Java project (for a library, an application or a web site), you could just rely on your IDE to build things for you, but in most cases you really need a build script. For a build script, you basically have 3 choices: Ant, Maven or Gradle.

Now that I’ve used all three of these tools, I thought I’d share some thoughts about how to choose between them. This is not an article about how to migrate from one to another – that’s usually not worth the pain if your build is complicated, and it’s fairly trivial if your build is very simple.

In the context of a new project, let’s take a look at each of the choices:


Ant is the old-school tool. Your build script is basically a list of configured commands in XML, organized as a set of interdependent tasks. It’s easy to learn, and relatively easy to debug. It’s pretty easy to extend with custom tags implemented in Java (I’ve created one myself). Ant does not natively include dependency management (loading external jar files), but that can be added with Apache Ivy.

Strong points:

  • easy to learn
  • easy to understand
  • excellent integration in IDE’s
  • lots of examples and help online

Weak points:

  • procedural (build scripts can get very complicated)
  • inelegant
  • For dependency management, Ivy is less intuitive and requires extra configuration and tooling
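To recall the flavor: an Ant script is a task graph wired up with `depends` attributes. A minimal, hypothetical build.xml (project name and paths are mine) with two interdependent targets might look like this:

```xml
<project name="demo" default="jar" basedir=".">
	<!-- each target is a list of commands; "depends" wires up the task graph -->
	<target name="compile">
		<mkdir dir="build/classes"/>
		<javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
	</target>
	<target name="jar" depends="compile">
		<jar destfile="build/demo.jar" basedir="build/classes"/>
	</target>
</project>
```

Running `ant` executes the default target, which pulls in its dependencies first – simple to follow, but it’s easy to see how a large graph of targets becomes hard to manage.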


Maven is the opinionated tool which forces a project structure and build life cycle on you, and it’s also the tool that introduced built-in dependency management and a central repository for dependencies. Like Ant, it’s based on XML. Some people hate Maven for making them do things Maven’s way, but the benefit is simplicity. Unlike Ant and Gradle scripts, two Maven scripts which do roughly the same thing are very similar. I have not tried to extend its functionality, but Maven has a plugin architecture, and a lot of useful plugins are available. Maven is also well-integrated in a number of popular tools.

Strong points:

  • generally very little thought or debugging required to get a typical build working
  • strong integration with various CI and IDE tools
  • simple dependency management
  • lots of examples and explanations online

Weak points:

  • lots of pain if you don’t organize your project the Maven way
  • inelegant


A relative newcomer, Gradle offers an elegant Groovy-based DSL to run your build. Like Maven, Gradle has built-in dependency management and makes some assumptions about project structure, but like Ant, its execution is based on tasks. Unlike both Ant and Maven, Gradle has an elegant syntax which allows you to essentially write your build code in Groovy. Unlike in Maven, the assumptions about project structure are conventions rather than rules. I have to say, though, that I recently encountered an official Gradle plugin which takes some of those assumptions as hard rules, and I had to abandon the plugin and use another tool. Hopefully this situation will improve over time. One thing I really like about Gradle is its dependency management syntax, which allows much simpler find-and-replace for version upgrades than Maven’s XML tags: the library name and version are all together on one line.
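For comparison, here is the same hypothetical JUnit test dependency declared both ways – the Gradle version keeps the group, name and version together on a single line:

```groovy
// Gradle (Gradle 2.x-era syntax)
dependencies {
    testCompile 'junit:junit:4.12'
}
```

```xml
<!-- The Maven equivalent -->
<dependency>
	<groupId>junit</groupId>
	<artifactId>junit</artifactId>
	<version>4.12</version>
	<scope>test</scope>
</dependency>
```

Upgrading the version in the Gradle script is a one-line find-and-replace; in Maven the version lives in its own tag (or in a property elsewhere in the pom).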

Strong points:

  • elegance (i.e. lack of XML tags)
  • flexibility
  • simple dependency management
  • concision

Weak points:

  • immature tooling and immature integration with popular tools (CI, IDE)
  • limited examples and help online
  • steep learning curve (especially for Groovy noobs)
  • inflexible plugins


All three of these tools are good tools which can get the job done if you know what you’re doing. Personally, for a new Java project, I’d choose Maven for basic building of jar and war files. It’s what I did choose, even for my open-source Ant plugin. I use Ant as well, for non-build tasks (mass file copying with text replacement, for example) and tasks around the builds (for example, to update versions of dependency libraries and to rebuild the delegate interfaces using interface-it ant). Gradle is a very elegant and promising tool which will surely resolve some of its weak points over time. If you’re comfortable with Groovy, it might be the right choice for you, but only if you stick to its recommended multi-project structure and don’t lean too heavily on your IDE.

What are you building?

A project, a product, or a brand?

Actually, luck has played a big role in our industry, and sometimes you don’t know what you’re building until later. I could have called this post “What’s in a name?” or “Why I have to work around MySQL performance bugs when there’s at least one technically superior and totally free database we could have used instead.”

This is a story of well-meaning people who made what seemed like reasonable choices at the time, some of which have caused mild suffering. They say the road to hell is paved with good intentions.

So what’s in a name? In the 1980s, if you’re a geeky professor creating a project to replace the “Ingres” database project, “Postgres” sounds like a pretty clever name. If SQL then becomes a popular standard which your database supports, changing “Postgres” to “PostgreSQL” also seems pretty sensible. There’s continuity. The handful of students who used it at the time were not confused at all. But even for them, it might have been better to choose the name of a painter more recent than Ingres (DegaSQL, MatisSQL, PicasSQL, KandinSQL, MonetSQL, RenoirDB, SeuratSQL, DaliBase?).

Then there’s MySQL. Sigh. Maybe we’re all shallow, narcissistic people at heart, or maybe we all just follow the orders of people who are, but “MySQL” is a near-perfect brand name because it’s all about “me”, and it’s “mine”. No need to explain what’s up with the name. Sold. Add to that a focus on the needs of ISPs and Linux users, and it becomes second only to Oracle in popularity. PostgreSQL is in 4th place now, but the usage scores drop off a cliff after 3rd place (which is Microsoft SQL Server).

It turns out, though, that the brilliant name choice of “MySQL” was just as random as the awful name chosen for its competitor. “Monty” Widenius named MySQL after his first daughter, My. By the way, the more-actively-maintained-and-improved fork of MySQL, languishing in 20th place in the usage rankings, is MariaDB, named after his second daughter. If he’d had only one daughter, named Ingrid instead of My, the fork might have been called “PostgridSQL”.

So, do I have a point? Just that you might want to think when you create something about who it’s for and to think, when naming or re-branding it, about how people who are not currently “in the loop” will perceive it. Also, if your technology is really solid and well-designed, please be lucky.

For info, recent database usage rankings from db-engines.com:

1. Oracle Relational DBMS 1449.25
2. MySQL  Relational DBMS 1370.13
3. Microsoft SQL Server Relational DBMS 1165.81
4. MongoDB  Document store 314.62
5. PostgreSQL Relational DBMS 306.60
6. DB2 Relational DBMS 188.57
7. Cassandra  Wide column store 131.12
8. Microsoft Access Relational DBMS 126.22
9. SQLite Relational DBMS 106.78
10. Redis  Key-value store 104.49
11. Elasticsearch  Search engine 87.41
…
20. MariaDB Relational DBMS 34.66


Improved matching error messages in Extended-Mockito

I’ve recently made some improvements to Extended-Mockito in the area of failure messages.

In early versions, messages from a failure to match in a verify() call were a bit cryptic, especially when matching based on lambdas. This is what you used to get:

Wanted but not invoked:
    <ExtendedMatchers$$Lambda$6/150268540>,
    <ExtendedMatchers$$Lambda$8/361571968>,
    <custom argument matcher>,
    <ExtendedMatchers$$Lambda$11/210506412>

In the first of my examples with the new improvements (available starting in version 2.0.78-beta.1 of extended-mockito and transitively in version 0.9.0 of tdd-mixins-core and tdd-mixins-junit4) it’s now possible to show more clearly what kind of arguments were expected:

Wanted but not invoked:
{String containing all of: [Butcher,Baker,Candlestick Maker]},
[All items matching the given Predicate],
[One or more items matching the given Predicate],
SomeBean where val1 > 5

For info, the expectation call which gives this failure message is:

verify(expecting).doAThingWithSomeParameters(this.containsAllOf("Butcher", "Baker", "Candlestick Maker"),
				allSetItemsMatch(s -> s.startsWith("A")),
				oneOrMoreListItemsMatch(s -> s.startsWith("B")),
				objectMatches((SomeBean o) -> o.getVal1() > 5, "SomeBean where val1 > 5"));

The TestNest Pattern

I discovered the TestNest pattern – the idea of using a suite of nested test classes in order to create a hierarchical organization of tests – in this blog post by Robert C. Martin, but apart from one other article I haven’t found a lot of information online about this technique. So I thought I’d try it out in a sort of hello-world to get a feel for it. As a bonus, I added in some tests using the excellent Zohhak test library, which automatically generates tests for sets of values via an annotation.

My test class can be found here.

Here are some of the highlights…

The outer class declaration with its JUnit Suite annotations:


@RunWith(Suite.class)
@SuiteClasses({ MyExampleClassTest.Method1.class, MyExampleClassTest.Method2.class,
		MyExampleClassTest.Calculation1.class, MyExampleClassTest.Calculation2.class })
public class MyExampleClassTest {

One of the nested classes in MyExampleClassTest is called SharedState. It contains the object under test (called underTest) and is the parent of all the other nested test classes (and it also uses a mixin from tdd-mixins-junit4 which gives all its subclasses the ability to call assertions non-statically).

There are 4 nested classes which contain tests – one for each method in the class under test. This seemed like a logical organization, though it’s not what Uncle Bob did in his aforementioned blog post. It might make sense to further subdivide between happy path tests and weird corner case tests (what Uncle Bob called “DegenerateTests” in his example), and there may be better ways to divide tests than along class and method lines (though I would hope that my classes and methods under test are cohesive units of organization).

Using the Suite runner in the parent class doesn’t prevent you from using other runners in the nested classes. Two of the nested classes use the Zohhak test runner.


I intentionally put 2 failures in the tests (one fail() in a normal test and one error in a Zohhak value set) to see what the failures would look like.

Here’s part of what Maven had to say about these failures:

Failed tests: should_fail(org.example.MyExampleClassTest$Method2): intentional failure to illustrate nested test failure messages

should_calculate_cube [-1, 1](org.example.MyExampleClassTest$Calculation2): expected:<[]1> but was:<[-]1>

Here’s what I see in Eclipse’s JUnit sub-window:

Eclipse JUnit run results showing 2 nested errors

This seems like a minor improvement over long, descriptive test method names for quickly getting a feel for where the problem is, especially when there are multiple simultaneous failures. I didn’t have to put the name of the method under test in the test method name, and in the Eclipse JUnit runner UI the tests for each method are nicely grouped together. Zohhak works well with this approach, too (and seems like a pleasure to use in general for testing calculation results).


Does TDD “damage” your design?

I recently came across a couple articles that challenged some of my beliefs about best practices.

In this article, Simon Brown makes the case for components tightly coupling a service with its data access implementation and for testing each component as a unit rather than testing the service with mocked-out data access. Brown also cites David Heinemeier Hansson, the creator of Rails, who has written a couple of incendiary articles discouraging isolated tests and even TDD in general. Heinemeier Hansson goes so far as to suggest that TDD results in “code that is warped out of shape solely to accommodate testing objectives.” Ouch.

These are thought-provoking articles written by smart, accomplished engineers, but I disagree with them.

For those unfamiliar with the (volatile and sometimes confusing and controversial) terminology, isolated tests are tests which mock out the dependencies of the unit under test. This is done both for performance reasons (which Heinemeier Hansson calls into question) and for focus on the unit (if a service calls the database and the test fails, is the problem in the service, the SQL, the database tables, or the network connection?). There’s also the question of the difficulty of setting up and maintaining tests with database dependencies. There are tools for that, but there’s a learning curve and some set-up required (which hopefully can be Dockerized to make life easier). And there’s one more very important reason which I’ll get to later…

Both Brown and Heinemeier Hansson argue against adding what they consider unnecessary layers of indirection. If your design is test-driven, the need for unit tests will nudge you to de-couple things that Brown and Heinemeier Hansson think should remain coupled. The real dilemma is: where should we put the inevitable complexity in any design? As an extreme example, to avoid all sorts of “unnecessary” code you could just put all your business logic into stored procedures in the database.

“Gang of Four” member Ralph Johnson described a paradox:

There is no theoretical reason that anything is hard to change about software. If you pick any one aspect of software then you can make it easy to change, but we don’t know how to make everything easy to change. Making something easy to change makes the overall system a little more complex, and making everything easy to change makes the entire system very complex. Complexity is what makes software hard to change. That, and duplication.

TDD, especially the “mockist” variety, nudges us to add layers of indirection to separate responsibilities cleanly. Johnson seems to be implying that doing this systematically can add unnecessary complexity to the system, making it harder to change, paradoxically undermining one of TDD’s goals.

I do not think that lots of loose coupling makes things harder to change. It does increase the number of interfaces, but it makes it easier to swap out implementations or to limit behavior changes to a single class.
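A tiny sketch (the names are mine) of what that buys you: the service knows only an interface, so a test can hand it a fake implementation and production wiring can hand it a real one, without the service changing at all:

```java
// The service depends on an interface, not a concrete storage class,
// so implementations can be swapped without touching the service.
interface Storage {
    String find(String key);
}

// A trivial in-memory implementation, usable as a test double
class InMemoryStorage implements Storage {
    private final java.util.Map<String, String> data = new java.util.HashMap<>();

    InMemoryStorage put(String key, String value) {
        data.put(key, value);
        return this;
    }

    public String find(String key) {
        return data.get(key);
    }
}

class GreetingService {
    private final Storage storage; // only the interface is visible here

    GreetingService(Storage storage) {
        this.storage = storage;
    }

    String greet(String userId) {
        String name = storage.find(userId);
        return name == null ? "Hello, stranger" : "Hello, " + name;
    }
}
```

A database-backed Storage implementation could replace InMemoryStorage in production wiring, and a behavior change in lookups stays confined to the one class that implements it.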

And what about the complexity of the test code? Brown and Heinemeier Hansson seem to act as if reducing the complexity of the test code does not matter – or rather, as if you don’t need to write tests for code that’s hard to test, because you should just expand the scope of the tests to do verification at the level of whole components.

Here’s where I get back to that other important reason why “isolated” tests are necessary: math. J.B. Rainsberger simply destroys arguments of the kind that Brown and Heinemeier Hansson make, with their emphasis on component-level tests. He points out that there’s an explosive multiplicative effect on the number of tests needed when you test classes in combination. For an oversimplified example, if your service class has 10 execution paths and its calls to your storage class have 10 execution paths on average, then testing them together as a component, you may need to write as many as 100 tests to get full coverage of the component. Testing them as separate units, you only need 20 tests to get the same coverage. Imagine your component has 10 interdependent classes like that… Do you have the developer bandwidth to write all those tests? If you do write them all, how easy is it to change something in your component? How many of those tests will break if you make one simple change?
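Rainsberger’s arithmetic is easy to check for yourself: combined paths multiply, isolated paths add. A trivial illustration:

```java
// Combinatorial cost of testing classes together vs. separately:
// full coverage of a combination needs the PRODUCT of the path counts,
// full coverage of isolated units only needs their SUM.
public class TestCountMath {
    static long componentTests(int... pathCountPerClass) {
        long product = 1;
        for (int paths : pathCountPerClass) {
            product *= paths;
        }
        return product;
    }

    static long isolatedTests(int... pathCountPerClass) {
        long sum = 0;
        for (int paths : pathCountPerClass) {
            sum += paths;
        }
        return sum;
    }

    public static void main(String[] args) {
        // service + storage, 10 paths each
        System.out.println(componentTests(10, 10)); // 100
        System.out.println(isolatedTests(10, 10));  // 20
        // ten interdependent classes with 10 paths each
        System.out.println(componentTests(10, 10, 10, 10, 10, 10, 10, 10, 10, 10)); // 10000000000
        System.out.println(isolatedTests(10, 10, 10, 10, 10, 10, 10, 10, 10, 10));  // 100
    }
}
```

With ten collaborating classes, the component-level count is ten billion paths versus a hundred isolated unit tests – which is the whole argument in two numbers.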

So I reject the idea that TDD “damages” the design. If you think TDD would damage your design, maybe you just don’t know how bad your design is, because most of your code is not really tested.

As for Heinemeier Hansson’s contention that it’s outdated thinking to isolate tests from database access, he may be right about performance issues (not everyone has an expensive development machine with a fancy SSD drive, but there should be a way to run a modest number of database tests quickly). If a class’s single responsibility is closely linked to the database, I’m in favor of unit-testing it against a real database, but any other test that hits a real database should be considered an integration test. Brown proposes a re-shaped, “architecturally-aligned” testing pyramid with fewer unit tests and more integrated component tests. Because of the aforementioned combinatorial effect of coupling the classes in a component, that approach would seem to require either writing (and frequently running) a lot more tests, or releasing components which are not exhaustively tested.

Are you sure you’ve sanitized your inputs?

This boggles the mind. Using an alphabet of just 6 non-alphanumeric characters, anyone can write any JavaScript code. The problem of how to allow some friendly JavaScript code while blocking anything unfriendly might be a subject worthy of computer science research.

In the meantime, eBay (and others) really should do something to reduce this vulnerability. I have a quick-and-dirty solution in Java based on detecting significantly long runs of the 6 characters in question. The weakness of the attack in question is, of course, that you need a lot of characters to do anything evil in the obfuscated JavaScript, so there should be long runs containing only the 6 characters. It’s possible to include spaces, line breaks and even comments to break up the runs – I took this into account in my solution. I chose 10 as the run-length threshold for detecting the obfuscation, because I don’t know of anything legitimate you can do in JavaScript using 10 of these characters in a row that you couldn’t do another way using some alphabetic characters, and if I saw code with 10 of those characters in a row, I would suspect it right away.

Here’s some of the code in my solution. First, the implementation of containsSneakyJavascriptCode:

public static boolean containsSneakyJavascriptCode(final String userInput) {
	SneakyJSDetectionContext ctx = new SneakyJSDetectionContext(userInput);
	while (ctx.notDone()) {
		ctx.processCurrentChar();
		ctx.advance(); // move on to the next character (method name reconstructed)
	}
	return ctx.detectedSneakyJS();
}
That’s code at a pretty high level of abstraction, so here’s more detail with the implementation of the processCurrentChar() call that you see in the code above. It ignores whitespace and characters inside comments and otherwise checks whether the current character adds to or ends the current run of suspect characters and whether it starts a comment:

void processCurrentChar() {
	// helper bodies below are reconstructed from the description above
	if (insideAComment()) {
		checkForEndOfComment();
	} else if (isNotWhiteSpace()) {
		if (isInSneakyAlphabet(currentChar())) {
			incrementCurrentRunLength();
		} else {
			if (isStartOfComment()) {
				startComment();
			} else {
				endCurrentRun();
			}
		}
	}
}
The full implementation code is here, and for good measure here are the unit tests for it.
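If you just want to play with the idea, here is a self-contained sketch of the same run-length heuristic – comment handling omitted, and the class and method names are mine, not the ones from the linked implementation:

```java
// Minimal sketch of the run-length heuristic: flag input containing a run
// of 10 or more characters drawn only from the JSFuck-style alphabet
// []()!+ where whitespace does not break the run. (The full version
// linked above also skips comments.)
public class SneakyJsSketch {
    private static final String SNEAKY_ALPHABET = "[]()!+";
    private static final int RUN_THRESHOLD = 10;

    public static boolean looksObfuscated(String input) {
        int runLength = 0;
        for (char c : input.toCharArray()) {
            if (Character.isWhitespace(c)) {
                continue; // whitespace neither extends nor breaks a run
            }
            if (SNEAKY_ALPHABET.indexOf(c) >= 0) {
                runLength++;
                if (runLength >= RUN_THRESHOLD) {
                    return true;
                }
            } else {
                runLength = 0; // any ordinary character breaks the run
            }
        }
        return false;
    }
}
```

Ordinary code that merely uses brackets and operators never builds a long enough run, while obfuscated code can’t avoid doing so.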

You’re welcome, eBay.

Some Thoughts About the Big Trial

I’ve been following with interest the second trial between Oracle and Google, thanks to the excellent coverage on Ars Technica by Joe Mullin. I am not a lawyer, but I have seen some apocalyptic hyperbole about this trial which seems misplaced. And I’m not sure to what extent Google’s victory in this round is a win for the Java programming community, as Google has announced.

Intellectual property is part of software development, whether we like it or not (and we humans tend to like the idea of property a lot more when we are its owners), no matter what the ultimate outcome of this trial (Oracle has vowed to appeal, so it’s not really over). It can be a pain to have to deal with copyright for what seems like a minor part of one’s creation – just ask Men at Work – but it also allows people who create things to make money. And just because Google has thus far been able to resist a legal assault based on copyright doesn’t mean your little startup will.

As software developers in the world of commercial software, we are for the most part paid by companies which tell their investors that they are investing in intellectual property. Without that investment, our field might not exist outside of some obscure corners of universities.

There is, of course, a trade-off between freedom from restrictions, which attracts development activity, and restrictions, which attract investment. In our industry, it seems like a majority of companies end up failing, and so developer salaries have probably come more from investment than from consumer purchases or ad revenues.

Sun Microsystems tried to have it both ways, touting Java as free to use while assuring investors, including ultimately Oracle, that they had intellectual property protections. This is the gray area where Android was created. Oracle’s lawyers thought they had a smoking gun in their case with Google’s internal email from 2010 (PDF file), but the issues around open source and licensing are complex, and this jury didn’t see things the way Oracle’s lawyers do. In 2006, Sun announced Java as being free and open – as summarized by the Free Software Foundation. Note that the announcement by Sun linked to in the FSF article is no longer functional on Oracle’s web site, but I think I have found it in Oracle’s blog archives.

Android seems to be good for Java usage. Java usage was in decline until late 2013. It has sharply risen since then. The release of Java 8 and the decline of Objective-C after Apple’s switch to Swift probably contributed to this rise, but I think that Android is a big factor.

TIOBE index graph from 2016 showing the decline and recent rise of Java usage