Key Software Factory Tools in the Java World

Here are some popular free and/or freemium tools for creating a software factory for a Java project:

  • Build tools
    • Ant
    • Maven
    • Gradle

I’ve written about these build tools before.

Here’s a chart showing the relative popularity of these 3 tools.

There are many more unit test facilitators available for free, and I’ve developed some lesser-known open-source unit test facilitators myself.

  • Integration test facilitators
    • Arquillian – allows you to run a test on a Java EE server (either a remote server or one configured and launched by your test code); see the sketch below
    • OpenEJB – a lightweight EJB framework implementation that can be launched in a test
    • DBUnit – set up and clean up after tests that use real database connections
    • Docker – actually, Docker is a facilitator for just about any sort of automation you want to do, but I put it in the integration test category because it allows you to spin up a database or other server from a test.

Note: do not mix Arquillian and OpenEJB in the same test suite – jar conflicts can ensue when tests are run manually in Eclipse.
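
To give a concrete idea of the Arquillian approach, here’s a minimal sketch of an Arquillian JUnit test. The GreetingService class and its greet() method are hypothetical placeholders; the deployment and injection boilerplate follows the standard Arquillian pattern.

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreetingServiceIT {

    // Arquillian deploys this archive to the configured container before running the tests
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(GreetingService.class) // hypothetical class under test
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private GreetingService greetingService; // injected by the container

    @Test
    public void should_greet_by_name() {
        assertEquals("Hello, world", greetingService.greet("world"));
    }
}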

There are surely other tools I’ve left out.  Feel free to mention your favorites in the comments.


Software factories

In the real world, most of us are not quite there, but it’s now possible, with only free tools, to have a reliable product release process which consists of just developing and committing code. For a software vendor releasing a versioned product, for a site where new features are released gradually or via A/B testing, or in any case where human testers and/or management approval are necessary, there would also have to be the click of a button to approve a release. I’m speaking in broad, hand-waving generality, but such a process, when achieved, can provide a competitive advantage to a product or site by reducing time-to-market for new features and reducing the risk of quality issues (translation: easier to develop faster with fewer bugs).

Of course setting up such a process requires a lot of up-front work, considerable expertise, and occasional maintenance down the road, and such a system will (by design, normally) come to a halt very quickly if it is fed anything other than clean, well-tested code.

This notion has been called a “software factory” or “continuous delivery” or “continuous deployment” system.

I’ll have more to say on this subject soon, but today I will just point you to a couple of interesting diagrams:

“Web Security Basics” on Martin Fowler’s Site

Security should be at the heart of our profession, but I’m not sure it ever has been. Job interview preparation for developer jobs is largely focused on algorithms, API’s and problem-solving. It’s probably happened somewhere, but I’ve never heard of an interviewer asking a candidate for a typical developer job about how to prevent SQL injection or XSS attacks. I’ve never seen a job posting for an ordinary developer job which mentioned OWASP in the list of desired buzzwords.

That may be changing slowly. The value created by software development has attracted the attention of a lot of thieves who would be happy to steal it. The software craftsperson has a duty to produce software which will discourage and repel those thieves, but good learning materials in the field of security have been scarce. The OWASP site has a great deal of info about security best practices, but it’s not organized in a way which makes it easy to learn.

Martin Fowler’s blog is now hosting an evolving document written by two software security experts which gives a lot of good advice which is also well-organized and well-explained. It’s a bit presumptuous for them to call the document “Web Security Basics”, since most web sites do not yet follow all of their recommended practices (such as using only https for all connections), but the article may move a lot of companies to better protect their users’ data and information security. It should be required reading for software craftspeople.

Dan North’s JGoTesting library

Dan North, who is a pioneer in the field of Behavior-Driven Development (or rather “Behaviour-Driven” – he’s British) and a very entertaining public speaker on agile/extreme development practices, has created a new Java library which brings some ideas from the Go language to JUnit 4. His library JGoTesting is in the early stages, and there are some aspects which only seem useful for retrofitting existing tests. There’s also some competition from JUnit 5 and AssertJ for accomplishing some of the same goals, but it’s worth trying out.

The major goal of JGoTesting is to allow a single test to log multiple failures. In other words, the test goes on after the first failure, and at the end the test fails with a list of errors. This is also possible with AssertJ SoftAssertions, and it’s standard functionality in JUnit 5 with assertAll(). One nice bonus in JGoTesting is that you can also log messages which are ignored if the test succeeds. So for integration tests or tests involving randomly generated values, you can get extra info about the cause of a failure without log pollution in the case of tests which succeed.
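
For comparison, here’s roughly what the multiple-failures idea looks like with AssertJ SoftAssertions (the Person class used here is a hypothetical example, not taken from JGoTesting’s documentation):

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;

public class PersonTest {

    @Test
    public void should_report_all_mismatches_at_once() {
        Person person = new Person("Ada", 36); // hypothetical class under test

        SoftAssertions softly = new SoftAssertions();
        softly.assertThat(person.getName()).isEqualTo("Grace"); // recorded, does not stop the test
        softly.assertThat(person.getAge()).isGreaterThan(40);   // recorded, does not stop the test
        softly.assertAll(); // fails here, reporting both mismatches together
    }
}

JUnit 5’s assertAll() groups assertions in a similar way, taking them as a list of lambdas.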

JGoTesting offers a testing DSL which distinguishes between failed checks and more serious errors. Syntactically, validations and logging are called on the single rule object for the test class. Calls can be chained together, and they are basically lambda-friendly for tests in Java 8 (JGoTesting itself is compiled for Java 7 for wider compatibility). JGoTesting also offers static delegate calls to replace calls to JUnit 4 assertions; these only seem useful for converting existing tests. As someone who auto-generates non-static mixin delegate calls for such API’s (TODO : LINK to tdd-mixins-junit4), I can’t really be a fan of replacing one static call with another, especially when there’s already a ready-made object with all the necessary semantics in non-static methods.

I took JGoTesting for a test drive (code available on github) to see what it can do and to see if it plays well with other API’s – I tried it with Zohhak, AssertJ and Mockito (the latter two via my tdd-mixins-junit4 library). Because JGoTesting uses a rule instead of a runner, there’s no conflict between it and Zohhak’s runner.

I discovered that JGoTest can be used in conjunction with Mockito verify() and with AssertJ assertions and SoftAssertions. JGoTest failed checks and logs which are executed before the failing error (whether it’s a Mockito failure, an AssertJ failure or a JGoTest failure) do indeed appear in the failure logs, whether the test is run from Maven or from Eclipse. Maven gets a bit confused about the number of actual tests when there are JGoTest check failures.

My Maven output is available online if you don’t have time to run the test yourself, and here are some screen captures of results when run from Eclipse:

[Screen captures: jgomockitowithlog, jgosoft1]

Conclusion:

I don’t write a lot of tests which need a tool like JGoTest. I prefer tests which are focused on one case. For tests like that, there’s no need to log extra information or multiple failures – knowing the name of the test and the fact that it failed is enough to detect and quickly resolve the problem. For tests which are more integrated (with longer scenarios), or which test a number of possible parameters in a loop (as opposed to Zohhak-style parameterized tests), JGoTest’s log feature could be helpful to indicate where the test went wrong. As far as checking multiple failures in a test goes, while I like the simple syntax of JGoTest, I prefer the output generated by AssertJ SoftAssertions. These are very early days for JGoTest, so I will keep an eye on it to see how it improves over time.

Java build tools: Ant, Maven or Gradle?

When you create a Java project (for a library, an application or a web site), while you could just rely on your IDE to build things for you, in most cases you really need a build script. For a build script, you basically have 3 choices: Ant, Maven or Gradle.

Now that I’ve used all three of these tools, I thought I’d share some thoughts about how to choose between them. This is not an article about how to migrate from one of them to another – that’s usually not worth the pain if your build is complicated, and it’s fairly trivial if your build is very simple.

In the context of a new project, let’s take a look at each of the choices:

Ant

Ant is the old-school tool. Your build script is basically a list of configured commands in XML, organized as a set of interdependent tasks. It’s easy to learn, and relatively easy to debug. It’s pretty easy to extend with custom tags implemented in Java (I’ve created one myself). Ant does not naturally include dependency management (loading external jar files), but that can be added with Apache Ivy.
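
To make the Ant style concrete, here’s a minimal sketch of a build.xml; the target names and directory layout are just an assumed convention, not something Ant imposes:

<project name="my-app" default="jar" basedir=".">
    <property name="src.dir"     value="src"/>
    <property name="build.dir"   value="build"/>
    <property name="classes.dir" value="${build.dir}/classes"/>

    <!-- compile all Java sources into the classes directory -->
    <target name="compile">
        <mkdir dir="${classes.dir}"/>
        <javac srcdir="${src.dir}" destdir="${classes.dir}" includeantruntime="false"/>
    </target>

    <!-- package the compiled classes, running compile first -->
    <target name="jar" depends="compile">
        <jar destfile="${build.dir}/my-app.jar" basedir="${classes.dir}"/>
    </target>
</project>

Each target is an explicit, procedural step, and the depends attribute is what chains the steps together.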

Strong points:

  • easy to learn
  • easy to understand
  • excellent integration in IDE’s
  • lots of examples and help online

Weak points:

  • Procedural (build scripts can get very complicated)
  • inelegant
  • For dependency management, Ivy is less intuitive and requires extra configuration and tooling

Maven

Maven is the opinionated tool which forces a project structure and build life cycle on you, and it’s also the tool that introduced built-in dependency management and a central repository for dependencies. Like Ant, it’s based on XML. Some people hate Maven for making them do things Maven’s way, but the benefit is simplicity. Unlike with Ant and Gradle scripts, two Maven scripts which do roughly the same thing are very similar. I have not tried to extend its functionality, but Maven has a plugin architecture, and a lot of useful plugins are available. Maven is also well-integrated in a number of popular tools.
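
As an illustration of that simplicity, here’s a minimal sketch of a pom.xml for a jar project (the groupId and artifactId are placeholders). Note that nothing about the source layout needs to be declared, because Maven assumes src/main/java and src/test/java:

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>

    <!-- placeholder coordinates -->
    <groupId>org.example</groupId>
    <artifactId>my-app</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>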

Strong points:

  • generally very little thought or debugging required to get a typical build working
  • strong integration with various CI and IDE tools
  • simple dependency management
  • lots of examples and explanations online

Weak points:

  • Lots of pain if you don’t organize your project the Maven way
  • inelegant

Gradle

A relative newcomer, Gradle offers an elegant Groovy-based DSL to run your build. Like Maven, Gradle has built-in dependency management and makes some assumptions about project structure, but like Ant its execution is based on tasks. Unlike both Ant and Maven, Gradle has an elegant syntax which allows you to essentially write your build code in Groovy. Unlike Maven, the assumptions about project structure are conventions rather than rules. I have to say, though, that I recently encountered an official Gradle plugin which treats some of those assumptions as hard rules, and I had to abandon the plugin and use another tool. Hopefully this situation will improve over time. One thing I really like about Gradle is its dependency management syntax, which allows much simpler find-and-replace for version upgrades than Maven’s XML tags: the library name and version are all together on one line.
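
To illustrate the dependency-syntax point, here’s how a JUnit dependency is declared in Gradle (testCompile was the configuration name in Gradle versions of that era; newer versions use testImplementation):

// build.gradle – group, name and version all on one line
dependencies {
    testCompile 'junit:junit:4.12'
}

The Maven equivalent is a multi-line <dependency> block with separate <groupId>, <artifactId> and <version> tags, so a version bump in Gradle is a simpler one-line find-and-replace.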

Strong points:

  • Elegance (i.e. lack of XML tags)
  • flexibility
  • simple dependency management
  • concision

Weak points:

  • Immature tooling and immature integration with popular tools (CI, IDE)
  • limited examples and help online
  • steep learning curve (especially for Groovy noobs)
  • inflexible plugins

Conclusion

All three of these tools are good tools which can get the job done if you know what you’re doing. Personally, for a new Java project, I’d choose Maven for basic building of jar and war files. It’s what I did choose, even for my open-source Ant plugin. I use Ant as well, for non-build tasks (mass file copying with text replacement, for example) and for tasks around the builds (for example, to update versions of dependency libraries and to rebuild the delegate interfaces using interface-it ant). Gradle is a very elegant and promising tool which will surely resolve some of its weak points over time. If you’re comfortable with Groovy, it might be the right choice for you, but only if you stick to its recommended multi-project structure and don’t lean too heavily on your IDE.

What are you building?

A project, a product, or a brand?

Actually, luck has played a big role in our industry, and sometimes you don’t know what you’re building until later. I could have called this post “What’s in a name?” or “Why I have to work around MySQL performance bugs when there’s at least one technically superior and totally free database we could have used instead.”

This is a story of well-meaning people who made what seemed like reasonable choices at the time, some of which have caused mild suffering. They say the road to hell is paved with good intentions.

So what’s in a name? In the 1980s, if you’re a geeky professor creating a project to replace the “Ingres” database project, “Postgres” sounds like a pretty clever name. If SQL later becomes a popular standard and your project supports it, changing “Postgres” to “PostgreSQL” also seems pretty sensible. There’s continuity. The handful of students who used it at the time were not confused at all. But even for them it might have been better to choose the name of a painter more recent than Ingres (DegaSQL, MatisSQL, PicasSQL, KandinSQL, MonetSQL, RenoirDB, SeuratSQL, DaliBase?).

Then there’s MySQL. Sigh. Maybe we’re all shallow, narcissistic people at heart, or maybe we all just follow the orders of people who are, but “MySQL” is a near-perfect brand name because it’s all about “me“, and it’s “mine”. No need to explain what’s up with the name. Sold. Add to that a focus on the needs of ISP’s and Linux users, and it becomes second only to Oracle in popularity. PostgreSQL is in 4th place now, but the usage scores drop off a cliff after 3rd place (which is Microsoft SQL Server).

It turns out, though, that the brilliant name choice of “MySQL” was just as random as the awful name chosen for its competitor. Michael “Monty” Widenius named MySQL after his first daughter, My. By the way, the more-actively-maintained-and-improved fork of MySQL, languishing in 20th place in the usage rankings, is MariaDB, named after his second daughter. If he’d had only one daughter, named Ingrid instead of My, the fork might have been called “PostgridSQL”.

So, do I have a point? Just that when you create something, you might want to think about who it’s for, and when naming or re-branding it, to think about how people who are not currently “in the loop” will perceive it. Also, if your technology is really solid and well-designed, please be lucky.

For info, recent database usage rankings (with db-engines popularity scores) from db-engines.com:

1. Oracle – Relational DBMS – 1449.25
2. MySQL – Relational DBMS – 1370.13
3. Microsoft SQL Server – Relational DBMS – 1165.81
4. MongoDB – Document store – 314.62
5. PostgreSQL – Relational DBMS – 306.60
6. DB2 – Relational DBMS – 188.57
7. Cassandra – Wide column store – 131.12
8. Microsoft Access – Relational DBMS – 126.22
9. SQLite – Relational DBMS – 106.78
10. Redis – Key-value store – 104.49
11. Elasticsearch – Search engine – 87.41
20. MariaDB – Relational DBMS – 34.66

Improved matching error messages in Extended-Mockito

I’ve recently made some improvements to Extended-Mockito in the area of failure messages.

In early versions, messages from a failure to match in a verify() call were a bit cryptic, especially when matching based on lambdas. This is what you used to get:

Wanted but not invoked:
exampleService.doAThingWithSomeParameters(
    <Extended matchers$$ lambda$ 6/ 1 5 0 2 6 8 5 4 0>,
    <Extended matchers$$ lambda$ 8/ 3 6 1 5 7 1 9 6 8>,
    <custom argument matcher>,
    <Extended matchers$$ lambda$ 1 1/ 2 1 0 5 0 6 4 1 2>
);

In the first of my examples with the new improvements (available starting in version 2.0.78-beta.1 of extended-mockito and transitively in version 0.9.0 of tdd-mixins-core and tdd-mixins-junit4) it’s now possible to show more clearly what kind of arguments were expected:

Wanted but not invoked:
exampleService.doAThingWithSomeParameters(
{String containing all of: [Butcher,Baker,Candlestick Maker]},
[All items matching the given Predicate],
[One or more items matching the given Predicate],
SomeBean where val1 > 5
);

For info, the expectation call which gives this failure message is:

verify(expecting).doAThingWithSomeParameters(this.containsAllOf("Butcher", "Baker", "Candlestick Maker"),
				allSetItemsMatch(s -> s.startsWith("A")),
				oneOrMoreListItemsMatch(s -> s.startsWith("B")),
				objectMatches((SomeBean o) -> o.getVal1() > 5, "SomeBean where val1 > 5"));

The TestNest Pattern

I discovered the TestNest pattern – the idea of using a suite of nested test classes in order to create a hierarchical organization of tests – in this blog post by Robert C. Martin, but apart from one other article I haven’t found a lot of information online about this technique. So I thought I’d try it out in a sort of hello-world to get a feel for it. As a bonus, I added in some tests using the excellent Zohhak test library, which automatically generates nested tests for sets of values via an annotation.

My test class can be found here.

Here are some of the highlights…

The outer class declaration with its JUnit Suite annotations:

@RunWith(Suite.class)
@SuiteClasses({ MyExampleClassTest.Method1.class, MyExampleClassTest.Method2.class,
        MyExampleClassTest.Calculation1.class, MyExampleClassTest.Calculation2.class })
public class MyExampleClassTest {

One of the nested classes in MyExampleClassTest is called SharedState. It contains the object under test (called underTest) and is the parent of all the other nested test classes (and it also uses a mixin from tdd-mixins-junit4 which gives all its subclasses the ability to call assertions non-statically).
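
Here’s a rough sketch of that layout; apart from SharedState and underTest, the names and method bodies are illustrative rather than copied from the real test class linked above:

// inside MyExampleClassTest

// parent of the other nested test classes: holds the object under test
public static abstract class SharedState {
    protected MyExampleClass underTest = new MyExampleClass();
}

// one nested class of tests per method of the class under test (requires org.junit.Test)
public static class Method1 extends SharedState {
    @Test
    public void should_do_the_first_thing() {
        // ... assertions against underTest.method1(...)
    }
}

// Method2, Calculation1 and Calculation2 follow the same pattern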

There are 4 nested classes which contain tests – one for each method in the class under test. This seemed like a logical organization, though it’s not what Uncle Bob did in his aforementioned blog post. It might make sense to further subdivide between happy path tests and weird corner case tests (what Uncle Bob called “DegenerateTests” in his example), and there may be better ways to divide tests than along class and method lines (though I would hope that my classes and methods under test are cohesive units of organization).

Using the Suite runner in the parent class doesn’t prevent you from using other runners in the nested classes. Two of the nested classes use the Zohhak test runner:

@RunWith(ZohhakRunner.class)
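
For example, a Zohhak-based nested class looks roughly like this; the cube() method and the passing value sets are illustrative, and the "-1, 1" row is the intentional error in a value set mentioned below:

@RunWith(ZohhakRunner.class)
public static class Calculation2 extends SharedState {

    // @TestWith is Zohhak's annotation: each comma-separated value set becomes its own test
    @TestWith({"2, 8", "3, 27", "-1, 1"})
    public void should_calculate_cube(int input, int expectedCube) {
        assertEquals(expectedCube, underTest.cube(input)); // static JUnit assert used here for simplicity
    }
}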

I intentionally put 2 failures in the tests (one fail() in a normal test and one error in a Zohhak value set) to see what the failures would look like.

Here’s part of what Maven had to say about these failures:

Failed tests: should_fail(org.example.MyExampleClassTest$Method2): intentional failure to illustrate nested test failure messages

should_calculate_cube [-1, 1](org.example.MyExampleClassTest$Calculation2): expected:<[]1> but was:<[-]1>

Here’s what I see in Eclipse’s JUnit sub-window:

Eclipse JUnit run results showing 2 nested errors

This seems like a minor improvement over long, descriptive test method names for quickly getting a feel for where the problem is, especially when there are multiple simultaneous failures. I didn’t have to put the name of the method under test in the test method name, and in the Eclipse JUnit runner UI the tests for each method are nicely grouped together. Zohhak works well with this approach too (and seems like a pleasure to use in general for testing calculation results).