December 10, 2009

Maven Plugins: Upgrade with Care!

Upgrading Maven Plugins: Tips and Issues

After having shown the list of current Maven plugin versions in my previous post, I'm now going to share my experiences with upgrading. Just as expected, some of the new plugins did not work out of the box or required configuration changes:

maven-checkstyle-plugin

We previously used version 2.3, which is based on Checkstyle 4.4. Plugin version 2.4 is finally built on top of Checkstyle 5, which better supports Java 5 language features (I wrote a post on that issue). The configuration format is not fully compatible, so you will have to upgrade it.
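
One example of the kind of change involved (illustrative – check the Checkstyle 5 release notes for the full list): several whole-file checks moved out of the TreeWalker, so the old TabCharacter module, for instance, is replaced by FileTabCharacter at Checker level:

<!-- Checkstyle 4.x: check lives inside the TreeWalker -->
<module name="TreeWalker">
  <module name="TabCharacter"/>
</module>

<!-- Checkstyle 5.x: the replacement check sits at Checker level -->
<module name="FileTabCharacter"/>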

maven-javadoc-plugin

Starting with version 2.6, the Javadoc plugin can detect the Java API link for the current build. This did not work for us (probably due to a missing proxy configuration), so we had to switch it off by setting the detectJavaApiLink property to false. See the plugin site.
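
For reference, this is roughly what we ended up with (a minimal sketch; where exactly it goes depends on whether you configure the plugin in the build or the reporting section):

<plugin>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.6.1</version>
  <configuration>
    <!-- auto-detection fails behind our proxy, so turn it off -->
    <detectJavaApiLink>false</detectJavaApiLink>
  </configuration>
</plugin>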

taglist-maven-plugin

The configuration of tags has changed and is now a bit more extensive. The old format (using the tags element) is still supported, but deprecated. See the plugin documentation.
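
If I read the plugin documentation correctly, the new format looks roughly like this (a sketch; the tag names and match types are just examples):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>taglist-maven-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <tagListOptions>
      <tagClasses>
        <tagClass>
          <displayName>Open issues</displayName>
          <tags>
            <tag>
              <!-- matches TODO, todo, ToDo, ... -->
              <matchString>todo</matchString>
              <matchType>ignoreCase</matchType>
            </tag>
            <tag>
              <matchString>FIXME</matchString>
              <matchType>exact</matchType>
            </tag>
          </tags>
        </tagClass>
      </tagClasses>
    </tagListOptions>
  </configuration>
</plugin>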

findbugs-maven-plugin

I used version 2.2, which was the most recent one when I changed the POM. However, it yielded some strange errors during site generation. This is the stack trace:

Generating "FindBugs Report" report.
Plugin Artifacts to be added ->...
AuxClasspath is ->D:\Profiles\Default User\.m2\repository\org\apache\maven\reporting\maven-reporting-impl\2.0\maven-reporting-impl-2.0.jar;...
[java] Exception in thread "main" java.io.FileNotFoundException: D:\builds\...\User\.m2\repository\...\maven-reporting-impl-2.0.jar;D:\Profiles\Default (The filename, directory name, or volume label syntax is incorrect)
[java] at java.util.zip.ZipFile.open(Native Method)
[java] at java.util.zip.ZipFile.<init>(ZipFile.java:114)
[java] at java.util.zip.ZipFile.<init>(ZipFile.java:131)
[java] at edu.umd.cs.findbugs.classfile.impl.ZipFileCodeBase.<init>(ZipFileCodeBase.java:53)
[java] at edu.umd.cs.findbugs.classfile.impl.ZipCodeBaseFactory.countUsingZipFile(ZipCodeBaseFactory.java:92)
[java] at edu.umd.cs.findbugs.classfile.impl.ZipCodeBaseFactory.makeZipCodeBase(ZipCodeBaseFactory.java:46)
[java] at edu.umd.cs.findbugs.classfile.impl.ClassFactory.createFilesystemCodeBase(ClassFactory.java:97)
[java] at edu.umd.cs.findbugs.classfile.impl.FilesystemCodeBaseLocator.openCodeBase(FilesystemCodeBaseLocator.java:75)
[java] at edu.umd.cs.findbugs.classfile.impl.ClassPathBuilder.processWorkList(ClassPathBuilder.java:564)
[java] at edu.umd.cs.findbugs.classfile.impl.ClassPathBuilder.build(ClassPathBuilder.java:195)
[java] at edu.umd.cs.findbugs.FindBugs2.buildClassPath(FindBugs2.java:584)
[java] at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:181)
[java] at edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:348)
[java] at edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1057)
[java] Java Result: 1

As you can see, we are using the Windows default for the local Maven repository (D:\Profiles\Default User\.m2), and the space in "Default User" obviously breaks the path handling later on. How sick is that?!?

Then, after trying this and that, I discovered that a brand-new version 2.3 was available, so I tested that one and guess what – everything works fine again! Hence, don't use version 2.2; it seems to be broken...

docbkx-maven-plugin

The latest version of this plugin is 2.0.9, but it did not work correctly. It failed the build with this error:

ValidationException: null:30:723: Error(30/723): fo:table-body is missing child elements.
Required Content Model: marker* (table-row+|table-cell+)

The given line/position information did not point to anything suspicious, and I could not see anything wrong in our XSL files. So I have no idea what this error is telling me or what I could do about it. That's why I rolled back to 2.0.8, which works just fine.

The Bottom Line

With the given plugin versions, we are up to date again and managed to get rid of some issues that had been hitting us for quite some time. Additionally, we are well prepared for upgrading to Maven 3. I hope to do so soon, to check whether it really is a "drop-in replacement"... I will let you know!

Maven Plugins: Current Versions

Upgrading Maven Plugins

In preparation for a later switch to Maven 3 (which is already knocking on the door), as well as to get rid of some plugin-related issues we are suffering from, I decided to update the Maven plugins we use for build and site generation.

Of course, we follow best practice and lock down the versions of all plugins in the project.build.pluginManagement.plugins section. This is done in the company's topmost POM, so that all company projects use the same versions once they reference the latest parent POM.
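
As a reminder, locking down a version is as simple as this (an illustrative excerpt from such a parent POM):

<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
      </plugin>
      <!-- ... one entry per plugin from the lists below ... -->
    </plugins>
  </pluginManagement>
</build>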

As you might know, upgrading to new plugin versions is always an adventure, and you have to test your builds thoroughly – which is of course hard when you are changing company-wide settings...

Build Plugins

Well, here is the list of build plugins we now use with their current versions, as well as (in brackets) the version that Maven 3.0-alpha5 defines in its internal POM. You can spot where the two versions differ:

  • maven-archetype-plugin: 2.0-alpha-5 (see comment below)
  • maven-assembly-plugin: 2.2-beta-4 (2.2-beta-4)
  • maven-clean-plugin: 2.3 (2.3)
  • maven-compiler-plugin: 2.0.2 (2.0.2)
  • maven-dependency-plugin: 2.1 (2.0)
  • maven-deploy-plugin: 2.4 (2.4)
  • maven-ear-plugin: 2.4 (2.3.1)
  • maven-ejb-plugin: 2.2 (2.1)
  • maven-enforcer-plugin: 1.0-beta-1
  • maven-help-plugin: 2.1 (2.1)
  • maven-install-plugin: 2.3 (2.3)
  • maven-javadoc-plugin: 2.6.1 (2.5)
  • maven-jar-plugin: 2.3 (2.2)
  • maven-release-plugin: 2.0-beta-9 (2.0-beta-9)
  • maven-resources-plugin: 2.4.1 (2.4.1)
  • maven-site-plugin: 2.0.1 (2.0.1)
  • maven-source-plugin: 2.1.1 (2.0.4)
  • maven-surefire-plugin: 2.4.3 (2.4.3)
  • maven-war-plugin: 2.1-beta-1 (2.1-alpha-1)
  • build-helper-maven-plugin: 1.4
  • failsafe-maven-plugin: 2.4.3-alpha-1
  • cargo-maven2-plugin: 1.0
  • docbkx-maven-plugin: 2.0.8

It's a bit strange that Maven 3.0-alpha5 (which came out at the end of November) does not use the latest versions of all those plugins, most of which were released before that date. I don't know whether this was intentional or not... Let's hope it's not because of uncertain quality of the latest plugin versions ;-) Anyway, I decided to upgrade to the latest available version of every plugin.

Reporting Plugins

Here's the list for plugins related to site reports:

  • cobertura-maven-plugin: 2.3
  • findbugs-maven-plugin: 2.3
  • jdepend-maven-plugin: 2.0-beta-2
  • maven-checkstyle-plugin: 2.4
  • maven-jxr-plugin: 2.1
  • maven-pmd-plugin: 2.4
  • maven-project-info-reports-plugin: 2.1.2
  • maven-surefire-report-plugin: 2.4.3
  • taglist-maven-plugin: 2.4

I'm going to show some tips and issues when upgrading to these versions in an upcoming post...

December 2, 2009

Unit and Integration Testing with Maven, Part 2

Welcome Back...

... to the second installment of this little series. After having seen the requirements and hassles of using Maven for testing in the last post, we are now going to look at the possible solutions.

Remember, Maven does support different phases for unit and integration tests, but there is only one test source directory (usually src/test/java), which makes it a bit difficult to set up and organize test environments. Hence, we need some way to separate the two types of tests.

Option 1: Separate Module for Integration Tests

In case you are using modules for your project anyway and want integration tests that test these modules in combination, this is the natural solution: just create another module that depends on the other ones and contains only the integration test sources and resources.

However, if you instead want to integration-test each module individually, this approach just doubles the number of modules, which can be difficult to manage. You could put all integration tests into one big module – but that has other disadvantages, of course.

Since we do not have any sources (only test sources) in that integration-test project, the usual build lifecycle would execute a lot of unneeded steps like copying resources, compiling, testing, creating the JAR file, etc. We could use the POM packaging type to suppress all of this, but then we have to configure the maven-compiler-plugin to force compilation of the test sources.

Independently of the packaging type, we have to do some more configuration:

  • Execute the Surefire plugin in the integration-test phase.
  • Prepare the integration tests (for instance, start the container and deploy the application) in the pre-integration-test phase. This is quite easy with the Cargo Maven plugin.
  • Shut down the integration test environment (stop the container) in the post-integration-test phase.

Here is the relevant part of such a POM file:

<build>
  <plugins>

    <!-- *** Compiler plugin: we must force test compile because we're using a
         pom packaging that doesn't have this lifecycle mapping. -->
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

    <!-- *** Surefire plugin: run integration tests *** -->
    <plugin>
      <artifactId>maven-surefire-plugin</artifactId>
      <executions>
        <execution>
          <phase>integration-test</phase>
          <goals>
            <goal>test</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

    <!-- *** Cargo plugin: start/stop application server and deploy the ear
         file before/after integration tests *** -->
    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.0</version>
      <configuration>
        ...
      </configuration>

      <executions>
        <!-- before integration tests are run: start server -->
        <execution>
          <id>start-container</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>start</goal>
          </goals>
        </execution>
        <!-- after integration tests are run: stop server -->
        <execution>
          <id>stop-container</id>
          <phase>post-integration-test</phase>
          <goals>
            <goal>stop</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

  </plugins>
</build>

Option 2: Different Source Directories

In this scenario, unit and integration test sources are placed in separate source directories, like src/test/java and src/integrationtest/java. I think this would definitely be the best solution, and it is actually what Maven should support out of the box. Sadly, it does not, and as far as I know Maven 3 won't either :-(

Still, we should be able to configure things this way. For compiling and executing the integration tests, you would have to configure the Compiler plugin to compile the integration test sources in the pre-integration-test phase (using the includes parameter), and then configure a second Surefire execution using the testClassesDirectory parameter to point it to the integration test folder.

However, this option seems a bit fragile to me. Even if it works (I haven't checked), the integration test folder would probably not show up in Eclipse when using the m2eclipse plugin, and other plugins may have issues with the additional test source path as well. That's why I do not recommend this option.

Option 3: Different File Name Patterns

In this scenario, the package or file name is used to distinguish between unit and integration test source files. Let's say, for instance, that the names of all integration test classes start with IT, and hence the file name pattern is **/IT*.java. We'd have to configure the Surefire plugin to execute twice: once in the test phase to execute the unit tests only, and a second time in the integration-test phase to execute, well, the integration tests (and only those).

A common way to do so is this:

  • In the configuration of the Surefire plugin, set the skip parameter to true to bypass all tests.
  • Add an execution element for the unit tests, where skip is set to false and the excludes parameter is used to exclude the integration tests.
  • Add another execution element for the integration tests, where skip is set to false and the includes parameter is used to include just the integration tests and nothing else.
  • Preparation and shutdown of the integration tests work as before.

Here is the POM section:

<build>
  <plugins>

    <!-- *** Surefire plugin: run unit and integration tests in
         separate lifecycle phases, using file name pattern *** -->
    <plugin>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <skip>true</skip>
      </configuration>
      <executions>
        <execution>
          <id>unit-test</id>
          <phase>test</phase>
          <goals>
            <goal>test</goal>
          </goals>
          <configuration>
            <skip>false</skip>
            <excludes>
              <exclude>**/IT*.java</exclude>
            </excludes>
          </configuration>
        </execution>

        <execution>
          <id>integration-test</id>
          <phase>integration-test</phase>
          <goals>
            <goal>test</goal>
          </goals>
          <configuration>
            <skip>false</skip>
            <includes>
              <include>**/IT*.java</include>
            </includes>
          </configuration>
        </execution>
      </executions>
    </plugin>

    <!-- *** Cargo plugin: start/stop application server and deploy the ear
         file before/after integration tests *** -->
    ...

  </plugins>
</build>

There is another approach to achieve the same result:

  • In the configuration of the Surefire plugin, use the excludes parameter to exclude the integration tests when executing the unit tests in the test phase.
  • Add an execution element for the integration tests, where excludes is set to a dummy value (to override the default configuration) and the includes parameter is used to include just the integration tests and nothing else.

It's a bit shorter than the former configuration, but in the end it's a matter of taste. Again, here is the POM snippet:

<build>
  <plugins>

    <!-- *** Surefire plugin: run unit and integration tests in
         separate lifecycle phases, using file name pattern *** -->
    <plugin>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <excludes>
          <exclude>**/IT*.java</exclude>
        </excludes>
      </configuration>
      <executions>
        <execution>
          <id>integration-test</id>
          <phase>integration-test</phase>
          <goals>
            <goal>test</goal>
          </goals>
          <configuration>
            <excludes>
              <exclude>none</exclude>
            </excludes>
            <includes>
              <include>**/IT*.java</include>
            </includes>
          </configuration>
        </execution>
      </executions>
    </plugin>

    <!-- *** Cargo plugin: start/stop application server and deploy the ear
         file before/after integration tests *** -->
    ...

  </plugins>
</build>

Failsafe Plugin

Instead of using the Surefire plugin with a custom file name pattern to distinguish between unit and integration test classes, you should rather use the Failsafe plugin for executing integration tests.

This plugin is a fork of the Surefire plugin designed to run integration tests. It is used during the integration-test and verify phases of the build lifecycle to execute the integration tests of an application. Unlike the Surefire plugin, the Failsafe plugin will not fail the build during the integration-test phase, thus enabling the post-integration-test phase to execute.

The Failsafe plugin has its own naming convention. By default, the Surefire plugin executes **/Test*.java, **/*Test.java, and **/*TestCase.java test classes. In contrast, the Failsafe plugin looks for **/IT*.java, **/*IT.java, and **/*ITCase.java. Did you notice that this matches what we used before for our integration tests? ;-)

When using Failsafe, the last POM (of option 3) looks like this:

<build>
  <plugins>

    <!-- *** Surefire plugin: run unit and exclude integration tests *** -->
    <plugin>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <excludes>
          <exclude>**/IT*.java</exclude>
        </excludes>
      </configuration>
    </plugin>

    <!-- *** Failsafe plugin: run integration tests *** -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>failsafe-maven-plugin</artifactId>
      <version>2.4.3-alpha-1</version>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

    <!-- *** Cargo plugin: start/stop application server and deploy the ear
         file before/after integration tests *** -->
    ...

  </plugins>
</build>

Of course, Failsafe would also work with the first option (a separate integration test module).

Conclusion

Well, after having shown all the approaches that came to my mind, here are my personal "best practices":

  • Use the Failsafe plugin to execute integration tests.
  • If keeping integration tests in a separate module feels alright, do so (see option 1).
  • If you want to have unit and integration tests in the same module, choose a file or package name pattern to distinguish between both, and configure Surefire and Failsafe plugins accordingly.

Update (2010/02/18)

Note that there is a new version 2.5 of the Failsafe plugin available, with a changed group id: org.apache.maven.plugins:maven-failsafe-plugin:2.5. See the plugin site for details. Thanks to stug23 for pointing that out!
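
With the new coordinates, the Failsafe declaration from above would become (a sketch, using the same goals as before):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.5</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>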

November 28, 2009

Unit and Integration Testing with Maven, Part 1

Test Types

In my last post, I talked about integration testing with Maven's Cargo plugin and JBoss Application Server. Now let's see how integration testing fits into an overall testing strategy with Maven.

When thinking about tests and testing strategies, there is one important thing to keep in mind: integration tests are not unit tests (even though JUnit may be used to write integration tests). For the sake of completeness, let's pin down the main characteristics of both:

Unit Tests:

  • are testing a small piece of code in isolation
  • are independent of other tests
  • are usually written by software developers
  • have to be very fast because they are run quite often

In contrast, Integration Tests:

  • combine individual software modules and test them as a group
  • are usually much slower than unit tests because a context has to be established (Spring, database, web server etc.)
  • normally are run after unit testing
  • may be created by QA team using tools, but also by developers using JUnit test cases (which still does not turn them into unit tests)

Testing with Maven

As you know, Maven Build Lifecycle provides phases both for testing and integration testing. Related phases are:

...
generate-test-sources
generate-test-resources
process-test-resources
test-compile
process-test-classes
test
...
pre-integration-test
integration-test
post-integration-test
...

Unfortunately, Maven does not support separate source directories for the two test types. This really complicates things (as we'll see in a minute). There have been some discussions on how to fix that, but I don't think it made it into Maven 3 (not quite sure, though).

Hence, what we need to have is this:

  • Both unit and integration tests have to be compiled by javac.
  • Nevertheless, they should be clearly separated from each other by a path or file name pattern.
  • Unit tests have to be run in test lifecycle phase, and the build should stop if they do not succeed.
  • Integration tests should be run in the integration-test phase, and at least the post-integration-test phase has to run independently of the test results (to be able to shut down a running container, close database connections, etc.).

The next part of this small series will show how these requirements can be met with Maven and what has to be configured, so please stand by...

November 27, 2009

Integration test with Maven, Cargo and JBoss

It's Friday...

... and I thought it would be a good idea to setup an integration test of some EJBs we are creating in a new project. Actually, it was not, since it nearly ruined my evening. But eventually I got it to work, and here is how.

To test the EJBs, I need to create an EAR file, deploy it to the application server (JBoss 5.1.0 in our case) and run JUnit test cases against this server. For Maven, I have set up a separate integration-test module for executing the integration tests, for the following reasons:

  • to better separate unit tests (with JUnit) from integration tests (also with JUnit), which simplifies Maven configuration a bit.
  • to be able to run this module outside of the normal CI (continuous integration) build, since it is comparatively slow.

Cargo Maven Plugin

For automatically starting the container, deploying the EAR file and stopping the container when the tests are finished, I use the Cargo Maven plugin. I had done this several times before (with Tomcat, though) – so I thought that'd be easy...

Well, when using JBoss, there are some tricks you have to know. Before going into the details, some more information on Cargo.

A Container is the base concept in Cargo. It represents an existing application server runtime, and Cargo hides all the details of the actual server implementation for you. There are two types of containers:

  • Local Container: this executes on the machine where Cargo runs. It can either be an Installed Container, which is, well, installed on the local machine and runs in a separate VM, or an Embedded Container, which executes in the same JVM where Cargo is running (currently only supported for Jetty).
  • Remote Container: a container that is already running anywhere (local or remote). It's not under Cargo's control and can't be started or stopped by Cargo.

You use a Configuration to specify how the container is configured (logging, data sources, location where to put the deployables, etc). The available configuration depends on the container type:

  • Local Configuration: for local containers. There are two local configuration types: Standalone Local Configuration which configures the container from scratch in a directory of your choice, and Existing Local Configuration that re-uses an existing container installation already residing on your hard drive.
  • Runtime Configuration: You use a runtime configuration when you want to access your container as a black box through a remote protocol (JMX, etc). This is perfect for remote containers.

In my case, I wanted to use an Installed Local Container with a Standalone Local Configuration, to avoid dependencies on other deployments.

JBoss with Cargo

Well, and here are the pitfalls when using JBoss in this setting:

  1. Experimental: JBoss 5.x is still an experimental container for Cargo. This is a bit strange given that this version has been out for a while now, but fortunately it's not really an issue.
  2. Extensive Logging: When Cargo builds the JBoss configuration – remember, I use a Standalone Local Configuration, so Cargo creates one from scratch – it uses a logging setup (independently of what your JBoss installation uses!) that is way too chatty. The console scrolls forever, and things slow down so much that you would think everything is stuck in an infinite loop.
     Thus, you have to tell Cargo to use another logging configuration file, which is a bit tricky and not documented very well (see this issue).
  3. Shutdown Port: Now the container starts up and the tests are run, but afterwards the JBoss AS does not shut down. It reports javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost:1299, which means the wrong port is used for shutdown. The standard shutdown port is 1099, so we have to tell Cargo to use that port number.

All in all, the configuration now looks like this (the settings mentioned above are all included). Perhaps this is useful for someone else...


<!-- *** Cargo plugin: start/stop JBoss application server and deploy the ear
     file before/after integration tests *** -->
<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <version>1.0</version>
  <configuration>
    <wait>false</wait>
    <!-- Container configuration -->
    <container>
      <containerId>jboss5x</containerId>
      <type>installed</type>
      <home>${it.jboss5x.home}</home>
      <timeout>300000</timeout>
    </container>
    <!-- Configuration to use with the Container -->
    <configuration>
      <type>standalone</type>
      <home>${project.build.directory}/jboss5x</home>
      <properties>
        <cargo.jboss.configuration>default</cargo.jboss.configuration>
        <cargo.servlet.port>${it.jboss5x.port}</cargo.servlet.port>
        <cargo.rmi.port>1099</cargo.rmi.port>
        <cargo.jvmargs>-Xmx512m</cargo.jvmargs>
      </properties>
      <deployables>
        <deployable>
          <groupId>com.fja.ipl</groupId>
          <artifactId>ipl-lc-ear</artifactId>
          <type>ear</type>
        </deployable>
      </deployables>
      <!-- Override logging created by Cargo (which is way too chatty) with the
           default file from JBoss (see http://jira.codehaus.org/browse/CARGO-585) -->
      <configfiles>
        <configfile>
          <file>src/test/resources/jboss-log4j.xml</file>
          <todir>conf</todir>
        </configfile>
      </configfiles>
    </configuration>
  </configuration>

  <executions>
    <!-- before integration tests are run: start server -->
    <execution>
      <id>start-container</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <!-- after integration tests are run: stop server -->
    <execution>
      <id>stop-container</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>

<properties>
  <it.jboss5x.home>${basedir}/../../../tools/bin/jboss-5.1.0.GA</it.jboss5x.home>
  <it.jboss5x.port>8080</it.jboss5x.port>
</properties>

November 19, 2009

JBoss and Maven

The Challenge

Over the last few days I was busy trying to build and deploy a JEE application on JBoss – using Maven, of course. One interesting task was to run integration tests, i.e. JUnit tests that test a service bean deployed on the JBoss application server. Integration testing with Maven is an interesting issue in itself, probably worth its own blog entry. But this post is all about JBoss.

Well, this task doesn't sound hard for a Maven expert, but unexpectedly it was not that easy. Our test code (calling the EJB) depends on JBoss artifacts, which might be questionable in itself, but that's how it's currently done. Hence, at runtime the code needs the jbossall-client.jar from JBoss' client directory on the classpath.

The Eclipse Way

We are using JBoss 5.x, and in this version jbossall-client.jar is rather small, since it merely references all required jar files in the Manifest's Class-Path element:

Manifest-Version: 1.0
Specification-Title: JBossAS
Specification-Version: 5.0.0.GA
...
Implementation-Vendor: JBoss.org
Implementation-Vendor-Id: http://www.jboss.org/
Class-Path: commons-logging.jar concurrent.jar ejb3-persistence.jar hi
bernate-annotations.jar jboss-aop-client.jar jboss-appclient.jar jbos
s-aspect-jdk50-client.jar jboss-client.jar jboss-common-core.jar jbos
s-deployers-client-spi.jar jboss-deployers-client.jar jboss-deployers
-core-spi.jar jboss-deployers-core.jar jboss-deployment.jar jboss-ejb
3-common-client.jar jboss-ejb3-core-client.jar jboss-ejb3-ext-api.jar
jboss-ejb3-proxy-clustered-client.jar jboss-ejb3-proxy-impl-client.j
ar jboss-ejb3-proxy-spi-client.jar jboss-ejb3-security-client.jar jbo
ss-ha-client.jar jboss-ha-legacy-client.jar jboss-iiop-client.jar jbo
ss-integration.jar jboss-j2se.jar jboss-javaee.jar jboss-jsr77-client
.jar jboss-logging-jdk.jar jboss-logging-log4j.jar jboss-logging-spi.
jar jboss-main-client.jar jboss-mdr.jar jboss-messaging-client.jar jb
oss-remoting.jar jboss-security-spi.jar jboss-serialization.jar jboss
-srp-client.jar jboss-system-client.jar jboss-system-jmx-client.jar j
bosscx-client.jar jbossjts-integration.jar jbossjts.jar jbosssx-as-cl
ient.jar jbosssx-client.jar jmx-client.jar jmx-invoker-adaptor-client
.jar jnp-client.jar slf4j-api.jar slf4j-jboss-logging.jar xmlsec.jar

This list is impressive... and the approach works with current Eclipse. The equivalent in the Maven world would be a POM that references all other required libraries as normal dependencies (see this discussion).

The Maven Way

And yes, such a POM, org.jboss.jbossas:jboss-as-client, is available in the JBoss Maven repository (note: it's not org.jboss.client:jbossall-client, which is the reference-by-manifest's-class-path version!).
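
Declaring it would look roughly like this (a sketch; the version is an assumption matching our JBoss 5.1.0.GA installation, and since the artifact is a POM, the type has to say so):

<dependency>
  <groupId>org.jboss.jbossas</groupId>
  <artifactId>jboss-as-client</artifactId>
  <version>5.1.0.GA</version>
  <type>pom</type>
</dependency>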


However, this approach involves two issues:

  • By transitively following the dependencies defined in org.jboss.jbossas:jboss-as-client, Maven will download a vast number of JBoss and JEE libraries which you actually don't want to be used and packaged in your client. This includes things like org.jboss.jbossas:jboss-as-server:jar:5.1.0.GA and jacorb:jacorb:jar:2.3.0jboss.patch6-brew. Does not sound confidence-building, does it? It seems JBoss should exclude transitive dependencies in the right places.
  • Moreover, some of the dependencies can't be found in any of the Maven repositories we have configured as proxies in our Nexus. This includes Maven Central, JBoss of course, and a couple of others, so I do not know what else to add to provide the missing jars. Maybe the reference itself is just wrong (version number?).

Instead of excluding everything we do not currently use, the next idea was to include only those dependencies on the JBoss jar files that the client actually requires. But... it's quite hard to find out the corresponding Maven coordinates. This involves guessing the group and artifact ids, but also the version – and unfortunately the version information given in the jar's manifest file is not useful, since it states the JBoss AS version instead of the library version that Maven requires.

Lost in Space?

So, I ended up re-packaging a jar that holds the content of the required client jars and putting it into our Nexus...

I honestly wonder how everybody else is using JBoss 5.x with Maven on the client side???

November 13, 2009

Spring dm Server Considered Cool?

w-jax is over...

I'm just returning from w-jax 2009, one of the biggest German Java conferences, covering virtually every technology related to Java.

If there are any hypes this year, they are these: Scala, Cloud Computing and OSGi. And, of course, Spring – which is all around.

SpringSource employs a lot of bright guys, and hearing first-hand about their newest, coolest technologies is always refreshing. Did you know that Tomcat is to a large extent pushed forward by people paid by SpringSource? I forget the exact number, but Eberhard Wolff said the majority of bug fixes and commits are done by SpringSource's Tomcat team.

Spring dm Server is Cool!

There have been some sessions about Spring dm Server, and on closer look it becomes clear why SpringSource invests so much in Tomcat. Spring dm Server is based on Tomcat and adds the ability to deploy web application modules as OSGi bundles. This way, you can partition your web apps into separate bundles and deploy them independently and dynamically into the server. Yes, they preserve OSGi's dynamic nature, meaning that you can stop, start or refresh a particular bundle without stopping and restarting the server. As you would expect, other bundles continue to work. If a requested OSGi service is temporarily unavailable, dm Server will wait 5 minutes for it to be redeployed.

Spring dm Server is part of the Spring Tool Suite, an Eclipse-based development environment which is also able to auto-deploy a module whenever it changes.

All of this is quite gorgeous from a technical perspective, and the Spring guys had to cope with some really hard issues (for instance, related to JPA, where they need application-wide visibility of bundles as well as load-time weaving – you don't want to know the details...).

Really? What for?

But... is this really of any interest in the field?

Our customers are using WebSphere (or WebLogic, or maybe JBoss) application servers, and none of these is capable of running such a modularized, OSGi-based application. Moreover, in a production environment there is no need – and most often it is not even desired – to refresh application bundles dynamically.

So, what's left? Modularizing your application? Right, that leads to a better structure and less coupling and blah blah, but the same can be achieved without OSGi (just let the business domain drive your application "slices"). It's just that OSGi forces you to use modules (bundles).

If anything, Spring dm Server will pay off for developers, since it may speed up development (especially the build-deploy-test cycle). But is this really, I mean really, hurting us that much? So, if you ask me: a lot of technical overhead and server lock-in for no real benefit.

So, will Spring dm server be able to gain real attention, I mean beyond being technically cool? I'm sceptical. As always, time will tell...

October 27, 2009

Java Mystery: Directory Exists but Does Not?

Well, yesterday was another one of those days where you think your computer must be fooling you... There is an issue with a DirectoryCleaner that works for a subproject but does not when called from the parent. But first things first.

The openArchitectureWare Part

We are using oAW (yep, the new version which is part of the Eclipse Modeling Project in Eclipse Galileo). Before generating the artifacts from the model, we execute a directory cleaner that simply wipes out the generated-source folders, for instance src/generated/java. The workflow file generate.mwe looks like this:

<workflow>
  <property file="generateAll.properties"/>
  <component class="org.eclipse.emf.mwe.utils.DirectoryCleaner" directory="${srcGenDir}"/>
  <component file=".../generateAll.mwe" inheritAll="true">
    <modelFile value="..." />
  </component>
</workflow>

srcGenDir is a property defined in the included properties file, but that's irrelevant here. The component line configures the mentioned directory cleaner. The relevant code snippet is the invokeInternal() method of org.eclipse.emf.mwe.utils.DirectoryCleaner, which is part of the Eclipse EMF framework:

protected void invokeInternal(final WorkflowContext model,
        final ProgressMonitor monitor, final Issues issues) {
    if (directory != null) {
        final StringTokenizer st = new StringTokenizer(directory, ",");
        while (st.hasMoreElements()) {
            final String dir = st.nextToken().trim();
            final File f = new File(dir);
            if (f.exists() && f.isDirectory()) {
                ... do the cleanup ...
            }
        }
    }
}

Pretty simple, right? The directory attribute contains one or more directories (comma-separated), so a tokenizer is used to get each of them and create a File. If that file exists and is a directory, it gets erased.

The Maven Part

Of course we are using Maven to build the stuff. The Fornax Maven plugin is configured to call the workflow during Maven build. Moreover, the mentioned workflow is part of a subproject B which belongs to an outer multi-module project A.

Now, this is where the mystery begins... When I build B (the submodule), everything is fine and works like expected. However, when I build A (the parent project), B will be built in turn and its workflow is executed, but the directory is not cleaned up! You wouldn't expect this, right?

The Strange Part

In an attempt to find out what's going on, we put some debugging code into DirectoryCleaner. It turns out that the file f returns the same value for getAbsolutePath() in both cases, but f.exists() returns false when the build is started from the parent project – hence the condition is not met and nothing is cleaned up.

Unfortunately, you can't look any deeper into the native code that is called when determining whether a file exists. So, in a kind of trial-and-error approach, we found out that the following code fixes the issue:

protected void invokeInternal(final WorkflowContext model,
        final ProgressMonitor monitor, final Issues issues) {
    if (directory != null) {
        final StringTokenizer st = new StringTokenizer(directory, ",");
        while (st.hasMoreElements()) {
            final String dir = st.nextToken().trim();
            final File f1 = new File(dir);
            final File f = new File(f1.getAbsolutePath());
            if (f.exists() && f.isDirectory()) {
                ... do the cleanup ...
            }
        }
    }
}

That is: by creating a second File from the absolute path of the first one and using that from then on, everything is fine in both scenarios – whether called from the parent project or from the submodule.

To be honest: I have no explanation for these findings. Why is File behaving differently? When computing the exists flag, the code should take the file's absolute path into account, right? So why does creating another file based on the absolute path fix the issue? Any insights are deeply appreciated...!

October 16, 2009

Speeding Up Your System

We did a lot of interesting stuff lately, including upgrading Eclipse to the new Galileo release, and in turn upgrading oAW (openArchitectureware) to the new versions of Xtext, Xpand, Xcheck etc. which are now part of the Eclipse Modeling project. I will blog about all this later...

But what really started to hurt us was the performance of our Windows XP based development laptops. They aren't exactly brand new, but not that old either. Nevertheless, they seem to have an issue with all those java, class and jar files involved when starting Eclipse, doing a "Clean Project", using Maven to build the software, etc.

The IT department was not willing to provide us with new hardware (disks especially) at this time, and no, using Linux is not an option either. Hence, we had to find other areas of improvement by tuning our systems.

Here is what we did to speed things up for Windows XP. Note that things might or might not be different for Windows Vista or Windows 7.

1. Disable Indexing Service

By default, there is a Microsoft "Indexing Service" running in your Windows system. According to msdn, this is "a base service for Microsoft Windows 2000 or later that extracts content from files and constructs an indexed catalog to facilitate efficient and rapid searching."

Well, to be honest, I had never heard of that service before (and rarely use the Windows search function), but it turned out to cause lots of hard disk traffic. So we decided to disable this service, as recommended by quite a few people.

Actually, there are several ways to do so (without using the Microsoft Management Console (MMC) with an appropriate snap-in):

  • Disable the Indexing Service in your list of local services.
  • In the Properties window of your local disk, remove option "Allow Indexing Service to index this disk for fast file searching".
  • Remove the function via the Control Panel > Add or Remove Programs > Add/Remove Windows Components > Uncheck "Indexing Service".

2. Don't Virus-Scan Java Stuff

Our anti-virus tool was configured to scan literally everything, including java, class and jar files. This seems exaggerated from a security point of view, but the IT folks said they could not configure the anti-virus to ignore a particular set of file extensions (like *.jar).

Hence, what we did instead was to exclude two folders from being scanned on the fly: javatools (where everything Java related is put, including Eclipse releases) and projects (where all our projects reside). Instead, these folders are now scanned once a week, which is acceptable for IT guys.

You know what? That sped up the start time of Eclipse by a factor of 5!

Surprisingly, excluding the Maven local repository (located in your personal settings folder) from being scanned on the fly did not make much difference, so we didn't handle this folder any special.

3. Optimize Subversion

Are you using Subversion? If so, are you using TortoiseSVN, the Windows Shell Extension for Subversion? Well, in that case you will know and probably like the little overlay icons that are used to indicate the state of files and folders. This feature is recursive, whereby overlay changes in lower level folders are propagated up through the folder hierarchy so that you don’t forget about changes you made deep in the tree.

Starting with release 1.2, a new TSVNCache program is used to maintain a cache of your working copy status, providing much faster access to this information. Not only does this prevent explorer from blocking while acquiring status, but it also makes recursive overlays workable.

This is all nice, but there is a major drawback: TSVNCache by default looks for changes on all drives and in all folders, killing disk performance with all the I/O it's doing.

You can enable a TSVNCacheWindow that shows all the folders being crawled by TSVNCache. To do so, open the Registry Editor and create a new DWORD at HKEY_CURRENT_USER\Software\TortoiseSVN\CacheTrayIcon with a value of 1. After that, you have to restart TSVNCache, which is easiest done by just killing the process; it will be restarted automatically when you do any TortoiseSVN operation. Now there should be a small tortoise icon in the Windows tray area which opens the TSVNCacheWindow. Watch how TortoiseSVN scans files and folders whenever you write to a file...
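
For the lazy, the same tweak as a .reg file (assuming a default TortoiseSVN installation; import it with a double-click):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\TortoiseSVN]
"CacheTrayIcon"=dword:00000001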

Time to fix that! This should be quite easy if you keep all of your working copies below one specific folder (or a small set of folders), like we do. All you have to do is set up TortoiseSVN to scan only your source folder paths:

  1. Right-click in Explorer on any folder and select "TortoiseSVN > Settings...".
  2. In the Settings window's tree, click on the "Icon Overlays" entry.
  3. In the "Exclude Paths" input field, put C:\* to exclude the entire C drive. If you have more drives, exclude them all at the top level. Use newlines to separate the values.
  4. In the "Include Paths" input field, list all of the locations where your working copies are stored, again separated by newlines.
  5. Switch off "Network drives" option in "Drive Types" area.

All in all, your settings should now look like described above. Thanks to Paraesthesia for this nice tip!


Epilogue

You won't believe what a difference these three little tunings made to our system performance. The hard disk is no longer busy all the time, applications (like Eclipse) start much faster, and build times have decreased drastically. Not bad for spending nothing on new hardware! Well, the next thing we will check is what effect a new (big, fast) hard disk will have... ;o)

September 18, 2009

Spring: Use Custom Namespaces!

Have you ever heard of custom XML namespaces for Spring? I know you love Spring (like I do), so... probably yes. They have been available since Spring 2.0, which came out in October 2006, so I just can't believe we haven't used them until now.

The Old-Fashioned Way

Well, we build our web apps on JSF and Spring Web Flow, and additionally use a custom framework that adds a few more features, like mapping between business objects and view objects based on XML mapping descriptions. Of course, this framework part is designed to be "IoC friendly", i.e. it's based on interfaces and standard implementations that need to be plugged together via Spring configuration.

We did this by providing a basic Spring configuration that defines defaults for most elements and injects all required references into the main configuration objects. To allow the project to provide custom implementations, this code relies on particular bean names that have to be defined by the custom project. Here's a simple example for the framework's configuration:

<bean id="iplIntegrationInfo" class="...IntegrationInfo">
<!-- myViewObjAccessor: to be defined in project specific Spring config file -->
<property name="viewObjAccessor" ref="myViewObjAccessor" />
<!-- myContextResolver: to be defined in project specific Spring config file -->
<property name="contextResolver" ref="myContextResolver" />
...
</bean>

<!-- These can be used for aliases if standard implementation should be used -->
<bean id="defaultViewObjAccessor" class="..." />
<bean id="defaultContextResolver" class="..." />

As you can see, all references are injected into the main configuration (iplIntegrationInfo). However, the referenced beans are not actually defined but have to be provided by the project. So, here's an example for the project's configuration:

<!-- use default -->
<alias alias="myViewObjAccessor" name="defaultViewObjAccessor" />

<!-- use custom implementation -->
<bean id="myContextResolver" class="..." />

Here we use the default for the first bean (by just establishing an alias for the configuration done in the framework file), and a custom implementation for the second.

This approach works, but obviously has a few drawbacks:

  • Configuration is shared between framework and project specific configuration code.
  • Project needs to know the framework's Spring file to do its work.
  • Internal framework structures (bean names, classes) are exposed to the outside.
  • A lot of XML code is required, even if using all the defaults.

Using Custom Namespaces

What can I say – we have switched to custom namespaces lately, and now the configuration looks like this:

<ui:view-config/>

That's as short as it can be, but it is still a complete configuration if the project uses the default implementations for all beans. If the project wants to override some settings, the configuration may look like this:

<ui:view-config>
  <ui:viewobj-accessor ref="viewObjAccessor"/>
  <ui:context-resolver ref="contextResolver"/>
</ui:view-config>

<bean id="viewObjAccessor" class="..." />
<bean id="contextResolver" class="..." />

Hence, you have the option to override all default implementations, and use meaningful tags to do so. Can you see the gains?

How Does This Magic Work?

I'm not going to provide a full description of how to author custom namespaces, since that is available in the Spring reference documentation and elsewhere. So, just a few basics to give you the idea...

First of all, you have to provide an XML schema describing your namespace. Then you have to implement a Namespace Handler that links each tag of your namespace to an XML parser, the Bean Definition Parser. And then, of course, those parsers must be implemented.
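
To give you a feeling for it, a minimal namespace handler could look like this (a sketch; UiNamespaceHandler and ViewConfigBeanDefinitionParser are made-up names for our ui namespace):

import org.springframework.beans.factory.xml.NamespaceHandlerSupport;

public class UiNamespaceHandler extends NamespaceHandlerSupport {

    public void init() {
        // link the view-config tag to its parser (hypothetical class, sketched below)
        registerBeanDefinitionParser("view-config", new ViewConfigBeanDefinitionParser());
    }
}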

Each bean definition parser is supposed to parse its XML element (including all attributes and sub-elements) and build the appropriate Spring beans. Spring provides a small hierarchy of classes you can use, depending on your situation:

  • The most specific is AbstractSimpleBeanDefinitionParser, which fits well when there is a strong correlation between the attributes of a tag and the properties of your bean. In this case, very little code is required on your side.
  • AbstractSingleBeanDefinitionParser is a bit more general; it allows you to create a single bean definition from an arbitrarily complex nested XML structure. This involves more coding, of course, but still does some work for you (like automatically registering the bean in the context). It is the one you'll probably use most of the time – see the sketch after this list.
  • AbstractBeanDefinitionParser is even more general and allows you to create multiple bean definitions and register them directly. It takes care of id generation and a few other things.
  • Instead of using the latter, you could also implement the BeanDefinitionParser interface directly, giving you all the flexibility you need.
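
Here is a rough sketch of such a parser for the view-config tag shown above (assuming the bean class and names from the old-fashioned example earlier; error handling omitted):

import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser;
import org.springframework.util.xml.DomUtils;
import org.w3c.dom.Element;

public class ViewConfigBeanDefinitionParser extends AbstractSingleBeanDefinitionParser {

    protected Class getBeanClass(Element element) {
        return IntegrationInfo.class; // the bean this tag stands for
    }

    protected void doParse(Element element, BeanDefinitionBuilder builder) {
        // use the nested override if present, otherwise fall back to the default bean
        Element accessor = DomUtils.getChildElementByTagName(element, "viewobj-accessor");
        String ref = (accessor != null) ? accessor.getAttribute("ref") : "defaultViewObjAccessor";
        builder.addPropertyReference("viewObjAccessor", ref);
        // ... same pattern for context-resolver and the other properties ...
    }
}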

All in all, you'll need a few hours to get comfortable with all the classes that are used to build up the bean definitions (DomUtils, BeanDefinition, BeanDefinitionBuilder, RuntimeBeanReference and ManagedMap/ManagedList to name a few).

But after that, building the parser and creating/registering your beans is quite straightforward. Moreover, you can always sneak a peek at how Spring itself does it for the Spring-provided custom namespaces (like Spring Security, for instance).

Okay, that's all for now, so go ahead and make life easier for your customers, your team, or yourself by providing a well-designed custom Spring namespace!

September 8, 2009

Spring: Test Framework Never Heard of Web Apps?

Do you know the Spring TestContext Framework? You really should. It drastically simplifies JUnit testing of Spring-powered applications. One particularly cool thing is that it offers an easy, annotation-based way to load your Spring context and auto-wire your beans. For example, this is how it looks with JUnit 4:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:spring-config/ui-test-config.xml")
public class MyParticularTest {

    @Autowired
    private Controller controller;
    ...
}

This is really nice, but... there is one thing we stumbled upon lately.

By default, the test framework uses the GenericXmlContextLoader, which loads the Spring application context as an instance of GenericApplicationContext. This class, obviously, is not aware of web applications, which would require a WebApplicationContext instead. Hence, you can't access any web-related resources, use the associated scopes, etc.

Don't get me wrong... the Spring TestContext Framework is nice, but it's more than strange that it does not support the web scenario. There is a Jira issue for that, but it has been open for more than 10 months now and is not scheduled before Spring 3.1?! That's just annoying.

Well, there are some workarounds of course, based on using a custom context loader. See this blog post or this Spring community thread, for instance.
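
A custom loader along those lines might look like this (a sketch in the spirit of the linked workarounds, not production code – it builds a web-aware context backed by a mock ServletContext):

import org.springframework.beans.factory.xml.XmlBeanDefinitionReader;
import org.springframework.context.ApplicationContext;
import org.springframework.mock.web.MockServletContext;
import org.springframework.test.context.support.AbstractContextLoader;
import org.springframework.web.context.support.GenericWebApplicationContext;

public class WebContextLoader extends AbstractContextLoader {

    public ApplicationContext loadContext(String... locations) throws Exception {
        // use a web-aware context instead of GenericApplicationContext
        GenericWebApplicationContext context = new GenericWebApplicationContext();
        context.setServletContext(new MockServletContext());
        new XmlBeanDefinitionReader(context).loadBeanDefinitions(locations);
        context.refresh();
        return context;
    }

    protected String getResourceSuffix() {
        return "-context.xml";
    }
}

It would then be referenced via the loader attribute of @ContextConfiguration.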

Highlight your Syntax

Now, summer is nearly over and I'm back on this blog. During summer time, I tend to spend more time with my family (and go on vacation ;-) but now my head is full of snippets that could make it into this blog.

First of all, I have integrated some syntax highlighting into these pages. This might be well-known and boring for you, but I didn't really know how to do this until I found Alex Gorbatchev's SyntaxHighlighter. It's purely based on JavaScript and hence can easily be integrated with lots of products, tools and engines – like Blogger.com. Integration couldn't be easier; see here for a step-by-step explanation.

When adding code, you just have to give your <pre> element a class like this: <pre class="brush:java">. That's it!

What do you get? Well, the well-known, nicely formatted code blocks, like this one:

public void prepareMethodOverrides() throws BeanDefinitionValidationException {
    // Check that lookup methods exist.
    MethodOverrides methodOverrides = getMethodOverrides();
    if (!methodOverrides.isEmpty()) {
        for (Iterator it = methodOverrides.getOverrides().iterator(); it.hasNext(); ) {
            MethodOverride mo = (MethodOverride) it.next();
            prepareMethodOverride(mo);
        }
    }
}

This works for lots of languages (called "brushes"), including XML and JavaScript, and you can easily provide a custom brush. Standard layout can be customized, of course, but the default is pretty good IMO.

There are just two or three quibbles (there is always room for improvement ;-):

  • Syntax Highlighter documentation is a bit sparse...
  • The server hosting the CSS and JavaScript files, http://alexgorbatchev.com, does not seem to be the fastest. To speed things up, the SyntaxHighlighter files could be hosted somewhere else – and they most probably already are.
  • You can highlight particular lines using the <pre class="brush:java; highlight: [14, 15]"> syntax (see line 7 in the example above). However, it seems you can't specify ranges but have to list every individual line, which can be cumbersome.

All in all, pretty cool. Syntax Highlighter is open source under LGPL 3, but the author asks you to "buy him a beer" if you use his tool. Well, I honestly will think of some donation ;-)

June 23, 2009

Maven: How Relocated Artifacts Can Ruin Your Day

Wow, this is one of those days I struggle with issues all the time without actually making one little bit of progress on my original task... I guess you all know this feeling :o(

Now, one of the things I wasted quite some time on is excluding a dependency. One of our modules defines a dependency on the Apache DBCP framework like this:

<dependency>
  <groupId>commons-dbcp</groupId>
  <artifactId>commons-dbcp</artifactId>
  <version>1.2.1</version>
</dependency>

This dependency transitively depends on some XML artifacts, which you can see by using the maven-dependency-plugin's tree goal:

...
[INFO] +- commons-dbcp:commons-dbcp:jar:1.2.1:compile
[INFO]   +- commons-pool:commons-pool:jar:1.2:compile
[INFO]   +- xml-apis:xml-apis:jar:1.0.b2:compile
[INFO]   \- xerces:xercesImpl:jar:2.0.2:compile
...

Well, the xml-apis and xerces artifacts are a bit outdated, and actually we didn't want them in our war files at all, so I decided to exclude them using the dependency exclusion feature:

<dependency>
  <groupId>commons-dbcp</groupId>
  <artifactId>commons-dbcp</artifactId>
  <version>1.2.1</version>
  <exclusions>
    <exclusion>
      <groupId>xml-apis</groupId>
      <artifactId>xml-apis</artifactId>
    </exclusion>
    <exclusion>
      <groupId>xerces</groupId>
      <artifactId>xercesImpl</artifactId>
    </exclusion>
  </exclusions>
</dependency>

What do you think the result will be? Surprisingly, xml-apis:xml-apis vanished, but xerces:xercesImpl didn't. The new dependency tree is the evidence:

...
[INFO] +- commons-dbcp:commons-dbcp:jar:1.2.1:compile
[INFO]   +- commons-pool:commons-pool:jar:1.2:compile
[INFO]   \- xerces:xercesImpl:jar:2.0.2:compile
...

That's strange, isn't it? It took me an hour or so of playing around to find the reason for this behaviour: Xerces has been relocated with respect to its Maven coordinates... The only hint Maven gives you is this message when executing in debug mode:

[DEBUG] While downloading xerces:xerces:2.0.2
  This artifact has been relocated to xerces:xercesImpl:2.0.2.

What does that mean? Well, you may relocate artifacts in the Maven repository when the group or artifact id should change (for more details, see the Guide to Relocation). Originally, the DBCP artifact specified a dependency on xerces:xerces – which you can see in the artifact's POM file (residing in your local Maven cache, for instance):

<dependency>
  <groupId>xerces</groupId>
  <artifactId>xerces</artifactId>
  <version>2.0.2</version>
</dependency>
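
For reference, the relocation itself is declared in a small stand-in POM published under the old coordinates, roughly like this (a sketch based on the Guide to Relocation; fields not listed in the relocation element keep their old values):

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>xerces</groupId>
  <artifactId>xerces</artifactId>
  <version>2.0.2</version>
  <distributionManagement>
    <relocation>
      <!-- only the artifact id changed -->
      <artifactId>xercesImpl</artifactId>
    </relocation>
  </distributionManagement>
</project>
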
However, xerces:xerces has been relocated to xerces:xercesImpl, and these new coordinates are what is shown in the dependency tree. But when excluding a dependency, you must specify the original transitive dependency (as defined by the artifact), not the new coordinates:

<dependency>
  <groupId>commons-dbcp</groupId>
  <artifactId>commons-dbcp</artifactId>
  <version>1.2.1</version>
  <exclusions>
    <exclusion>
      <groupId>xml-apis</groupId>
      <artifactId>xml-apis</artifactId>
    </exclusion>
    <exclusion>
      <groupId>xerces</groupId>
      <artifactId>xerces</artifactId>
    </exclusion>
  </exclusions>
</dependency>
How intuitive is that???

Maven should really do better here, either by indicating the original coordinates when showing the dependency tree/list, or better yet by issuing a warning when you try to exclude a relocated dependency. I guess the latter is not easy to implement, but the current behaviour is just annoying!

June 17, 2009

Checkstyle: One and Only Configuration File?

The Checkstyle Challenge

When you are using both Eclipse and Maven, you are probably facing the same challenge as we do: you would like to have a single configuration file for the Checkstyle tool and reference it from Eclipse as well as from Maven.

Checkstyle ships with some default conventions (according to the Sun coding standards). But you usually want to customize this and create your own configuration set, based on your corporate coding styles.

Now, you want to set up Checkstyle within Eclipse to provide direct feedback to developers. You want to set up Checkstyle with Maven to have a comprehensive report built as part of your project site. And you want both to reference the same configuration file, to avoid having to maintain two of them. Naturally, right?

Well, unfortunately, this seems impossible to achieve... if you have really found a way, let me know!

Nonetheless, there are some tips to help you centralize the Checkstyle configuration for both tools. This post is about how to do that.

Checkstyle with Maven

The goal is to centralize the Checkstyle configuration to make its usage as simple as possible for all the projects, avoiding manual configuration as far as possible. Fortunately, this is quite easy using Maven's dependency mechanism.

Essentially, we build a custom artifact that contains nothing but our Checkstyle configuration file. This JAR is deployed to the company's Maven repository from where it can be downloaded as a normal dependency when needed. After that, the Checkstyle plugin is configured (in the reporting section) to reference the packaged configuration file. That's it!

Now, we'll look at the details...

Let's assume our checkstyle configuration is located in config/my-checkstyle.xml. Then the POM of the checkstyle artifact, which actually is a minimal Maven project, looks like this:

...
<groupId>com.fja.ipl</groupId>
<artifactId>checkstyle-config</artifactId>
<version>1.0</version>
<build>
  <resources>
    <resource>
      <directory>config</directory>
    </resource>
  </resources>
</build>
...

(Of course, you additionally have to add the distributionManagement section to be able to deploy the artifact, but this is usually inherited from the company's base POM.)

Deploy the artifact. You'll notice that the created checkstyle-config-1.0.jar indeed contains your my-checkstyle.xml file in its root.

Now let's switch to the project that should use the Checkstyle report. The configuration file is referenced in the Checkstyle plug-in configuration in the reporting section like this:

  <reporting>
    <plugins>
      <plugin>
        <artifactId>maven-checkstyle-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <configLocation>my-checkstyle.xml</configLocation>
        </configuration>
      </plugin>
      ...
    </plugins>
  </reporting>

This does not work yet – Maven can't find the file, because our checkstyle-config artifact has not been referenced anywhere. We fix this by adding it as a build extension to the POM as follows:

  <build>
    <extensions>
      <extension>
        <groupId>com.fja.ipl</groupId>
        <artifactId>checkstyle-config</artifactId>
        <version>1.0</version>
      </extension>
    </extensions>
    ...
  </build>

Artifacts configured as extensions are included in the running build's classpath. That means checkstyle-config-1.0.jar is now on the classpath of the Checkstyle plugin, so the file referenced by the configLocation setting is located inside the JAR and can be found.

Using this approach, the Checkstyle configuration can be maintained centrally, and projects can get new versions simply by updating the artifact's version in the extension section.

Checkstyle with Eclipse

Unfortunately, I don't really see a way to use the same Checkstyle configuration, packaged into the Maven artifact, with Eclipse as well...

So you'll have to duplicate the configuration file. However, to keep the number of duplicates small, I recommend putting it on a central network drive within your company and using the Remote Configuration option of the Checkstyle Eclipse plugin to reference the remote file. This does not prevent you from working offline, because Eclipse downloads and caches the file.

In this scenario, changing the Checkstyle configuration is done centrally by just updating the remote file. This is a great advantage over a configuration that is part of each project, because in the latter case you would have to update every single project.

June 4, 2009

Google and the Crystal Ball

Google brought us the Web Search. They brought us the Maps. They brought us their Mail, the News, the Images, the Videos... In other words, they revolutionized the web. They rule the web. You all know that.

They are so creative and cutting-edge, and are always good for a surprise. Like this one: Google announced the Wave project a few days ago, which indeed might have the power to turn into the "next generation of e-mail". Wave is a web-based service platform designed to merge e-mail, instant messaging, wiki, and social networking. Gosh, this sounds exciting, really!

Yeah, Google is collecting data, probably more than we can imagine. And they will continue to do so. (A new study from UC Berkeley just revealed that Google is the dominant player in the tracking market; typically, tracking is done using "web bugs" that are embedded in a web page's HTML code and are designed to enable monitoring of who is reading the page.)

But... aside from all discussions about how evil or risky Google's data acquisitiveness might be, it opens a new world for something called "collective intelligence": discovering metadata based on the search engine's data.

What does this mean? Well, Google records search phrases, which can later be compared across specific regions, categories and time frames. Using Google's Insights for Search service, you can "see what the world is searching for". Pretty nice. For instance, search for "google wave" and see how the interest emerged at the end of May. Interestingly enough, there had already been a few searches in the summer of 2008!?!

Well, it even gets better. Google has found out that certain search terms used in their web search are good indicators of flu activity – when people have the flu, they search for symptoms, drugs etc. Google uses that to estimate flu activity on their Flu Trends page, and this indicator turns out to be quite good.

So, what is next? Possibilities seem endless, and Google will certainly surprise us again. Since they have all the data, and the (near) future is certainly hidden in that data – how about some kind of digital, web based crystal ball? I told you, so don't be surprised if they make it... ;o)

June 3, 2009

Eclipse Top Annoyances

Eclipse is one of those love/hate things. I love it and hate it, both at the same time.

The Love

Firstly, it's the best Java IDE on the market, and we have been using it successfully for years now. In the meantime, there might be other IDEs with an even richer feature set, better integration, nicer UI etc., but we actually don't care very much. I love Eclipse just for what it did for the Java and Open Source community, and it has definitely been pushing the edge for a couple of years now.

BTW, of course I know Eclipse is more than just a Java IDE, but this is my personal short interpretation of "development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle".

The Hate

Well, having said all this... sometimes Eclipse really drives me crazy. There are a couple of annoyances, some of them rather small but insistent, others bothering us for quite a long time.

The strange thing is, none of my complaints relate to the more sophisticated plugins like Mylyn, WTP, MDT and so forth. (Yes, we use them too, albeit in a sketchy way.) No, the issues annoying me most come from the Platform or JDT, and maybe that's why they are actually so annoying.

The List

Okay, here is my list for Eclipse 3.4 (Ganymede):

6. Formatting

We have set up the Eclipse Java code formatter to match our project requirements. There are plenty of options to adjust to your needs, but... sometimes the algorithm is just broken. Take this example:

private Accumulator getSpecificAccumAndSetRuleDefaultValue(
    CoverageRule calcStepRule,
    CalculationStepInfo calculationStepInfo,
    ...)
{
  ...
}

What is this extra line break before the first method argument good for? Instead of aligning the method arguments horizontally, Eclipse messes them up vertically. You can find several other strange formatting results where the formatted code requires extra space, mainly in long method or class declarations. Here is another example of an extra line break:

public class PersistenceService<R extends PersistentRootObject, D extends HibernateDaoSupport>
    implements
    IPersistenceService {
  ...
}

5. Blocking Workspace

I have blogged about this blocking behaviour before: sometimes, Eclipse can't save my file while it is building "in the background" and tells me that my "User Operation is Waiting"...

That's just silly – Eclipse should be able to perform my save action at any time, compiling or not. Losing the editor's content (through unintended editing, an Eclipse crash or hang-up, whatever) merely because saving is impossible is just not acceptable.

4. Startup Time

Depending on the size of the workspace, you have to wait minutes before you can start editing. That's just too slow. Period.

3. Dialog Window Dimensions

When you open a project's Properties window, for instance, it will show up quite small. I have a large screen and don't like to scroll unnecessarily, so I resize the window to be considerably larger. However, the dialog dimensions are not persisted – when I close and reopen the same dialog, it shows up as small as before. Eclipse just refuses to remember what I like.

For other dialog windows (like Open Type), this works perfectly; why not for all of them?

2. Update Manager

I used to think the new Eclipse 3.4 update manager (p2 Provisioning System) was a good thing, but I'm no longer sure. As an end user, you just don't want to mess around with features, dependencies, build numbers... and get strange error messages (see this post about an issue with a custom-built Eclipse plugin we struggled with recently).

Additionally, the list of available software is too technical for most end users. The Subversive SVN Connectors plugin provides six optional connectors, of which at least one is required. So, which one(s) do I choose? Instead of the full list with just name and version strings, I would rather like to see some typical sets of required features (a "typical installation", similar to when installing software on Windows). Alternatively, at least provide some end-user-level meta information like "This is the default feature", "Choose this item if..." etc.

1. Vanished Project Contents

Sometimes the content of my workspace projects in the Package Explorer view is "gone", meaning I can't expand the project nodes any more. Yes, of course there is some content, and fortunately it is still on disk – it's just not showing up in the view.

Project Explorer view still shows the projects correctly. Closing and re-opening them in Package Explorer usually helps.

The Hope

I used to install the latest milestones for some years, but have given up this habit because of stability/quality/compatibility issues with the Eclipse core and/or the plugins I use. Hence, I can't tell which of these annoyances will be addressed by Eclipse 3.5 (Galileo), which is due on June 24th.

So... let's keep our fingers crossed!

May 18, 2009

Maven Plugin Releases: Do it yourself!

In my previous post, I complained about Maven plugins that do not release new versions although there are blocking issues reported in Jira with patches attached. If you don't want to wait any longer for a new plugin release, just build one on your own. Here's how we usually do that.

Fix the Plugin

  1. Check out the plugin sources from the source repository and import them as an Eclipse project. The easiest way to do this is m2eclipse's cool "Materialize Maven Projects" feature (see here).
  2. Edit the POM and change the plugin's version number (see notes below).
  3. Download the patch from the JIRA issue page.
  4. Apply the patch using Eclipse: in the project's context menu, select Team > Apply Patch...
  5. Now build, unit-test and install the plugin using Maven.
  6. Update your project's POM to use this new version, and test, test, test...
  7. If everything is fine, upload your patched plugin to your repository manager to make it available to your team members.
  8. Check in all changes you made, probably including the fixed plugin – into your own repository, of course, not the plugin's one!

Give Some Thought to Version Numbers

A few rules for the fixed plugin's version string that have proven reasonable for us:

  • Do not simply increase the version number – this would invite mix-ups with future official versions of the plugin.
  • Instead, we usually add a suffix made up of our company name and a consecutive number, like "-fja-1" for the first fix. This indicates the initiator and leaves room for further fixes in case they should be required. Additionally, you always see the underlying official version. Example: <version>2.5-fja-1</version> for FJA's first fix of version 2.5.
  • Don't change the plugin's group or artifact id, because this would bypass Maven's version resolution – in Maven's eyes your fix would not just be a new version, but another plugin. You could end up having both plugins (the original one and your fix) configured in your builds, leading to duplicate execution and unexpected results.
  • If you just change the version string, this can be done in the parent POM – you are locking down your plugins in your corporate/project root POM, aren't you? – making it easy to switch back to an official release later... in case one becomes available. A sketch of such a lockdown follows below.
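
For illustration – assuming, say, a patched javadoc plugin; the coordinates and version here are examples only – the lockdown in the parent POM could look like this:

<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <!-- our patched build; switch back to an official version once one is released -->
      <version>2.5-fja-1</version>
    </plugin>
  </plugins>
</pluginManagement>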

May 13, 2009

Maven Compromised by Plugins

Every piece of software has its flaws... The important part is how the project is dealing with bugs.

Maven is fine

With Maven, the situation actually is good enough: there have been releases on the 2.0.x branch once or twice per year, to fix bugs and provide new features or improvements. At the same time, the 2.1.0 branch appeared, with a milestone and a final release six months later.

However, you would usually not want to switch your Maven build system to a new version too often. This is always risky since a new version of the "core" may cause unexpected results in any area of the build. A continuous integration engine helps to detect bugs, but you never know.

Fortunately, Maven is heavily based on plugins – which means if you encounter any issue with a specific Maven task, it's usually caused by the related plugin, not the Maven core. That's great, right? You just upgrade your POMs to use a new version of the particular plugin. Better isolation, fewer risks!

(Some) Plugins are not!

But... this assumes that a new version of the plugin is available. Now imagine this: a plugin has a bug reported by the community, and there is a Jira issue for it. Some brave soul even attached a patch to fix the issue, and others confirmed that it works for them. Thus, the new bugfix release could easily be built. But what happens? No new release. Nothing. Not even an alpha/beta version. Not even a comment explaining why. For months!

Here are just two examples we stumbled upon in the last couple of weeks:

  • Javadoc plugin: The report is not generated at all for multi-module project if run from parent level and using the recommended aggregate goal. Jira issue reported 2008/08/18, patch added 2008/09/22. No new release ever since.

  • Checkstyle plugin: Current version does not work with Checkstyle 5.0 that started to appear on the stage last summer. Jira issue created 2008/10/28, along with a patch; another patch added 2009/03/27. No new release ever since.

Wow. How annoying is that? There is a major/blocking issue, and there is a patch available, but the plugin owners don't take the time or effort to create a new release?!? Why so? What else could the community do to help fixing the plugin?

This way, the plugins are compromising the usability of Maven. IMHO, that's a serious issue...

So, what can you do?

If you don't want to continue waiting for a new plugin release, you have to build one on your own, for your "private use". That's quite easy, fortunately, and I'll probably post about that later.

However, this is actually not what a typical Maven user wants to do. The plugin projects should just do it right: release more often to fix their known bugs! You could increase the pressure a little bit by voting for the Jira issue or adding some comments, but my experiences with that are not too good...

May 11, 2009

Update to Checkstyle 5.0

Checkstyle 5 is available!

This is good news: a new version of Checkstyle is available that better supports Java 5 language features (like generics, annotations or package-info files). Additionally, some checks have been cleaned up a bit, for instance regarding the parent module that contains them. See the release notes for details.

Due to these changes, Checkstyle 5.0 is not fully compatible with the previous 4.x versions – which is already indicated by the version number leap. Hence, don't expect the upgrade to be smooth!

Nevertheless, I thought it would be time to upgrade, so here is what I did...

Step 1: Upgrade Checkstyle configuration

Due to the incompatibilities, I had to apply some changes to our Checkstyle configuration to make it work with Checkstyle 5. To be sure I did it right, I downloaded the Checkstyle binaries and checked my configuration against just a simple project. Hence, before upgrading the Eclipse and Maven Checkstyle integration, I knew my configuration was correct.

BTW, we are using a common checkstyle configuration file for all projects, and reference this "global" file with Maven and Eclipse in different ways:

  • With Maven, we use a custom artifact that contains nothing but our checkstyle.xml file. This in turn is included as build extension in our main base POM. See this post for more details.

  • For Eclipse, we use the "Eclipse Checkstyle Plugin" that provides a Remote Configuration option to reference the configuration file on an internal file server.

Step 2: Upgrade Eclipse Plugin

There is a new beta release of the Eclipse Checkstyle plugin (eclipse-cs) available on its update site http://eclipse-cs.sf.net/update. At the time of writing, there are three features available:

  • Eclipse Checkstyle Plug-in – version 5.0.0-beta4
  • Eclipse Checkstyle Plug-in 4.4.x -> 5.0.0 Migration (Optional) – version 5.0.0-beta4
  • m2eclipse Maven Synchronization Plugin (Optional/Experimental) – version 0.0.3

Oh, it seems there would have been some help in migrating from Checkstyle 4.4 to 5.0... Doesn't matter – I always like to see what the changes are, so it's good to do it manually.

The m2eclipse Synchronization plugin is a new feature providing a mechanism to synchronize Checkstyle rules and configuration between the Maven plugin and eclipse-cs. Sounds really interesting... but let's take one step after the other and test this later.

So. I just installed the "Eclipse Checkstyle Plug-in" feature. Eclipse didn't recognize that this actually is an update, so you have to uninstall the previous eclipse-cs installation manually.

Why that? Well, the "package" has changed from com.atlassw.tools.* to net.sf.eclipsecs.*, and this applies to the feature's ID, too. Moreover, this renaming also affects the buildCommand and nature entries in .project files; they have to be net.sf.eclipsecs.core.CheckstyleBuilder and net.sf.eclipsecs.core.CheckstyleNature now.
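
For illustration, the relevant fragments of a migrated .project file then look something like this (surrounding elements omitted):

<buildSpec>
  <buildCommand>
    <!-- the renamed eclipse-cs builder -->
    <name>net.sf.eclipsecs.core.CheckstyleBuilder</name>
    <arguments/>
  </buildCommand>
</buildSpec>
<natures>
  <!-- the renamed eclipse-cs project nature -->
  <nature>net.sf.eclipsecs.core.CheckstyleNature</nature>
</natures>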

Additionally, the notation for file sets has changed: a file set previously configured as src\\main\\java\\com\\mycompany\\.* no longer matches any file; it has to use forward slashes now, as in src/main/java/com/mycompany/.*.

Okay, maybe I should have tried the Migration plugin... Anyway, after these changes everything works fine for me in Eclipse.

Step 3: Upgrade Maven Plugin

Good. The last piece is Maven, which provides the Maven 2 Checkstyle Plugin. However, the current version is 2.2, which is based on Checkstyle 4.4 by defining these dependencies:

<dependency>
  <groupId>checkstyle</groupId>
  <artifactId>checkstyle</artifactId>
  <version>4.4</version>
</dependency>
<dependency>
  <groupId>checkstyle</groupId>
  <artifactId>checkstyle-optional</artifactId>
  <version>4.4</version>
</dependency>

There is an interesting way to override a plugin's dependencies, pointed out by Brian Fox, but that's not going to work for us because Checkstyle versions 4.4 and 5.0 are not API compatible.
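
For reference, the override approach roughly looks like this – just a sketch; as said, it does not help here, since the plugin's own code is written against the 4.4 API:

<plugin>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>2.2</version>
  <!-- plugin-level dependencies override the versions the plugin was built with -->
  <dependencies>
    <dependency>
      <groupId>checkstyle</groupId>
      <artifactId>checkstyle</artifactId>
      <version>5.0</version>
    </dependency>
  </dependencies>
</plugin>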

What can we do? Not much... we'll have to wait for a new version of the Checkstyle plugin that updates to Checkstyle 5. There is this Jira issue, and patches have already been provided some time ago. It's just that there seems to be no progress whatsoever... Checkstyle 5.0 has officially been out since April 18th, so there is no reason to wait any longer! Create a new release (for my part, alpha/beta is fine as well) – please!!!

If you really need the plugin to be fixed now, you could check out the plugin's sources and build your own version, applying the patch provided in the Jira issue. That works, but it's nothing we want to do regularly!

May 5, 2009

Eclipse: Update Manager Fools Me

Recently, we experienced an issue with a custom Eclipse plugin developed and hosted by my company. When trying to install the plugin into various versions of Eclipse Ganymede, the update manager always told me "Cannot find a solution satisfying the following requirements org.eclipse.ui [3.4.2.M20090204-0800]".

Finally, after some lost days of searching and trying, we discovered that this message is totally misleading: the issue was not caused by the feature org.eclipse.ui being missing or only available in the wrong version. Instead, the reason was a misconfigured manifest file in the custom plugin. It simply declared the wrong version for the required bundles (oAW, the MDA/MDD generator framework we use).

How nasty is that? Couldn't Eclipse update manager (p2) provide better support? More useful messages? Gosh!

Moreover, the Eclipse update manager seems to cache the metadata of remote repositories. That might be nice to avoid the network overhead of loading the repository again and again. But... when testing the installation of different plugin versions from a local update site, it definitely gets in your way. The sad thing is, you can't get rid of the cached contents easily. You'd have to remove the site, restart Eclipse with the -clean option twice, and after that recreate the location. Could be made easier, really.

Eclipse update manager will once more be refactored in upcoming Eclipse 3.5 Galileo. Folks, don't mess it up again!

May 4, 2009

Maven Enforcer Plugin: cool and annoying

As a Maven expert, you probably have heard of maven-enforcer-plugin, "The Loving Iron Fist of Maven", as they say.

It's so cool!

The idea is pretty cool: the plugin provides goals to ensure that the environment used to run Maven is what you – the POM author – expected it to be. There are checks for constraints on the Maven or JDK version, on the OS family, checks to enforce certain dependency requirements, and much more. The list on the web site gives you an impression of the kinds of checks provided.

Do you know what happens when you write your POMs and settings for Maven version 2.0.9 or greater, but somebody is still using an outdated 2.0.4? Well, this kind of issue is annoying, and you'd better make sure it does not happen.

The current version of maven-enforcer-plugin (at the time we set it up) was 1.0-alpha-4, which did not sound very mature, but we nevertheless decided to use it. For our projects, we identified a couple of things we wanted to check, and we configured the plugin accordingly:

  • Java Version: should be 1.5.
  • Maven version: must at least be 2.1.0.
  • Enforce defined versions for all plugins, and disallow any use of "LATEST", "RELEASE" or "SNAPSHOT" as a version for any plugin.

Hey, SNAPSHOT plugins are just dangerous, and you should never, never ever depend on them. The only exception could be plugins developed in-house – but actually, no, not really. Bah, don't use SNAPSHOT plugins. Period.

So, we tried to setup our enforcer plugin configuration like this:

<plugin>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.0-alpha-4</version>
  <executions>
    <execution>
      <id>enforce-versions</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireMavenVersion>
            <version>[2.1.0,)</version>
          </requireMavenVersion>
          <requireJavaVersion>
            <version>1.5</version>
          </requireJavaVersion>
          <requirePluginVersions>
            <message>Best Practice is to always define plugin versions!</message>
            <banLatest>true</banLatest>
            <banRelease>true</banRelease>
            <banSnapshots>true</banSnapshots>
          </requirePluginVersions>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>

Of course, we wanted to apply the same checks on the command line as within Eclipse when using the m2eclipse plugin.

It's so annoying!

Well, and here is where the trouble started...

Shot one. Oh wait, we are currently using Maven version 2.1.0-M1. Guess what? This is not included in the range [2.1.0,), which means every version x >= 2.1.0. Okay, let's use [2.1.0-M1,) instead.

Shot two. What about Eclipse? The check fails within Eclipse, and it turns out that the version number reported by the Maven embedder used by m2eclipse (0.9.6 at that time) is 2.1-SNAPSHOT. Thus, the enforcer fails, and we have to use this for the Maven version specification:

<version>[2.1.0-M1,),[2.1-SNAPSHOT,)</version>

Ugly, but still manageable...

Shot three. My team reported that the enforcer plugin sometimes takes some "thinking time" when checking the environment – it seems to hang for a couple of seconds. We all see that occasionally, and I have no idea what causes it. Actually, performing those version checks should be fast as lightning, right?

What makes this even worse is the fact that in multi-module builds, the enforcer plugin is called for each subproject: it binds by default to the validate lifecycle phase, which is executed for every subproject. However, the environment does not change much during a single build, so it would be sufficient to check it only once per Maven call... one conceivable way to get there is sketched below.
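
For instance – just a sketch of an idea, not something we actually verified – the plugin's inherited flag should restrict the execution to the top-level POM:

<plugin>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.0-alpha-4</version>
  <!-- don't inherit into the modules; the check then runs only for the parent project -->
  <inherited>false</inherited>
  <executions>
    <!-- same enforce-versions execution as shown above -->
  </executions>
</plugin>

The obvious trade-off: the checks would be skipped entirely when a module is built on its own.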

Shot four. When establishing the cool Sonar quality management platform, Maven started to complain that "some plugins are missing valid versions". It seems that Sonar defines dependencies on other artifacts it uses internally, but doesn't give a version for them. All you can do is specify a version for those plugins in the <pluginManagement> section of one of your parent POMs. That's actually not what you want: explicitly listing versions for artifacts used internally by the current version of a build tool... But what else can you do? Completely disable the requirePluginVersions check?

Shot five. After upgrading m2eclipse to version 0.9.7, it does not handle our enforcer configuration correctly any more and instead yields an error:

org.apache.maven.lifecycle.LifecycleExecutionException: Invalid or missing parameters: [Mojo parameter [name: 'rules'; alias: 'null']] for mojo: org.apache.maven.plugins:maven-enforcer-plugin:1.0-alpha-4:enforce

This is a known issue, and you have to uncheck the "Skip Maven compiler plugin when processing resources" checkbox in the project's properties. That means we have to change the configuration for all our projects :-(

Moreover, this option is checked by default, and deselecting it seems to have a negative performance impact.

Game over. This is the point where we eliminated the enforcer plugin. Too much pain for a little helper tool. One of those things that seem quite cool at first glance, but start to annoy you very soon.