tag:blogger.com,1999:blog-61890166165856940432024-03-14T05:57:46.912+01:00Java MoodsJava, Maven, Tools and everything else...::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.comBlogger46125tag:blogger.com,1999:blog-6189016616585694043.post-43747428885754385522011-05-05T23:46:00.007+02:002011-05-06T01:10:11.978+02:00The Butler Dispute, Round 2<p>Nearly four months have passed since the renaming of the Hudson project to Jenkins, which marks the climax of the <a href="http://javamoods.blogspot.com/2011/02/butler-dispute.html">dispute</a> between the old Hudson developers and the guys from Oracle and Sonatype.</p><p>Jenkins has done a great job since then, delivering 15 releases on a weekly schedule. Build number <a href="http://kohsuke.org/2011/03/13/jenkins-hits-1-400/">1.400 was hit</a> in March, which is not a particularly significant release in itself but shows how smoothly things are going. <a href="http://javamoods.blogspot.com/2011/03/way-from-hudson-to-jenkins.html">The way from Hudson to Jenkins</a> is as easy as it could be, and it seems like many users are taking it. </p><p>Indeed, there are a lot of reasons to choose Jenkins over Hudson, just to name a few:</p><ul><li>Support by the fabulous Hudson core development team – with Kohsuke Kawaguchi, the creator of Hudson, and other brave guys.</li><li>Strong community activity – measured in figures like commit counts and mailing list traffic; see <a href="http://bobbickel.blogspot.com/2011/03/jenkins-vs-hudson-time-to-upgrade.html">this post</a> for some numbers.</li><li>Most of the plugins moved over to Jenkins – 5 of the top 5 and 19 of the top 25 plugins continue primary development with Jenkins; see <a href="http://jieryn.livejournal.com/4362.html">here</a> for some statistics.</li><li>High quality and regular releases – the weekly schedule has led to 15 high-quality releases, each of them providing a couple of bug fixes and new features (see <a href="http://jenkins-ci.org/changelog">changelog</a>). 
Moreover, a few weeks ago, the Jenkins governance board <a href="https://wiki.jenkins-ci.org/pages/viewpage.action?pageId=57180302">proposed to start another release line</a> for stable baselines on a three-month schedule.</li></ul><p>Even the Hudson board seems to have observed that Jenkins outperforms Hudson in many ways; at least, they are thinking about "how to make it more attractive for plug-in developers to support both Hudson and Jenkins" (see <a href="http://java.net/projects/hudson/lists/dev/archive/2011-05/message/18">this message</a> on the Hudson-Dev list). The author's perception is that "Hudson also appears to be slowing down development wise" and "another place where Hudson appears to be slowing down, is when you compare changelogs". Some of the ideas deal with copying approaches that are working fine for the Jenkins project.</p><p>Hence, it seems Jenkins is the winner of the battle and has in fact benefited from the fork.... until today.</p><p>Because today, <a href="http://www.oracle.com/us/corporate/press/393483">Oracle submitted</a> a proposal to move Hudson to the Eclipse Foundation. This is, well, somewhat astonishing, since it means Oracle will lose both control and the Hudson trademark – which was the main background of the original dispute with the community.</p><p>As part of the proposal, other big players have announced support for the project, including IBM, VMware, Tasktop and Intuit. That means moving the Hudson project to Eclipse will certainly result in more attention and more resources (developers).</p><p>Does this change anything? Will Jenkins be the unlucky loser, after all? I don't think so. The heavens haven't really smiled on Hudson since the fork (kind of bad karma) and I don't see why the move to Eclipse should change that. 
It's all about people, not code.</p><p>Moreover, Jenkins has been <a href="http://www.sonatype.com/people/2011/05/sonatype-supports-hudsons-move-to-the-eclipse-foundation/">invited by Sonatype</a> to reunite with Hudson. But... why should they do that? Jenkins is a vibrant project today, so what is the benefit? Also, there have been some deep disappointments on a personal level that are not forgotten yet.</p><p>It's going to be interesting!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-48728460865249451632011-04-07T15:07:00.005+02:002011-04-07T15:54:43.598+02:00Jenkins: Pimp It Up!<p>Some days ago, I started to review what plugins are available for Jenkins, <a href="http://javamoods.blogspot.com/2011/03/way-from-hudson-to-jenkins.html">my favorite CI server</a>. I hadn't done so for a long time, so I was somewhat surprised to see a full universe of plugins (380+) listed in the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Plugins">Wiki</a>...</p><p>There is almost everything you can imagine. Among the plugins I would like to suggest for consideration are these:</p><ul><br /><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Dependency+Graph+View+Plugin">Dependency Graph View Plugin</a> – Shows the dependency graph of the Jenkins projects using graphviz. 
This greatly helps to keep track of dependencies between all your Jobs.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-TdG2Mq9kJEA/TZ3ACjfgHcI/AAAAAAAAADw/49Zj6msai9k/s1600/Jenkins_DependencyGraph.gif"><img style="margin-left:50px; cursor:pointer; cursor:hand;width: 400px; height: 161px;" src="http://2.bp.blogspot.com/-TdG2Mq9kJEA/TZ3ACjfgHcI/AAAAAAAAADw/49Zj6msai9k/s400/Jenkins_DependencyGraph.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5592837462383664578" /></a></li><br /><br /><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Disk+Usage+Plugin">Disk Usage Plugin</a> – This plugin calculates and records disk usage (space for builds and workspace) per project and per build, and can display trend graphs.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-gYjzbit_khk/TZ3ACyGNk0I/AAAAAAAAAD4/XSyIrJ7wwMg/s1600/Jenkins_DiskUsage.gif"><img style="margin-left:50px; cursor:pointer; cursor:hand;width: 400px; height: 168px;" src="http://4.bp.blogspot.com/-gYjzbit_khk/TZ3ACyGNk0I/AAAAAAAAAD4/XSyIrJ7wwMg/s400/Jenkins_DiskUsage.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5592837466304123714" /></a></li><br /><br /><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Global+Build+Stats+Plugin">Global Build Stats Plugin</a> – can be used to gather and display global build result statistics, monitoring over time, and show nice graphics.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-ZpxEKqoBVGU/TZ3ADAtgueI/AAAAAAAAAEA/K5LPUi06OK8/s1600/Jenkins_GlobalBuildStats.gif"><img style="margin-left:50px; cursor:pointer; cursor:hand;width: 400px; height: 245px;" src="http://4.bp.blogspot.com/-ZpxEKqoBVGU/TZ3ADAtgueI/AAAAAAAAAEA/K5LPUi06OK8/s400/Jenkins_GlobalBuildStats.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5592837470227053026" /></a></li><br /><br /><li>And, of course, all the 
static analysis plugins that scan the result files of several static code analysis tools and visualize the results as trend graphs:<ul><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Checkstyle+Plugin">Checkstyle Plug-in</a></li><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Cobertura+Plugin">Cobertura Plugin</a></li><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/FindBugs+Plugin">FindBugs Plugin</a></li><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/JavaNCSS+Plugin">JavaNCSS Plugin</a></li><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/PMD+Plugin">PMD Plugin</a></li><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Task+Scanner+Plugin">Task Scanner Plugin</a></li><li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Warnings+Plugin">Warnings Plugin</a></li></ul><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-z3JagrhSN2k/TZ3ADLtjdDI/AAAAAAAAAEI/VOidk6R1ZOU/s1600/Jenkins_StaticAnalysis.gif"><img style="margin-left:50px; cursor:pointer; cursor:hand;width: 400px; height: 223px;" src="http://3.bp.blogspot.com/-z3JagrhSN2k/TZ3ADLtjdDI/AAAAAAAAAEI/VOidk6R1ZOU/s400/Jenkins_StaticAnalysis.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5592837473180021810" /></a></li><br /></ul><p>Have fun!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-71149926812471916912011-03-29T09:07:00.002+02:002011-03-29T10:11:52.168+02:00DocBook with Maven Issue<p>We are using <a href="http://docbook.org/">DocBook</a> for writing technical documentation for all our projects and in-house frameworks. We are actually quite happy with this approach, especially because we are able to automatically publish the docs in a number of formats, including HTML and PDF. 
To do so, we use the <a href="http://docs.codehaus.org/display/MAVENUSER/Docbkx+Maven+Plugin">docbkx-maven-plugin</a> in the project's nightly build.</p><p>So, everything was in perfect order... until I decided to upgrade the docbkx-maven-plugin from version 2.0.8 to the current version 2.0.11 (due to some issues we had). After doing so, the document conversion issues an error which breaks the build:</p><pre class="brush:xml">[ERROR] Failed to execute goal com.agilejava.docbkx:docbkx-maven-plugin:2.0.11:generate-pdf (pdf) on project builddoc-ma<br />ven-plugin: Failed to transform to PDF: org.apache.fop.fo.ValidationException: null:30:723: Error(30/723): fo:table-body<br /> is missing child elements.<br />[ERROR] Required Content Model: marker* (table-row+|table-cell+)</pre><p>Well, this is somewhat unexpected, because I didn't change anything but the plugin version, and I don't see any reason it could not work as before. In particular, we are still using the same DocBook version in our POM. Here is the relevant snippet:</p><pre class="brush:xml"><plugin><br /> <groupId>com.agilejava.docbkx</groupId><br /> <artifactId>docbkx-maven-plugin</artifactId><br /> <version>2.0.11</version><br /> <dependencies><br /> <!-- the DocBook XML DTD and catalog files (see http://www.oasis-open.org/docbook) --><br /> <dependency><br /> <groupId>org.docbook</groupId><br /> <artifactId>docbook-xml</artifactId><br /> <version>4.4</version><br /> <scope>runtime</scope><br /> </dependency><br /> </dependencies><br /><br /> <executions><br /> <execution><br /> <id>pdf</id><br /> <goals><br /> <goal>generate-pdf</goal><br /> </goals><br /> <phase>post-site</phase><br /> <configuration><br /> ...<br /> </configuration><br /> </execution><br /> ...<br /> </executions><br /><br /> <configuration><br /> <htmlStylesheet>css/html.css</htmlStylesheet><br /> <htmlCustomization>${basedir}/src/doc/xsl/html_chunk_customization.xsl</htmlCustomization><br /> 
<foCustomization>${basedir}/src/doc/xsl/fopdf_customization.xsl</foCustomization><br /> ...<br /> </configuration><br /></plugin></pre><p>It's important to understand that we are using the <a href="http://docbkx-tools.sourceforge.net/advanced.html">advanced customizing</a> capabilities of DocBook, i.e. we customized the stylesheets used for rendering HTML and PDF. The custom stylesheets contain an import of <code>urn:docbkx:stylesheet</code>, and in the Maven POM the <code>htmlCustomization</code> and <code>foCustomization</code> properties point to those custom stylesheets. This is how it's supposed to be, and this is how it has worked all along.</p><p>I found out that the error message is correct when building with plugin versions greater than 2.0.8: the <code>for-each</code> element indeed does not select any elements, which results in an empty <code>fo:table-body</code>. In fact, none of the <code>xsl:value-of</code> elements in our customized stylesheet returned any values anymore....</p><p>So here is why: since <code>docbkx-maven-plugin</code> version 2.0.9, the plugin has been using <a href="http://docbook.xml-doc.org/snapshots/xsl-ns/README">namespaced stylesheets</a>. That is, we must use a namespace in our custom stylesheet to be able to select any DocBook element! See <a href="http://groups.google.com/group/docbkx-tools-users/browse_thread/thread/af837b9c268f6b9b/01cc4eaef3ebaece?lnk=raot">this</a> or <a href="http://www.mail-archive.com/docbook-apps@lists.oasis-open.org/msg14755.html">this</a> post for related comments.</p><p>Thus, all I had to do was add the DocBook namespace declaration at the top and add the DocBook namespace prefix to all references to element names in my customization layer. 
See the highlighted lines in this XSL snippet:</p><pre class="brush:xml; highlight: [3,12]"><xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"<br /> xmlns:fo="http://www.w3.org/1999/XSL/Format"<br /> xmlns:db="http://docbook.org/ns/docbook"<br /> exclude-result-prefixes="date"<br /> version="1.0"><br /><br /> <xsl:template name="book.titlepage.separator"><br /> <fo:block><br /> <fo:table table-layout="fixed" width="163mm"><br /> ...<br /> <fo:table-body text-align="left"><br /> <xsl:for-each select="/db:book/db:bookinfo/db:revhistory/db:revision"><br /> ...<br /> </xsl:for-each><br /> </fo:table-body><br /> </fo:table><br /> </fo:block><br /> </xsl:template><br /> ...<br /></xsl:stylesheet><br /></pre><p>Well, that did the trick – after a couple of hours of investigation... I think this issue should be clearly documented for the <code>docbkx-maven-plugin</code>, because in the end it is an incompatibility between versions 2.0.8 and 2.0.9. Alas, I did not find this information on the plugin's <a href="http://docbkx-tools.sourceforge.net/docbkx-maven-plugin/changes-report.html">Changes Report</a> page. At least, nothing that pointed me (not being a DocBook expert) in this direction... :-(</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com3tag:blogger.com,1999:blog-6189016616585694043.post-24139174052200251842011-03-21T20:08:00.006+01:002011-03-21T21:17:48.665+01:00The Way From Hudson To Jenkins<p>Some time has gone by since the <a href="http://javamoods.blogspot.com/2011/02/butler-dispute.html">Hudson/Jenkins fork</a>... and there has been even more talk in the community. However, the dust is slowly settling and everybody is getting back to business. And finally, we decided to switch from Hudson to Jenkins! This is about why and how.</p><h4>Why move to Jenkins?</h4><p>But wait: who has forked, anyway? Is it Jenkins that forked Hudson, or is Hudson a fork of Jenkins? 
There is some evidence that the community just did a rename of the project (due to trademark conflicts), and that after that <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=317610">Oracle forked Jenkins</a>, using the Hudson name it claims to hold the trademark on.</p><p>You may think this question is a purely theoretical one, but actually it's not. I'll have to justify the decision to move to Jenkins to my stakeholders, and using a fork would be a "smell". Project forks are usually not as good as the "original", are possibly done for selfish reasons, are considered to harm the community, etc. Hence, not moving to a fork but instead following the "real" project is a good reason for the move to Jenkins.</p><p>An even better one is "project vibrancy", that is, the pace of development and the level of support provided by the community. This is usually measured by indicators such as the number of commits, the mailing list traffic, the quantity, quality and regularity of releases, etc. See <a href="http://daniel.gredler.net/2011/02/15/hudson-and-jenkins-two-weeks-later/">this post</a> for such an analysis of commit counts and mailing list post counts. It is more than four weeks old now and covers no more than two weeks, but nevertheless the result is obvious: Jenkins moves much faster than Hudson does, and its community is much more agile. This is confirmed by following the dev mailing lists of both: for Hudson, most of the relevant posts are by either Oracle or Sonatype engineers – it seems the Hudson community has become pretty small... Moreover, as <a href="http://jieryn.livejournal.com/4362.html">this post</a> shows, most of the top plugins will continue primary development under Jenkins.</p><p>Last but not least, I really respect Kohsuke Kawaguchi (the original creator of Hudson) and what he has done for us. 
I am dismayed by how Oracle is dealing with him and the rest of the core team, which is why I have a strong tendency to follow the "good guys" over to Jenkins.</p><p>As I blogged before, Maven integration is probably one of the most important features of any CI server (at least for me). I would guess Sonatype is doing better with Maven integration – it's "The Maven Company", right? – and they are working with Oracle on Hudson. At least, they are putting huge efforts into rock-solid integration. However, after having seen a Sonatype webinar about their plans for Hudson, I'm not that convinced any more. The current features looked a bit awkward, and so does the <a href="http://www.sonatype.com/people/2011/02/guicing-up-hudson-making-life-easier-for-developers-with-jsr-330/">GWT based UI</a> they are using. So, from my point of view, this question is not yet decided.</p><p>Putting it all together, there are some really good reasons to move from Hudson to Jenkins, so we did.</p><h4>How to upgrade</h4><p>Now... how do you actually migrate from Hudson to Jenkins? Well, it couldn't be easier. There is a Wiki page about <a href="http://wiki.jenkins-ci.org/display/JENKINS/Upgrading+from+Hudson+to+Jenkins">Upgrading from Hudson to Jenkins</a>. In short, the steps involved are:</p><ol><br /><li>Back up your current installation – just for the good feeling.</li><br /><li>Change the update site: In your Hudson, go to <span style="font-style:italic;">Manage Hudson</span> > <span style="font-style:italic;">Plugin Management</span> > <span style="font-style:italic;">Advanced</span> > <span style="font-style:italic;">Update Site</span> and enter "http://updates.jenkins-ci.org/update-center.json" as the URL for the Jenkins update site.</li><br /><li>Choose to upgrade automatically on the <span style="font-style:italic;">Manage Hudson</span> page, just as you did so many times to update Hudson. This will download the new JAR.</li><br /><li>Restart Hudson, eh, Jenkins.... 
and there it is!</li><br /></ol><p>That's it – it took less than 5 minutes! Jenkins is indeed a drop-in replacement for Hudson, so you usually do not have to change anything (environment variables, system properties, start scripts, job configuration etc).</p><p>Well, there is only one thing: the name of the WAR file is still hudson.war! Is Oracle aware of this? ;-)</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com4tag:blogger.com,1999:blog-6189016616585694043.post-52480460480684985642011-02-27T21:37:00.008+01:002011-02-27T23:12:13.750+01:00The Butler Dispute<p>I thought it would be time to resurrect my blog, after not having posted for a couple of months. That was mainly because I have been really busy with some interesting stuff I should post about in the future, like the Xtext upgrade...</p><h4>Oracle vs. the Community</h4><p>But today, I just bumped into the ongoing dispute between the Hudson and Jenkins guys. We have been using Hudson since around 2008, having come from Cruise Control. We really liked the web interface, being able to set up everything using just your browser. Of course, the features have also been impressive ever since. Setting up a build farm is just fun with Hudson.</p><p>You probably know that there has been a fork of Hudson which is named Jenkins (others say Hudson has been renamed to Jenkins and then forked into Hudson). This all started with the Hudson team being unhappy with the infrastructure provided by java.net (which has been run by Oracle since the Sun acquisition), due to its poor reliability. The community talked about moving parts of the project to other servers, and the first candidate was issue tracking. Suddenly, the project is locked due to the migration of java.net projects to the new Kenai infrastructure, which was announced by Oracle but somehow missed by the project owners. Frustrated by the migration, the community decides to move the code to GitHub and the mailing list to Google Groups. 
See <a href="http://jenkins-ci.org/content/whos-driving-thing">"Who's driving this thing?"</a> for the facts.</p><p>This is the point where Oracle steps in, claiming to have a trademark on the name. If the project decides to move, it must use another name: "Because it is open source, we can't stop anybody from forking it. We do however own the trademark to the name so you cannot use the name outside of the core community. We acquired that as part of Sun." (BTW, that <a href="http://www.theserverside.com/discussions/thread.tss?thread_id=61437">might not be true</a> after all). Later, Oracle stated that "the final decision of what to do w.r.t. infrastructure belongs to Oracle". </p><p>Guess what: this really concerned the community. There have been some <a href="http://jenkins-ci.org/content/hudsons-future">talks</a> between key community members and Oracle representatives, in an attempt to agree on a "proposal for a stable structure and arrangement" which would later be proposed to the community. But with no success. That finally led to the <a href="http://kohsuke.org/2011/01/11/bye-bye-hudson-hello-jenkins/">decision</a> of the community to move to GitHub and at the same time rename the project to another butler's name: "Jenkins".</p><p>Of course, <a href="http://hudson-ci.org/docs/process_summary.html">Oracle's view</a> on the subject is a bit different...</p><h4>Welcome Jenkins!</h4><p>Well, so now you have the choice: use <a href="http://hudson-ci.org/">Hudson</a>, or use <a href="http://jenkins-ci.org/">Jenkins</a>. You know, competition is usually a good thing, so let the race begin. The majority of the community seems to have made the switch to Jenkins (judging by the blogs and mailing list traffic). This is because Oracle's behavior is not well understood and does not cast a positive light on its comprehension of its own role in the Hudson community.</p><p>However, Oracle is putting enormous resources (people and hardware) into the Hudson project. 
And what's even more important, Sonatype is helping to drive Hudson to the next level. Sonatype? Right, that's the company behind Maven and all the great Maven tools like Nexus and m2eclipse.</p><h4>Maven Support – the Killer Feature?</h4><p>One of the most important features (for me, and possibly for the majority of other users) is Maven 3 support. Sure, Hudson/Jenkins has supported Maven 3 since version 1.392 (end of 2010, see <a href="http://jenkins-ci.org/changelog">changelog</a>). But hey, Sonatype entered the scene, and they will surely do better.</p><p>Sonatype, too, has put some full-time engineers on the Hudson project, making sure that "Hudson users can look forward to a long, bright future". See <a href="http://www.sonatype.com/people/2011/02/our-focus-on-advancing-hudson-and-making-great-software/">this</a> or <a href="http://www.sonatype.com/people/2011/02/hudsons-bright-future/">this</a> post. At the end of the day, Sonatype wants to earn money with Hudson (and Maven), so I expect to see outstanding features related to Maven 3 support, Eclipse integration and workflow extensions for Hudson. See <a href="http://www.sonatype.com/people/2009/02/sonatypes-hudson-plans-for-maven-integration/">here</a> for some of their ideas.</p><p>Well, this really makes for a thrilling game. I honestly appreciate what Kohsuke Kawaguchi and others have built with Hudson from the ground up, and would like to see them win over the "evil company that pushed them out of the project". And by the way, Sonatype seems to be in good company when it comes to being evil – they <a href="http://www.jroller.com/eu/entry/committer_is_removed">removed the oldest committer of m2eclipse</a> from the project a year ago.</p><p>So, is this again the good vs. evil story? I don't know. In the end, both projects will have their users. And they will learn and benefit from each other. So let's wait and see.... Time will tell. 
It's going to be an interesting year, though!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-35817103848108220692010-10-15T12:15:00.005+02:002010-10-15T13:15:19.988+02:00Maven 3 and Plugin Mysteries<p>You probably know that <a href="http://www.sonatype.com/people/2010/10/maven-3-0-has-landed/">Maven 3 has landed</a>. Before testing it with our projects, I was curious about the plugins that are defined in the Maven master POM's <em>pluginManagement</em> section and hence are locked down with respect to their version. Since all projects inherit from this master POM, they will use the respective version of those plugins if not explicitly overridden anywhere in the project's POM hierarchy.</p><p>Maven 3 is a <a href="https://cwiki.apache.org/MAVEN/maven-3x-compatibility-notes.html#Maven3.xCompatibilityNotes-AutomaticPluginVersionResolution">bit more strict</a> concerning automatic version resolution of invoked plugins. Unlike Maven 2, it will always use the latest release (i.e. non-SNAPSHOT) version of a plugin if no explicit version was specified in the POM or on the command line. Moreover, it will issue a warning when missing plugin versions are detected "to encourage the addition of plugin versions to the POM or one of its parent POMs". This is to increase the reproducibility of builds.</p><p>Thus, in Maven 3 the desired build stability is ensured by urging the POM author to give explicit plugin versions, and no longer relies on a full list of plugins (with versions) defined in the master POM. That's why I expected to find a small or even empty <em>pluginManagement</em> section. 
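(As an aside, pinning a plugin version in your own project or corporate parent POM looks like the sketch below – the version numbers are merely illustrative examples, not recommendations:

```xml
<build>
  <pluginManagement>
    <plugins>
      <!-- pin the compiler plugin so the build stays reproducible -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
      </plugin>
      <!-- repeat for every plugin the build invokes -->
    </plugins>
  </pluginManagement>
</build>
```

Once every invoked plugin is pinned this way, the Maven 3 warning disappears.)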
Well, let's see.</p><p>To find out what's in the <em>pluginManagement</em> section of the master POM, you just have to create a <a href="http://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Minimal_POM">minimal POM</a> and show the <a href="http://maven.apache.org/plugins/maven-help-plugin/effective-pom-mojo.html">effective POM</a> (the one that results from the application of interpolation and inheritance, including the master POM and active profiles) by calling <code>help:effective-pom</code> for this simple project.</p><p>So, what do we get? The following list shows the plugin versions that are defined in the Maven 2.2.1 master POM and the Maven 3 master POM, as well as the most recent version of those plugins.</p><p><a href="http://2.bp.blogspot.com/_ey2D_DPIY5E/TLgyBS2y7AI/AAAAAAAAADI/F4QSKGtyErU/s1600/MavenPlugins.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 320px;" src="http://2.bp.blogspot.com/_ey2D_DPIY5E/TLgyBS2y7AI/AAAAAAAAADI/F4QSKGtyErU/s400/MavenPlugins.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5528223540420209666" /></a></p><p>Well, we can see some interesting details here:</p><ul><li>The number of plugins defined in the master POM's <em>pluginManagement</em> section is drastically lower for Maven 3 than for Maven 2.2.1 – that's what we expected. However, there are still a few.</li><br /><li>Which plugins are listed and which are not? It seems like the plugins for the most basic lifecycle phases (like <em>clean</em>, <em>install</em>, <em>deploy</em>) are predefined, but others are not (like <em>compile</em> or <em>jar</em>). Is there any policy?</li><br /><li>What is really odd: for some of the predefined plugins, there is a newer version available than is listed in the Maven 3 master POM (colored red). Why could that be? I have not checked, but Maven 3 has been out for only a few days now, so I suspect for most of those plugins the newer versions were available before the release. 
Is that intentional? Are the new versions not considered "good" or "stable" by the Maven guys? Or did they just forget to upgrade? Or did they simply not consider it important?</li><br /><li>Another thing I can't explain: when you look at the Maven 3 project's <a href="http://maven.apache.org/ref/3.0/plugin-management.html">Plugin Management site</a>, a lot more plugins are listed there, and some even have versions other than what we got by showing the effective POM for a minimal project POM. How can this be? I have no clue...</li></ul><p>In a <a href="http://javamoods.blogspot.com/2009/12/maven-plugins-upgrade-with-care.html">previous post</a>, I listed the plugins predefined by Maven 3.0-alpha5. Interestingly, there were a lot more of them (as for Maven 2.2.1), but the "stale version" question was the same...</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com1tag:blogger.com,1999:blog-6189016616585694043.post-14929340405850642932010-10-13T17:46:00.005+02:002010-10-13T18:37:45.382+02:00World of Versioning<p>Today, we had a discussion on how to name a hotfix release of our framework product, built with Maven (you knew I'm a fan of Maven, didn't you?). It's a very basic question, but still an interesting one, and it opens up a whole universe of ideas, opinions and rules...</p><p>The previous versions of our product had been named like this:</p><blockquote>1.3.0, 1.3.1 ... 1.4.0, 1.4.1, ... 1.5.0, 1.5.1, 1.5.2, ... 1.5.6</blockquote><p>They are all based on a release plan and contain bugfixes as well as improvements and new features. For each of those versions, we have written release notes and built a site.</p><p>Now, what do we do when there is the need to release a bugfix version of a regular release we built a few days ago? There are some options:</p><ol><li>1.5.7 – i.e. 
increment the last number; however, this doesn't seem to fit well because the bugfix release is of a different character than standard releases</li><li>1.5.6.1 – i.e. add an additional numerical identifier</li><li>1.5.6.a – i.e. add another non-numerical identifier</li><li>1.5.6-patch1 – i.e. add another qualifier describing that it's actually a patch release</li></ol><p>When searching the Net for version number rules in the Maven world, you'll stumble upon the <a href="http://maven.apache.org/ref/current/maven-artifact/xref/org/apache/maven/artifact/versioning/DefaultArtifactVersion.html">DefaultArtifactVersion</a> class in the core of Maven, which expects version numbers to follow a specific format:</p><blockquote><MajorVersion [> . <MinorVersion [> . <IncrementalVersion ] ] [> - <BuildNumber | Qualifier ]></blockquote><p>Here <em>MajorVersion</em>, <em>MinorVersion</em>, <em>IncrementalVersion</em> and <em>BuildNumber</em> are all numeric and <em>Qualifier</em> is a string. If your version number does not match this format, then the entire version number is treated as the Qualifier (see <a href="http://mojo.codehaus.org/versions-maven-plugin/version-rules.html">Versions Maven Plugin</a>).</p><p>This means options 1 and 4 above would be viable alternatives in the Maven world. However, note that there is some <a href="http://docs.codehaus.org/display/MAVEN/Versioning">discussion</a> about this Maven schema. It suffers from inconsistent/unintuitive parsing, lexical sorting of qualifiers and some other flaws. This can yield unexpected comparison results, especially when using Maven SNAPSHOT versions. The proposal given on that page seems to have been integrated into Maven 3.</p><p>Actually, we wouldn't be having this discussion if the third level were not named the <em>incremental version</em> in the Maven world, but rather the <em>bugfix version</em> or <em>patch version</em>. 
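To see how the four options fare against that parsing rule, here is a small self-contained sketch. Note this is a simplified re-implementation for illustration only – real builds should rely on Maven's own <code>DefaultArtifactVersion</code> class, which has more corner cases:

```java
// Simplified illustration of Maven's rule: a version either matches
// <major>[.<minor>[.<incremental>]][-<buildNumber|qualifier>], or the
// whole string is treated as a qualifier. NOT Maven's actual parser.
public class VersionParseDemo {

    static String describe(String version) {
        if (version.matches("\\d+(\\.\\d+(\\.\\d+)?)?(-\\w+)?")) {
            return version + " -> parsed numerically";
        }
        return version + " -> whole string treated as qualifier";
    }

    public static void main(String[] args) {
        System.out.println(describe("1.5.7"));        // option 1
        System.out.println(describe("1.5.6.1"));      // option 2: falls back to qualifier
        System.out.println(describe("1.5.6.a"));      // option 3: falls back to qualifier
        System.out.println(describe("1.5.6-patch1")); // option 4
    }
}
```

Versions that degrade to a pure qualifier are compared lexically, which is exactly the kind of comparison surprise mentioned above.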
There is a <a href="http://semver.org/">Semantic Versioning Specification (SemVer)</a> that recommends this version schema:</p><blockquote>A normal version number MUST take the form X.Y.Z where X, Y, and Z are integers. X is the major version, Y is the minor version, and Z is the patch version. Each element MUST increase numerically. For instance: 1.9.0 < 1.10.0 < 1.11.0.</blockquote><p>There are some rules describing when to increase which part. The main idea is to use the first number (major version) to indicate backwards incompatible changes to the public API, while the last number (patch version) suggests that only backwards compatible bug fixes have been introduced.</p><p>This SemVer schema is fully compatible with Maven (regardless of SNAPSHOT versions). If we had used it, we would probably have ended up with a "higher" version number like 5.4.0, but the upcoming patch would now simply be version 5.4.1, without any discussion.</p><p>By the way, a lot of public recommendations for software versioning follow this <em><major>.<minor>.<patch></em> schema. See this <a href="http://stackoverflow.com/questions/2048437/what-version-numbering-scheme-to-use">question</a> and <a href="http://en.wikipedia.org/wiki/Software_versioning">Wikipedia</a> for more information on software versioning.<br /></p><p>So. What do we do now? We'll release version <em>1.5.6-patch1</em> for the patch, but think about changing our versioning according to SemVer, i.e. incrementing the major number when introducing incompatible changes, and the minor number in most other cases.</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com1tag:blogger.com,1999:blog-6189016616585694043.post-17930095849062517902010-09-27T13:47:00.009+02:002010-09-27T15:59:19.599+02:00Fix Foreign Code<p>Well, finally, I'm back! 
I have been busy working on-site for a customer of my company, helping to fix their project and increase quality to successfully conduct the rollout. Additionally, I spent my evenings working as a release manager and keeper of the Maven-based infrastructure for several projects developed in-house. So this was more than a full-time job and unfortunately no time was left to read or write blog posts :-(</p><p>However, that project assignment is nearly over now and I intend to write more regularly about my findings, trials and tribulations.</p><h4>Foreign Code Dilemma</h4><p>My main task when working for our customer was to fix bugs and improve the quality of their application, which was nearly completely implemented with respect to use cases and business requirements. This is a situation that might be familiar to most developers: you are thrown into a project you don't know much about, lots of source code is already implemented, quality is, well, varying, and some important milestone or release date is right around the corner. This is what I call the <span style="font-style:italic;">Foreign Code Dilemma</span>. </p><p>What do you do to quickly get up to speed and rescue the project? Well, there are some things that I find quite useful in situations like this. In no particular order...</p><h4>Introduce Continuous Integration</h4><p>It should be common sense these days that Continuous Integration (CI) is able to improve software quality and reduce integration issues as well as overall risks. CI is a software development practice where changes are integrated frequently – usually at least daily – and the result is verified by an automated build and test to detect issues as quickly as possible. The well-known article about <a href="http://martinfowler.com/articles/continuousIntegration.html">Continuous Integration</a> by Martin Fowler is a must-read.</p><p>Fortunately, the customer's project already provided automated Ant build scripts to check out, build and test the software. 
Moreover, they were running on a Cruise Control server each night, so we were quite close.</p><p>The first thing I did was to move to <a href="http://hudson-ci.org/">Hudson</a>, the best integration server available today (if you ask me). The transition was quite smooth and done within a few hours, including setting up a brand new build server. If you're still using Cruise Control, you really should consider moving over to Hudson... I think I should post about the cool distribution feature of Hudson soon.</p><p>One issue with the project was the build time: a full build takes 3-4 hours, mainly due to long-running unit and Selenium test cases. Of course, this inhibits doing real CI. All we could do for now was to split up the build into the four main tasks, creating a Hudson job for each of them: (1) checkout & compile & package, (2) static code checks, (3) unit tests, (4) Selenium tests. Since (1) and (2) run rather quickly (about 10 min), those jobs qualify for CI builds. This is not perfect but still better than doing no CI at all.</p><h4>Introduce Test Cases</h4><p>Test cases are an essential part of a software development project these days, and I always consider a task unfinished unless there are test cases ensuring that the functionality is implemented correctly. I'm sure you agree ;-)</p><p>The project I was working on had lots of JUnit test cases, as well as hundreds of Selenium tests checking the web application in the browser. That's not bad, really. Nevertheless, there were two issues:</p><ul><li>The number of test cases says nothing about the test coverage. For example, the Selenium tests all tested a "happy day" scenario, moving through the wizard pages of the web application straight from the first page to the last. But does it still work, for instance, if you step to the fourth page, choose some options on that page, step back two pages, change an option, and go to the fourth page again? 
Nobody had tested that.</li><li>Selenium web tests are slow, which is no surprise given that the tests are running in a browser and need to connect to the deployed web application. In my project, the full test suite took more than 3 hours to run... What's even worse is that some of the JUnit tests were not designed as unit tests, i.e. they required a full service stack to run successfully, making them integration rather than unit tests. As a consequence, those tests require starting up all services, which takes a lot of time.</li></ul><p>Thus, the task for this project actually was not to introduce, but to improve unit tests: increase code coverage and separate unit from integration tests. This way, unit tests can be run within CI builds, providing a quick result for the quality of committed code.</p><h4>Introduce Code Metrics</h4><p>When more than a few people are working on a project, establishing a coding standard is usually a rewarding idea. It helps you feel comfortable in code written by anyone else on your team, and when comparing code you aren't distracted by differences that are merely caused by reformatting and that hide the significant changes.</p><p>If you have defined such coding standards, you need to check them. <a href="http://checkstyle.sourceforge.net/">Checkstyle</a> is the tool of choice. Here is what you should do:</p><ul><li>Define a Checkstyle configuration to be used for your project. 
Discuss the rules with developers and stakeholders.</li><li>Run Checkstyle with your CI and/or nightly builds to create a report, including a list of violations for defined rules.</li><li>Establish Checkstyle within your IDE of choice to provide immediate feedback to the developers <span style="font-style:italic;">before</span> they commit.</li><li>Define which exceptions to the rule are acceptable (should not be more than a dozen or so) and suppress them permanently, using Checkstyle suppression filters.</li><li>Get rid of all remaining violations, which might take a few days of effort. Still, this investment will pay off.</li><li>Once the number of Checkstyle violations is "small" (meaning fewer than 10, ideally zero), make sure it remains small.</li><li>Establish a team culture where committing code with Checkstyle violations is anything but cool.</li></ul><p>That works quite well in my experience. For the mentioned project, we already had common Eclipse formatting settings, but Checkstyle helped to further improve the code and people adopted it right from the start.</p><h4>The Debugger is Your Best Friend</h4><p>When you have to fix bugs in code you have never seen before, use the debugger as much as possible. To find the hot spot, you usually don't have to read or understand the whole class or even hierarchies of classes. Thus, it'll save you a lot of time if you don't start with code reviews but use the debugger to find the piece of code to blame.</p><p>BTW, the same applies to the look and feel of web applications. Instead of consulting lots of layout code and stylesheets, use browser tools like <a href="http://getfirebug.com/">Firebug</a> to debug pages, styles and JavaScript code (including Ajax requests) right in the displayed page.</p><p>Of course, this approach is not appropriate when fixing larger design issues...</p><h4>Don't Be Shy!</h4><p>When using this toolset, you shouldn't be shy. 
If you think some code needs refactoring, do so – maybe not a week before going live, but you get the point. The CI build should give you immediate feedback on whether the change could be integrated, and the tests will tell you if everything still works. Take your chance to improve the code. If your change does cause an issue, fix it, add another test and don't be discouraged!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-91067511198013527162010-04-19T15:37:00.003+02:002010-04-19T16:39:27.818+02:00HDD / SSD Battle<h4>The Problem</h4><p>You know, the laptop I'm using for my daily work is not the fastest one. On the contrary, it's more than 5 years old and pretty slow. Yeah I know, hardware can never be fast enough, but it's really slow considering the things I have to work on.</p><p>For instance, we are using <a href="http://www.eclipse.org/modeling/tmf/">xtext</a> modeling and hence usually have a couple of Eclipse instances running at the same time (outer & inner workbench), in addition to using <a href="http://m2eclipse.sonatype.org/">m2eclipse</a> to build the projects with Maven in Eclipse. Moreover, we have some quite big workspaces with tens of thousands of class files.</p><p>All of this is probably not unusual, but unfortunately too much for my poor old laptop. It takes minutes to start or exit Eclipse, not to mention the times required for cleaning all projects. However, my company currently does not really like the idea of buying new laptops, so we have to find ways to speed things up without spending too much money. I have blogged before about some ways of <a href="http://javamoods.blogspot.com/2009/10/speeding-up-your-system.html">speeding up your system</a>.</p><h4>The Solution?</h4><p>It's pretty clear that the hard drive is currently the bottleneck. 
We verified this with some inspection tools: the drive is working hard all the time when executing a build, for instance. Now we managed to get a <a href="http://en.wikipedia.org/wiki/Solid-state_drive">solid state drive</a> (SSD) to test the performance improvements it would offer. Well, fasten your seatbelt...</p><p>We have measured some typical tasks with real data and projects on a developer's laptop – first with the built-in hard disk, then after installing the SSD and copying the hard drive content over. Note that we have tried to make a fair comparison, keeping the setup identical in both scenarios. These are the results.</p><h4>The Battle</h4><p>Working With Eclipse:</p><ul><li>Start Eclipse 3.5.1 with an empty workspace until Welcome screen is displayed: 52 s → 12 s (factor 4.3)</li><li>Start Eclipse with a medium-size workspace: 125 s → 30 s (factor 4.2)</li><li>Clean all projects in that workspace: 445 s → 115 s (factor 3.9) </li><li>Exit Eclipse and wait until workspace is saved: 28 s → 7 s (factor 4.0)</li></ul><p>Working With Maven:</p><ul><li>Maven "clean install" in a medium-size project: 668 s → 336 s (factor 2.0)</li></ul><p>Booting Windows:</p><ul><li>Turn on computer and wait for login screen: 62 s → 33 s (factor 1.9)</li><li>After login, until Windows is ready (autostart applications are loaded): 135 s → 44 s (factor 3.1)</li></ul><h4>The Bottom Line</h4><p>As you can see, the SSD speeds up boot time by a factor of 2-3, which is already impressive. Maven builds usually execute 2 times faster. The Eclipse speed-up is even greater, around a factor of 4. That's pretty cool! 
You really feel the performance difference!</p><p>Additionally, after some more weeks of testing, what we like most is that the whole system feels much more reactive; that is, when executing some big job like rebuilding a huge workspace, you can switch context and nicely work in another instance of Eclipse, for instance – a single task no longer blocks the whole system.</p><p>All in all, that's an incredible speed-up considering the prices of SSDs! Now, go and tell your boss ;-)</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com3tag:blogger.com,1999:blog-6189016616585694043.post-31935211614568893782010-03-26T16:20:00.004+01:002010-03-26T17:19:42.773+01:00Having Fun with Encoding!<h4>Compiler Plugin</h4><p>Recently, I edited our main company root POM to upgrade some plugins to new versions. Of course, we are following best practice to <a href="http://www.sonatype.com/people/2008/05/optimal-maven-plugin-configuration/">lock down the plugin version</a>, so when a new version is available we only need to adjust the parent POM. Nearly all version updates were on the last build number digit, which is the <em>z</em> in the <em>x.y.z</em> version string – so I didn't expect many difficulties.</p><p>However, for the <a href="http://maven.apache.org/plugins/maven-compiler-plugin/">compiler plugin</a>, it was a jump from version 2.0.2 to 2.1, and indeed it turned out that some of the test cases failed to compile with strange encoding issues when using the new compiler plugin version.</p><h4>Specify Encoding</h4><p>We are following the suggestion to specify a <a href="http://docs.codehaus.org/display/MAVENUSER/POM+Element+for+Source+File+Encoding">POM property for source file encoding</a>, so we are not forced to configure the encoding for each relevant plugin individually. 
In fact, we were using exactly what's shown in the example:<pre class="brush:xml"><project><br /> ...<br /> <properties><br /> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding><br /> ...<br /> </properties><br /> ...<br /></project></pre>That is, we assumed our source files were <a href="http://en.wikipedia.org/wiki/UTF-8">UTF-8</a> encoded, which is the most widely used encoding for Unicode characters. But, for some of the projects, that's actually not the case since we are using Eclipse with the default setting for text file encoding, which is <a href="http://en.wikipedia.org/wiki/Windows-1252">Cp1252</a> (Western European) on our German Windows.</p><p>Why didn't we ever notice that? Well, it happens that both UTF-8 and Cp1252 are backwards compatible with ASCII. We are coding most of the stuff in English (concerning package, class, method, attribute and parameter names, and even Javadoc comments), so the resulting byte stream is identical for both encodings. 
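A few lines of Java demonstrate this nicely (assuming the windows-1252 charset is available in the JRE, which it is on all common ones):

```java
import java.util.Arrays;

// For pure ASCII, UTF-8 and Cp1252 produce identical bytes -- which is why
// the wrong encoding setting went unnoticed. Umlauts break the symmetry.
public class EncodingDemo {
    public static void main(String[] args) throws Exception {
        String ascii = "getCustomerName";
        String umlaut = "Pr\u00fcfung"; // "Prüfung", a German word with an umlaut

        System.out.println(Arrays.equals(
                ascii.getBytes("UTF-8"), ascii.getBytes("windows-1252")));   // true
        System.out.println(Arrays.equals(
                umlaut.getBytes("UTF-8"), umlaut.getBytes("windows-1252"))); // false
    }
}
```

The "ü" becomes the two bytes 0xC3 0xBC in UTF-8 but the single byte 0xFC in Cp1252, so a Cp1252 file with umlauts is simply not valid UTF-8.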
However, some of the files used German umlauts in line comments – and these are exactly the files that can no longer be compiled with the new compiler plugin version.</p><p>When looking at the debug output of the compiler plugin 2.0.2 mojo configuration, you can see that the encoding is not explicitly set, probably meaning that the platform default encoding is used (which is again Cp1252 on all build machines):</p><pre class="brush:xml">[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile' --><br />[DEBUG] (f) basedir = ...<br />[DEBUG] (f) buildDirectory = ...<br />[DEBUG] (f) classpathElements = [...]<br />[DEBUG] (f) compileSourceRoots = [...]<br />[DEBUG] (f) compilerId = javac<br />[DEBUG] (f) debug = true<br />[DEBUG] (f) failOnError = true<br />[DEBUG] (f) fork = false<br />[DEBUG] (f) optimize = true<br />[DEBUG] (f) outputDirectory = ...<br />[DEBUG] (f) outputFileName = xxx-0.2.0-SNAPSHOT<br />[DEBUG] (f) projectArtifact = xxx:jar:0.2.0-SNAPSHOT<br />[DEBUG] (f) showDeprecation = false<br />[DEBUG] (f) showWarnings = false<br />[DEBUG] (f) source = 1.6<br />[DEBUG] (f) staleMillis = 0<br />[DEBUG] (f) target = 1.6<br />[DEBUG] (f) verbose = false<br />[DEBUG] -- end configuration --</pre><p>The new version 2.1 of the compiler plugin now honors what has been configured in the <code>project.build.sourceEncoding</code> property, and hence tries to compile the Cp1252-encoded source files as UTF-8, which fails when umlauts are used.</p><h4>Specify Correct Encoding</h4><p>Of course, the solution is to specify the correct encoding in the <code>project.build.sourceEncoding</code> property, matching the encoding that is used in the development environment when writing the source files.</p><p>Oh, yes, Cp1252 is quite similar to ISO 8859-1 encoding (only some special characters at positions 0x80–0x9F differ, which we don't use), so in fact we are using ISO 8859-1 now to allow builds on non-Windows platforms as 
well.</p><p>Certainly, it would be nice if the plugins had a history on their site where you could find these kinds of changes for new versions, without having to search Jira...</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-21788350908223682522010-02-26T15:30:00.003+01:002010-02-26T16:52:46.283+01:00Eclipse: Update Manager Needs Update!<p>Eclipse Update Manager is really a special piece of software... I have <a href="http://javamoods.blogspot.com/2009/05/eclipse-update-manager-fools-me.html">blogged before</a> about my battle, and here is another one.</p><p>We have a simple Eclipse plugin (created by xtext to provide an editor for our DSL, but actually this doesn't matter). I have a particular version (let's say 1.0.0) of that installed in my Eclipse 3.5.1. Now I want to upgrade to 1.1.0, but unfortunately the feature id has changed, so I need to uninstall my 1.0.0 version prior to installing the new one.</p><p>But... when I try to uninstall this plugin, Eclipse tells me that it is "Calculating requirements and dependencies". To do so, Eclipse downloads a lot of stuff, including Eclipse features, mylyn, and much more. It seems to be half the internet, which takes a while. And then, about 15 min later, Eclipse tells me that it could not find a download site for some weird Mozilla plugin.</p><p>Hello? What's that? I want to <em>uninstall</em> a plugin and Eclipse <em>downloads</em> tons of jars only to tell me that one is missing and it couldn't uninstall? 
Gosh!</p><p>After some googling, I found a trick to force Eclipse to just do what I want:</p><ol><li>In Eclipse <em>Preferences</em>, on the <em>Install/Update > Available Software Sites</em> page, export all sites to your filesystem.</li><li>Then remove all update sites and press OK.</li><li>Now uninstall the plugin -- for me, it just worked like a charm.</li><li>After restarting Eclipse, open <em>Install/Update > Available Software Sites</em> again and import the previously exported update sites.</li></ol><p>That's it. Maybe just pulling the network cable would have worked, too... Oh boy.</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com1tag:blogger.com,1999:blog-6189016616585694043.post-7287872684803071752010-02-20T13:54:00.003+01:002010-02-20T15:36:35.070+01:00Maven vs. Ant: Stop the Battle<h4>Maven? Ant?</h4><p>Oh boy, how this bothers me. The endless debate and religious battle about which build tool is the better build tool, no, is the one and only <em>right</em> build tool...</p><p>There are many people out there who love Ant, who defend Ant with their blood and honour. That's fine, but some of them shoot at Maven at the same time. There is so much rant about Maven, so many unfair allegations and just plain wrong claims. <a href="http://kent.spillner.org/blog/work/2009/11/14/java-build-tools.html">This</a> is just one example that has been <a href="http://www.wakaleo.com/blog/246-maven-mythbusters-maven-automatically-updates-for-every-build">discussed</a> in the community lately.</p><p>Don't get me wrong. Maven has its flaws and issues, sure, and you don't have to like it. Use Ant, or <a href="http://www.gradle.org/">Gradle</a>, or <a href="http://buildr.apache.org/">Buildr</a>, or <a href="http://www.schmant.org/">Schmant</a>, or batch files, or anything else if you like it better. 
But, Maven definitely <em>can</em> be used to build complex software projects, and lots of people are doing exactly that; and guess what -- some of them even like this tool... So, can everybody please just use whatever they like most for building their software, and stop throwing mud at each other? Let's get back to work. Let's put our effort into building good software.</p><h4>We've Come a Long Way...</h4><p>You may have guessed, I think Maven is the best build tool, at least for the type of projects I am dealing with in my company. We started with a complex system of mutually calling batch files a long time ago, and switched to Ant in 2000. That was a huge step ahead, but still it was a complex system with lots of Ant scripts on different levels. So we moved to Maven 1.0.2 in 2004 for another project. That brought nice configuration and reporting features, but still did not feel right, especially for multi-module projects that were not supported in the Maven core at that time. </p><p>When Maven 2 came out, we adopted that early and suffered from many teething troubles, but nevertheless we were sure to be on the right track. Today, Maven is a mature, stable, convenient build tool for all our projects, and for the first time we are quite happy with how it works and what it provides. Moreover, what the brave guys from <a href="http://www.sonatype.com/">Sonatype</a> have in their pipeline sounds really great: <a href="http://www.sonatype.com/people/2009/11/maven-30-alpha-3-released/">Maven 3</a>, Tycho, and all those nice tools like Nexus and m2eclipse...</p><p>Hence, I am happy and honestly don't really care very much about what the blogosphere is saying about Maven. But the sad thing is, my colleagues (mostly used to Ant build systems) are repeating the same weird theses about Maven. I'll give you one example.</p><h4>The Inhouse Battle</h4><p>In my current project, we create EJBs in some JARs and assemble an EAR file for the whole application. 
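For illustration, such a build might be laid out as one reactor with one artifact per module; the artifact ids below are made up, not the actual project:

```xml
<!-- Hypothetical parent POM: one module per artifact, following the
     "one project, one artifact" convention discussed here -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.app</groupId>
  <artifactId>app-parent</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <module>app-ejb</module>       <!-- JAR containing the EJBs -->
    <module>app-connector</module> <!-- RAR module -->
    <module>app-ear</module>       <!-- assembles the JARs and the RAR into the EAR -->
  </modules>
</project>
```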
Now we have to create another RAR to be put in the EAR, so I set up a new project (following Maven's convention "one project, one artifact") for the RAR. This is what the "Ant guys" didn't like: "Why can't Maven create that RAR within the main project, you know Ant could do that, so maybe we should use Ant here again, why have so many small projects, this is polluting our Eclipse's Project View, so much complexity, Maven sucks, I knew that before, blah blah..."</p><p>Well, I tried to explain that Maven of course can be configured to create <a href="http://www.sonatype.com/people/2010/01/how-to-create-two-jars-from-one-project-and-why-you-shouldnt/">multiple artifacts per project</a>, but that's not the recommended way because it violates Maven convention. It's all about modularity and standardization. That is how Maven works, and it's great this way. A small project is not much overhead at all; it gets a clean and simple POM, and by the way, we discovered a dependency cycle in the code that had to be fixed in order to move the RAR code into a separate module.</p><p>So, what's wrong with Maven? Is it just that you want to do it your way rather than submit to the Maven way? A matter of honor and ego? Is that enough to kick out Maven and go back to your Ant and script-based build system (which BTW is so complex that only a few people really know how it works)? Come on.</p><h4>The Bottom Line</h4><p>IMHO, standardization of build systems is one of the main benefits that Maven brought to the world. If you know one Maven project, you can switch to any other project built with Maven and feel comfortable immediately. This increases productivity, both personally and for your company, which is one of the reasons more and more companies switch over <a href="http://www.leshazlewood.com/?p=55">from Ant to Maven</a>.<br />We have clean conventions, a nice project structure, and a highly modular system. 
And, we have world-class reporting with minimal effort.</p><p>You see, that's why we are using Maven. If you don't like it, go your own way but let us just do our job.</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com44tag:blogger.com,1999:blog-6189016616585694043.post-70074547957067228512010-02-08T11:39:00.002+01:002010-02-08T12:09:55.338+01:00@Override Changes in Java 6<p>Today I ported a Java 6 project back to Java 5. This led to compiler failures in Eclipse, but not in Maven, which seemed quite strange at first glance. Interestingly, they were caused by the <code>@Override</code> annotation.</p><p>The <a href="http://java.sun.com/j2se/1.5.0/docs/api/">Java 5 API</a> for <code>@Override</code> says:</p><p><dl><dt></dt><dd>Indicates that a method declaration is intended to override a method declaration in a superclass. If a method is annotated with this annotation type but does not override a superclass method, compilers are required to generate an error message.</dd></dl></p><p>Note that it says "superclass", not "supertype". Hence, it's not allowed to add this annotation to methods that implement methods of an interface. Javac (which is called by Maven) does not report this as an error, but the Eclipse compiler does.</p><p>Well, if you take a look at <a href="http://java.sun.com/javase/6/docs/api/index.html">Java 6</a>, the API didn't change at all, so I was surprised to see a different behavior: the <code>@Override</code> annotation is allowed for methods implementing interface methods in Javac, too. In the end, I had to remove those annotations to make the code compile with Java 5 in Eclipse.</p><p>After some googling, I found out that this was simply forgotten by Sun's developers: the compiler's behavior changed, but the documentation does not reflect that (see <a href="http://blogs.sun.com/ahe/entry/override_snafu">here</a>). 
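To make the difference concrete, here is a minimal (made-up) example that triggers it. Javac 6 accepts the annotation, while the Eclipse compiler at compliance level 1.5 reports an error:

```java
// Under Java 5 rules, @Override is only valid when overriding a superclass
// method; Java 6 javac also accepts it on interface implementations.
interface Greeter {
    String greet();
}

public class EnglishGreeter implements Greeter {
    @Override                // error in Eclipse at compliance level 1.5,
    public String greet() {  // accepted by javac in Java 6
        return "hello";
    }
}
```

Removing the single `@Override` line makes the class compile under both compilers and both source levels.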
And indeed, when you look at the API of <code>@Override</code> in the upcoming <a href="http://download.java.net/jdk7/docs/api/index.html">Java 7</a>, it reads:</p><p><dl><dt></dt><dd>Indicates that a method declaration is intended to override a method declaration in a supertype. If a method is annotated with this annotation type compilers are required to generate an error message unless at least one of the following conditions hold:<ul><li>The method does override or implement a method declared in a supertype.</li><li>The method has a signature that is override-equivalent to that of any public method declared in Object.</li></ul></dd></dl></p><p>There you have it: <code>@Override</code> may now be used for interface methods, too.</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com4tag:blogger.com,1999:blog-6189016616585694043.post-82239044287223846692010-02-02T15:49:00.004+01:002010-02-02T16:50:50.090+01:00Optimization: Don't do it... The compiler will!<h4>The Two Rules of Program Optimization</h4><p>I've seen some bad code lately which was designed in an effort to improve performance. For instance, there was a long method (80 lines) that was not split into several methods for a single reason: to avoid the method call overhead (around 15 nanoseconds!). The result was code that was just hard to read.</p><p>This reminded me of the rules of program optimization (coined by Michael A. Jackson, a British computer scientist) we were taught back at university:<br /><strong>The First Rule of Program Optimization:</strong> Don't do it.<br /><strong>The Second Rule of Program Optimization (for experts only!):</strong> Don't do it yet.</p><p>Well, this is true for mainly two reasons:</p><ol><li>Optimization can reduce readability and add code that is used only to improve performance. 
This may complicate programs or systems, making them harder to maintain and debug.</li><li>Doing optimizations most of the time means we think we are smarter than the compiler, which is just plain wrong more often than not.</li></ol><h4>Cleaner Code</h4><p><a href="http://en.wikipedia.org/wiki/Donald_Knuth">Donald Knuth</a> said "Premature optimization is the root of all evil". Here, "premature optimization" means that a programmer lets performance considerations drive the design of their code. This can result in a design that is not as clean as it could have been, because the code is complicated by the optimization and the programmer is distracted by optimizing.</p><p>Therefore, if performance tests reveal that optimization or performance tuning really have to be done, they should usually be done at the end of the development stage.</p><h4>Wrong Intuitions</h4><p>This is what Sun Microsystems' Technology Evangelist <a href="http://java.sun.com/developer/technicalArticles/Interviews/goetz_qa.html">Brian Goetz</a> thinks: "Most performance problems these days are consequences of architecture, not coding – making too many database calls or serializing everything to XML back and forth a million times. These processes are usually going on outside the code you wrote and look at every day, but they are really the source of performance problems. So if you just go by what you're familiar with, you're on the wrong track. This is a mistake that developers have always been subject to, and the more complex the application, the more it depends on code you didn't write. Hence, the more likely it is that the problem is outside of your code." Right he is!</p><h4>Smarter Compiler</h4><p>Often, the best way to write fast code in Java applications is to write dumb code – code that is straightforward, clean, and follows the most obvious object-oriented principles in order to get the best compiler optimization. 
Compilers are big pattern-matching engines, written by humans who have schedules and time budgets, so they focus their efforts on the most common code patterns, in order to get the most leverage. Usually hacked-up, bit-banging code that looks really clever will get poorer results because the compiler can't optimize effectively.</p><p>A good example is string concatenation in Java (see <a href="http://java.sun.com/developer/technicalArticles/Interviews/community/kabutz_qa.html">this conversation</a> with Java Champion Heinz Kabutz where he gives some measurements)...</p><ol><li>Back in the early days, we all used the String addition (+ operator) to concatenate Strings:<br /><code>return s1 + s2 + s3;</code><br />However, since Strings are immutable, the compiled code will create many temporary String objects, which can strain the garbage collector.</li><li>That's why we were told to use StringBuffer instead:<br /><code>return new StringBuffer().append(s1).append(s2).append(s3).toString();</code><br />That was around 3-5 times faster in those days, but the code became less readable. Was it worth it? Is your code doing enough String concatenation to make you really feel a difference after you (for instance) made that execute three times faster?</li><li>Is that still the recommended way? A main downside of StringBuffer is its thread safety, which is usually not required (since string buffers are rarely shared between threads) but slows things down. Hence, the StringBuilder class was introduced in Java 5, which is almost the same as StringBuffer, except it's not thread-safe. So, using StringBuilder is expected to be significantly faster, and you know what? When Strings are added using the + operator, the compiler in Java 5 and 6 will automatically use StringBuilder:<br /><code>return s1 + s2 + s3;</code><br />Clean, easy to understand, and quick. Note that this optimization will not occur if StringBuffer is hard-coded!</li></ol><p>That was just one example.... 
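One caveat worth adding: javac applies the StringBuilder rewrite per statement, so concatenating with + inside a loop still allocates a fresh builder on every iteration – the one common case where writing the builder out by hand is still justified. A small illustration (method names are made up):

```java
// Both methods return the same result; the interesting part is what
// the compiler does with each of them.
public class ConcatDemo {

    // Compiled by javac (Java 5+) into a single StringBuilder chain --
    // no hand-optimization needed here.
    static String simple(String s1, String s2, String s3) {
        return s1 + s2 + s3;
    }

    // Inside a loop, "result += part" would create a new StringBuilder
    // per iteration; an explicit builder genuinely helps in this case.
    static String joinAll(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String part : parts) {
            sb.append(part);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(simple("a", "b", "c"));                  // abc
        System.out.println(joinAll(new String[] {"a", "b", "c"}));  // abc
    }
}
```

So keep the readable + expression for one-off concatenations, and reach for StringBuilder only when concatenating in a loop.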
All in all, it's quite simple: today's Java JIT compilers are highly optimized and clever at optimizing your code. Trust them. Don't try to be even more clever. You aren't!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-81164232989852131412010-01-13T10:26:00.004+01:002010-01-13T11:39:06.180+01:00Concurrent Builds with Hudson<h4>Multiple Build Executors</h4><p>We are using the <a href="http://hudson-ci.org/">Hudson</a> Continuous Integration Server for our integration builds and are quite happy with it. It is fast, stable, feature-rich, extensible, well integrated with Maven and has an appealing user interface.</p><p>One of the nice features that we are using regularly is the Build Executor setting that allows you to specify the number of simultaneous builds. This is useful for increasing the throughput of Hudson on multi-core processor systems, where the number of executors should (at least) match the number of available cores.</p><p>However, Maven isn't really designed for running multiple instances simultaneously since the local repository isn't multi-process safe. The chance for conflicts seems small (multiple processes must access the same dependency at the same time, at least one of them writing). However, in practice, we encounter this type of concurrency issue at least once a day now, which is starting to hurt us! 
The build fails with a message like this:</p><pre class="brush:xml">[INFO] ------------------------------------------------------------------------<br />[ERROR] BUILD ERROR<br />[INFO] ------------------------------------------------------------------------<br />[INFO] Failed to resolve artifact.<br /><br />GET request of: some/group/some-artifact-1.2.3-SNAPSHOT.jar from my-repo failed<br /> some.group:some-artifact:jar:1.2.3-SNAPSHOT<br />...<br /><br />Caused by I/O exception: ...some-artifact-1.2.3-SNAPSHOT.jar.tmp (The requested operation cannot be performed on a file with a user-mapped section open)</pre><p>or this:</p><pre class="brush:xml">[INFO] ------------------------------------------------------------------------<br />[ERROR] BUILD ERROR<br />[INFO] ------------------------------------------------------------------------<br />[INFO] Failed to resolve artifact.<br /><br />Error copying temporary file to the final destination: Failed to copy full contents from ...some-artifact-1.2.3-SNAPSHOT.jar.tmp to ...\some-artifact-1.2.3-SNAPSHOT.jar</pre><p>The reason is that the JAR file is locked by another process, which is executing some long-running test cases, for instance. At the same time, a second build tries to download a new version of this snapshot into the local repository, which is done with the help of the mentioned <code>.tmp</code> file.</p><h4>Safe Maven Repository</h4><p>The only way to avoid this type of issue is to use a separate local Maven repository for each of the processes. You can tell Maven to use a custom local repository location by specifying the <a href="http://maven.apache.org/ref/2.2.1/maven-settings/settings.html#class_settings"><em>localRepository</em> setting</a> in your <code>settings.xml</code> file.</p><p>In Hudson, this is even more convenient. There is a checkbox <em>Use private Maven repository</em> in the advanced part of the <em>Build</em> section of Maven projects. 
Just click that to set up a private local Maven repo for that project. You should consider doing so if you run into the described issue now and then.</p><p>Obviously, using private repos will increase the total amount of disk space used, due to caching the same dependencies in multiple places. Additionally, the first build will take significantly more time because everything has to be downloaded once. However, both consequences are quite acceptable given the better stability and isolation of projects.</p><p>Rather than clicking the Hudson checkbox for all your projects, you should consider setting up the local Maven repo in your <code>settings.xml</code> instead. This has a number of advantages:</p><ul><li>You don't have to set up the option for each and every project, but have it in a central place.</li><li>You can use a common root for all local Maven repos, like <code>d:/maven-repo</code>. This allows you to easily purge all your local repositories from time to time, in order to reduce disk space as well as validate the content (i.e. make sure the build is still running in a clean environment and all required artifacts are in your corporate Maven repository).</li></ul><p>For instance, here is what works fine for us:</p><pre class="brush:xml"><localRepository>d:/builds/.m2/${env.JOB_NAME}/repository</localRepository></pre><p>This is using a Hudson environment variable (<em>JOB_NAME</em>) to create subfolders for the actual projects aka jobs. 
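To illustrate what that interpolation does: Maven replaces <code>${env.JOB_NAME}</code> with the value of the <em>JOB_NAME</em> environment variable that Hudson sets for every build, so each job gets its own repository path. A minimal sketch (the fallback job name is made up for running outside Hudson):

```java
public class RepoPathDemo {
    public static void main(String[] args) {
        // Hudson exports JOB_NAME for every build; fall back to a
        // dummy name when running outside Hudson.
        String jobName = System.getenv("JOB_NAME");
        if (jobName == null) {
            jobName = "my-project";
        }
        // The same path the <localRepository> setting resolves to.
        String localRepository = "d:/builds/.m2/" + jobName + "/repository";
        System.out.println(localRepository);
    }
}
```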
See <a href="http://wiki.hudson-ci.org/display/HUDSON/Building+a+software+project">here</a> for a list of available variables.</p><p>Oh yes, what I suggest is also encouraged by Brian Fox in his <a href="http://www.sonatype.com/people/2009/01/maven-continuous-integration-best-practices/">Maven Continuous Integration Best Practices</a> blog post, so that's one more reason to adopt this best practice :o)</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com5tag:blogger.com,1999:blog-6189016616585694043.post-58518833963214342112010-01-02T18:24:00.002+01:002010-01-22T10:24:37.195+01:00Cargo Maven Plugin: Not Made for JBoss<h4>Again...</h4><p>Well, actually this blog was supposed to be about Java in general and all the ups and downs I experience during my daily work. However, I've not been doing much other than Maven configuration and build management lately, so here is another Maven-related post. Sorry folks.</p><p>As already shown in <a href="http://javamoods.blogspot.com/2009/11/integration-test-with-maven-cargo-and.html">this post</a>, I have been doing integration tests with JBoss by using the <a href="http://cargo.codehaus.org/Maven2+plugin">Cargo Maven plugin</a> to start the JBoss locally and deploy the application to it. This all works quite well as soon as you have figured out how to configure Cargo for JBoss.</p><h4>But Remotely Now!</h4><p>Now, the next step is to deploy our EAR file, which is generated during the nightly build, to a running JBoss instance on a separate computer. This is different because no JBoss configuration has to be created locally and no JBoss has to be started. Instead, the EAR file must be transferred to a remote server where JBoss is already running, and JBoss must be persuaded to deploy this file.</p><p>That sounds feasible, and I've done exactly this before for other servers like Tomcat, so I did not expect any issues here. 
However, I was wrong.</p><h4>Itch #1</h4><p>The first trouble was caused by my lack of knowledge regarding JBoss. With the standard installation, you are not able to connect to the server remotely because all the services are bound to localhost only (see <a href="http://community.jboss.org/wiki/JBoss42FAQ">here</a> or <a href="http://community.jboss.org/thread/63800?tstart=0">here</a>). This is intentional, to prevent unprotected installations appearing all over the net. You have to pass the option <code>-b 0.0.0.0</code> when starting JBoss to allow remote connections to the services, but take care to secure your JBoss accordingly!</p><h4>Itch #2</h4><p>Okay, after this had been configured, I tried to use Cargo to deploy my EAR file to JBoss. This is the configuration I ended up with:</p><pre class="brush:xml"><!-- *** Cargo plugin: deploy the application to running JBoss *** --><br /><plugin><br /> <groupId>org.codehaus.cargo</groupId><br /> <artifactId>cargo-maven2-plugin</artifactId><br /> <version>1.0</version><br /> <configuration><br /> <wait>false</wait><br /> <!-- Container configuration --><br /> <container><br /> <containerId>jboss5x</containerId><br /> <type>remote</type><br /> </container><br /> <!-- Configuration to use with the Container --><br /> <configuration><br /> <type>runtime</type><br /> <properties><br /> <cargo.hostname>...</cargo.hostname><br /> <cargo.servlet.port>8080</cargo.servlet.port><br /> </properties><br /> </configuration><br /> <!-- Deployer configuration --><br /> <deployer><br /> <type>remote</type><br /> <deployables><br /> <deployable><br /> <location>...</location><br /> </deployable><br /> </deployables><br /> </deployer><br /> </configuration><br /><br /> <executions><br /> <execution><br /> <id>deploy</id><br /> <phase>deploy</phase><br /> <goals><br /> <goal>deployer-redeploy</goal><br /> </goals><br /> </execution><br /> </executions><br /></plugin></pre><p>However, I always got this error message:</p><pre>[INFO] Failed to 
deploy to [http://...]<br />Server returned HTTP response code: 500 for URL: ... </pre><p>The configuration seems to be correct, so what is the problem?</p><p>After asking Google, I realized that Cargo is not able to transfer a file to JBoss! Instead, it requires the deployable to already be present on the server filesystem (see <a href="http://markmail.org/message/dzdl2jmsvdlhl7cz#query:cargo%20jboss%20%22Server%20returned%20HTTP%20response%20code%3A%20500%20for%20URL%22+page:1+mid:fnjeuj33223xl74c+state:results">here</a>). This is obviously caused by the JBoss JMX deployer which is used by Cargo, but actually you don't care who is to blame – you just want it to work. The name "Cargo" implies the parcel is transferred to its destination, right? Also note that this <a href="http://jira.codehaus.org/browse/CARGO-416">issue</a> is dated Sep 2006, so there has been plenty of time to fix it one way or another.</p><h4>What Can We Do?</h4><p>Well, there are probably not many options. Since the current version of Cargo is not able to transfer the file to the server, you'd have to do this on your own. The location given in our Cargo configuration above actually is the path on the JBoss server. So, when the file exists locally on the JBoss server, Cargo should be able to deploy it successfully.</p><p>For transferring the file to the JBoss server, we could use the <a href="http://maven.apache.org/plugins/maven-dependency-plugin/">maven-dependency-plugin</a>, a quite useful plugin for all kinds of analyzing, copying and unpacking of artifacts. 
We configure it to run in the <em>install</em> phase (see the update below) and to copy the EAR file (produced by this POM) to some temp directory on the JBoss server:</p><pre class="brush:xml"><plugin><br /> <groupId>org.apache.maven.plugins</groupId><br /> <artifactId>maven-dependency-plugin</artifactId><br /> <executions><br /> <execution><br /> <id>copy</id><br /> <phase>install</phase><br /> <goals><br /> <goal>copy</goal><br /> </goals><br /> <configuration><br /> <artifactItems><br /> <artifactItem><br /> <groupId>${project.groupId}</groupId><br /> <artifactId>${project.artifactId}</artifactId><br /> <version>${project.version}</version><br /> <type>${project.packaging}</type><br /> <destFileName>test.ear</destFileName><br /> </artifactItem><br /> </artifactItems><br /> <outputDirectory>${publish.tempdir}</outputDirectory><br /> <overWrite>true</overWrite><br /> </configuration><br /> </execution><br /> </executions><br /></plugin></pre><p>The property <code>${publish.tempdir}</code> can be any directory on the JBoss server (which must be accessible over the network!) and is exactly what has to be used as the value of the <code>location</code> element in the Cargo configuration.</p><p>Another option would be to use the hot-deploy directory of JBoss as <code>outputDirectory</code> for the dependency plugin, and hence rely on hot deployment of JBoss instead of Cargo and the JBoss JMX deployer. This way, we could get rid of the Cargo configuration and clean up the POM a bit, but in the end it seemed a bit less clean to me... your mileage may vary.</p><p>So, as always, in the end we got it to work, but not without unforeseen pain. When will Cargo be fixed to get the EAR file to the JBoss server? Who knows.</p><h4>Updates</h4><p>2010/01/22: Note that the dependency plugin must be bound to the <em>install</em> phase (or later), so that the artifact has at least been copied to your local Maven repository. As a consequence, the Cargo plugin must be run in the <em>deploy</em> phase, which is actually a good choice anyways. 
I have changed this in my code above.</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com1tag:blogger.com,1999:blog-6189016616585694043.post-16843723650958157782009-12-10T14:10:00.002+01:002009-12-10T14:16:46.425+01:00Maven Plugins: Upgrade with Care!<h4>Upgrading Maven Plugins: Tips and Issues</h4><p>After having shown the list of current Maven plugin versions in my <a href="http://javamoods.blogspot.com/2009/12/maven-plugins-upgrade-with-care.html">previous post</a>, I'm now going to share my experiences with upgrading. Just as expected, some of the new plugins did not work out of the box or required some changes in configuration:</p><p><strong>maven-checkstyle-plugin</strong></p><p>We previously used version 2.3, which is based on Checkstyle 4.4. In contrast, plugin version 2.4 is finally built on top of Checkstyle 5, which better supports Java 5 language features (I wrote a <a href="http://javamoods.blogspot.com/2009/05/update-to-checkstyle-50.html">post</a> on that issue). The configuration is not fully compatible, so you may have to update it.</p><p><strong>maven-javadoc-plugin</strong></p><p>Starting with version 2.6, the Javadoc plugin can detect the Java API link for the current build. This did not work for us (probably due to missing proxy configuration), so we had to switch it off by setting the <em>detectJavaApiLink</em> property to false. See the <a href="http://maven.apache.org/plugins/maven-javadoc-plugin/javadoc-mojo.html">plugin site</a>.</p><p><strong>taglist-maven-plugin</strong></p><p>The configuration of tags has changed and is now a bit more extensive. The old format (using the <em>tags</em> element) is still supported, but deprecated. See the <a href="http://mojo.codehaus.org/findbugs-maven-plugin/2.2/plugin-info.html">plugin documentation</a>.</p><p><strong>findbugs-maven-plugin</strong></p><p>I used version 2.2, which was the most recent one when I changed the POM. 
However, this yielded some strange errors during site generation. This is the stack trace:</p><pre>Generating "FindBugs Report" report.<br /> Plugin Artifacts to be added ->...<br /> AuxClasspath is ->D:\Profiles\Default User\.m2\repository\org\apache\maven\reporting\maven-reporting-impl\2.0\maven-reporting-impl-2.0.jar;...<br />[java] Exception in thread "main" java.io.FileNotFoundException: D:\builds\...\User\.m2\repository\...\maven-reporting-impl-2.0.jar;D:\Profiles\Default (The filename, directory name, or volume label syntax is incorrect)<br />[java] at java.util.zip.ZipFile.open(Native Method)<br />[java] at java.util.zip.ZipFile.<init>(ZipFile.java:114)<br />[java] at java.util.zip.ZipFile.<init>(ZipFile.java:131)<br />[java] at edu.umd.cs.findbugs.classfile.impl.ZipFileCodeBase.<init>(ZipFileCodeBase.java:53)<br />[java] at edu.umd.cs.findbugs.classfile.impl.ZipCodeBaseFactory.countUsingZipFile(ZipCodeBaseFactory.java:92)<br />[java] at edu.umd.cs.findbugs.classfile.impl.ZipCodeBaseFactory.makeZipCodeBase(ZipCodeBaseFactory.java:46)<br />[java] at edu.umd.cs.findbugs.classfile.impl.ClassFactory.createFilesystemCodeBase(ClassFactory.java:97)<br />[java] at edu.umd.cs.findbugs.classfile.impl.FilesystemCodeBaseLocator.openCodeBase(FilesystemCodeBaseLocator.java:75)<br />[java] at edu.umd.cs.findbugs.classfile.impl.ClassPathBuilder.processWorkList(ClassPathBuilder.java:564)<br />[java] at edu.umd.cs.findbugs.classfile.impl.ClassPathBuilder.build(ClassPathBuilder.java:195)<br />[java] at edu.umd.cs.findbugs.FindBugs2.buildClassPath(FindBugs2.java:584)<br />[java] at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:181)<br />[java] at edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:348)<br />[java] at edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1057)<br />[java] Java Result: 1</pre><p>As you can see, we are using the Windows default location for the local Maven repository (<code>D:\Profiles\Default User\.m2</code>), which obviously is causing a problem with the path later on (note how it gets split at the space in "Default User"). How sick is that?!?</p><p>Then, after having tried this and that, I discovered there is a brand new version 2.3 available, so I tested that one, and guess what – everything works fine again! Hence, don't use version 2.2, it seems to be broken...</p><p><strong>docbkx-maven-plugin</strong></p><p>The latest version of that plugin is 2.0.9, but that did not work correctly. It failed the build with this error:</p><pre>ValidationException: null:30:723: Error(30/723): fo:table-body is missing child elements.<br />Required Content Model: marker* (table-row+|table-cell+)</pre><p>The given line/position information did not match anything suspicious, and I could not see anything wrong in our XSL files. So, no idea what this issue is telling me or what I could do about it. That's why I rolled back to 2.0.8, which just works fine.</p><h4>The Bottom Line</h4><p>With the given plugin versions, we are up to date again and managed to get rid of some issues that had been hitting us for quite some time. Additionally, we are well prepared for upgrading to Maven 3. I hope I will be able to do so soon, to check if this is really a "<a href="http://www.sonatype.com/people/2009/11/maven-30-alpha-3-released/">drop-in replacement</a>"... 
I will let you know!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-74034224080739129532009-12-10T12:21:00.009+01:002010-01-06T20:28:25.903+01:00Maven Plugins: Current Versions<h4>Upgrading Maven Plugins</h4><p>In preparation for a later switch to Maven 3 (which is already <a href="http://old.nabble.com/-ANN--Apache-Maven-3.0-alpha-5-Released-tt26540534.html#a26540534">knocking on the door</a>) as well as to get rid of some plugin-related issues we are suffering from, I decided to update the Maven plugins we use for build and site generation.</p><p>Of course, we are following <a href="http://www.sonatype.com/people/2008/05/optimal-maven-plugin-configuration/">best practice</a> and are locking down the versions of all plugins in the <em>project.build.pluginManagement.plugins</em> section. This is done in the company's topmost POM, so that all company projects would use the same versions once they reference the latest parent POM.</p><p>As you might know, upgrading to new plugin versions is always an adventure and you have to test your builds thoroughly, which is of course hard when you are going to change the company settings...</p><h4>Build Plugins</h4><p>Well, here is the list of build plugins we now use with their current versions, as well as (in brackets) the version that Maven 3.0-alpha5 defines in its internal POM. 
I have highlighted where both versions differ:</p><ul><li><strike>maven-archetype-plugin: 2.0-alpha-5</strike> (see comment below)</li><li>maven-assembly-plugin: 2.2-beta-4 (2.2-beta-4)</li><li>maven-clean-plugin: 2.3 (2.3)</li><li>maven-compiler-plugin: 2.0.2 (2.0.2)</li><li>maven-dependency-plugin: <strong>2.1</strong> (2.0)</li><li>maven-deploy-plugin: 2.4 (2.4)</li><li>maven-ear-plugin: <strong>2.4</strong> (2.3.1)</li><li>maven-ejb-plugin: <strong>2.2</strong> (2.1)</li><li>maven-enforcer-plugin: 1.0-beta-1</li><li>maven-help-plugin: 2.1 (2.1)</li><li>maven-install-plugin: 2.3 (2.3)</li><li>maven-javadoc-plugin: <strong>2.6.1</strong> (2.5)</li><li>maven-jar-plugin: <strong>2.3</strong> (2.2)</li><li>maven-release-plugin: 2.0-beta-9 (2.0-beta-9)</li><li>maven-resources-plugin: 2.4.1 (2.4.1)</li><li>maven-site-plugin: 2.0.1 (2.0.1)</li><li>maven-source-plugin: <strong>2.1.1</strong> (2.0.4)</li><li>maven-surefire-plugin: 2.4.3 (2.4.3)</li><li>maven-war-plugin: <strong>2.1-beta-1</strong> (2.1-alpha-1)</li><li>build-helper-maven-plugin: 1.4</li><li>failsafe-maven-plugin: 2.4.3-alpha-1</li><li>cargo-maven2-plugin: 1.0</li><li>docbkx-maven-plugin: 2.0.8</li></ul><p>It's a bit strange that Maven 3.0-alpha5 (which came out end of November) does not use the latest version of all those plugins, most of them having been released before that date. I don't know if this was intentional or not... 
Let's hope it's not because of the uncertain quality of the latest plugin versions ;-) Anyways, I decided to upgrade to the latest available version for all plugins.</p><h4>Reporting Plugins</h4><p>Here's the list for plugins related to site reports:</p><ul><li>cobertura-maven-plugin: 2.3</li><li>findbugs-maven-plugin: 2.3</li><li>jdepend-maven-plugin: 2.0-beta-2</li><li>maven-checkstyle-plugin: 2.4</li><li>maven-jxr-plugin: 2.1</li><li>maven-pmd-plugin: 2.4</li><li>maven-project-info-reports-plugin: 2.1.2</li><li>maven-surefire-report-plugin: 2.4.3</li><li>taglist-maven-plugin: 2.4</li></ul><p>I'm going to show some tips and issues when upgrading to these versions in an upcoming post...</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com5tag:blogger.com,1999:blog-6189016616585694043.post-4400267125267351132009-12-02T08:51:00.020+01:002010-02-18T15:45:57.396+01:00Unit and Integration Testing with Maven, Part 2<h4>Welcome Back...</h4><p>... to the second installment of this little series. After having seen the requirements and hassles when using Maven for testing in the <a href="http://javamoods.blogspot.com/2009/11/unit-and-integration-testing-with-maven.html">last post</a>, we are now going to list the possible solutions.</p><p>Remember, Maven does support different phases for unit and integration tests, but there is <em>only one</em> source directory (usually <code>src/test/java</code>), making it a bit difficult to set up and organize test environments. 
Hence, we have to use some way to separate the two types of test.</p><h4>Option 1: Separate Module for Integration Tests</h4><p>In case you are using modules for your project anyways and want to have integration tests that test these modules in combination, this is the natural solution: just create another module that depends on the other ones and only contains the integration test sources and resources.</p><p>However, if you instead want to integration-test each module individually, this approach just doubles the number of modules, which can be difficult to manage. You could put all integration tests into one big module – but that has other disadvantages, of course.</p><p>Since we do not have any sources (only test sources) in that integration-test project, the usual build lifecycle would execute a lot of unneeded steps like copying resources, compiling, testing, creating the JAR file, etc. We could use the POM packaging type to suppress all of this, but then we have to configure the maven-compiler-plugin to force compilation of test sources.</p><p>Independently of the packaging type, we have to do some more configuration:</p><ul><li>Execute the Surefire plugin in the <em>integration-test</em> phase.</li><li>Prepare the integration tests (for instance, start the container and deploy the application) in the <em>pre-integration-test</em> phase. This is quite easy with the <a href="http://cargo.codehaus.org/Maven2+plugin">Cargo Maven plugin</a>.</li><li>Shut down the integration tests (stop the container) in the <em>post-integration-test</em> phase.</li></ul><p>Here is the relevant part of such a POM file:</p><pre class="brush:xml"><build><br /> <plugins><br /><br /> <!-- *** Compiler plugin: we must force test compile because we're using a <br /> pom packaging that doesn't have this lifecycle mapping. 
--><br /> <plugin><br /> <artifactId>maven-compiler-plugin</artifactId><br /> <executions><br /> <execution><br /> <goals><br /> <goal>testCompile</goal><br /> </goals><br /> </execution><br /> </executions><br /> </plugin><br /><br /> <!-- *** Surefire plugin: run integration tests *** --><br /> <plugin><br /> <artifactId>maven-surefire-plugin</artifactId><br /> <executions><br /> <execution><br /> <phase>integration-test</phase><br /> <goals><br /> <goal>test</goal><br /> </goals><br /> </execution><br /> </executions><br /> </plugin><br /><br /> <!-- *** Cargo plugin: start/stop application server and deploy the ear <br /> file before/after integration tests *** --><br /> <plugin><br /> <groupId>org.codehaus.cargo</groupId><br /> <artifactId>cargo-maven2-plugin</artifactId><br /> <version>1.0</version><br /> <configuration><br /> ...<br /> </configuration><br /><br /> <executions><br /> <!-- before integration tests are run: start server --><br /> <execution><br /> <id>start-container</id><br /> <phase>pre-integration-test</phase><br /> <goals><br /> <goal>start</goal><br /> </goals><br /> </execution><br /> <!-- after integration tests are run: stop server --><br /> <execution><br /> <id>stop-container</id><br /> <phase>post-integration-test</phase><br /> <goals><br /> <goal>stop</goal><br /> </goals><br /> </execution><br /> </executions><br /> </plugin><br /><br /> </plugins><br /></build></pre><h4>Option 2: Different Source Directories</h4><p>In this scenario, unit and integration test sources are placed in separate source directories, like <code>src/test/java</code> and <code>src/integrationtest/java</code>. I think this would definitely be the best solution, and it actually should be what Maven supports out of the box. Sadly, Maven does not, and as far as I know Maven 3 won't either :-(</p><p>Well, we should be able to configure things this way. 
For compiling and executing integration tests, you would have to configure the Compiler plugin to compile the integration test sources in the pre-integration-test phase (using the <a href="http://maven.apache.org/plugins/maven-compiler-plugin/compile-mojo.html#includes">includes parameter</a>), and then configure a second Surefire execution using the <a href="http://maven.apache.org/plugins/maven-surefire-plugin/test-mojo.html#testClassesDirectory">testClassesDirectory parameter</a> to point it to the integration test folder.</p><p>However, this option seems a bit fragile to me. Even if it works (I haven't checked), the integration test folder would probably not show up in Eclipse when using the m2eclipse plugin, and other plugins may have issues with this additional test source path as well. That's why I do not recommend this option.</p><h4>Option 3: Different File Name Patterns</h4><p>In this scenario, the package or file name is used to distinguish between unit and integration test source files. Let's say, for instance, that all integration test classes start with <em>IT</em> and hence the filename pattern is <code>**/IT*.java</code>. 
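To see how such a pattern behaves, here is a small sketch using the JDK's glob matcher (the class and file names are made up; this just illustrates the pattern, not how Surefire implements its matching):

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class PatternDemo {
    public static void main(String[] args) {
        // The same glob we use for integration tests.
        PathMatcher matcher =
                FileSystems.getDefault().getPathMatcher("glob:**/IT*.java");

        // An integration test class matches...
        System.out.println(matcher.matches(
                Paths.get("src/test/java/com/acme/ITUserService.java")));
        // ...a plain unit test class does not.
        System.out.println(matcher.matches(
                Paths.get("src/test/java/com/acme/UserServiceTest.java")));
    }
}
```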
We'd have to configure the Surefire plugin to execute twice: once in the test phase to execute the unit tests only, and a second time in the integration-test phase to execute, well, the integration tests (and only those).</p><p>A common way to do so is this:</p><ul><li>In the configuration of the Surefire plugin, set the <code>skip</code> parameter to true to bypass all tests.</li><li>Add an <code>execution</code> element for unit tests, where <code>skip</code> is set to false and the <code>exclude</code> parameter is used to exclude the integration tests.</li><li>Add another <code>execution</code> element for integration tests, where <code>skip</code> is set to false and the <code>include</code> parameter is used to include just the integration tests and nothing else.</li><li>Preparation and shutdown of the integration tests are configured as before.</li></ul><p>Here is the POM section:</p><pre class="brush:xml"><build><br /> <plugins><br /><br /> <!-- *** Surefire plugin: run unit and integration tests in <br /> separate lifecycle phases, using file name pattern *** --><br /> <plugin><br /> <artifactId>maven-surefire-plugin</artifactId><br /> <configuration><br /> <skip>true</skip><br /> </configuration><br /> <executions><br /> <execution><br /> <id>unit-test</id><br /> <phase>test</phase><br /> <goals><br /> <goal>test</goal><br /> </goals><br /> <configuration><br /> <skip>false</skip><br /> <excludes><br /> <exclude>**/IT*.java</exclude><br /> </excludes><br /> </configuration><br /> </execution><br /><br /> <execution><br /> <id>integration-test</id><br /> <phase>integration-test</phase><br /> <goals><br /> <goal>test</goal><br /> </goals><br /> <configuration><br /> <skip>false</skip><br /> <includes><br /> <include>**/IT*.java</include><br /> </includes><br /> </configuration><br /> </execution><br /> </executions><br /> </plugin><br /><br /> <!-- *** Cargo plugin: start/stop application server and deploy the ear <br /> file before/after integration tests *** --><br /> ...<br /><br /> </plugins><br 
/></build></pre><p>There is another approach to achieve the same result:</p><ul><li>In configuration of Surefire plugin, use the <code>exclude</code> parameter to exclude the integration tests when executing unit tests in test phase.</li><li>Add an <code>execution</code> element for integration tests, where <code>exclude</code> is set to any dummy value (to override the default configuration) and <code>include</code> parameter is used to just include the integration tests and nothing else.</li></ul><p>It's a bit shorter than the former configuration, but in the end it's a matter of taste. Again, here is the POM snippet:</p><pre class="brush:xml"><build><br /> <plugins><br /><br /> <!-- *** Surefire plugin: run unit and integration tests in <br /> separate lifecycle phases, using file name pattern *** --><br /> <plugin><br /> <artifactId>maven-surefire-plugin</artifactId><br /> <configuration><br /> <excludes><br /> <exclude>**/IT*.java</exclude><br /> </excludes><br /> </configuration><br /> <executions><br /> <execution><br /> <id>integration-test</id><br /> <phase>integration-test</phase><br /> <goals><br /> <goal>test</goal><br /> </goals><br /> <configuration><br /> <excludes><br /> <exclude>none</exclude><br /> </excludes><br /> <includes><br /> <include>**/IT*.java</include><br /> </includes><br /> </configuration><br /> </execution><br /> </executions><br /> </plugin><br /><br /> <!-- *** Cargo plugin: start/stop application server and deploy the ear <br /> file before/after integration tests *** --><br /> ...<br /><br /> </plugins><br /></build></pre><h4>Failsafe Plugin</h4><p>Instead of using the Surefire plugin with custom filename pattern to distinguish between unit and integration test classes, you should rather use the <a href="http://mojo.codehaus.org/failsafe-maven-plugin/">Failsafe plugin</a> for executing integration tests.</p><p>This plugin is a fork of the Surefire plugin designed to run integration tests. 
<br />It is used during the <code>integration-test</code> and <code>verify</code> phases of the build lifecycle to execute the integration tests of an application. Unlike the Surefire plugin, the Failsafe plugin will not fail the build when executing tests, thus enabling the post-integration-test phase to execute.</p><p>The Failsafe plugin has its own naming convention. By default, the Surefire plugin executes <code>**/Test*.java</code>, <code>**/*Test.java</code>, and <code>**/*TestCase.java</code> test classes. In contrast, the Failsafe plugin will look for <code>**/IT*.java</code>, <code>**/*IT.java</code>, and <code>**/*ITCase.java</code>. Did you notice that this matches what we used before for our integration tests? ;-)</p><p>When using Failsafe, the last POM (of option 3) looks like this:</p><pre class="brush:xml"><build><br /> <plugins><br /><br /> <!-- *** Surefire plugin: run unit and exclude integration tests *** --><br /> <plugin><br /> <artifactId>maven-surefire-plugin</artifactId><br /> <configuration><br /> <excludes><br /> <exclude>**/IT*.java</exclude><br /> </excludes><br /> </configuration><br /> </plugin><br /><br /> <!-- *** Failsafe plugin: run integration tests *** --><br /> <plugin><br /> <groupId>org.codehaus.mojo</groupId><br /> <artifactId>failsafe-maven-plugin</artifactId><br /> <version>2.4.3-alpha-1</version><br /> <executions><br /> <execution><br /> <goals><br /> <goal>integration-test</goal><br /> <goal>verify</goal><br /> </goals><br /> </execution><br /> </executions><br /> </plugin><br /><br /> <!-- *** Cargo plugin: start/stop application server and deploy the ear <br /> file before/after integration tests *** --><br /> ...<br /><br /> </plugins><br /></build></pre><p>Of course, Failsafe would also work with the first option (separate integration test module).</p><h4>Conclusion</h4><p>Well, after having shown all the ways that came to my mind, here are my personal "best practices":</p><ul><li>Use the Failsafe plugin to execute integration tests.</li><li>If keeping integration tests in a separate module feels alright, do so (see option 1).</li><li>If you want to have unit and integration tests in the same module, choose a file or package name pattern to distinguish between both, and configure the Surefire and Failsafe plugins accordingly.</li></ul><h4>Update (2010/02/18)</h4><p>Note that there is a new version 2.5 of the Failsafe plugin available with a changed group id: <code>org.apache.maven.plugins:maven-failsafe-plugin:2.5</code>. See the <a href="http://maven.apache.org/plugins/maven-failsafe-plugin/index.html">plugin site</a> for details. Thanks to stug23 for pointing that out!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com7tag:blogger.com,1999:blog-6189016616585694043.post-46205800645846360642009-11-28T17:39:00.007+01:002009-11-28T18:32:15.250+01:00Unit and Integration Testing with Maven, Part 1<h4>Test Types</h4><p>In my <a href="http://javamoods.blogspot.com/2009/11/integration-test-with-maven-cargo-and.html">last post</a>, I talked about integration testing with Maven's Cargo plugin and JBoss Application Server. Now let's see how integration testing fits into an overall testing strategy with Maven.</p><p>When thinking about tests and testing strategies, there is one important thing to keep in mind: integration tests are not unit tests (even though JUnit may be used to write integration tests). 
For the sake of completeness, let's pin down the main characteristics of both:</p><p><strong>Unit Tests:</strong></p><ul><li>test a small piece of code in isolation</li><li>are independent of other tests</li><li>are usually written by software developers</li><li>have to be very fast because they are run quite often</li></ul><p>In contrast, <strong>Integration Tests:</strong></p><ul><li>combine individual software modules and test them as a group</li><li>are usually much slower than unit tests because a context has to be established (Spring, database, web server, etc.)</li><li>are normally run after unit testing</li><li>may be created by the QA team using tools, but also by developers using JUnit test cases (which still does not turn them into unit tests)</li></ul><br /><h4>Testing with Maven</h4><p>As you know, the <a href="http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html">Maven Build Lifecycle</a> provides phases for both testing and integration testing. The related phases are:</pre><p></p><pre>...<br />generate-test-sources<br />generate-test-resources<br />process-test-resources<br />test-compile<br />process-test-classes<br />test<br />...<br />pre-integration-test<br />integration-test<br />post-integration-test<br />...</pre><p>Unfortunately, Maven does not support separate source directories for the two test types. This is really complicating things (as we'll see in a minute). 
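</p><p>Since both test types then live in the same <code>src/test/java</code> tree, a file name pattern is the usual way to tell them apart. As a toy illustration (the class below is invented; the <code>IT*</code>/<code>*IT</code>/<code>*ITCase</code> patterns are a common convention for integration tests), such a name-based split boils down to:</p>

```java
// Toy sketch, not part of any Maven plugin: classify test classes in a
// single source tree by file name convention.
public class TestNameClassifier {

    // IT*, *IT and *ITCase mark integration tests;
    // everything else is treated as a unit test.
    static boolean isIntegrationTest(String fileName) {
        String base = fileName.endsWith(".java")
                ? fileName.substring(0, fileName.length() - ".java".length())
                : fileName;
        return base.startsWith("IT") || base.endsWith("IT") || base.endsWith("ITCase");
    }

    public static void main(String[] args) {
        System.out.println(isIntegrationTest("OrderServiceIT.java"));   // true
        System.out.println(isIntegrationTest("OrderServiceTest.java")); // false
    }
}
```

<p>In a real build you would not write such a classifier yourself – the test plugins apply their include/exclude patterns for you; the sketch merely illustrates the convention.</p><p>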
There have been <a href="http://docs.codehaus.org/display/MAVEN/Testing+Strategies">some discussions</a> on how to fix that, but I don't think it made it into <a href="http://www.sonatype.com/people/2009/11/maven-3x-paving-the-desire-lines-part-one-2/">Maven 3</a> (not quite sure, though).</p><p>Hence, what we need is this:</p><ul><li>Both unit and integration tests have to be compiled by javac.</li><li>Nevertheless, they should be clearly separated from each other by a path or file name pattern.</li><li>Unit tests have to be run in the <code>test</code> lifecycle phase, and the build should stop if they do not succeed.</li><li>Integration tests should be run in the <code>integration-test</code> phase, and at least the <code>post-integration-test</code> phase has to be run independently of the test result (to be able to shut down a running container, close database connections, etc.).</li></ul><p>The next part of this small series will show how these requirements can be met with Maven and what has to be configured, so please stand by...</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com1tag:blogger.com,1999:blog-6189016616585694043.post-9499853399679904822009-11-27T17:22:00.017+01:002009-11-27T21:39:03.995+01:00Integration test with Maven, Cargo and JBoss<h4>It's Friday...</h4><p>... and I thought it would be a good idea to set up an integration test of some EJBs we are creating in a new project. Actually, it was not, since it nearly ruined my evening. But eventually I got it to work, and here is how.</p><p>To test the EJBs, I need to create an EAR file, deploy it to the application server (JBoss 5.1.0 in our case) and run JUnit test cases against this server. 
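</p><p>For the JUnit test cases to reach the server at all, the test classpath typically carries a <code>jndi.properties</code> file pointing at the container's naming service. A sketch of what that looks like for JBoss 5.x (host and port are assumptions for a default local install):</p>

```
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.naming.provider.url=jnp://localhost:1099
```

<p>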
For Maven, I have set up a separate <code>integration-test</code> module for executing the integration tests, for the following reasons:</p><ul><li>to better separate unit tests (with JUnit) from integration tests (also with JUnit), which simplifies the Maven configuration a bit.</li><li>to be able to run this module outside of the normal CI (continuous integration) build due to its long runtime.</li></ul><h4>Cargo Maven Plugin</h4><p>For automatically starting the container, deploying the EAR file and stopping the container when the tests are finished, I use the <a href="http://cargo.codehaus.org/Maven2+plugin">Cargo Maven plugin</a>. I did this several times before (with Tomcat, though) – so I thought that'd be easy...</p><p>Well, when using JBoss, there are some tricks you have to know. Before going into the details, some more information on Cargo.</p><p>A <em>Container</em> is the base concept in Cargo. It represents an existing application server runtime, and Cargo hides all the details of the actual server implementation from you. There are two types of containers:</p><ul><li><em>Local Container</em>: this executes on the machine where Cargo runs. It can either be an <em>Installed Container</em> which is, well, installed on the local machine and runs in a separate JVM, or an <em>Embedded Container</em> that executes in the same JVM where Cargo is running (currently only supported for Jetty).</li><li><em>Remote Container</em>: a container that is already running somewhere (local or remote). It's not under Cargo's control and can't be started or stopped by Cargo.</li></ul><p>You use a <em>Configuration</em> to specify how the container is configured (logging, data sources, the location where to put the deployables, etc.). The available configuration depends on the container type:</p><ul><li><em>Local Configuration</em>: for local containers. 
There are two local configuration types: <em>Standalone Local Configuration</em>, which configures the container from scratch in a directory of your choice, and <em>Existing Local Configuration</em>, which re-uses an existing container installation already residing on your hard drive.</li><li><em>Runtime Configuration</em>: You use a runtime configuration when you want to access your container as a black box through a remote protocol (JMX, etc.). This is perfect for remote containers.</li></ul><p>In my case, I wanted to use a Local Installed Container with a Standalone Local Configuration, to eliminate dependencies on other deployments.</p><h4>JBoss with Cargo</h4><p>Well, and here are the pitfalls when using JBoss in this setting:</p><ol><li><strong>Experimental:</strong> JBoss 5.x is still an <a href="http://cargo.codehaus.org/JBoss+5.x">experimental container</a> for Cargo. This is a bit strange given that this version has been out for a while, but fortunately it is not really an issue.</li><li><strong>Extensive Logging:</strong> When Cargo builds the JBoss configuration – remember, I use a Standalone Local Configuration, so Cargo creates one from scratch – it uses a logging setup (independent of what your JBoss installation uses!) that is way too chatty. The console scrolls forever, and things slow down so much that you think everything is stuck in an infinite loop.<br />Thus, you have to tell Cargo to use another logging configuration file, which is a bit tricky and not documented very well (see <a href="http://jira.codehaus.org/browse/CARGO-585">this issue</a>).<br /></li><li><strong>Shutdown Port:</strong> Now the container starts up and the tests are run, but after that JBoss AS is not shutting down. 
It's telling me <code>javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost:1299</code>, which means the <a href="http://article.gmane.org/gmane.comp.java.cargo.user/1381">wrong port</a> is used for shutdown. The standard shutdown port is 1099, so we have to tell Cargo to use that port number.</li></ol><p>All in all, the configuration now looks like this. The mentioned settings are highlighted. Perhaps this is useful for someone else...</p><pre class="brush:xml; highlight: [23,33,34,35,36,37,38,39,40]"><br /><!-- *** Cargo plugin: start/stop JBoss application server and deploy the ear <br /> file before/after integration tests *** --><br /><plugin><br /> <groupId>org.codehaus.cargo</groupId><br /> <artifactId>cargo-maven2-plugin</artifactId><br /> <version>1.0</version><br /> <configuration><br /> <wait>false</wait><br /> <!-- Container configuration --><br /> <container><br /> <containerId>jboss5x</containerId><br /> <type>installed</type><br /> <home>${it.jboss5x.home}</home><br /> <timeout>300000</timeout><br /> </container><br /> <!-- Configuration to use with the Container --><br /> <configuration><br /> <type>standalone</type><br /> <home>${project.build.directory}/jboss5x</home><br /> <properties><br /> <cargo.jboss.configuration>default</cargo.jboss.configuration><br /> <cargo.servlet.port>${it.jboss5x.port}</cargo.servlet.port><br /> <cargo.rmi.port>1099</cargo.rmi.port><br /> <cargo.jvmargs>-Xmx512m</cargo.jvmargs><br /> </properties><br /> <deployables><br /> <deployable><br /> <groupId>com.fja.ipl</groupId><br /> <artifactId>ipl-lc-ear</artifactId><br /> <type>ear</type><br /> </deployable><br /> </deployables><br /> <!-- Override logging created by Cargo (which is way too chatty) with default <br /> file from JBoss (see http://jira.codehaus.org/browse/CARGO-585) --><br /> <configfiles><br /> <configfile><br /> <file>src/test/resources/jboss-log4j.xml</file><br /> <todir>conf</todir><br /> </configfile><br /> 
</configfiles><br /> </configuration><br /> </configuration><br /><br /> <executions><br /> <!-- before integration tests are run: start server --><br /> <execution><br /> <id>start-container</id><br /> <phase>pre-integration-test</phase><br /> <goals><br /> <goal>start</goal><br /> </goals><br /> </execution><br /> <!-- after integration tests are run: stop server --><br /> <execution><br /> <id>stop-container</id><br /> <phase>post-integration-test</phase><br /> <goals><br /> <goal>stop</goal><br /> </goals><br /> </execution><br /> </executions><br /></plugin><br /><properties><br /> <it.jboss5x.home>${basedir}/../../../tools/bin/jboss-5.1.0.GA</it.jboss5x.home><br /> <it.jboss5x.port>8080</it.jboss5x.port><br /></properties><br /></pre>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com5tag:blogger.com,1999:blog-6189016616585694043.post-2693914711088995942009-11-19T09:47:00.012+01:002009-11-19T11:44:20.890+01:00JBoss and Maven<h4>The Challenge</h4><p>During the last days I was busy trying to build and deploy a JEE application on JBoss – using Maven, of course. One interesting task was to run integration tests, i.e. JUnit tests that are testing a service bean deployed on JBoss application server. Integration tests with Maven are an interesting issue, worth its own blog entry probably. But this post is all about JBoss.</p><p>Well, this task doesn't sound hard for a Maven expert, but unexpectedly it was not that easy. Our test code (calling the EJB) was depending on JBoss artifacts, which might be questionable in itself but that's how it's currently done. 
Hence, during runtime the code needs the <code>jbossall-client.jar</code> from JBoss' client directory in the classpath.</p><h4>The Eclipse Way</h4><p>We are using JBoss 5.x, and in this version the <code>jbossall-client.jar</code> is rather small since it only references all required jar files in the Manifest's Class-Path element:</p><pre class="brush:java;">Manifest-Version: 1.0<br />Specification-Title: JBossAS<br />Specification-Version: 5.0.0.GA<br />...<br />Implementation-Vendor: JBoss.org<br />Implementation-Vendor-Id: http://www.jboss.org/<br />Class-Path: commons-logging.jar concurrent.jar ejb3-persistence.jar hi<br /> bernate-annotations.jar jboss-aop-client.jar jboss-appclient.jar jbos<br /> s-aspect-jdk50-client.jar jboss-client.jar jboss-common-core.jar jbos<br /> s-deployers-client-spi.jar jboss-deployers-client.jar jboss-deployers<br /> -core-spi.jar jboss-deployers-core.jar jboss-deployment.jar jboss-ejb<br /> 3-common-client.jar jboss-ejb3-core-client.jar jboss-ejb3-ext-api.jar<br /> jboss-ejb3-proxy-clustered-client.jar jboss-ejb3-proxy-impl-client.j<br /> ar jboss-ejb3-proxy-spi-client.jar jboss-ejb3-security-client.jar jbo<br /> ss-ha-client.jar jboss-ha-legacy-client.jar jboss-iiop-client.jar jbo<br /> ss-integration.jar jboss-j2se.jar jboss-javaee.jar jboss-jsr77-client<br /> .jar jboss-logging-jdk.jar jboss-logging-log4j.jar jboss-logging-spi.<br /> jar jboss-main-client.jar jboss-mdr.jar jboss-messaging-client.jar jb<br /> oss-remoting.jar jboss-security-spi.jar jboss-serialization.jar jboss<br /> -srp-client.jar jboss-system-client.jar jboss-system-jmx-client.jar j<br /> bosscx-client.jar jbossjts-integration.jar jbossjts.jar jbosssx-as-cl<br /> ient.jar jbosssx-client.jar jmx-client.jar jmx-invoker-adaptor-client<br /> .jar jnp-client.jar slf4j-api.jar slf4j-jboss-logging.jar xmlsec.jar</pre><br /><p>This list is impressive... and the approach is working with current Eclipse. 
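</p><p>For illustration: the <code>Class-Path</code> entry is nothing more than a main attribute of the jar's manifest, which the JVM resolves relative to the referencing jar. A small self-contained sketch (class name and the jar names are invented) that reads such an entry via <code>java.util.jar</code>:</p>

```java
import java.util.jar.Attributes;
import java.util.jar.Manifest;

// Toy sketch of the mechanism behind jbossall-client.jar: the Class-Path
// header is just a whitespace-separated main attribute of the manifest.
public class ManifestClassPathDemo {

    // Splits the Class-Path attribute into its individual jar references.
    static String[] classPathOf(Manifest mf) {
        String cp = mf.getMainAttributes().getValue(Attributes.Name.CLASS_PATH);
        return cp == null ? new String[0] : cp.trim().split("\\s+");
    }

    public static void main(String[] args) {
        Manifest mf = new Manifest();
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        mf.getMainAttributes().put(Attributes.Name.CLASS_PATH,
                "jnp-client.jar jboss-remoting.jar");
        for (String entry : classPathOf(mf)) {
            System.out.println(entry); // jnp-client.jar, then jboss-remoting.jar
        }
    }
}
```

<p>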
The equivalent for the Maven world would be a POM that references all other required libraries as normal dependencies (see <a href="http://markmail.org/message/snk2p6sssulo3s25#query:jbossall-client.jar%20now%20references%20external%20libs+page:1+mid:dh3rmeex7iib7z2x+state:results">this</a> discussion).</p><h4>The Maven Way</h4><p>And yes, such a POM <code>org.jboss.jbossas:jboss-as-client</code> is available on the <a href="http://repository.jboss.com/maven2/org/jboss/jbossas/jboss-as-client">JBoss Maven repository</a> (note: it's not <code>org.jboss.client:jbossall-client</code>, which is the reference-by-manifest-class-path version!).</p><br /><p>However, this approach involves two issues:</p><ul><li>By following the dependencies defined in this <code>org.jboss.jbossas:jboss-as-client</code> POM transitively, Maven will download a vast number of JBoss and JEE libraries which you actually don't want used and packaged in your client. This includes things like <code>org.jboss.jbossas:jboss-as-server:jar:5.1.0.GA</code> and <code>jacorb:jacorb:jar:2.3.0jboss.patch6-brew</code>. Doesn't exactly inspire confidence, does it? Seems like JBoss should exclude transitive dependencies at the right places.</li><li>Moreover, some of the dependencies can't be found on any of the Maven repositories we configured to proxy in our Nexus. This includes Maven Central, JBoss of course, and a couple of others, so I do not know what else to add to provide the missing jars. Maybe it's just that the reference itself is wrong (version number?).</li></ul><p>Instead of excluding everything we do not currently use, the next idea is to declare dependencies on just those JBoss jar files that the client actually requires. But... it's quite hard to find out the corresponding Maven coordinates. 
This includes guessing the group and artifact ids, but also the version – and unfortunately the version information given in the jar's manifest file is not useful since it states the JBoss AS version instead of the library version that Maven requires.</p><h4>Lost in Space?</h4><p>So, I ended up re-packaging a jar that holds the content of the required client jars and putting this on our Nexus...</p><p>I honestly wonder how everybody else is using JBoss 5.x with Maven on the client side???</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com10tag:blogger.com,1999:blog-6189016616585694043.post-56695658953187650772009-11-13T08:02:00.004+01:002009-11-13T08:30:13.640+01:00Spring dm Server Considered Cool?<h4>w-jax is over...</h4><p>I'm just returning from <a href="http://it-republik.de/jaxenter/wjax09/">w-jax 2009</a>, one of the biggest German Java conferences, covering virtually every technology related to Java.</p><p>If there are any hypes this year, they are these: Scala, Cloud Computing and OSGi. And, of course, Spring – which is all around.</p><p>Spring Source employs a lot of brave guys, and hearing first-hand about their newest, coolest technologies is always refreshing. Did you know that Tomcat is to a large extent pushed by guys paid by Spring Source? I forgot the exact number, but <a href="http://jandiandme.blogspot.com/">Eberhard Wolff</a> said the majority of bug fixes and commits are done by Spring Source's Tomcat team.</p><h4>Spring dm Server is Cool!</h4><p>There have been some sessions about <a href="http://www.springsource.com/products/dmserver">Spring dm Server</a>, and when taking a closer look it seems clear why Spring Source invests in Tomcat that much. Spring dm Server is based on Tomcat, and they add the ability to deploy web application modules as OSGi bundles. This way, you can partition your web apps into separate bundles to deploy them independently and dynamically into the server. 
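</p><p>For readers unfamiliar with OSGi: a bundle is just an ordinary jar whose manifest carries OSGi headers. A minimal, entirely invented example of such headers:</p>

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.shop.web
Bundle-Version: 1.0.0
Import-Package: org.springframework.context;version="[2.5,4.0)"
```

<p>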
Yes, they are preserving OSGi's dynamism, meaning that you can stop, start or refresh a particular bundle without the need to stop and restart the server. As you would expect, other bundles continue to work. If a requested OSGi service is temporarily not available, dm Server will wait 5 minutes for it to be redeployed.</p><p>Spring dm Server is part of <a href="http://www.springsource.com/products/sts">Spring Tool Suite</a>, an Eclipse-based development environment which is also able to auto-deploy a module whenever it changes.</p><p>All of this is quite gorgeous from a technical perspective, and the Spring guys had to cope with some really hard issues (for instance, related to JPA, where they need application-wide visibility of bundles as well as load-time weaving – you won't want to know the details...).</p><h4>Really? What for?</h4><p>But... is this really of any interest in the field?</p><p>Our customers are using WebSphere (or WebLogic or maybe JBoss) application server, and none of them is capable of running such a modularized, OSGi-based application. Moreover, in a production environment, there is no need – and most often it is not even desired – to refresh application bundles dynamically.</p><p>So, what's left? Modularizing your application? Right, that leads to a better structure and less coupling and blah blah, but the same can be achieved without OSGi (just let the business domain drive your application "slices"). It's just that you are <em>forced</em> to use modules (bundles) when you are using OSGi.</p><p>If at all, Spring dm Server will pay off for developers since it may speed up development (especially the build-deploy-test cycle). But is this really, I mean really, hurting us that much? Hence, if you ask me: a lot of technical overhead and server lock-in for no real benefit.</p><p>So, will Spring dm Server be able to gain real attention, I mean beyond being technically cool? I'm sceptical. 
As always, time will tell...</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-71871179669466575412009-10-27T08:47:00.008+01:002009-10-27T10:45:21.664+01:00Java Mystery: Directory Exists but Does Not?<p>Well, yesterday was another one of those days where you think your computer must be fooling you... There is an issue with a DirectoryCleaner that works for a subproject but does not when called from the parent. But first things first.</p><h4>The openArchitectureWare Part</h4><p>We are using oAW (yep, the new version which is part of the <a href="http://www.eclipse.org/modeling/">Eclipse Modeling Project</a> of Eclipse Galileo). Before generating the artifacts from the model, we execute a directory cleaner that simply wipes out the folders, for instance <code>src/generated/java</code>. The workflow file <code>generate.mwe</code> looks like this:</p><pre class="brush:java; highlight: [3]"><workflow><br /> <property file="generateAll.properties"/><br /> <component class='org.eclipse.emf.mwe.utils.DirectoryCleaner' directory='${srcGenDir}'/><br /> <component file='.../generateAll.mwe' inheritAll="true"><br /> <modelFile value='...' /><br /> </component><br /></workflow></pre><p><code>srcGenDir</code> is a property that is defined in the included properties file, but that's not relevant here. The highlighted line configures the mentioned directory cleaner. 
The relevant code snippet is the <code>invokeInternal()</code> method of <code>org.eclipse.emf.mwe.utils.DirectoryCleaner</code>, which is part of the Eclipse EMF frameworks:</p><pre class="brush:java">protected void invokeInternal(final WorkflowContext model, <br /> final ProgressMonitor monitor, final Issues issues) {<br /> if (directory != null) {<br /> final StringTokenizer st = new StringTokenizer(directory, ",");<br /> while (st.hasMoreElements()) {<br /> final String dir = st.nextToken().trim();<br /> final File f = new File(dir);<br /> if (f.exists() && f.isDirectory()) {<br /> ... do the cleanup ...<br /> }<br /> }<br /> }<br />}</pre><p>Pretty simple, right? The <code>directory</code> attribute contains one or more directories (comma separated), hence a tokenizer is used to get each of them and to create a file. If this file exists and is a directory, it will be erased.</p><h4>The Maven Part</h4><p>Of course we are using Maven to build the stuff. The <a href="http://fornax.itemis.de/confluence/display/fornax/OAW-M2-Plugin+%28TOM%29">Fornax Maven plugin</a> is configured to call the workflow during the Maven build. Moreover, the mentioned workflow is part of a subproject B which belongs to an outer multi-module project A.</p><p>Now, this is where the mystery begins... When I build B (the submodule), everything is fine and works as expected. However, when I build A (the parent project), B will be built in turn and its workflow is executed, but the directory is <em>not</em> cleaned up! You wouldn't expect this, right?</p><h4>The Strange Part</h4><p>In an attempt to find out what's going on, we put some debugging code into <code>DirectoryCleaner</code>. 
It turns out that the file <code>f</code> returns the same value for <code>getAbsolutePath()</code> in both cases, but <code>f.exists()</code> returns <code>false</code> when the build is started from the parent project – hence the condition is not met and nothing will be cleaned up.</p><p>Unfortunately, you can't look any deeper into the native code that is called when determining if a file exists. So, in a kind of trial-and-error approach, we found out that using the following code fixes the issue:</p><pre class="brush:java; highlight: [8]">protected void invokeInternal(final WorkflowContext model, <br /> final ProgressMonitor monitor, final Issues issues) {<br /> if (directory != null) {<br /> final StringTokenizer st = new StringTokenizer(directory, ",");<br /> while (st.hasMoreElements()) {<br /> final String dir = st.nextToken().trim();<br /> final File f1 = new File(dir);<br /> final File f = new File(f1.getAbsolutePath());<br /> if (f.exists() && f.isDirectory()) {<br /> ... do the cleanup ...<br /> }<br /> }<br /> }<br />}</pre><p>That is: by creating a second file from the absolute path of the first one and using that from then on, everything is fine in both scenarios – whether called from the parent project or the submodule.</p><p>To be honest: I have no explanation for these findings. Why is <code>File</code> behaving differently? When calculating the exists flag, the code should take the file's absolute path into account, right? So why does creating another file based on the absolute path fix the issue? 
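</p><p>For what it's worth (and explicitly not claimed as the explanation here, since <code>getAbsolutePath()</code> reported the same value in both runs): a relative <code>File</code> is resolved against the current working directory, i.e. the <code>user.dir</code> system property, which can well differ between a parent build and a submodule build. A minimal sketch of that resolution rule (class name invented):</p>

```java
import java.io.File;

// Sketch: java.io.File anchors a relative path at the "user.dir"
// system property, i.e. the directory the JVM was started in.
public class RelativePathDemo {

    // Absolute path that a relative file name resolves to.
    static String resolve(String name) {
        return new File(name).getAbsolutePath();
    }

    public static void main(String[] args) {
        String viaUserDir = new File(System.getProperty("user.dir"),
                "src/generated/java").getAbsolutePath();
        // A relative File is anchored at whatever user.dir happens to be:
        System.out.println(viaUserDir.equals(resolve("src/generated/java"))); // true
    }
}
```

<p>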
Any insights are deeply appreciated...!</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0tag:blogger.com,1999:blog-6189016616585694043.post-20234317575757325582009-10-16T13:37:00.012+02:002009-10-16T15:16:13.535+02:00Speeding Up Your System<p>We did a lot of interesting stuff lately, including upgrading Eclipse to the new <a href="http://www.eclipse.org/galileo/">Galileo</a> release, and in turn upgrading oAW (<a href="http://www.openarchitectureware.org/">openArchitectureWare</a>) to the new versions of Xtext, Xpand, Xcheck etc. which are now part of the <a href="http://www.eclipse.org/modeling/">Eclipse Modeling project</a>. I will blog about all this later...</p><p>But what really started to hurt us was the performance of our Windows XP based development laptops. They aren't really brand new ones, but not that old either. Nevertheless, they seem to have an issue with all those java, class and jar files involved when starting Eclipse, doing a "Clean Project", using Maven to build the software, etc.</p><p>The IT department was not willing to provide us with new hardware (disk especially) at this time, and no, using Linux is no option either. Hence, we had to find other areas of improvement by tuning our system.</p><p>Here is what we did to speed things up for Windows XP. Note that things might or might not be different for Windows Vista or Windows 7.</p><h4>1. Disable Indexing Service</h4><p>By default, there is a Microsoft "Indexing Service" running in your Windows system. According to <a href="http://msdn.microsoft.com/en-us/library/ms689718(VS.85).aspx">msdn</a>, this is "a base service for Microsoft Windows 2000 or later that extracts content from files and constructs an indexed catalog to facilitate efficient and rapid searching."</p><p>Well, to be honest, I had never heard of that service before (and rarely use the search function of Windows), but it turned out to cause lots of hard disk traffic. 
So we decided to <strong>disable this service</strong>, which is recommended <a href="http://www.blackviper.com/WinXP/Services/Indexing_Service.htm">by some people</a>.</p><p>Actually, there are several ways to do so (without using the Microsoft Management Console (MMC) with an appropriate snap-in):</p><ul><li>Disable the Indexing Service in your list of local services.</li><li>In the Properties window of your local disk, remove the option "Allow Indexing Service to index this disk for fast file searching".</li><li>Remove the function via Control Panel > Add or Remove Programs > Add/Remove Windows Components > uncheck "Indexing Service".</li></ul><h4>2. Don't Virus-Scan Java Stuff</h4><p>Our anti-virus tool was configured to scan literally everything, including java, class and jar files. This seems exaggerated from a security point of view, but the IT folks said they could not configure the anti-virus to ignore a particular set of file extensions (like *.jar).</p><p>Hence, what we did instead was to <strong>exclude two folders from being scanned on the fly</strong>: <code>javatools</code> (where everything Java related is put, including Eclipse releases) and <code>projects</code> (where all our projects reside). Instead, these folders are now scanned once a week, which is acceptable for the IT guys.</p><p>Know what? That sped up the starting time of Eclipse <em>by a factor of 5</em>!</p><p>Surprisingly, excluding the Maven local repository (located in your personal settings folder) from on-the-fly scanning did not make much difference, so we didn't treat this folder specially.</p><h4>3. Optimize Subversion</h4><p>Are you using Subversion? If so, are you using <a href="http://tortoisesvn.tigris.org/">TortoiseSVN</a>, the Windows Shell Extension for Subversion? Well, in that case you will know and probably like the little overlay icons that are used to indicate the state of files and folders. 
This feature is recursive, whereby overlay changes in lower-level folders are propagated up through the folder hierarchy so that you don’t forget about changes you made deep in the tree.</p><p>Starting with <a href="http://tortoisesvn.tigris.org/tsvn_1.2_releasenotes.html">release 1.2</a>, a new TSVNCache program is used to maintain a cache of your working copy status, providing much faster access to this information. Not only does this prevent Explorer from blocking while acquiring status, but it also makes recursive overlays workable.</p><p>This is all nice, but there is a major drawback: TSVNCache by default looks for changes <em>on all drives and in all folders</em>, killing disk performance with all the I/O it's doing.</p><p>You can <strong>enable a TSVNCacheWindow</strong> showing all the folders being crawled by TSVNCache. To do so, open the Registry Editor and create a new <code>DWORD</code> value named <code>CacheTrayIcon</code> with a value of 1 under the key <code>HKEY_CURRENT_USER\Software\TortoiseSVN</code>. After that you have to restart TSVNCache, which is most easily done by just killing the process; it will be restarted automatically when you do any TortoiseSVN operation. Now there should be a small tortoise icon in the Windows tray area which opens the TSVNCacheWindow. Watch how TortoiseSVN scans files and folders whenever you write to a file...</p><p>Time to fix that! This should be quite easy if you're keeping all of your working copies below one specific folder (or a small set of folders), like we do. All you have to do is to <strong>set up TortoiseSVN to only scan your source folder paths</strong>:</p><ol><li>Right-click in Explorer on any folder and select "TortoiseSVN > Settings...".</li><li>In the Settings window's tree, click on the "Icon Overlays" entry.</li><li>In the "Exclude Paths" input field, put C:\* to exclude the entire C drive. If you have more drives, exclude them all at the top level. 
Use newlines to separate the values.</li><li>In the "Include Paths" input field, list all of the locations where your working copies are stored, again separated by newlines.</li><li>Switch off the "Network drives" option in the "Drive Types" area.</li></ol><p>All in all, your settings should now look like the following screenshot. Thanks to <a href="http://www.paraesthesia.com/archive/2007/09/26/optimize-tortoise-svn-cache-tsvncache.exe-disk-io.aspx">Paraesthesia</a> for this nice tip!</p><a href="http://4.bp.blogspot.com/_ey2D_DPIY5E/SthxumZXYPI/AAAAAAAAACY/nDwNAML5xSw/s1600-h/TortoiseSvnSettings.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 264px;" src="http://4.bp.blogspot.com/_ey2D_DPIY5E/SthxumZXYPI/AAAAAAAAACY/nDwNAML5xSw/s400/TortoiseSvnSettings.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5393185599171027186" /></a><br /><h4>Epilogue</h4><p>You won't believe what a difference these three little tunings made to our system performance. The hard disk is no longer busy all the time, applications (like Eclipse) start much faster, and build time has decreased drastically. Not bad for not spending anything on new hardware! Well, the next thing we will check is what effect a new (big, fast) hard disk will have... ;o)</p>::Christophhttp://www.blogger.com/profile/13039853176384586281noreply@blogger.com0