
Developing Services with Apache Camel - Part III: Integrating Spring 4 and Spring Boot

This article is the third in a series on Apache Camel and how I used it to replace IBM Message Broker for a client. I used Apache Camel for several months this summer to create a number of SOAP services. These services performed various third-party data lookups for our customers. For previous articles, see Part I: The Inspiration and Part II: Creating and Testing Routes.

In late June, I sent an email to my client's engineering team. Its subject: "External Configuration and Microservices". I recommended we integrate Spring Boot into the Apache Camel project I was working on. I told them my main motivation was its external configuration feature. I also pointed out its container-less WAR feature, where Tomcat (or Jetty) is embedded in the WAR and you can start your app with "java -jar appname.war". I mentioned microservices and that Spring Boot would make it easy to split the project into a project-per-service structure if we wanted to go that route. I then asked two simple questions:

  1. Is it OK to integrate Spring Boot?
  2. Should I split the project into microservices?

Both of these suggestions were well received, so I went to work.
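
To make the external configuration point concrete, here's a minimal sketch of the kind of setting Spring Boot lets you override without rebuilding the WAR. The class and property names are illustrative, not from the client's project:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class LookupServiceSettings {

    // resolved from application.properties, an external config file, a system
    // property, or a command-line argument such as
    // java -jar appname.war --lookup.endpoint.url=https://other-host/ws
    @Value("${lookup.endpoint.url}")
    private String endpointUrl;

    public String getEndpointUrl() {
        return endpointUrl;
    }
}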

[Read More]

Posted in Java at Oct 08 2014, 07:13:18 AM MDT 3 Comments

Developing Services with Apache Camel - Part II: Creating and Testing Routes

This article is the second in a series on Apache Camel and how I used it to replace IBM Message Broker for a client. The first article, Developing Services with Apache Camel - Part I: The Inspiration, describes why I chose Camel for this project.

To make sure these new services correctly replaced existing services, a 3-step approach was used:

  1. Write an integration test pointing to the old service.
  2. Write the implementation and a unit test to prove it works.
  3. Write an integration test pointing to the new service.

I chose to replace the simplest service first. It was a SOAP service that talked to a database to retrieve a value based on an input parameter. To learn more about Camel and how it works, I started by looking at the CXF Tomcat Example. I learned that Camel is used to provide routing of requests. Using its CXF component, it can easily produce SOAP web service endpoints. An endpoint is simply an interface, and Camel takes care of producing the implementation.
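
To give a feel for what such a route looks like, here's a bare-bones sketch in Camel's Java DSL. The endpoint and bean names are made up for illustration; the real routes differed:

import org.apache.camel.builder.RouteBuilder;

public class LookupRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // expose the SOAP endpoint (a cxf:cxfEndpoint bean defined in Spring)
        // and hand each request to a bean that does the database lookup
        from("cxf:bean:lookupEndpoint")
            .to("bean:lookupService?method=findValue");
    }
}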

[Read More]

Posted in Java at Sep 30 2014, 10:05:38 AM MDT 9 Comments

Fixing XSS in JSP 2

Way back in 2007, I wrote about Java Web Frameworks and XSS. My main point was that JSP EL doesn't bother to handle XSS.

Of course, the whole problem with JSP EL could be solved if Tomcat (and other containers) would allow a flag to turn on XML escaping by default. IMO, it's badly needed to make JSP-based webapps safe from XSS.

A couple of months later, I proposed a Tomcat enhancement to escape JSP's EL by default. I also entered an enhancement request for this feature and attached a patch. That issue has remained open and unfixed for three and a half years.

Yesterday, Chin Huang posted a handy-dandy ELResolver that XML-escapes EL values.

I tried Chin's resolver in AppFuse today and it works as well as advertised. To do this, I copied his EscapeXML*.java files into my project, changed the JSP API's Maven coordinates from javax.servlet:jsp-api:2.0 to javax.servlet.jsp:jsp-api:2.1 and added the listener to web.xml.
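
For reference, the wiring boils down to a listener along these lines. This is only a sketch; the class names follow Chin's EscapeXML naming, and his actual patch may differ in the details:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.jsp.JspFactory;

public class EscapeXmlListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        // register the escaping resolver before any JSP runs so every
        // ${...} value gets XML-escaped on its way into the page.
        // EscapeXmlELResolver stands in for Chin's resolver class,
        // copied into the same package.
        JspFactory.getDefaultFactory()
                .getJspApplicationContext(event.getServletContext())
                .addELResolver(new EscapeXmlELResolver());
    }

    public void contextDestroyed(ServletContextEvent event) {
    }
}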

With Struts 2 and Spring MVC, I was previously able to put ${param.xss} in a page, pass in ?xss=<script>alert('gotcha')</script>, and it would show a JavaScript alert. After using Chin's ELResolver, it prints the string on the page instead of displaying an alert.

Thanks to Chin Huang for this patch! If you're using JSP, I highly recommend you add this to your projects as well.

Posted in Java at Feb 28 2011, 02:08:46 PM MST 7 Comments

Integration Testing with HTTP, HTTPS and Maven

Earlier this week, I was tasked with getting automated integration tests working in my project at Overstock.com. By automated, I mean the ability to run "mvn install" and have the following process cycle through:

  • Start a container
  • Deploy the application
  • Run all integration tests
  • Stop the container

Since it makes sense for integration tests to run in Maven's integration-test phase, I first configured the maven-surefire-plugin to skip tests in the test phase and execute them in the integration-test phase. I used the <id>default-test</id> syntax to override the plugin's usual behavior.

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <executions>
    <execution>
      <id>default-test</id>
      <configuration>
        <excludes>
          <exclude>**/*Test*.java</exclude>
        </excludes>
      </configuration>
    </execution>
    <execution>
      <id>default-integration-test</id>
      <phase>integration-test</phase>
      <goals>
        <goal>test</goal>
      </goals>
      <configuration>
        <includes>
          <include>**/*Test.java</include>
        </includes>
        <excludes>
          <exclude>none</exclude>
          <exclude>**/TestCase.java</exclude>
        </excludes>
      </configuration>
    </execution>
  </executions>
</plugin>

After I had this working, I moved onto getting the container started and stopped properly. In the past, I've done this using Cargo and it's always worked well for me. Apart from the usual setup I use in AppFuse archetypes (example pom.xml), I added a couple additional items:

  • Added <timeout>180000</timeout> so the container would wait up to 3 minutes for the WAR to deploy.
  • In configuration/properties, specified <context.path>ROOT</context.path> so the app would deploy at the / context path.
  • In configuration/properties, specified <cargo.protocol>https</cargo.protocol> since many existing unit tests made requests to secure resources.

I started by using Cargo with Tomcat and had to create a certificate keystore in order to get Tomcat to start with SSL enabled. After getting it to start, I found the tests failed with the following errors in the logs:

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: 
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
	at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:174)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)

Co-workers told me this was easily solved by adding my 'untrusted' cert to my JVM keystore. Once all this was working, I thought I was good to go, but found that some tests were still failing. The failures turned out to be because they were talking to http and https was the only protocol enabled. After doing some research, I discovered that Cargo doesn't support starting on both http and https ports.
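
As an aside, if you'd rather not touch the JVM's global cacerts, another option (a sketch of an alternative, not what I did) is to point the JVM's default trust store at the same keystore the container uses, before any HTTPS call is made. The path and password below are this post's values; adjust to your own setup:

import org.junit.BeforeClass;

public abstract class AbstractSslIntegrationTest {

    @BeforeClass
    public static void trustSelfSignedCertificate() {
        // trust the keystore holding the self-signed certificate instead of
        // importing the cert into the JVM's cacerts
        System.setProperty("javax.net.ssl.trustStore", "target/ssl.keystore");
        System.setProperty("javax.net.ssl.trustStorePassword", "overstock");
    }
}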

So back to the drawing board I went. I ended up turning to the maven-jetty-plugin and the tomcat-maven-plugin to get the functionality I was looking for. I also automated the certificate keystore generation using the keytool-maven-plugin. Below is the extremely-verbose 95-line profiles section of my pom.xml that allows either container to be used.

Sidenote: I wonder how this same setup would look using Gradle?

<profiles>
  <profile>
    <id>jetty</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.mortbay.jetty</groupId>
          <artifactId>maven-jetty-plugin</artifactId>
          <version>6.1.26</version>
          <configuration>
            <contextPath>/</contextPath>
            <connectors>
              <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
                <!-- forwarded == true interprets x-forwarded-* headers -->
                <!-- http://docs.codehaus.org/display/JETTY/Configuring+mod_proxy -->
                <forwarded>true</forwarded>
                <port>8080</port>
                <maxIdleTime>60000</maxIdleTime>
              </connector>
              <connector implementation="org.mortbay.jetty.security.SslSocketConnector">
                <forwarded>true</forwarded>
                <port>8443</port>
                <maxIdleTime>60000</maxIdleTime>
                <keystore>${project.build.directory}/ssl.keystore</keystore>
                <password>overstock</password>
                <keyPassword>overstock</keyPassword>
              </connector>
            </connectors>
            <stopKey>overstock</stopKey>
            <stopPort>9999</stopPort>
          </configuration>
          <executions>
            <execution>
              <id>start-jetty</id>
              <phase>pre-integration-test</phase>
              <goals>
                <goal>run-war</goal>
              </goals>
              <configuration>
                <daemon>true</daemon>
              </configuration>
            </execution>
            <execution>
              <id>stop-jetty</id>
              <phase>post-integration-test</phase>
              <goals>
                <goal>stop</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
  <profile>
    <id>tomcat</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>tomcat-maven-plugin</artifactId>
          <version>1.1</version>
          <configuration>
            <addContextWarDependencies>true</addContextWarDependencies>
            <fork>true</fork>
            <path>/</path>
            <port>8080</port>
            <httpsPort>8443</httpsPort>
            <keystoreFile>${project.build.directory}/ssl.keystore</keystoreFile>
            <keystorePass>overstock</keystorePass>
          </configuration>
          <executions>
            <execution>
              <id>start-tomcat</id>
              <phase>pre-integration-test</phase>
              <goals>
                <goal>run-war</goal>
              </goals>
            </execution>
            <execution>
              <id>stop-tomcat</id>
              <phase>post-integration-test</phase>
              <goals>
                <goal>shutdown</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

With this setup in place, I was able to automate running our integration tests by simply typing "mvn install" (for Jetty) or "mvn install -Ptomcat" (for Tomcat). For running in Hudson, it's possible I'll have to further enhance things to randomize the port and pass that into tests as a system property. The build-helper-maven-plugin's reserve-network-port goal is a nice way to do this. Note: if you want to run more than one instance of Tomcat at a time, you might have to randomize the ajp and rmi ports to avoid collisions.
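
If I do end up randomizing the port, the tests themselves only need a small hook to pick it up. Something like this sketch would do; the property name is illustrative and should match whatever the build passes through:

public abstract class BaseIntegrationTest {

    // the build would pass the reserved port in as -Dtest.http.port=...
    protected String baseUrl() {
        String port = System.getProperty("test.http.port", "8080");
        return "http://localhost:" + port;
    }
}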

The final thing I encountered was that our app didn't shut down gracefully. Luckily, this was addressed in a newer version of our core framework, and upgrading fixed the problem. Here's the explanation from an architect on the core framework team.

The hanging problem was caused by the way the framework internally aggregated statistics related to database connection usage and page response times. The aggregation runs on a separate thread but not as a daemon thread. Previously, the aggregation threads weren't being terminated on shutdown so the JVM would hang waiting for them to finish. In the new frameworks, the aggregation threads are terminated on shutdown.

Hopefully this post helps you test your secure and non-secure applications at the same time. I'm also hoping it motivates the Cargo developers to add simultaneous http and https support. ;)

Update: In the comments, Ron Piterman recommended I use the Maven Failsafe Plugin because it's designed to run integration tests, while the Surefire Plugin is for unit tests. I changed my configuration to the following and everything still passes. Thanks Ron!

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.7.2</version>
  <configuration>
    <skipTests>true</skipTests>
  </configuration>
</plugin>
<plugin>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.7.2</version>
  <configuration>
    <includes>
      <include>**/*Test.java</include>
    </includes>
    <excludes>
      <exclude>**/TestCase.java</exclude>
    </excludes>
  </configuration>
  <executions>
    <execution>
      <id>integration-test</id>
      <phase>integration-test</phase>
      <goals>
        <goal>integration-test</goal>
      </goals>
    </execution>
    <execution>
      <id>verify</id>
      <phase>verify</phase>
      <goals>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Update 2: In addition to application changes to solve hanging issues, I also had to change my Jetty Plugin configuration to use a different SSL connector implementation. This also required adding the jetty-sslengine dependency, which has been renamed to jetty-ssl for Jetty 7.

<connector implementation="org.mortbay.jetty.security.SslSelectChannelConnector">
...
<dependencies>
  <dependency>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>jetty-sslengine</artifactId>
    <version>6.1.26</version>
  </dependency>
</dependencies>

Posted in Java at Feb 11 2011, 03:54:16 PM MST 9 Comments

What's New in Spring 3.0

This morning, I attended Rod Johnson's What's New in Spring 3.0 keynote at TSSJS. Rod ditched his slides for the talk and mentioned that this might be risky, especially since he was pretty jetlagged (he flew in from Paris at 11pm the night before). Below are my notes from his talk.

The most important thing for the future of Java is productivity and cloud computing. The focus at SpringSource is heavily on productivity and not just on improving the Spring codebase. If you look at the comparisons out there between Rails and Spring, it's not an apples-to-apples comparison. The philosophy with Spring has always been that the developer is always right. However, if you look at something like Rails, you'll see it's far more prescriptive. That layer of opinionated frameworks is important in that it greatly improves your productivity.

SpringSource is putting a lot of emphasis on improving developer productivity with two opinionated frameworks: Grails and Spring Roo. To show how productive developers can be, Rod started to build a web app with Spring Roo. As part of this demo, he mentioned we'd see many of the new features of Spring 3: RestTemplate, @Value and Spring EL.

Rod used STS to write the application and built a Twitter client. After creating a new project using File -> New Roo Project, a Roo Shell tab shows up at the bottom. Typing "hint" tells you what you should do right away. The initial message is "Roo requires the installation of a JPA provider and associated database." The initial command is "persistence setup --provider HIBERNATE --database HYPERSONIC_IN_MEMORY". After running this, a bunch of log messages are shown on the console, most of them indicating that pom.xml has been modified.

The first file that Rod shows is src/main/resources/META-INF/spring/applicationContext.xml. It's the only XML file you'll need in your application and includes a PropertyPlaceholderConfigurer, a context:component-scan for finding annotations and a transaction manager.

After typing "hint" again, Roo indicates that Rod should create entities. He does this by running "ent --class ~.domain.Term --testAutomatically". A Term class (with a bunch of annotations) is created, as well as a number of *.aj files and an integration test. Most of the files don't have anything in them but annotations. The integration test uses @RooIntegrationTest(entity=Term.class) on its class to fire up a Spring container in the test and do dependency injection (if necessary). From there, Rod demonstrated that he could easily modify the test to verify the database existed.

private SimpleJdbcTemplate jt;

@Autowired
public void init(DataSource ds) {
    this.jt = new SimpleJdbcTemplate(ds);
}

@Test 
public void testDb() {
    jt.queryForInt("SELECT COUNT(0) FROM TERM");
}

Interestingly, after running the test, you could see a whole bunch of tests being run, not just the one that was in the class itself. From there, he modified the Term class to add two new properties: name and searchTerms. He also used JSR 303's @NotNull annotation to make the fields required.

@Entity
@RooJavaBean
@RooToString
@RooEntity
public class Term {

    @NotNull
    private String name;

    @NotNull
    private String searchTerms;
}

Next, Rod added a new test and showed that the setters for these properties were automatically created and he never had to write getters and setters. This is done by aspects that are generated beside your Java files. Roo is smart enough that if you write toString() methods in your Java code, it will delete the aspect that normally generates the toString() method.

To add fields to an entity from the command line, you can run commands like "field string --fieldName text --notNull" and "field number --type java.lang.Long --fieldName twitterId --notNull". The Roo Shell is also capable of establishing relationships between entities.

After successfully modifying his Entities, Rod started creating code to talk to Twitter's API. He used RestTemplate to do this and spent a good 5 minutes trying to get Eclipse to import the class properly. The best part of this demo was watching him do what most developers do: searching Google for RestTemplate to get the package name to import.
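
For those who haven't seen it, the RestTemplate call itself is only a couple of lines. Here's a sketch of the idea (the URL reflects Twitter's search API as it looked at the time; the class and method are my own illustration, not Rod's code):

import org.springframework.web.client.RestTemplate;

public class TwitterSearchClient {

    public String search(String query) {
        RestTemplate restTemplate = new RestTemplate();
        // Twitter's then-current search API returned JSON for a query string
        return restTemplate.getForObject(
                "http://search.twitter.com/search.json?q={query}",
                String.class, query);
    }
}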

After an awkward silence and some fumbling, he opened an existing project (that had the dependencies properly configured) and used Java Config to configure beans for the project. This was done with a @Configuration annotation on the class, @Value annotations on properties (that read from a properties file) and @Bean annotations for the beans to expose. The first time Rod tried to run the test, it failed because a twitter.properties file didn't exist. After creating it, he ran the test and successfully searched Twitter's API.

The nice thing about @Configuration is that the classes are automatically picked up and you don't need to configure any XML to recognize them. Also, in your Java classes, you don't have to use @Autowired to get @Bean references injected.
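
A rough sketch of what such a Java Config class looks like (the property and bean names are illustrative, not what Rod typed):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class TwitterConfig {

    // read from twitter.properties via a property placeholder
    @Value("${twitter.search.url}")
    private String searchUrl;

    @Bean
    public RestTemplate twitterTemplate() {
        return new RestTemplate();
    }
}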

After this, Rod attempted to show a web interface for the application. He started the built-in SpringSource tc Server and proceeded to show us Tomcat's 404 page. Unfortunately, Tomcat seemed to start up OK (no errors in the logs), but obviously something didn't work well. For the next few silent moments, we watched him try to delete web.xml from Eclipse. Unfortunately, this didn't work and we weren't able to see the scaffolding for the entities that Rod created.

At this point, Rod opened a completed version of the app and was able to show it to us in a browser. You could hear the murmur of the crowd as everyone realized he was about to show the Twitter search results for #tssjs. Most of the tweets displayed were from folks commenting about how some things didn't work in the demo.

In summary, there are some really cool things in Spring 3: @Configuration, @Value, task scheduling with @Scheduled and one-way methods with @Async.

Final points on SpringSource and VMware: they're committed to Java and middleware. Their big focus is providing an integrated experience from productivity to cloud. There are other languages that are further along than Java, and SpringSource is trying to fix that. One thing they're working on is a private Java cloud that companies can use and leverage as a VMware appliance.

I think there's a lot of great things in Spring 3 and most users of Roo seem to be happy with it. It's unfortunate that the Demo Gods frowned upon Rod, but it was cool to see him do the "no presentation" approach.

Posted in Java at Mar 19 2010, 11:46:25 AM MDT 2 Comments

Packaging a SOFEA Application for Distribution

The project I'm working on is a bit different from those I'm used to. I'm used to working on web applications that are hosted on servers and customers access with their browser. SaaS if you will. My current client is different. They're a product company that sells applications and distributes them to customers via download and CD. Their customers install these applications on internal servers (supported servers include WebSphere, WebLogic and Tomcat).

The product I'm currently working on is structured as a SOFEA application and therefore consists of two separate modules - a backend and a frontend. Since it's installed in a servlet container, both modules are WARs and can be installed separately.

Building the backend and frontend as separate projects makes a lot of sense for two reasons:

  • In development, different teams can work on the frontend and backend projects.
  • Having them as separate projects allows them to be versioned separately.

However, having them as two separate projects does make it a bit more difficult for distribution. I'm writing this post to show you how I recently added support for distributing our application as 2 WARs or 1 WAR using the power of Maven, war overlays and the UrlRewriteFilter.

Project Setup
First of all, we have several different Maven modules, but the most important ones are as follows:

  • product-services
  • product-client
  • product-integration-tests

Of course, our modules aren't really named "product", but you get the point. The services project is really just a WAR project with Spring Security configured. It depends on other JAR modules that contain the actual services. The client project is a GWT WAR with a proxy servlet defined in its web.xml to make development easier. It also contains some UrlRewrite configuration that allows GWT Log's Remote Logging feature to work. The proxy servlet is something we don't want to ship with our product, so we have a separate web.xml for production vs. development. We do the substitution using the maven-war-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.0.2</version>
    <configuration>
        <!-- Production web.xml -->
        <webXml>src/main/resources/web.xml</webXml>
        <warSourceDirectory>war</warSourceDirectory>
        <!-- Exclude everything but urlrewrite JAR -->
        <warSourceExcludes>
            WEB-INF/lib/aop**,WEB-INF/lib/commons-**,WEB-INF/lib/gin-**,
            WEB-INF/lib/guice-**,WEB-INF/lib/gwt-**,WEB-INF/lib/gxt-**,
            WEB-INF/lib/junit-**
        </warSourceExcludes>
    </configuration>
</plugin>

I could exclude WEB-INF/lib/** and WEB-INF/classes/**, but in my particular project, we still want UrlRewrite in standalone mode, and we have some i18n properties files in WEB-INF/classes that are served up for Selenium tests.

With this configuration, we have a services WAR and a client WAR that can be installed and used by clients. To collapse them into one and make it possible to ship a single war, I turned to our product-integration-tests module. This module contains Selenium tests that test both types of distributions.

Merging 2 WARs into 1
The most important thing in the product-integration-tests module is that it creates a single WAR. First of all, it uses <packaging>war</packaging> to make this possible. The rest is done using the following 3 steps.

1. Its dependencies include the client and servlet WARs (and Selenium RC for testing).

<dependencies>
    <dependency>
        <groupId>com.company.app</groupId>
        <artifactId>product-services</artifactId>
        <version>1.0-SNAPSHOT</version>
        <type>war</type>
    </dependency>
    <dependency>
        <groupId>com.company.app</groupId>
        <artifactId>product-client</artifactId>
        <version>1.0-SNAPSHOT</version>
        <type>war</type>
    </dependency>
    <dependency>
        <groupId>org.seleniumhq.selenium.client-drivers</groupId>
        <artifactId>selenium-java-client-driver</artifactId>
        <version>1.0.1</version>
        <scope>test</scope>
    </dependency>
</dependencies>

2. The WAR that's created drops the "integration-tests" part of the name:

<build>
    <finalName>product-${project.version}</finalName>
    ...
</build>

3. WAR overlays are configured so that everything in the client's WEB-INF directory is excluded from the merged WAR.

<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <!-- http://maven.apache.org/plugins/maven-war-plugin/overlays.html -->
        <overlays>
            <overlay>
                <groupId>com.company.app</groupId>
                <artifactId>product-services</artifactId>
                <excludes>
                    <!-- TODO: Rename to api.html (this is the Enunciate-generated documentation) -->
                    <exclude>index.html</exclude>
                </excludes>
            </overlay>
            <!-- No server needed in product-client -->
            <overlay>
                <groupId>com.company.app</groupId>
                <artifactId>product-client</artifactId>
                <excludes>
                    <exclude>WEB-INF/**</exclude>
                </excludes>
            </overlay>
            <!-- Only include META-INF/context.xml to set the ROOT path -->
            <overlay>
                <excludes>
                    <exclude>WEB-INF/**</exclude>
                </excludes>
            </overlay>
        </overlays>
    </configuration>
</plugin>

That's it! Using this configuration, it's possible to distribute a Maven-based SOFEA project as single or multiple WARs. However, there are some nuances.

One thing you might notice is the reference to META-INF/context.xml in the overlays configuration. This subtly highlights one issue I experienced when merging the WARs. In our GWT client, we're using URLs that point to our services at /product-services/*. This works in development (via a proxy servlet) and when the WARs are installed separately - as long as the services WAR is installed at /product-services. However, when they're merged, a little URL rewriting needs to happen. To do this, I added the UrlRewriteFilter to the product-services module and configured a simple rule.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE urlrewrite PUBLIC "-//tuckey.org//DTD UrlRewrite 3.0//EN"
        "http://tuckey.org/res/dtds/urlrewrite3.0.dtd">

<urlrewrite use-query-string="true">
    <!-- Used when services are merged into WAR with GWT client -->
    <rule>
        <from>^/product-services/(.*)$</from>
        <to type="forward">/$1</to>
    </rule>
</urlrewrite>

Because the services URLs point to the root (/product-services), the merged WAR has to be installed as the ROOT application. When you're using Cargo with Tomcat and want to deploy to ROOT, you have to have a META-INF/context.xml with a path="" reference (ref: CARGO-516).

<Context path=""/>

It is possible to change the URLs in the client to be relative, but this seems to get messy when you're using separate WARs. When using relative URLs, I found I had to use a solution involving cross-context forwarding to get the results I wanted. Using a redirect instead of a forward worked, but resulted in the client talking to the server twice (once to get redirected, a second time for the actual call). Cross-context forwarding is supported by the UrlRewriteFilter and Tomcat, but I'm not sure WebSphere or WebLogic support it. The best solution is probably to change the URLs dynamically at runtime, possibly using some sort of deferred binding technique.
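
One simpler way such a runtime approach could look in the GWT client (a sketch of the idea only, not deferred binding and not something we shipped; the class is hypothetical):

import com.google.gwt.user.client.Window;

public final class ServiceUrls {

    private ServiceUrls() {
    }

    // derive the services base URL from wherever the client was loaded, so the
    // same compiled module works whether the WARs are merged or installed
    // separately on the same container
    public static String servicesBase() {
        String origin = Window.Location.getProtocol() + "//"
                + Window.Location.getHost();
        return origin + "/product-services/";
    }
}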

Testing with Cargo and Selenium
Once I had everything merged, I wanted to configure Cargo and Selenium to allow testing both distribution types. If I installed all 3 WARs at the same time, the "product-services" WAR would be used by both product-client.war and product.war, so I had to use profiles to allow installing either the single merged WAR or both WARs. Below is the profile I used for starting Cargo, deploying the merged WAR, starting Selenium RC and running Selenium tests.

<properties>
    <cargo.container>tomcat6x</cargo.container>
    <cargo.container.url>
        http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.20/bin/apache-tomcat-6.0.20.zip
    </cargo.container.url>
    <cargo.host>localhost</cargo.host>
    <cargo.port>23433</cargo.port>
    <cargo.wait>false</cargo.wait>
    <cargo.version>1.0</cargo.version>

    <!-- *safari and *iexplore are additional options -->
    <selenium.browser>*firefox</selenium.browser>
</properties>
...
<profile>
    <id>itest-bamboo</id>
    <activation>
        <activeByDefault>false</activeByDefault>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.cargo</groupId>
                <artifactId>cargo-maven2-plugin</artifactId>
                <version>${cargo.version}</version>
                <configuration>
                    <wait>${cargo.wait}</wait>
                    <container>
                        <containerId>${cargo.container}</containerId>
                        <log>${project.build.directory}/${cargo.container}/cargo.log</log>
                        <zipUrlInstaller>
                            <url>${cargo.container.url}</url>
                            <installDir>${installDir}</installDir>
                        </zipUrlInstaller>
                    </container>
                    <configuration>
                        <home>${project.build.directory}/${cargo.container}/container</home>
                        <properties>
                            <cargo.hostname>${cargo.host}</cargo.hostname>
                            <cargo.servlet.port>${cargo.port}</cargo.servlet.port>
                        </properties>
                        <!-- Deploy as ROOT since XHR requests are made to /product-services -->
                        <deployables>
                            <deployable>
                                <properties>
                                    <context>ROOT</context>
                                </properties>
                            </deployable>
                        </deployables>
                    </configuration>
                </configuration>
                <executions>
                    <execution>
                        <id>start-container</id>
                        <phase>pre-integration-test</phase>
                        <goals>
                            <goal>start</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>stop-container</id>
                        <phase>post-integration-test</phase>
                        <goals>
                            <goal>stop</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>selenium-maven-plugin</artifactId>
                <version>1.0</version>
                <executions>
                    <execution>
                        <phase>pre-integration-test</phase>
                        <goals>
                            <goal>start-server</goal>
                        </goals>
                        <configuration>
                            <background>true</background>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>test</goal>
                        </goals>
                        <configuration>
                            <excludes>
                                <exclude>none</exclude>
                            </excludes>
                            <includes>
                                <include>**/*SeleniumTest.java</include>
                            </includes>
                            <systemProperties>
                                <property>
                                    <name>selenium.browser</name>
                                    <value>${selenium.browser}</value>
                                </property>
                                <property>
                                    <name>cargo.port</name>
                                    <value>${cargo.port}</value>
                                </property>
                            </systemProperties>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

This profile is run by our Bamboo nightly tests with mvn install -Pitest-bamboo. The 2nd profile I added doesn't install the project's WAR, but instead installs the two separate WARs. Running mvn install -Pitest-bamboo,multiple-wars executes the Selenium tests against the multi-WAR distribution.

<profile>
    <id>multiple-wars</id>
    <activation>
        <activeByDefault>false</activeByDefault>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.cargo</groupId>
                <artifactId>cargo-maven2-plugin</artifactId>
                <version>${cargo.version}</version>
                <configuration>
                    <configuration>
                        <home>${project.build.directory}/${cargo.container}/container</home>
                        <properties>
                            <cargo.hostname>${cargo.host}</cargo.hostname>
                            <cargo.servlet.port>${cargo.port}</cargo.servlet.port>
                        </properties>
                        <deployables>
                            <deployable>
                                <groupId>com.company.app</groupId>
                                <artifactId>product-client</artifactId>
                                <pingURL>http://${cargo.host}:${cargo.port}/product-client/index.html</pingURL>
                                <type>war</type>
                                <properties>
                                    <context>/product-client</context>
                                </properties>
                            </deployable>
                            <deployable>
                                <groupId>com.company.app</groupId>
                                <artifactId>product-services</artifactId>
                                <pingURL>
                                    http://${cargo.host}:${cargo.port}/product-services/index.jspx
                                </pingURL>
                                <type>war</type>
                                <properties>
                                    <context>/product-services</context>
                                </properties>
                            </deployable>
                        </deployables>
                    </configuration>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

I won't be including any information on authoring Selenium tests because there are already many good references on the topic. I encourage you to check them out if you're looking for Selenium testing techniques.

Summary
This article has shown you how I used Maven, war overlays and the UrlRewriteFilter to create different distributions of a SOFEA application. I'm still not sure which packaging mechanism (1 WAR vs. 2) is best, but it's nice to know there are options. If you package and distribute SOFEA applications, I'd love to hear about your experience in this area.

Posted in Java at Oct 06 2009, 01:17:38 AM MDT 2 Comments

Building LinkedIn's Next Generation Architecture with OSGi by Yan Pujante

This week, I'm attending the Colorado Software Summit in Keystone, Colorado. Below are my notes from an OSGi at LinkedIn presentation I attended by Yan Pujante.

LinkedIn was created in March of 2003. Today there are close to 30M members. In the first 6 months, 60,000 members signed up. Now, 1 million sign up every 2-3 weeks. LinkedIn is profitable with 5 revenue lines, and there are 400 employees in Mountain View.

Technologies: 2 datacenters (~600 machines). SOA with Java, Tomcat, Spring Framework, Oracle, MySQL, Servlets, JSP, Cloud/Graph, OSGi. Development is done on Mac OS X, production is on Solaris.

The biggest challenges for LinkedIn are:

  • Growing Engineering Team on a monolithic code base (it's modular, but only one source tree)
  • Growing Product Team wanting more and more features faster
  • Growing Operations Team deploying more and more servers

The front-end has many business-logic services in one webapp (in Tomcat). The backend is many WARs in 5 containers (in Jetty), with 1 service per WAR. Production and development environments are very different. The total number of services in the backend is close to 100; the front-end has 30-40.

Container Challenges
1 WAR with N services does not scale for developers (conflicts, monolithic). N WARs with 1 service does not scale for containers (no shared JARs). You can add containers, but there's only 12GB of RAM available.

Upgrading a back-end service to a new version requires downtime (the hardware load-balancer does not account for versions). Upgrading a front-end service to a new version requires a redeploy. Adding new backend services is also painful because there's lots of configuration (load-balancer, IPs, etc.).

Is there a solution to all these issues? Yan believes that OSGi is a good one. OSGi stands for Open Services Gateway initiative, though today that term doesn't really mean anything; it's a spec with several implementations: Equinox (Eclipse), Knopflerfish and Felix (Apache).

OSGi has some really, really good features. These include smart class loading (multiple versions of the same JAR are OK), it's highly dynamic (deploy/undeploy is built in), it has a service registry, and it's highly extensible/configurable. An OSGi bundle is simply a JAR file with an OSGi manifest.

In LinkedIn's current architecture, services are exported with Spring/RPC and services in same WAR can see each other. The problem with this architecture comes to light when you want to move services to a 2nd web container. You cannot share JARs and can't talk directly to the other web app. With OSGi, the bundles (JARs) are shared, services are shared and bundles can be dynamically replaced. OSGi solves the container challenge.

One thing missing from OSGi is allowing services to live on multiple containers. To solve this, LinkedIn has developed a way to have Distributed OSGi. Multicast is used to see what's going on in other containers. Remote servers use the OSGi lifecycle and create dynamic proxies to export services using Spring RPC over HTTP. Then this service is registered in the service registry on the local server.

With Distributed OSGi, there's no more N-1 / 1-N problem. Libraries and services can be shared in one container (memory footprint is much smaller). Services can be shared across containers. The location of the services is transparent to the clients. There's no more configuration to change when adding/removing/moving services. This architecture allows the software to be the load balancer instead of using a hardware load balancer.
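
To make the export/register idea more concrete, here's a sketch of the general mechanism using Spring's HTTP invoker and the OSGi service registry. This is not LinkedIn's code; the MemberService interface and the importer class are hypothetical, for illustration only:

import org.osgi.framework.BundleContext;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

// MemberService is a hypothetical service interface, just for illustration
interface MemberService {
    int getConnectionCount(long memberId);
}

public class RemoteServiceImporter {

    public void importService(BundleContext bundleContext, String serviceUrl) {
        // proxy the remote service over Spring's HTTP invoker...
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl(serviceUrl);
        proxy.setServiceInterface(MemberService.class);
        proxy.afterPropertiesSet();

        // ...and register the proxy locally so clients look it up like any
        // other OSGi service, unaware that it lives in another container
        bundleContext.registerService(
                MemberService.class.getName(), proxy.getObject(), null);
    }
}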

Unfortunately, everything is not perfect, and OSGi has quite a few problems. OSGi is great, but the tooling is not quite there yet. Not every library is a bundle, and many JARs don't have OSGi manifests. OSGi was designed for embedded devices, and using it on the server side is very recent (but very active).

OSGi is pretty low-level, but there is some work being done to hide the complexity. Spring DM helps, as do vendor containers. SpringSource has Spring dm Server, Sun has GlassFish, and Paremus has Infiniflow. OSGi is container-centric; the next version will add distributed OSGi, but it will have no support for load balancing.

Another big OSGi issue is version management. If you specify version=1.0.0, it means [1.0.0, ∞]. You should at least use version=[1.0.0,2.0.0]. When using OSGi, you have to be careful and follow something similar to Apache's APR Project Versioning Guidelines so that you can easily identify release compatibility.

At LinkedIn, the OSGi implementation is progressing, but there's still a lot of work to do. First of all, a bundle repository needs to be created. Ivy and bnd are used to generate bundles. Containers are being evaluated, and Infiniflow is most likely the one that will be used. LinkedIn Spring (an enhanced version of Spring) and SCA will be used to deploy composites. Work on load balancing and distribution is in progress, as is work on tooling and build integration (Sigil from Paremus).

In conclusion, LinkedIn will definitely use OSGi but they'll do their best to hide the complexity from the build system and from developers. For more information on OSGi at LinkedIn, stay tuned to the LinkedIn Engineering Blog. Yan has promised to blog about some of the challenges LinkedIn has faced and how he's fixed them.

Update: Yan has posted his presentation on the LinkedIn Engineering Blog.

Posted in Java at Oct 20 2008, 11:20:09 AM MDT 8 Comments

Jetty and Resin closing in on Tomcat's popularity

From Greg Wilkins' Jetty Improves in Netcraft survey (again):

As with most open source projects, it's very hard to get a measure of who/how/where/why Jetty is being used and deployed. Downloads long ago became meaningless with the advent of many available bundling and distribution channels. The Netcraft Web Survey is one good measure, as it scans the internet and identifies which server sites run. In the results released April 2008, Jetty is identified for 278,501 public servers, which is 80% of the market share of our closest "competitor" Tomcat (identified as Coyote in the survey). Jetty is currently 12th in the league table of identified servers of all types and will be in the top 10 in 6 months if the current trajectory continues.

If you look at the Netcraft numbers, you might also notice that Resin isn't far behind Jetty. If you look at the Indeed Job Trends graphs for the three, there seems to be some interesting information there too. The first graph is absolute and the second is relative.

If you're using Spring Dynamic Modules to deploy a web application, which server do you think is better? Both Tomcat 6 and Jetty 6 seem to work just fine in Equinox.

Posted in Java at Apr 11 2008, 08:42:48 AM MDT 4 Comments

GlassFish 2 vs. Tomcat 6

In Switched, Dave says:

Now that Glassfish V2 is out I'm switching from Tomcat to Glassfish for all of my development. It's more than fast enough. With Glassfish on my MacBook Pro, Roller restart time is about 8 seconds compared to 16 with Tomcat. And the quality is high; the admin console, the asadmin command-line utility and the docs are all excellent. The dog food is surprisingly tasty ;-)

I did some brief and very non-scientific performance comparisons myself:

Startup Time with no applications deployed:

  • Tomcat 6: 3 seconds
  • GlassFish 2: 8 seconds

Startup Time with AppFuse 2.0 (Struts + Hibernate version) as a WAR

  • Tomcat 6: 15 seconds
  • GlassFish 2: 16 seconds

Environment:

  • JAVA_OPTS="-Xms768M -Xmx768M -XX:PermSize=512m -XX:MaxPermSize=512m -Djava.awt.headless=true -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+UseConcMarkSweepGC -server"
  • OS X 10.4.10, 2.2 GHz Intel Core 2 Duo, 4 GB 667 MHz DDR2 SDRAM

Since this was a very non-scientific experiment, it's possible the last two are actually the same. It's strange that Dave is seeing Roller start up twice as fast on GlassFish. Maybe they've done some Roller deployment optimization?

I realize startup times aren't that important. However, as Dave mentions, they (and context reloading) can be extremely important when developing.

Update: I got to thinking that Dave is probably referring to context reloading. Here's a comparison of how long it takes for both servers to pick up a new WAR (and start the application) when it's dropped into their autodeploy directories.

  • Tomcat 6: 14-16 seconds
  • GlassFish 2: 9 seconds

The strange thing about Tomcat is it takes 6-8 seconds to recognize that a new WAR has been deployed. Does Tomcat have a polling interval that can be tuned during development?

Regardless, it's impressive that the GlassFish guys have made things that much faster for developers. Nice work folks!

These days, I try to use mvn jetty:run on projects. Then I don't have to worry about deploying, just save and wait for the reload. Time to wait for AppFuse 2.0 to reload using the Maven Jetty Plugin (version 6.1.5)? 7 seconds. Of course, it'd be nice if I could somehow get this down to 1 or 2 seconds.

Maybe Dave should use the Maven integration for Roller to decrease his reload times. ;-)

Posted in Java at Sep 19 2007, 04:55:31 PM MDT 18 Comments

Proposed Tomcat Enhancement: Add flag to escape JSP's EL by default

I posted the following to the Tomcat Developers mailing list. Unfortunately, it didn't get any responses, which means (to me) that no one cares about this feature. I guess the good thing is they didn't veto it.

Hello all,

I'm working for a client that's using a proprietary Servlet/JSP-based framework that runs on Tomcat. They have their own custom JSP compiler and they're looking to move to a standard JSP compiler. One of the things their compiler supports is automatic escaping of XML in expressions. For example, ${foo} would be escaped so <body> -> &lt;body&gt;. JSP EL does not do this. It *doesn't* escape by default and instead requires you to wrap your expressions with <c:out/> if you want escaping.

I'd like to ask what developers think about adding a flag (similar to trimSpaces in conf/web.xml) that allows users to change the escaping behavior from false to true?

I think this is a good option to have as it allows security-conscious organizations to be paranoid and escape all content by default.

Thanks,

Matt

Related: http://raibledesigns.com/rd/entry/java_web_frameworks_and_xss

What do you think? Is there anything wrong with adding this (optional) feature to Tomcat? Enhancing security is a good thing - right?

Update: I've entered an enhancement request for this feature and attached a patch.

Posted in Java at Sep 19 2007, 04:29:11 PM MDT 12 Comments