
Running Selenium Tests on Sauce Labs

Recently I embarked on a mission to configure my team's Selenium testing process to support multiple browsers. We use Hudson for our continuous integration server. Since our Hudson instance runs on Solaris, testing with Firefox on Solaris didn't seem like a good representation of our clients. Our browser support matrix currently looks as follows:

Supported:
  Windows          IE 7.x and 8.x, Firefox 2.x and 3.x
  Mac              Safari 3.x and 4.x

Best Effort:
  Windows and Mac  Chrome 4.x

At first, I attempted to use Windows VMs to run Selenium tests on IE. This solution didn't work out well. The major reasons it didn't work:

  1. I had issues getting the Selenium Plugin for Hudson working. Upgrading the plugin to use Selenium RC 1.0.5 may solve this issue.
  2. We had some unit tests that failed on Windows. I tried using the Cygpath Plugin for Hudson (which allows you to emulate a Unix environment on Windows), but failed to get it to work.
  3. We quickly realized it might become a maintenance nightmare to keep all the different VMs up-to-date.

Frustrated by these issues, I turned to Sauce Labs. They have a cloud-based model that runs Selenium tests on VMs that point back to your application. They also support many different browser/OS combinations. We asked them about support for OS X and various Windows versions and they indicated that their experience shows browsers are the same across OSes.

I'm writing this article to show you how we've configured our build process to support 1) testing locally and 2) testing on Sauce Labs. In a future post, I hope to write about how to run Selenium tests concurrently for faster execution.

Running Selenium Tests Locally
We use Maven to build our project and run our Selenium tests. Our configuration is very similar to the poms referenced in Integrating Selenium with Maven 2. Basically, we have an "itest" profile that gets invoked when we pass in -Pitest. It downloads/starts Tomcat (using Cargo), deploys our WAR, starts Selenium RC (using the selenium-maven-plugin) and executes JUnit-based tests using the maven-surefire-plugin. All of this configuration is pretty standard and something I've used on many projects over the past several years.
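
For reference, the "itest" profile looks roughly like the following. This is a trimmed-down sketch rather than our exact configuration - container details, the Cargo deployable section and versions are omitted or assumed, and the real profile also starts Xvfb and honors the selenium.server.skip/xvfb.skip flags mentioned later in this post:

<profile>
    <id>itest</id>
    <build>
        <plugins>
            <!-- Start/stop Tomcat and deploy the WAR around the integration-test phase -->
            <plugin>
                <groupId>org.codehaus.cargo</groupId>
                <artifactId>cargo-maven2-plugin</artifactId>
                <executions>
                    <execution>
                        <id>start-container</id>
                        <phase>pre-integration-test</phase>
                        <goals><goal>start</goal></goals>
                    </execution>
                    <execution>
                        <id>stop-container</id>
                        <phase>post-integration-test</phase>
                        <goals><goal>stop</goal></goals>
                    </execution>
                </executions>
            </plugin>
            <!-- Start Selenium RC in the background before the tests run -->
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>selenium-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>pre-integration-test</phase>
                        <goals><goal>start-server</goal></goals>
                        <configuration>
                            <background>true</background>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- Run the JUnit-based Selenium tests during integration-test -->
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <executions>
                    <execution>
                        <id>selenium-tests</id>
                        <phase>integration-test</phase>
                        <goals><goal>test</goal></goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>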

Beyond that, we have a custom BlockJUnit4ClassRunner class that takes screenshots and captures the HTML source for tests that fail.

import org.junit.internal.runners.statements.InvokeMethod;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;

public class SeleniumJUnitRunner extends BlockJUnit4ClassRunner {
    public SeleniumJUnitRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected Statement methodInvoker(FrameworkMethod method, Object test) {
        if (!(test instanceof AbstractSeleniumTestCase)) {
            throw new RuntimeException("Only works with AbstractSeleniumTestCase");
        }

        final AbstractSeleniumTestCase stc = ((AbstractSeleniumTestCase) test);
        stc.setDescription(describeChild(method));

        // Wrap the normal test invocation so failures trigger a screenshot
        // and an HTML source capture before the exception is re-thrown.
        return new InvokeMethod(method, test) {
            @Override
            public void evaluate() throws Throwable {
                try {
                    super.evaluate();
                } catch (Throwable throwable) {
                    stc.takeScreenshot("FAILURE");
                    stc.captureHtmlSource("FAILURE");
                    throw throwable;
                }
            }
        };
    }
}

To use the functionality SeleniumJUnitRunner provides, we have a parent class for all our tests. This class uses the @RunWith annotation as follows:

@RunWith(SeleniumJUnitRunner.class)
public abstract class AbstractSeleniumTestCase {
    // convenience methods
}
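
For completeness, here's roughly what those convenience methods look like. This is a sketch rather than our exact code: the target/selenium output directory and file naming are assumptions, Description comes from org.junit.runner, FileUtils comes from Commons IO, and the selenium field is the one shown below.

private Description description;

public void setDescription(Description description) {
    this.description = description;
}

public void takeScreenshot(String label) {
    // Saves a PNG of the current browser window via Selenium RC
    selenium.captureScreenshot("target/selenium/"
            + description.getMethodName() + "-" + label + ".png");
}

public void captureHtmlSource(String label) throws IOException {
    // Dumps the current page's HTML source to a file for post-mortem debugging
    FileUtils.writeStringToFile(
            new File("target/selenium/" + description.getMethodName() + "-" + label + ".html"),
            selenium.getHtmlSource());
}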

This class looks up the Selenium RC Server, the app location and what browser to use based on system properties. If system properties are not set, it has defaults for running locally.

public static String SERVER = System.getProperty("selenium.server");
public static String APP = System.getProperty("selenium.application");
public static String BROWSER = System.getProperty("selenium.browser");

protected Selenium selenium;

@Before
public void setUp() throws Exception {
    if (SERVER == null) {
        SERVER = "localhost";
    }

    if (BROWSER == null) {
        BROWSER = "*firefox3";
    }

    if (APP == null) {
        APP = "http://localhost:9000";
    }

    selenium = new DefaultSelenium(SERVER, 4444, BROWSER, APP);
    selenium.start("captureNetworkTraffic=true");
    selenium.getEval("window.moveTo(1,1); window.resizeTo(1021,737);");
    selenium.setTimeout("60000");
}

The system properties are specified as part of the surefire-plugin's configuration. The reason we default them in the above code is so tests can be run from IDEA as well.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.5</version>
    <configuration>
        <systemPropertyVariables>
            <selenium.application>${selenium.application}</selenium.application>
            <selenium.browser>${selenium.browser}</selenium.browser>
            <selenium.server>${selenium.server}</selenium.server>
        </systemPropertyVariables>
    </configuration>
</plugin>

Running Selenium Tests in the Cloud
To run tests in the cloud, you have to do a bit of setup first. If you're behind a firewall, you'll need to set up SSH tunneling so Sauce Labs can see your machine. You'll also need to set up SSH tunneling on your Hudson server, but installing/configuring/running locally is usually a good first step. Below are the steps I used to configure Sauce Labs' SSH tunneling on OS X.

1. Install the Python version of the Sauce Labs tunnel in /opt/tools/saucelabs. If you get an error (No local packages or download links found for install), download the setuptools egg and run it with:

sudo sh setuptools-0.6c11-py2.6.egg

NOTE: If you get an error (unable to execute gcc-4.2: No such file or directory) when installing pycrypto on OS X, you'll need to install the OS X Developer Tools.

2. Create a /opt/tools/saucelabs/local.sh script with the following in it. You should change the last parameter to use your username (instead of mraible) since Sauce Labs uses unique tunnel names.

python tunnel.py {sauce.username} {sauce.key} localhost 9000:80 mraible.local

3. Start the tunnel by executing local.sh. You should see output similar to the following.

$ sh local.sh 
/System/../Python.framework/../2.6/../twisted/internet/_sslverify.py:5: DeprecationWarning: the md5 module is deprecated; use hashlib instead
 import itertools, md5
/System/../Python.framework/../2.6/../twisted/conch/ssh/keys.py:13: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
 import sha, md5
Launching tunnel ... 
Status: new
Status: booting
Status: running
Tunnel host: ec2-75-101-216-8.compute-1.amazonaws.com
Tunnel ID: 70f15fb59d2e7ebde55a6274ddfa54dd
<sshtunnel.TunnelTransport instance at 0x10217ad88> created
requesting remote forwarding for tunnel 70f15fb59d2e7ebde55a6274ddfa54dd 80=>localhost:9000
accepted remote forwarding for tunnel 70f15fb59d2e7ebde55a6274ddfa54dd 80=>localhost:9000

After setting up the SSH Tunnel, I modified AbstractSeleniumTestCase's setUp() method to allow running tests on Sauce Labs.

@Before
public void setUp() throws Exception {
    if (SERVER == null) {
        SERVER = "localhost";
    }

    if (BROWSER == null) {
        BROWSER = "*firefox3";
    } else if (BROWSER.split(":").length == 3) {
        String[] platform = BROWSER.split(":");

        String os = platform[0];
        String browser = platform[1];

        // if Google Chrome, don't use a version #
        String version = (platform[1].equals("googlechrome") ? "" : platform[2]);
        String printableVersion = ((version.length() > 0) ? " " + platform[2].charAt(0) : "");

        String jobName = description.getMethodName() + " [" + browser + printableVersion + "]";

        BROWSER = "{\"username\":\"{your-username}\",\"access-key\":\"{your-access-key}\"," +
                "\"os\":\"" + platform[0] + "\",\"browser\": \"" + platform[1] + "\"," +
                "\"browser-version\":\"" + version + "\"," +
                "\"job-name\":\"" + jobName + "\"}";

        log.debug("Testing with " + browser + printableVersion + " on " + os);
    }

    if (APP == null) {
        APP = "http://localhost:9000";
    }

    selenium = new DefaultSelenium(SERVER, 4444, BROWSER, APP);
    selenium.start("captureNetworkTraffic=true");
    selenium.getEval("window.moveTo(1,1); window.resizeTo(1021,737);");
    selenium.setTimeout("60000");
}

After making this change, I was able to run Selenium tests from IDEA using the following steps:

  1. Start Jetty on port 9000 (since that's what the tunnel points to). In IDEA's Maven panel, create a run/debug configuration for jetty:run, click the "Runner" tab and enter "-Djetty.port=9000" in the VM Parameters box.
  2. Right-click on the test to run and create a run/debug configuration. Enter the following in the VM Parameters box. The last two parameters allow skipping the xvfb and Selenium RC startup process.
    -Dselenium.browser="Windows 2003:iexplore:8." -Dselenium.application=mraible.local -Dselenium.server=saucelabs.com -Dxvfb.skip=true -Dselenium.server.skip=true

These same parameters can be used if you want to run all tests from the command line:

mvn install -Pitest -Dselenium.browser="Windows 2003:iexplore:8." -Dselenium.application=mraible.local -Dselenium.server=saucelabs.com -Dxvfb.skip=true -Dselenium.server.skip=true -Dcargo.port=9000

To simplify things, we create profiles for the various browsers. For example, below are profiles for IE8 and Firefox 3.6.

<profile>
    <id>firefox-win</id>
    <properties>
        <cargo.port>9000</cargo.port>
        <selenium.application>http://${user.name}.local</selenium.application>
        <selenium.browser>Windows 2003:firefox:3.6.</selenium.browser>
        <selenium.server>saucelabs.com</selenium.server>
        <selenium.server.skip>true</selenium.server.skip>
        <xvfb.skip>true</xvfb.skip>
    </properties>
</profile>
<profile>
    <id>ie-win</id>
    <properties>
        <cargo.port>9000</cargo.port>
        <selenium.application>http://${user.name}.local</selenium.application>
        <selenium.browser>Windows 2003:iexplore:8.</selenium.browser>
        <selenium.server>saucelabs.com</selenium.server>
        <selenium.server.skip>true</selenium.server.skip>
        <xvfb.skip>true</xvfb.skip>
    </properties>
</profile>

Issues
Since we've started using Sauce Labs, we've run into a number of issues. Some of these are Selenium-related and some are simply things we learned since we started testing on multiple browsers.

  • SSH Tunnels Keep Restarting: This happens on our Hudson server, which runs the tunnels as a service. It seems to happen daily and screws up our Hudson results because builds fail.
  • XPath vs. CSS Selectors: One of the first things we noticed was that our IE tests were 2-3 times slower than the same tests on Firefox. We discovered this is because Internet Explorer has a very slow XPath engine. To fix this issue, it's recommended that ids or CSS selectors be used whenever locating elements (see the locator example after this list). For more information on CSS selectors and Selenium, see CSS Selectors in Selenium Demystified. To test CSS selectors, I found Firefinder to be a very useful Firefox plugin. Note that many pseudo elements won't work in IE.
  • IE7 fails to initialize on Sauce Labs: There are no errors in our JUnit reports, so we're not sure what's causing this. It could very well be bugs in our code/configuration, but IE8 works fine.
  • The job names on Sauce Labs don't get set correctly, which often results in duplicate job names. This could certainly be related to my code. Finding videos that show failed tests is difficult when the job names aren't set correctly.
  • It would be slick if you could download the video of a failed test, similar to what we do by taking screenshots.
  • Google Chrome works on Sauce Labs, but I'm unable to get it working locally (on Windows or OS X). This seems to be a Selenium issue.
  • Safari 4 works, but when it fails, the screenshot shows a "Safari can't find the file" error. Since there's no real error to debug, it's difficult to figure out why the test fails. Since Safari 4 is not listed among the platforms supported by Selenium, I'm unsure how to fix this.
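
To illustrate the XPath-to-CSS point above, here's the kind of change we make. The element id and class below are made up for the example:

// XPath locator - functionally fine, but painfully slow in IE's XPath engine
selenium.click("//form[@id='login']//button[contains(@class, 'submit')]");

// Equivalent CSS locator - much faster in IE
selenium.click("css=form#login button.submit");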

Overall, Sauce Labs seems to work pretty well. However, in the process of messing with Hudson, build agents and Selenium infrastructure, it's become readily apparent that we need a team member to devote their full attention to it. Having a developer or two work on it every now and then is inefficient, especially when we're still in the process of ironing everything out and making it all stable.

If you have any tips on how you've solved issues with Sauce Labs (ssh tunnels, IE7) or Selenium (Safari 4, Google Chrome), I'd love to hear them. I'm also interested to hear from anyone with experience running Selenium tests concurrently (locally or in the cloud).

Update: I discovered a bug in AbstractSeleniumTestCase's setUp() method where job names weren't being set correctly. I've since changed the code in this class to the following:

private static String browser, printableVersion;

@BeforeClass
public static void parseBrowser() {

    if (BROWSER == null) {
        BROWSER = "*firefox3";
    } else if (BROWSER.split(":").length == 3) {
        String[] platform = BROWSER.split(":");

        String os = platform[0];
        browser = platform[1];

        // if Google Chrome, don't use a version #
        String version = (platform[1].equals("googlechrome") ? "" : platform[2]);
        printableVersion = ((version.length() > 0) ? " " + platform[2].charAt(0) : "");

        BROWSER = "{\"username\":\"{your-username}\",\"access-key\":\"{your-access-key}\"," +
                "\"os\":\"" + os + "\",\"browser\": \"" + browser + "\"," +
                "\"browser-version\":\"" + version + "\", " +
                "\"job-name\": \"jobName\"}";
    }
}

@Before
public void setUp() throws Exception {
    if (SERVER == null) {
        SERVER = "localhost";
    }

    if (APP == null) {
        APP = "http://localhost:9000";
    }

    String seleniumBrowser = BROWSER;
    if (BROWSER.startsWith("{")) { // sauce labs
        String jobName = description.getMethodName() + " [" + browser + printableVersion + "]";
        log.debug("=> Running job: " + jobName);

        seleniumBrowser = BROWSER.replace("jobName", jobName);
    }

    selenium = new DefaultSelenium(SERVER, 4444, seleniumBrowser, APP);
    selenium.start("captureNetworkTraffic=true");
    selenium.getEval("window.moveTo(1,1); window.resizeTo(1021,737);");
    selenium.setTimeout("60000");
}

Posted in Java at Jun 06 2010, 07:50:20 PM MDT 4 Comments

Versioning Static Assets with UrlRewriteFilter

A few weeks ago, a co-worker sent me an interesting email after talking with the Zoompf CEO at JSConf.

One interesting tip mentioned was how we querystring the version on our scripts and css. Apparently this doesn't always cache the way we expected it would (some proxies will never cache an asset if it has a querystring). The recommendation is to rev the filename itself.

This article explains how we implemented a "cache busting" system in our application with Maven and the UrlRewriteFilter. We originally used a querystring in our implementation, but switched to versioned filenames after reading Souders' recommendation. That part was figured out by my esteemed colleague Noah Paci.

Our Requirements

  • Make the URL include a version number for each static asset URL (JS, CSS and SWF) that serves to expire a client's cache of the asset.
  • Insert the version number into the application so the version number can be included in the URL.
  • Use a random version number when in development mode (based on running without a packaged WAR) so that developers will not need to clear their browser cache when making changes to static resources. The random version number should match the production version number format, which is currently: x.y-SNAPSHOT-revisionNumber
  • When running in production, the version number/cachebust is computed once (when a Filter is initialized). In development, a new cachebust is computed on each request.

In our app, we're using Maven, Spring and JSP, but the latter two don't really matter for the purposes of this discussion.

Implementation Steps
1. First we added the buildnumber-maven-plugin to our project's pom.xml so the build number is calculated from SVN.

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>buildnumber-maven-plugin</artifactId>
    <version>1.0-beta-4</version>
    <executions>
        <execution>
            <phase>validate</phase>
            <goals>
                <goal>create</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <doCheck>false</doCheck>
        <doUpdate>false</doUpdate>
        <providerImplementations>
            <svn>javasvn</svn>
        </providerImplementations>
    </configuration>
</plugin>

2. Next we used the maven-war-plugin to add these values to our WAR's MANIFEST.MF file.

<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.0.2</version>
    <configuration>
        <archive>
            <manifest>
                <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
            </manifest>
            <manifestEntries>
                <Implementation-Version>${project.version}</Implementation-Version>
                <Implementation-Build>${buildNumber}</Implementation-Build>
                <Implementation-Timestamp>${timestamp}</Implementation-Timestamp>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>
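
With that in place, the WAR's MANIFEST.MF ends up with entries along these lines (the values shown are made up for illustration):

Implementation-Version: 1.0-SNAPSHOT
Implementation-Build: 4321
Implementation-Timestamp: 20100604-0927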

3. Then we configured a Filter to read the values from this file on startup. If this file doesn't exist, a default version number of "1.0-SNAPSHOT-{random}" is used. Otherwise, the version is calculated as ${project.version}-${buildNumber}.

private String buildNumber = null;

...
@Override
public void initFilterBean() throws ServletException {
    try {
        InputStream is = 
            servletContext.getResourceAsStream("/META-INF/MANIFEST.MF");
        if (is == null) {
            log.warn("META-INF/MANIFEST.MF not found.");
        } else {
            Manifest mf = new Manifest();
            mf.read(is);
            Attributes atts = mf.getMainAttributes();
            buildNumber = atts.getValue("Implementation-Version") + "-" + atts.getValue("Implementation-Build");
            log.info("Application version set to: " + buildNumber);
        }
     } catch (IOException e) {
        log.error("I/O Exception reading manifest: " + e.getMessage());
     }
}

...

    // If there was a build number defined in the war, then use it for
    // the cache buster. Otherwise, assume we are in development mode 
    // and use a random cache buster so developers don't have to clear 
    // their browser cache.
    requestVars.put("cachebust", buildNumber != null ? buildNumber : "1.0-SNAPSHOT-" + new Random().nextInt(100000));

4. We then used the "cachebust" variable and appended it to static asset URLs as indicated below.

<c:set var="version" scope="request" 
    value="${requestScope.requestConfig.cachebust}"/>
<c:set var="base" scope="request"
    value="${pageContext.request.contextPath}"/>

<link rel="stylesheet" type="text/css" 
    href="${base}/v/${version}/assets/css/style.css" media="all"/>

<script type="text/javascript" 
    src="${base}/v/${version}/compressed/jq.js"></script>

The injection of /v/[CACHEBUSTINGSTRING]/(assets|compressed) eventually has to map back to the actual asset, which does not include the first two elements of the URI, so the application must strip those elements off. To do this, we use the UrlRewriteFilter (instead of Apache's mod_rewrite) so that developers running locally (using mvn jetty:run) don't have to configure Apache.

5. In our application, "/compressed/" is mapped to wro4j's WroFilter. In order to get UrlRewriteFilter and WroFilter to work with this setup, the WroFilter has to accept FORWARD and REQUEST dispatchers.

<filter-mapping>
    <filter-name>rewriteFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<filter-mapping>
    <filter-name>WebResourceOptimizer</filter-name>
    <url-pattern>/compressed/*</url-pattern>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>

Once this was configured, we added the following rules to our urlrewrite.xml to allow rewriting of any assets or compressed resource request back to its "correct" URL.

<rule match-type="regex">
    <from>^/v/[0-9A-Za-z_.\-]+/assets/(.*)$</from>
    <to>/assets/$1</to>
</rule>
<rule match-type="regex">
    <from>^/v/[0-9A-Za-z_.\-]+/compressed/(.*)$</from>
    <to>/compressed/$1</to>
</rule>
<rule>
    <from>/compressed/**</from>
    <to>/compressed/$1</to>
</rule>

Of course, you can also do this in Apache. This is what it might look like in your vhost.d file:

RewriteEngine    on
RewriteLogLevel  0
RewriteLog       /srv/log/apache22/app_rewrite_log
RewriteRule      ^/v/[.A-Za-z0-9_-]+/assets/(.*) /assets/$1 [PT]
RewriteRule      ^/v/[.A-Za-z0-9_-]+/compressed/(.*) /compressed/$1 [PT]

Whether it's a good idea to implement this in Apache or with the UrlRewriteFilter is up for debate. Since we're able to do it with the UrlRewriteFilter, the benefit of also doing it in Apache is questionable, especially since it duplicates the rewrite rules.

Posted in Java at Jun 04 2010, 09:27:42 AM MDT 4 Comments

C++, Java and .NET: Lessons Learned from the Internet Age

Today at TSSJS, I attended Cameron Purdy's keynote titled C++, Java and .NET: Lessons learned from the Internet Age, and What it means for the Cloud and Emerging Languages.

His talk was a retrospective on the trade-offs that Java, C# and other garbage-collected, VM-based languages made compared to C++, how scripting languages simultaneously thrived, and what this teaches us about the applicability of technology to emerging challenges and environments such as cloud computing. Why did Java become so successful? Some folks say it was marketed better, but it was Sun - so we know that couldn't have been it.

Cameron is the VP of Development for Oracle Fusion Middleware, responsible for the Coherence Data Grid product, which has Java, C# and C++ versions. Data grids are RAID for servers: they provide a reliable data tier with a single, consistent view of data and enable dynamic data capacity, including fault tolerance and load balancing. The servers cooperate together and act as an organism to manage their information.

When Java first came out, it very much looked like evolution. From a C++ programmer's perspective, Java was bloated.

Below are Cameron's Top 10 Reasons Why Java has been able to supplant C++. This started happening around 1996-97. Warning to Language Fanbois: Yes, I know there are 3rd party GC implementations that fix some of these issues.

10. Automated Garbage Collection: A significant portion of C++ code is dedicated to memory management. This meant faster time to market and lower bug count.

9. The Build Process: C++ builds are slow and complicated. Personal example: 20 hour full build in C++ compared to 7 minutes in Java.

8. Simplicity of Source Code and Artifacts: C++ splits source into header and implementation files. Artifacts are compiler-specific, but there are many of them. With Java, there's just one .java and just one .class.

7. Binary Standard: In addition to being loadable as a class by a JVM, a Java classfile can be used to compile against. Java defers platform-specific stuff to the runtime.

6. Dynamic Linking: No standard way to dynamically link classes in C++.

5. Portability: Java is portable with very little effort; C++ is portable in theory, but in practice you have to build another language (#ifdef'd types, etc.) on top of it. C++ has significant differences from vendor to vendor. Some unnamed major vendors have horrid support for the C++ standard, particularly templates.

4. Standard Type System: Java has specified, portable primitive types. C++ still has a hard time defining what a String is. Multi-threading? You must be joking. STL? Maybe some day. Basically nothing is standard!

3. Reflection: Full capability to inspect types and objects at runtime. C++ has optional RTTI, but no reflection. Enables extremely powerful generic frameworks. It gives you the ability to learn about, access and manipulate any object.

2. Performance: GC can make memory management much more efficient (slab allocators, escape analysis). This is because of modern architectures and the fact that Java can take advantage of multiple threads. Thread safe smart pointers in C++ are 3x slower than Java references. Hotspot can do massive inlining, which is very important for dealing with layers of virtual invocation.

1. Safety: Elimination of pointers (arbitrary memory access, ability to easily crash the process). With Java, there are no buffer overruns; code and data cannot be accidentally mixed.

Honorable Mention: C++ Templates. Next time someone complains about Java Generics, make them read C++ Templates. They're fugly and extremely bloated.

The Top 10 list of advantages C++ has over Java:

10. Startup Time: The graph of initially loaded class in Java is pretty large. Conclusion: Not good for "instant" and short-running processes.

9. Memory Footprint: Java uses significantly more memory than C++, particularly for "small" applications.

8. Full GC Pauses: Sooner or later, there is a part of GC that can't be run in the background and can't be avoided. This causes havoc for distributed processes and things like real-time financial systems.

7. No Deterministic Destruction: No support for RAII. Cannot count on finalizers. There's not even a "using" construct in Java.

6. Barriers to Native Integration: Operating Systems are built in C/C++. APIs are typically in C.

And that's all Cameron could come up with. Turns out it was only a top 5 list.

So why did the shift from C++ to Java and C# happen? Because Shift Happens. First of all, Al Gore built this internet thing and the World Wide Web. We built a couple of browsers with C++, but then we were done. Oh wait, we needed a web server too, so we built Apache. What about the other things? The things that run in the browser? There was no way we were going to run C++ in the browser because it was too unsafe. All the advantages that C++ had over Java didn't matter on the web. Startup time wasn't a concern when we left our app server running for months. Memory wasn't an issue because we had gigabytes of RAM on our machines.

What about scripting languages? All the areas where C++ might have an advantage, scripting languages jumped in. They offered simplicity and approachability (hooks up to database, manages state on behalf of the user, produces HTML), rapid application development (no OO architectural requirements, save and refresh).

So what about cloud computing? Can we take what we learned from Java and C++ and apply them to what we see coming down the pipe now with cloud computing? What are we missing? What are the advantages that Java would be missing in a cloud environment?

The things missing from the VM: modularity, lifecycle and isolation. Lower memory footprint and predictable GC pauses. Things missing from the platform: distributed system as a system, provisioning and metering, cloud operating systems APIs, persistence (including key/value) and Map/Reduce-style processing. Finally, the application definition is missing packaging, resource declaration and security in a shared environment.

What's changed in the world since Java was introduced? Hardware virtualization, stateful grid infrastructure and capacity on demand ISPs (EC2). What's coming in Java? Modularization, NIO pluggable file systems, JVM Bare Metal and Virtual Editions. Conclusion: Java either steps up or something else will.

This was an enjoyable talk to listen to and I very much enjoyed Cameron's humor and slide pictures that supported it. As Dusty said, Cameron has a pretty clear picture of what the Java Roadmap should look like. Let's hope Oracle is listening.

Posted in Java at Mar 18 2010, 01:36:28 PM MDT 4 Comments

Highly Interactive Software with Java and Flex

This morning at TSSJS, I attended James Ward's talk about Highly Interactive Software with Java and Flex. Below are my notes from his talk.

Applications have moved from mainframes (hard to deploy, limited clients) to client/server (hard to deploy, full client capabilities) to web applications (easy to deploy, limited clients) to rich internet applications (easy to deploy, full client capabilities).

Shortly after showing a diagram of how applications have changed, James showed a demo of a sample Flex app for an automobile insurance company. It was very visually appealing, kinda like using an iPhone app. It was a multi-form application that slides right-to-left as you progress through the wizard. It also allowed you to interact with a picture of your car (to indicate where the damage happened) and a map (to indicate how the accident happened). Both of these interactive dialogs still performed data entry, they just did it in more of a visual way.

Adobe's developer technology for building RIAs is Flex. There are two different languages in Flex: ActionScript and MXML. ActionScript was originally based on JavaScript, but now (in ActionScript 3) uses features from Java and C#. On top of ActionScript is MXML. It's a declarative language, but unlike JSP taglibs, all you can do with MXML is instantiate objects and set properties. It's merely a convenience language, but it also allows tooling. The open source SDK compiler takes Flex files and compiles them into a *.swf file. This file can then be executed using the Flash Player (in the browser) or AIR (on the desktop).

The reason Adobe developed two different runtimes was because they didn't want to bloat the Flash Player. Once the applications are running client-side, the application talks to the web server. Protocols that can be used for communication: SOAP, HTTP/S, AMF/S and RTMP/S. The web server can be composed of REST or SOAP Web Services, as well as BlazeDS or LC Data Services to talk directly to Java classes.

To see all the possible Flex components, see Tour de Flex. It contains a number of components: core components, data access controls, AIR capabilities, cloud APIs, data visualization. The IBM ILOG Elixir real-time dashboard is particularly interesting, as is Doug McCune's Physics Form.

Next James showed us some code. He used Flex Builder to create a new Flex project with BlazeDS. The backend for this application was a JSP page that talks to a database and displays the results in XML. In the main .mxml file, he used <s:HTTPService> with a URL pointing to the URI of the JSP. Then he added an <mx:DataGrid> and the data binding feature of Flex. To do this, he added dataProvider="{srv.lastResult.items.item}" to the DataGrid tag, where "srv" is the id of the HTTPService. Then he added a Button with click="srv.send()" and set the layout to VerticalLayout. This was a simple demo to show how to hook in a backend with XML.
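
I didn't capture James' exact code, but the first demo boiled down to something like the following MXML. The JSP URL and the items/item XML structure are placeholders:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx">

    <s:layout>
        <s:VerticalLayout/>
    </s:layout>

    <fx:Declarations>
        <!-- Points at the JSP that returns the items as XML -->
        <s:HTTPService id="srv" url="http://localhost:8080/app/items.jsp"/>
    </fx:Declarations>

    <s:Button label="Load data" click="srv.send()"/>
    <mx:DataGrid dataProvider="{srv.lastResult.items.item}"/>

</s:Application>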

To show that Flex can interact with more than XML over HTTP, James wrote a SOAP service and changed <s:HTTPService> to <s:WebService> and changed the "url" attribute to "wsdl" (and adjusted the value as appropriate). Then, rather than using {srv.lastResult.*}, he had to bind to a particular method and change it to {srv.getElements.lastResult}. The Button's click value also had to change to "srv.getElements(0, 2000)" (since the method takes 2 parameters).

After coding in Flex Builder, James switched to his Census benchmark application to compare server-execution times. In the first example (Flash XML AS), most of the time was spent gzipping the 1MB XML file, but the transfer time is reduced because of this. The server execution time is around 800ms. Compare this to the Flex AMF3 example, where the server execution time is 49ms. This is because the AMF (binary) protocol streamlines the data and doesn't include repeated metadata.

To integrate BlazeDS in your project, you add the dependencies and then map the MessageBrokerServlet in your web.xml. Then you use a services-config.xml to define the protocol and remoting-config.xml to define which Java classes to export as services. To use this in the Flex application, James changed <s:WebService> to <s:RemoteObject>. He changed the "wsdl" attribute to "endpoint" and added a "destination" attribute to specify the name of the aliased Java class to talk to. Next, James ran the demo and showed that he could change the number of rows from 2,000 to 20,000 and the load time was still much, much faster than the XML and SOAP versions.
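
For those who haven't wired up BlazeDS before, the plumbing James described looks roughly like this. The destination id and Java class name are made up for the example:

<!-- web.xml: route /messagebroker/* requests to BlazeDS -->
<servlet>
    <servlet-name>MessageBrokerServlet</servlet-name>
    <servlet-class>flex.messaging.MessageBrokerServlet</servlet-class>
    <init-param>
        <param-name>services.configuration.file</param-name>
        <param-value>/WEB-INF/flex/services-config.xml</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>MessageBrokerServlet</servlet-name>
    <url-pattern>/messagebroker/*</url-pattern>
</servlet-mapping>

<!-- remoting-config.xml: expose a Java class as an AMF destination -->
<destination id="census">
    <properties>
        <source>org.example.CensusService</source>
    </properties>
</destination>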

There's also a Spring BlazeDS Integration project that allows you to simply annotate beans to expose them as AMF services.

BlazeDS also includes a messaging service that you can use to create publishers and subscribers. The default channels in BlazeDS use HTTP streaming and HTTP long polling (Comet), but they are configurable (e.g., to use JMS). There's also an Adobe commercial product that keeps a connection open using NIO on the server and has a binary protocol. This is useful for folks that need more real-time data in their applications (e.g., trading floors).

I thought this was a really good talk by James. It had some really cool visual demos and the demo was interesting in showing how easy it was to switch between different web services and protocols. This afternoon, I'll be duking it out with James at the Flex vs. GWT Smackdown. If you have deficiencies of Flex you'd like me to share during that talk, please let me know.

Posted in Java at Mar 18 2010, 12:29:26 PM MDT 4 Comments

How We Hired a Team of 10 in 2 Months

Back in December, I started a new contract with Time Warner Cable (TWC). As part of the terms of that contract, it named the following as one of my deliverables:

Assist in identifying, recruiting and hiring additional full-time Web development staff, emphasizing open-source framework expertise.
    - Timeframe: ongoing, throughout the six-month engagement
    - Deliverable: targeting 2-3 quality leads/hires

Since this was a local gig and I always like a good challenge, I asked my client to raise the number from 2-3 to 4-5. Shortly after signing that contract, my project began. Almost immediately, I began spreading the word on Twitter.

When TWC hired me, it was just the beginning of a larger initiative. They were making a number of large changes:

  • Moving from Waterfall to Agile.
  • Restructuring organizationally for functional teams.
  • Moving from ColdFusion to JVM technologies.

To help with the move to Agile, I contacted a good friend, Brad Swanson. Brad is the founder of Propero Solutions and has always had a passion for agile coaching and making teams more efficient. At the beginning of the year, we set up a 2-day training class in Herndon, VA to kick off the Agile Initiative. There were 15 existing developers on the team when I started and 40 people showed up to that initial training. Most of these additional folks were from Product and QA. Brad's message of working together quickly resonated with the group and you could see their eyes light up with their new-found knowledge.

After the success of Brad's training, we leveraged his network to help us find some very impressive coaches to assist with our efforts. We hired two Agile Coaches to start working with us at the end of January.

While our agile movement was progressing in January, I started contacting friends, former colleagues and referrals about coming to work for us. For friends and former colleagues, my e-mail simply outlined the positions available, the exciting opportunity of the project and that TWC was willing to pay very competitive salaries for strong engineers. While it didn't happen immediately, I did manage to convince 4 former co-workers to join me, including the team I built at LinkedIn and worked with at Evite.

Following those 4, most of the candidates we interviewed were referrals or folks that contacted me directly after seeing my tweet. I'm amazed that I never had to write a blog post to advertise the positions.

Once we identified potential candidates, we executed the following process:

  1. Requested a resume (or LinkedIn Profile URL) via e-mail.
  2. If skills and experience looked like a match, we sent a list of screening questions specific to the position.
  3. If screening answers were satisfactory, we'd schedule a face-to-face interview.
  4. We then conducted a face-to-face interview with a list of questions specific to the position.
  5. If convenient, we took the candidate to lunch to explore their social skills.
  6. After interviewing, the interviewers would huddle for 5-10 minutes, give a thumbs-up/thumbs-down, and we'd write up a summary e-mail for our boss.
  7. If thumbs-up, our boss would contact the candidate, discuss the details and extend an offer.

This process turned out to be a great way to hire a kick-ass team very quickly. You might notice that HR was not involved at all in this process. While we did use them to post jobs and such, we found that our recommendation-based process of identifying high-quality candidates worked much better. HR was able to bring in folks with lots of buzzwords on their resume, but no one knew them or what they were capable of.

Once a person passed the screening questions, our interview focused more on a person's social skills than their technical ability. The first half of the interview was all about their career experiences and what they enjoyed/disliked about employers and projects. The second half consisted of a handful of very technical, hard questions that we expected people to struggle with. If they answered correctly, we were impressed. If they didn't, we examined how they handled explaining they didn't know the answer. It was interesting to see how many people didn't simply answer "I don't know".

One of my most interesting observations of the process was our question about "what was your most enjoyable employment experience and why?" Most folks responded with something very early on in their career, and often it was their first job. This caused me to reflect on our industry and careers as a whole and wonder if people get more miserable as they keep working. It's a shame there aren't more folks happy with their current jobs.

By mid-February, we managed to fill most of our open headcounts. We'd successfully hired 2 Agile Coaches and 8 Developers in a little over 2 months. While not everyone has started yet, several of us are now working in my Denver office. We pretty much caught everyone off-guard with our success and we've moved on to our next biggest problem - where do we put everyone? The TWC Broomfield office is building out space for us, but it'll likely take them a few months to complete the project. My office, which fits 4 comfortably, had 8 of us in it last week. I had to sit on a garbage can when pairing because we'd run out of chairs.

To solve our short-term space constraints, I've successfully negotiated additional space upstairs from our landlady and we've ordered a number of new desks for folks. Our desks arrive Monday and we're setting up pairing stations upstairs next week. All in all, it's been a wild ride with a fair amount of stress. Interviewing folks wasn't that stressful, but trying to hire folks while writing code and trying to deliver features for our project was challenging.

We've been emphasizing pair programming, and the hiring process required a lot of e-mail communication. When we were pairing, we'd ignore our e-mails for most of the day and then have to catch up at night. Once people started on-boarding, we had to figure out the best way to get them started and slinging code. We established an on-boarding plan and we've been able to get everyone running our app on their machines before lunch. We've even had a couple folks committing code by the end of the first day.

This week, we on-boarded 3 of our final 4 developers. I breathed a big sigh of relief that the hiring was over and we could get back to slinging code and making things happen. As luck would have it, I received an e-mail from my boss on Tuesday that the hiring engine is starting up again and we need to hire 6 more developers. While I'm not anxious to start the Hiring Engine again, I am glad to know it works well and it has helped us build a great team. I'm not going to post the positions as part of this blog entry, but there's a good chance you'll hear more about the gigs if you follow me on Twitter.

Posted in Java at Mar 05 2010, 12:01:57 PM MST 5 Comments

Web Application Testing with Selenium by Jason Huggins

This evening, I attended Agile Denver's monthly meeting to listen to Jason Huggins talk about Selenium. The meeting started off with a panel on UI testing that I participated in. The most interesting part of this panel (for me) was meeting the other panelists and learning about their expertise. Folks from Red Pine Studios in Boulder videotaped both the panel and the presentation. Hopefully it will be published online in the near future.

Below are my notes from Jason's talk. Please keep in mind that most of these are his words, not mine.

Jason is the Executive Software Chef at Sauce Labs. He often experiments with new recipes and is one of the creators of Selenium. He worked at Google and helped them build and use a Selenium Farm to test Gmail and Google Docs. Selenium was inspired by ThoughtWorks Expense Report system and its "Add Row" button. The button caused so many issues, they needed a way to write a test that could be run in multiple browsers.

The first thing they tried was jWebUnit (a wrapper around HtmlUnit). Since HtmlUnit simulates the browser, it wasn't "real world" enough. The 2nd attempt was DriftWood. It was a Mozilla extension that drove a real browser so it could handle JavaScript UI features. The downside was that it didn't work for IE or Safari. It also used XML syntax for tests. The 3rd attempt was JsUnit. It worked in all browsers, but its emphasis was on single-page unit tests; it had no page-to-page workflow support. Also, you couldn't see what it was doing while it was running. The 4th attempt was FIT (Framework for Integration Testing). It allowed more readable tests, but the API wasn't that intuitive and there was too much magic behind the curtain. So basically, they had to fork FIT.

The first attempt was called "Selenese" and consisted of a 3-column table where each row had an Action, Target and Value. In the beginning, Selenium Core was a TestRunner that ran in any browser. It was written in plain ol' JavaScript and HTML. The next thing that came about was the Selenium IDE for Firefox. It maintains the echo of Selenium Core and FIT.

Selenium Remote Control (RC) was the next product produced by the project. Selenium RC allows you to write your tests in any language. A Selenium server interprets the requests and turns them into browser manipulation events. Finally, Selenium Grid was developed to leverage Selenium's HTTP architecture to allow parallel execution across servers.

Cloud computing is a wonderful use case for functional testing. Selenium Hub is a gateway into the Selenium Grid that routes the test request to particular browsers and platforms. Sauce Labs has a version of Selenium Grid that runs in the cloud.

Selenium Issues
Selenium is slow. Functional tests will always be slower than unit tests. Until browsers can launch faster, there are always going to be speed issues. Parallelization can solve some of these and is something you should think about right away.

The JavaScript sandbox, Flash, Java applets, Silverlight and Canvas all present problems in Selenium. Silverlight was shipped without any testing APIs. There are several libraries that provide a bridge for testing Flash. The Selenium project has thought about including FlexMonkey, but its GPL license prevents it.

Practical Advice
Everyone seems to build a framework on top of Selenium. If you do this, make sure you write your DSL in terms of intent and then map it to Selenium actions.

Look for abstractions so you're not writing your Selenium tests with its API. It's too much like Assembler.

K.I.S.S. - don't write large tests, just do small ones. Often, when functional tests fail, they tell you something failed, but they don't tell you what failed. The shortest possible functional tests help reduce the scope of where a problem can be. Other benefits of short tests are they're easier to read and easier to write.

Selenium 2.0
The big thing in Selenium 2.0 is a merger with WebDriver. The nice thing about WebDriver is it gets rid of Selenium RC and allows you to drive the browser with a low-level API. For example, you use C++ to drive IE. Basically, every language will talk to the C driver. Except for Firefox, the connection and control is done through telnet. Selenium 2 should fix all the problems with Selenium 1, but also allow you to still use Selenium RC if you want to do grid-style testing.

Selenium 2's API is about finding elements and interacting with those elements. Also, it's entirely backwards compatible, so you can use the old API.
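
To give a feel for that element-centric API, here's a minimal WebDriver snippet. The URL and field name are made up for illustration; the classes come from the org.openqa.selenium packages:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WebDriverExample {
    public static void main(String[] args) {
        // Launches a real Firefox instance - no Selenium RC server required
        WebDriver driver = new FirefoxDriver();
        driver.get("http://localhost:9000/login");

        // Find an element, interact with it, then submit the enclosing form
        WebElement username = driver.findElement(By.name("username"));
        username.sendKeys("demo");
        username.submit();

        driver.quit();
    }
}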

At this point, my laptop's battery died and I was unable to take any more notes. However, I was able to see some pretty slick demos, particularly Jason's company's Sauce onDemand cloud testing services. All you need to do to run your tests in the cloud is change how you initialize Selenium. A kick-ass feature this service provides is video playback (a.k.a. Castro). I'm currently using Selenium's screenshot functionality, but it doesn't hold a candle to the ability to watch a video playback of your tests. Jason also showed us a demo of using Castro and Selenium 2 to create a screencast on-the-fly. Very cool stuff.
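
For context, "changing how you initialize Selenium" amounts to swapping the host and browser string passed to DefaultSelenium - roughly like this, where the username and access key are placeholders:

// Running against a local Selenium RC server:
Selenium localSelenium = new DefaultSelenium(
        "localhost", 4444, "*firefox", "http://localhost:9000");

// Running on Sauce onDemand: same API, but point at Sauce Labs and pass a
// JSON browser specification instead of a plain browser string
Selenium sauceSelenium = new DefaultSelenium(
        "saucelabs.com", 4444,
        "{\"username\":\"{your-username}\",\"access-key\":\"{your-access-key}\"," +
        "\"os\":\"Windows 2003\",\"browser\":\"firefox\",\"browser-version\":\"3.6.\"}",
        "http://localhost:9000");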

My only question after seeing this talk is what's the difference between BrowserMob and Sauce Labs? Both companies were founded by Selenium committers and seem to offer competing projects. My gut feel is that BrowserMob is best for performance/load testing and Sauce Labs is best for running your tests in the cloud.

Posted in Java at Feb 15 2010, 09:44:44 PM MST 5 Comments

Comparing Kick-Ass Web Frameworks at The Rich Web Experience

Yesterday, I delivered my Comparing Kick-Ass Web Frameworks talk at the Rich Web Experience in Orlando, Florida. Below are the slides I used:

Although it's difficult to convey a presentation in a slide deck, I can offer you my conclusion: there is no "best" web framework. I believe web frameworks are like spaghetti sauce in that everyone has different tastes and having so many choices is necessary to satisfy everyone. You can read more about the plural nature of perfection in Malcolm Gladwell's The Ketchup Conundrum (a written version of What we can learn from spaghetti sauce). Even though there is no "best" web framework, I believe GWT, Flex, Rails and Grails are frameworks that every web developer should try. They really do make it fun to develop web applications.

You can find the slides for my other RWE talk at Building SOFEA Applications with GWT and Grails.

Kudos to Jay Zimmerman for putting on a great show in Orlando this year. I had a great time talking with folks and learning in the sessions I attended. I particularly enjoyed bringing my parents and kids and staying at such a nice resort. Disney World (Magic Kingdom) and Universal Studios were very enjoyable due to the short lines. Also, the weather was perfect - especially considering the freezing cold in Denver this week. ;-)

Posted in Java at Dec 04 2009, 08:16:48 AM MST 3 Comments

AppFuse 2.1 Milestone 1 Released

The AppFuse Team is pleased to announce the first milestone release of AppFuse 2.1. This release includes upgrades to all dependencies to bring them up-to-date with their latest releases. Most notable are Hibernate, Spring and Tapestry 5.

What is AppFuse?
AppFuse is an open source project and application that uses open source tools built on the Java platform to help you develop Web applications quickly and efficiently. It was originally developed to eliminate the ramp-up time found when building new web applications for customers. At its core, AppFuse is a project skeleton, similar to the one that's created by your IDE when you click through a wizard to create a new web project.

Release Details
Archetypes now include all the source for the web modules, so using jetty:run and your IDE will work much more smoothly now. The backend is still embedded in JARs, enabling you to choose which persistence framework (Hibernate, iBATIS or JPA) you'd like to use. If you want to modify the source for that, add the core classes to your project or run appfuse:full-source.

In addition, AppFuse Light has been converted to Maven and has archetypes available. AppFuse provides archetypes for JSF, Spring MVC, Struts 2 and Tapestry 5. The light archetypes are available for these frameworks, as well as for Spring MVC + FreeMarker, Stripes and Wicket.

Other notable improvements:

Please note that this release does not contain updates to the documentation. Code generation will work, but it's likely that some content in the tutorials won't match. For example, you can use annotations (vs. XML) for dependency injection and Tapestry is a whole new framework. I'll be working on documentation over the next several weeks in preparation for Milestone 2.

AppFuse is available as several Maven archetypes. For information on creating a new project, please see the QuickStart Guide.
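
As a rough illustration, creating a project follows the usual archetype:generate pattern. The exact archetype artifactIds and the current version are listed in the QuickStart Guide, so treat the ones below as examples only:

mvn archetype:generate -B -DarchetypeGroupId=org.appfuse.archetypes \
    -DarchetypeArtifactId=appfuse-basic-spring-archetype -DarchetypeVersion=2.1.0-M1 \
    -DgroupId=com.mycompany -DartifactId=myproject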

To learn more about AppFuse, please read Ryan Withers' Igniting your applications with AppFuse.

The 2.x series of AppFuse has a minimum requirement of the following specification versions:

  • Java Servlet 2.4 and JSP 2.0 (2.1 for JSF)
  • Java 5+

If you have questions about AppFuse, please read the FAQ or join the user mailing list. If you find bugs, please create an issue in JIRA.

Thanks to everyone for their help contributing code, writing documentation, posting to the mailing lists, and logging issues.

Posted in Java at Nov 19 2009, 07:16:36 AM MST 8 Comments

Building SOFEA Applications with GWT and Grails

Last night, I spoke at the Denver Java User Group meeting. The consulting panel with Matthew, Tim and Jim was a lot of fun and I enjoyed delivering my Building SOFEA Applications with GWT and Grails presentation for the first time. The talk was mostly a story about how we enhanced Evite.com with GWT and Grails and what we did to make both frameworks scale. I don't believe the presentation reflects the story format that well, but it's not about the presentation, it's about the delivery of it. ;-)

If you'd like to hear the story about this successful SOFEA implementation at a high-volume site, I'd recommend attending the Rich Web Experience next month. If you attended last night's meeting and have any feedback on how this talk can be improved, I'd love to hear it.

Posted in Java at Nov 12 2009, 09:30:09 AM MST 11 Comments

The Future of Web Frameworks at TSSJS

For TSSJS Vegas 2010, I submitted two proposals for talks: GWT vs. Flex Smackdown and The Future of Web Frameworks. As of today, the 2nd is the only one that shows up on the conference agenda, but hopefully the former will get accepted too. Here's a description of this talk:

With rich Ajax applications and HTML5 on the horizon, are web frameworks still relevant? Java web frameworks like Struts and Spring MVC were all the rage 5 years ago. Component-based frameworks like Tapestry, JSF and Wicket made it easier to create re-usable applications. But what about the Mobile Web and offline applications?

Are Titanium, Adobe AIR and Gears the future? If you're embracing the RESTfulness of the web, do you even need a web framework, or can you use JAX-RS with an Ajax toolkit?

These questions and many more are examined, answered and debated in this lively session. Bring your opinions and experiences to this session to learn about what's dead, what's rising and what's here to stay. If you're a web framework fan, this session is sure to please.

I believe this talk will be a lot of fun to create and deliver. To create it, I'd like to make it a collaborative effort with the web framework community (users and developers). To kick things off, below is an initial rough outline/agenda:

  • Title
  • Introduction
  • Problem/Purpose
  • Agenda
    • How did we get here?
    • Where are we going?
    • How do we get there?
    • Q and A
  • History of Web Frameworks
    • Deep History (CGI, etc.)
    • Java's Rise
    • PHP
    • Rails -> Grails
    • Ajax Frameworks
    • RESTify!
    • SOFEA, APIs, etc.
  • The Future
    • HTML5
    • GWT, Cappuccino and SproutCore (compare to Java and compilers)
    • The Binary Players (Flex, JavaFX and Silverlight)
    • Getting Rich
    • Speed (is it a problem? YES!)
    • IE 6 will die.
    • Chrome OS
    • The Mobile Web
    • Desktop Webapps (Titanium, AIR, etc.)
    • Or is this the present? Future is bleeding edge.
  • Getting There: It's all about the APIs
    • Allows for any client
    • Web Framework skills transfer to desktop - and phone!
    • Speed will continue to be *very* important
    • Innovation, something we haven't thought of
  • Fallout
    • Interest in server-side frameworks will continue, but frameworks will become unmaintained
    • Ajax Frameworks will continue to innovate
    • HTML5 Frameworks?
    • IE 6 (hopefully!)
    • Desktop and Mobile with Web Technologies
    • Watch out for the next big thing! (or What do you think is the next big thing?)
  • Conclusion
  • Q and A

Is there anything I'm missing that's important for the future of web frameworks? Are there items that should be removed? Any advice is most welcome.

Reminder: I'll be speaking at tomorrow's DJUG if you'd like to discuss your thoughts in person.

Posted in Java at Nov 10 2009, 01:24:39 PM MST 11 Comments