
You shouldn't have to worry about front end optimization

After writing yesterday's article on optimizing AngularJS apps with Grunt, I received an interesting reply from @markj9 on Twitter.
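For context, here's a minimal sketch of the kind of Grunt build that article describes, assuming the grunt-contrib-concat and grunt-contrib-uglify plugins; the file paths are illustrative, not taken from the article.

```javascript
// Gruntfile.js - concatenate all app scripts into one bundle, then minify it.
// Assumes grunt-contrib-concat and grunt-contrib-uglify are installed;
// the paths below are illustrative.
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        // Combine every application script into one file to cut request count.
        src: ['app/scripts/**/*.js'],
        dest: 'dist/app.js'
      }
    },
    uglify: {
      dist: {
        // Minify the concatenated bundle to shrink the payload.
        files: { 'dist/app.min.js': ['dist/app.js'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['concat', 'uglify']);
};
```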

I clicked on the provided link, listened to the podcast (RR HTTP 2.0 with Ilya Grigorik) and discovered some juicy bits around 27:00. The text below is from the podcast's transcript at the bottom of the page.

AVDI:  Yeah. If I pushed back a little, it’s only because I just wonder, are the improvements coming solely from the perspective of a Google or Facebook, or are they also coming from the perspective of the hundreds of thousands of people developing smaller applications and websites?

ILYA:  Yeah. So, I think it’s the latter, which is to say the primary objective here is actually to make the browsers faster. So, if you open a webpage over HTTP 2.0, it should load faster. And that’s one kind of population that will benefit from it. And the second one is actually developers. We want to make developing web applications easier. You shouldn’t have to worry about things like spriting and concatenating files and doing all this stuff, domain sharding and all this other mess, which are just completely unnecessary and actually makes performance worse in many cases because each one of those has negative repercussions.

Things like, let’s say concatenating your style sheets or JavaScript. Why do we do that? Well, we do that because we want to reduce the number of requests because we have this connection limit with HTTP 1.0. But the downside then is let’s say you’ve — actually Rails does this, you concatenate all of your CSS into one giant bundle. Great, we reduced the number of requests. We can download it faster. Awesome. Then you go in and your designer changes one line from whatever, the background color from blue to red. And now, you have to download the entire bundle. You have to invalidate that certain file and you need to download the whole thing.

Chances are, if you’re doing sound software development today, you already have things split into modules. Like here is my base.css, here is my other page.css. Here are my JavaScript modules. And there’s no reason why we need to concatenate those into one giant bundle and invalidate on every request. This is something that we’ve automated to some degree, but it’s unnecessary. And it actually slows down the browser, too, in unexpected ways.

We recently realized that serving these large JavaScript files actually hurts your performance because we can’t execute the JavaScript until we get the entire file. So, if you actually split it into a bunch of smaller chunks, it actually allows us to execute them incrementally, one chunk at a time. And that makes the site faster. Same thing for CSS, splitting all that stuff. And this may sound trivial, but in practice, it’s actually a giant pain for a lot of applications.

The conversation goes on to talk about how this change in thinking is largely driven by the fact that bandwidth is no longer the problem; latency is.

JAMES:  That seems really, really weird to me though. Everything has been moving in that direction and you’re saying our data on that’s just wrong. It’s not faster?

ILYA:  Yeah. Part of it is the connectivity profiles are also changing. So when we first started advocating for those sorts of changes back in, whatever it was, 2005, 2007, when this stuff started showing up, the connection speeds were different. We were primarily maybe DSL was state of the art and bandwidth was really an issue there. So, you spend more time just downloading resources. Now that bandwidth is much less of an issue, latency is the problem. And because of that, these “best practices” are changing. And with HTTP 2.0, you actually don’t have to do that at all. And in fact, some of those things will actually hurt your performance.

As you can imagine, this news is quite surprising to me. Optimizations like gzipping and expires headers will continue to be important, but concatenating and minifying might become a "worst" practice? That seems crazy, especially when the tools I test with (YSlow and Page Speed browser plugins) give me higher grades for minifying and reducing the number of requests.
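To be clear on what those still-useful optimizations look like, here's a minimal sketch assuming a Node/Express server with the compression middleware; the one-year max-age is an illustrative choice that pairs with fingerprinted asset file names.

```javascript
// A minimal sketch of gzipping and far-future expires headers in Express.
// Assumes the 'express' and 'compression' npm packages; the one-year maxAge
// is illustrative and assumes asset file names change when content changes.
var express = require('express');
var compression = require('compression');

var app = express();
app.use(compression());                              // gzip responses
app.use(express.static('dist', { maxAge: '365d' })); // far-future caching

app.listen(3000);
```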

The good news is there are lots of good things coming in HTTP 2.0, and you can use it today.

ILYA:  ... any application that’s delivered over HTTP 1.0 will work over HTTP 2.0. There’s nothing changing there. The semantics are all the same. It could be the case that certain optimizations that you’ve done for HTTP 1.1 will actually hurt in HTTP 2.0. And when I say hurt, in practice at least from what I’ve seen today, it doesn’t mean that your site is actually going to be slower. It’s just that it won’t be any better than HTTP 1.0.

Upgrading my servers to support HTTP 2.0 brings up an interesting dilemma. How do I measure that non-minified and non-concatenated assets are actually better? Do I just benchmark page load times or are there better tools for proving things are faster and better?
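One starting point is a minimal sketch using the browser's Navigation Timing API to benchmark page loads before and after the change; the specific metrics logged below are my choice of examples.

```javascript
// Log a few load metrics from the Navigation Timing API.
// Runs after the load event; setTimeout(0) waits for loadEventEnd to be set.
window.addEventListener('load', function () {
  setTimeout(function () {
    var t = window.performance.timing;
    console.log('Time to first byte:', t.responseStart - t.navigationStart, 'ms');
    console.log('DOM content loaded:', t.domContentLoadedEventEnd - t.navigationStart, 'ms');
    console.log('Full page load:', t.loadEventEnd - t.navigationStart, 'ms');
  }, 0);
});
```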

Posted in The Web at Jan 16 2014, 01:49:03 PM MST 5 Comments
Comments:

Well, I've learned to rely on sites like webpagetest.org - it has been quite conclusive so far. I don't care so much about PageSpeed scores per se, but rather about the perception of speed (Steve Souders has a good talk on that somewhere).

I think there's a middle ground to be had - it's probably still reasonable to have a 'minimum view' bundle that's required to render the page and then load the rest of the stuff in the background.

Posted by gmm on January 16, 2014 at 11:25 PM MST #
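A minimal sketch of the middle ground gmm describes: ship a small bundle that renders the page, then inject the non-critical scripts after the load event. The file names are illustrative.

```javascript
// Load non-critical scripts in the background once the page has rendered.
// The file names are illustrative placeholders.
window.addEventListener('load', function () {
  ['charts.js', 'comments.js'].forEach(function (file) {
    var script = document.createElement('script');
    script.src = '/js/' + file;
    script.async = true;
    document.body.appendChild(script);
  });
});
```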

Hey Matt. It's not all as crazy as it sounds. Concatenation and spriting are a form of "application-layer multiplexing"... and both are unfortunate hacks that we've had to live with due to limitations in HTTP/1.x.

I cover this topic in HPBN, may be of interest: http://chimera.labs.oreilly.com/books/1230000000545/ch11.html#CONCATENATION_SPRITING

Also, some tips for optimizing for the brave new world of SPDY and HTTP/2: http://chimera.labs.oreilly.com/books/1230000000545/ch13.html#OPTIMIZING_HTTP2

Finally, to answer your question on how to test this: simple, use a tool like WebPageTest to run a side-by-side comparison!

Posted by Ilya Grigorik on January 16, 2014 at 11:43 PM MST #

YSlow and Page Speed scores are based on a set of rules (assumptions) about best practices for improving web performance; they don't measure actual page load performance, so you should look at them as a set of heuristics, not as performance measurements.

We should expect these best practices to change over time as the technology changes (servers, browsers, networks), and the tools will continue to change as well.

Posted by Ken Liu on January 23, 2014 at 12:04 AM MST #

Kyle Simpson has a really good post on this here: http://blog.getify.com/obsessions-http-request-reduction/

As gmm said, there is a middle ground to be had: you can probably split your website into a good subset of things that are not likely to change frequently (libraries, frameworks, etc.) and other subsets of things which do. That minimizes the problem Ilya mentions.

Plus, the reason people reduce HTTP requests is not just HTTP's connection limits but also HTTP overhead, although this is something that might change with HTTP 2.0 as well (I haven't listened to Ilya's podcast and can't say that I understand that much about protocols). Kyle also mentions this in his post, and how you should weigh this overhead against the number of bundles you create.

Finally, this idea that "bandwidth" is no longer a problem is a bit of a privileged misconception. Sure, it might not be a problem in some larger and more developed cities here and there, but it is still very much a problem in many parts of the world, even in some of those larger and more developed cities. And not just fixed connections, but also mobile coverage. I lived in London for a while, and for such a well-known and developed city, it has horrible internet coverage, both fixed and mobile (some people say wonderful things about the more recent LTE providers, but I think the coverage is also not that widespread yet).

Posted by Tiago Rodrigues on January 23, 2014 at 12:04 AM MST #

I understand how HTTP2 removes the need for concatenation and spriting. But wouldn't minification still help? Even if bandwidth is not the limiting factor, wouldn't sending fewer bytes still be faster?

Posted by Aaron on January 23, 2014 at 12:04 AM MST #
