Distributed Tracing with Node JS

The microservice architecture pattern solves many of the problems inherent with monolithic applications. But microservices also bring challenges of their own, one of which is figuring out what went wrong when something breaks. There are at least 3 related challenges here:

  • Log collection
  • Metric collection
  • Distributed tracing

Log and metric collection is fairly straightforward (we’ll cover these in a separate post), but only gets you so far.

Let’s say your 20-microservice application starts behaving badly – you start getting timeouts on a particular API and want to find out why. The first place you look may be your centralized metrics service. This will likely confirm that you have a problem, as hopefully you have one or more metrics that are now showing out-of-band numbers.

But what if the issue only affects part of your user population, or worse, a single (but important) customer? In these cases your metrics – assuming you have the right ones in the first place – probably won’t tell you much.

In cases like these, where you have minimal or no guidance from your configured metrics, you start trying to figure out where the problem may be. You know your system architecture, and you’re pretty sure you’ve narrowed the issue down to three or four of your services.

So what’s next? Well, you’ve got your centrally aggregated service logs, right? So you open up three or four windows and try to find an example of a request that fails, and trace it through to the other 2-3 services in the mix. Of course, if your problem only manifests in production then you’ll be sifting through a large number of logs.

How good are your logs anyway? You’re in prod, so you’ve probably disabled debug logs, but even if you hadn’t, logs usually only get you so far. After some digging, you might be able to narrow things down to a function or two, but you’re likely not logging all the information you need to proceed from there. Time to start sifting through code…

But maybe there’s a better way.

Enter Distributed Tracing

Distributed Tracing is a method of tracking a request as it traverses multiple services. Let’s say you have a simple e-commerce app, which looks a little like this (simplified for clarity):

A simplified e-commerce architecture

Now, your user has made an order and wants to track the order’s status. In order for this to happen the user makes a request that hits your API Gateway, which needs to authenticate the request and then send it on to your Orders service. This fetches Order details, then consults your Shipping service to discover shipping status, which in turn calls an external API belonging to your shipping partner.

There are quite a few things that can go wrong here. Your Auth service could be down, your Orders service could be unable to reach its database, your Shipping service could be unable to access the external API, and so on. All you know, though, is that your customer is complaining that they can’t access their Order details and they’re getting aggravated.

We can solve this by tracing a request as it traverses your architecture, with each step surfacing details about what is going on and what (if anything) went wrong. We can then use the Jaeger UI to visualize the trace as it happened, allowing us to debug problems as well as identify bottlenecks.

An example distributed application

To demonstrate how this works I’ve created a distributed tracing example app on Github. The repo is pretty basic, containing a packages directory that contains 4 extremely simple apps: gateway, auth, orders and shipping, corresponding to 4 of the services in our service architecture diagram.

The easiest way to play with this yourself is to simply clone the repo and start the services using docker-compose:

git clone git@github.com:edspencer/tracing-example.git
cd tracing-example
docker-compose up

This will spin up 5 docker containers – one for each of our 4 services plus Jaeger. Now go to http://localhost:5000/orders/12345 and hit refresh a few times. I’ve set the services up to sometimes work and sometimes cause errors – there’s a 20% chance that the auth app will return an error and a 30% chance that the simulated call to the external shipping service API will fail.

After refreshing http://localhost:5000/orders/12345 a few times, open up the Jaeger UI at http://localhost:16686/search and you’ll see something like this:

Jaeger client UI showing traces

http://localhost:5000/orders/12345 serves up the Gateway service, which is a pretty simple one-file express app that will call the Auth service on every request, then make calls to the Orders service. The Orders service in turn calls the Shipping service, which makes a simulated call to the external shipping API.

Clicking into one of the traces will show you something like this:

Details for an individual error trace

This view shows that the request took 44ms to complete, and has a nice breakdown of where that time was spent. The services are color coded automatically so you can see at a glance how the 44ms was distributed across them. In this case we can see that there was an error in the shipping service. Clicking into the row with the error yields additional information useful for debugging:

Expanded error details

The contents of this row are highly customizable. It’s easy to tag the request with whatever information you like. So let’s see how this works.

The Code

Let’s look at the Gateway service. First we set up the Jaeger integration:

const express = require('express')
const superagent = require('superagent')
const opentracing = require('opentracing')
const {initTracer} = require('jaeger-client')

const port = process.env.PORT || 80
const authHost = process.env.AUTH_HOST || "auth"
const ordersHost = process.env.ORDERS_HOST || "orders"
const app = express()

//set up our tracer
const config = {
  serviceName: 'gateway',
  reporter: {
    logSpans: true,
    collectorEndpoint: 'http://jaeger:14268/api/traces',
  },
  sampler: {
    type: 'const',
    param: 1
  }
};

const options = {
  tags: {
    'gateway.version': '1.0.0'
  }
};

const tracer = initTracer(config, options);

The most interesting stuff here is where we declare our config. Here we’re telling the Jaeger client tracer to post its traces to http://jaeger:14268/api/traces (this is set up in our docker-compose file), and to sample all requests – as specified in the sampler config. In production, you won’t want to sample every request – one in a thousand is probably enough – so you can switch to type: ‘probabilistic’ and param: 0.001 to achieve this.
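For reference, a production-style sampler config might look something like this (the values here are illustrative rather than taken from the example repo):

const productionConfig = {
  serviceName: 'gateway',
  reporter: {
    collectorEndpoint: 'http://jaeger:14268/api/traces',
  },
  sampler: {
    //sample roughly 1 in every 1000 requests
    type: 'probabilistic',
    param: 0.001
  }
};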

Now that we have our tracer, let’s tell Express to instrument each request that it serves:

//create a root span for every request
app.use((req, res, next) => {
  req.rootSpan = tracer.startSpan(req.originalUrl)
  tracer.inject(req.rootSpan, "http_headers", req.headers)

  res.on("finish", () => {
    req.rootSpan.finish()
  })

  next()
})

Here we’re setting up our outer span and giving it a title matching the request url. We encounter 3 of the 4 simple concepts we need to understand:

  • startSpan – creates a new “span” in our distributed trace; this corresponds to one of the rows we see in the Jaeger UI. This span is given a unique span ID and may have a parent span ID
  • inject – adds the span ID somewhere else – usually into HTTP headers for a downstream request – we’ll see more of this in a moment
  • finishing the span – we hook into Express’ “finish” event on the response to make sure we call .finish() on the span. This is what sends it to Jaeger.

Now let’s see how we call the Auth service, passing along the span ID:

//use the auth service to see if the request is authenticated
const checkAuth = async (req, res, next) => {
  const span = tracer.startSpan("check auth", {
    childOf: tracer.extract(opentracing.FORMAT_HTTP_HEADERS, req.headers)
  })

  try {
    const headers = {}
    tracer.inject(span, "http_headers", headers)

    //use a different name to Express' own `res` so we can still send error responses below
    const authRes = await superagent.get(`http://${authHost}/auth`).set(headers)

    if (authRes && authRes.body.valid) {
      span.setTag(opentracing.Tags.HTTP_STATUS_CODE, 200)
      next()
    } else {
      span.setTag(opentracing.Tags.HTTP_STATUS_CODE, 401)
      res.status(401).send("Unauthorized")
    }
  } catch(e) {
    res.status(503).send("Auth Service gave an error")
  }

  span.finish()
}

There are 2 important things happening here:

  1. We create a new span representing the “check auth” operation, and set it to be the childOf the parent span we created previously
  2. When we send the superagent request to the Auth service, we inject the new child span into the HTTP request headers

We’re also showing how to add tags to a span via setTag. In this case we’re appending the HTTP status code that we return to the client.
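Tags aren’t limited to status codes either. As an illustration (this snippet isn’t verbatim from the example repo), you can attach whatever tags and structured log entries will help you debug later:

//illustrative only - attach arbitrary tags and structured logs to a span
span.setTag('order.id', req.params.orderId)
span.setTag(opentracing.Tags.ERROR, true)
span.log({event: 'error', message: 'Shipping API request failed'})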

Let’s examine the final piece of the Gateway service – the actual proxying to the Orders service:

//proxy to the Orders service to return Order details
app.all('/orders/:orderId', checkAuth, async (req, res) => {
  const span = tracer.startSpan("get order details", {
    childOf: tracer.extract(opentracing.FORMAT_HTTP_HEADERS, req.headers)
  })

  try {
    const headers = {}
    tracer.inject(span, "http_headers", headers)

    const order = await superagent.get(`http://${ordersHost}/order/${req.params.orderId}`).set(headers)

    if (order && order.body) {
      span.finish()
      res.json(order.body)
    } else {
      span.setTag(opentracing.Tags.HTTP_STATUS_CODE, 500)
      span.finish()
      res.status(500).send("Could not fetch order")
    }
  } catch(e) {
    //make sure the span is still reported even if the Orders service call fails
    span.setTag(opentracing.Tags.ERROR, true)
    span.finish()
    res.status(503).send("Error contacting Orders service")
  }
})

app.listen(port, () => console.log(`API Gateway app listening on port ${port}`))

This looks pretty similar to what we just did for the Auth service – we’re creating a new span that represents the call to the Orders service, setting its parent to our outer span, and injecting it into the superagent call we make to Orders. Pretty simple stuff.

Finally, let’s look at the other side of this – how to pick up the trace in another service – in this case the Auth service:

//simulate our auth service being flaky with a 20% chance of 500 internal server error
app.get('/auth', (req, res) => {
  const parentSpan = tracer.extract(opentracing.FORMAT_HTTP_HEADERS, req.headers)
  const span = tracer.startSpan("checking user", {
    childOf: parentSpan,
    tags: {
      [opentracing.Tags.COMPONENT]: "database"
    }
  })

  if (Math.random() > 0.2) {
    span.finish()
    res.json({valid: true, userId: 123})
  } else {
    span.setTag(opentracing.Tags.ERROR, true)
    span.finish()
    res.status(500).send("Internal Auth Service error")
  }
})

Here we see the 4th and final concept involved in distributed tracing:

  • extract – pulls the trace ID sent by the upstream service out of the incoming HTTP headers

This is how the trace is able to traverse our services – in service A we create a span and inject it into calls to service B. Service B picks it up and creates a new span with the extracted span as its parent. We can then pass this span ID on to service C.

Jaeger is even nice enough to automatically create a system architecture diagram for you:

Automatic System Architecture diagram

Conclusion

Distributed tracing is immensely powerful when it comes to understanding why distributed systems behave the way they do. There is a lot more to distributed tracing than we covered above, but at its core it really comes down to those 4 key concepts: starting spans, finishing them, injecting them into downstream requests and extracting them from the upstream.

One nice attribute of the OpenTracing standard is that it works across technologies. In this example we saw how to hook up 4 Node JS microservices with it, but there’s nothing special about Node JS here – this stuff is well supported in other languages like Go and can be added pretty much anywhere – it’s just basic UDP and (usually) HTTP.

For further reading I recommend you check out the Jaeger intro docs, as well as the architecture. The Node JS Jaeger client repo is a good place to poke around, and has links to more resources. Actual example code for Node JS was a little hard to come by, which is why I wrote this post. I hope it helps you in your microservice applications.

A New Stack for 2016: Getting Started with React, ES6 and Webpack

A lot has changed in the last few years when it comes to implementing applications using JavaScript. Node JS has revolutionized how many of us create backend apps, React has become a widely-used standard for creating the frontend, and ES6 has come along and completely transformed JavaScript itself, largely for the better.

All of this brings new capabilities and opportunities, but also new challenges when it comes to figuring out what’s worth paying attention to, and how to learn it. Today we’ll look at how to set up my personal take on a sensible stack in this new world, starting from scratch and building it up as we go. We’ll focus on getting to the point where everything is set up and ready for you to create the app.

The stack we’ll be setting up today is as follows:

  • React – to power the frontend
  • Babel – allows us to use ES6 syntax in our app
  • Webpack – builds our application files and dependencies into a single build

Although we won’t be setting up a Node JS server in this article, we’ll use npm to put everything else in place, so adding a Node JS server using Express or any other backend framework is trivial. We’re also going to omit setting up a testing infrastructure in this post – this will be the subject of the next article.

If you want to get straight in without reading all the verbiage, you can clone this github repo that contains all of the files we’re about to create.

Let’s go

The only prerequisite here is that your system has Node JS already installed. If that isn’t the case, go install it now from http://nodejs.org. Once you have Node, we’ll start by creating a new directory for our project and setting up NPM:

mkdir myproject
cd myproject
npm init

The npm init command takes you through a short series of prompts asking for information about your new project – author name, description, etc. Most of this doesn’t really matter at this stage – you can easily change it later. Once that’s done you’ll find a new file called package.json in your project directory.

Before we take a look at this file, we already know that we need to bring in some dependencies, so we’ll do that now with the following terminal commands:

npm install react --save
npm install react-dom --save
npm install webpack --save-dev

Note that for the react dependency we use --save, whereas for webpack we use --save-dev. This indicates that react is required when running our app in production, whereas webpack is only needed while developing (as once webpack has created your production build, its role is finished). Opening our package.json file now yields this:

{
   "name": "myproject",
   "version": "1.0.0",
   "description": "",
   "main": "index.js",
   "scripts": {
       "test": "echo \"Error: no test specified\" && exit 1"
   },
   "author": "",
   "license": "ISC",
   "dependencies": {
     "react": "^0.14.7",
     "react-dom": "^0.14.7"
   },
   "devDependencies": {
     "webpack": "^1.12.14"
   }
 }

This is pretty straightforward. Note the separate dependencies and devDependencies objects in line with our --save vs --save-dev above. Depending on when you created your app the version numbers for the dependencies will be different, but the overall shape should be the same.

We’re not done installing npm packages yet, but before we get started with React and ES6 we’re going to get set up with Webpack.

Setting up Webpack

We’ll be using Webpack to turn our many application files into a single file that can be loaded into the browser. As it stands, though, we don’t have any application files at all. So let’s start by creating those:

mkdir src
touch src/index.js
touch src/App.js

Now we have a src directory with two empty files. Into App.js, we’ll place the following trivial component rendering code:

var App = function() {
  return "<h1>Woop</h1>";
};

module.exports = App;

All we’re doing here is returning an HTML string when you call the App function. Once we bring React into the picture we’ll change the approach a little, but this is good enough for now. Into our src/index.js, we’ll use:

var app = require('./App');
document.write(app());

So we’re simply importing our App, running it and then writing the resulting HTML string into the DOM. Webpack will be responsible for figuring out how to combine index.js and App.js and building them into a single file. In order to use Webpack, we’ll create a new file called webpack.config.js (in the root directory of our project) with the following contents:

var path = require('path');
var webpack = require('webpack');

module.exports = {
  output: {
    filename: 'bundle.js'
  },
  entry: [
    './src/index.js'
  ]
};

This really couldn’t be much simpler – it’s just saying take the entry point (our src/index.js file) as input, and save the output into a file called bundle.js. Webpack takes those entry file inputs, figures out all of the require('...') statements and fetches all of the dependencies as required, outputting our bundle.js file.

To run Webpack, we simply use the `webpack` command in our terminal, which will do something like this:

webpack

webpack terminal output

As we can see, we now have a 1.75kb file called bundle.js that we can serve up in our project. That’s a little heavier than our index.js and App.js files combined, because there is a little Webpack plumbing that gets included into the file too.
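If the bare `webpack` command isn’t found, that’s because we installed it locally with --save-dev rather than globally; you can either install it globally (npm install webpack -g) or run the local binary directly:

./node_modules/.bin/webpack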

Now finally we’ll create a very simple index.html file that loads our bundle.js and renders our app:

<html>
  <head>
    <meta charset="utf-8">
  </head>
  <body>
    <div id="main"></div>
    <script type="text/javascript" src="bundle.js" charset="utf-8"></script>
  </body>
</html>

Can’t get much simpler than that. We don’t have a web server set up yet, but we don’t actually need one. As we have no backend we can just load the index.html file directly into the browser, either by dragging it in from your OS’s file explorer program, or entering the address manually. For me, I can enter file:///Users/ed/Code/myproject/index.html into my browser’s address bar, and be greeted with the following:

woop

Our first rendered output

Great! That’s our component being rendered and output into the DOM as desired. Now we’re ready to move onto using React and ES6.

React and ES6

React can be used either with or without ES6. Because this is the future, we desire to use the capabilities of ES6, but we can’t do that directly because most browsers currently don’t support it. This is where babel comes in.

Babel (which you’ll often hear pronounced “babble” instead of the traditional “baybel”) is a transpiler, which takes one version of the JavaScript language and translates it into another. In our case, it will be translating the ES6 version of JavaScript into an earlier version that is guaranteed to run in browsers. We’ll start by adding a few new npm package dependencies:

npm install babel-core --save-dev
npm install babel-loader --save-dev
npm install babel-preset-es2015 --save-dev
npm install babel-preset-react --save-dev
npm install babel-plugin-transform-runtime --save-dev

npm install babel-polyfill --save
npm install babel-runtime --save

This is quite a substantial number of new dependencies. Because babel can convert between many different flavors of JS, once we’ve specified the babel-core and babel-loader packages, we also need to specify babel-preset-es2015 to enable ES6 support, and babel-preset-react to enable React’s JSX syntax. We also bring in a polyfill that makes available new APIs like Object.assign that babel would not usually bring to the browser as it requires some manipulation of the browser APIs, which is something one has to opt in to.
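To actually pull babel-polyfill into the build, one common approach (shown here as a suggestion – you could also simply import it at the top of src/index.js) is to add it as an extra webpack entry point so it loads before our own code:

entry: [
  'babel-polyfill',   //loads the polyfill before our own code runs
  './src/index.js'
]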

Once we have these all installed, however, we’re ready to go. The first thing we’ll need to do is update our webpack.config.js file to enable babel support:

var path = require('path');
var webpack = require('webpack');

module.exports = {
  module: {
    loaders: [
      {
        loader: "babel-loader",
        // Skip any files outside of your project's `src` directory
        include: [
          path.resolve(__dirname, "src"),
        ],
        // Only run `.js` and `.jsx` files through Babel
        test: /\.jsx?$/,
        // Options to configure babel with
        query: {
          plugins: ['transform-runtime'],
          presets: ['es2015', 'react'],
        }
      }
    ]
  },
  output: {
    filename: 'bundle.js'
  },
  entry: [
    './src/index.js'
  ]
};

Hopefully the above is clear enough – it’s the same as last time, with the exception of the new module object, which contains a loader configuration that we’ve configured to convert any file that ends in .js or .jsx in our src directory into browser-executable JavaScript.

Next we’ll update our App.js to look like this:

import React, {Component} from 'react';

class App extends Component {
  render() {
    return (<h1>This is React!</h1>);
  }
}
export default App;

Cool – new syntax! We’ve switched from require('...') to import, though this does essentially the same thing. We’ve also switched from `module.exports = ` to `export default `, which is again doing the same thing (though we can export multiple things this way).

We’re also using the ES6 class syntax, in this case creating a class called App that extends React’s Component class. It only implements a single method – render – which returns a very similar HTML string to our earlier component, but this time using inline JSX syntax instead of just returning a string.

Now all that remains is to update our index.js file to use the new Component:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById("main"));

Again we’re using the import syntax to our advantage here, and this time we’re using ReactDOM.render instead of document.write to place the rendered HTML into the DOM. Once we run the `webpack` command again and refresh our browser window, we’ll see a screen like this:

react-heading

Now we’re cooking with gas. Or, at least, rendering with React.

Next Steps

We’ll round out by doing a few small things to improve our workflow. First off, it’s annoying to have to switch back to the terminal to run `webpack` every time we change any code, so let’s update our webpack.config.js with a few new options:

module.exports = {
  //these remain unchanged
  module: {...},
  output: {...},
  entry: [...],

  //these are new
  watch: true,
  colors: true,
  progress: true
};

Now we just run `webpack` once and it’ll stay running, rebuilding whenever we save changes to our source files. This is generally much faster – on my 2 year old MacBook Air it takes about 5 seconds to run `webpack` a single time, but when using watch mode each successive build is on the order of 100ms. Usually this means that I can save my change in my text editor, and by the time I’ve switched to the browser the new bundle.js has already been created so I can immediately refresh to see the results of my changes.

The last thing we’ll do is add a second React component to be consumed by the first. This one we’ll call src/Paragraph.js, and it contains the following:

import React, {Component} from 'react';

export default class Paragraph extends Component {
  render() {
    return (<p>{this.props.text}</p>);
  }
}

This is almost identical to our App, with a couple of small tweaks. First, notice that we’ve moved the `export default` inline with the class declaration to save on space, and second, this time we’re using {this.props.text} to access a configured property of the Paragraph component. Now, to use the new component we’ll update App.js to look like the following:

import React, {Component} from 'react';
import Paragraph from './Paragraph';

export default class App extends Component {
  render() {
    return (
      <div className="my-app">
        <h1>This is React!!!</h1>
        <Paragraph text="First Paragraph" />
        <Paragraph text="Second Paragraph" />
      </div>
    );
  }
}

Again a few small changes here. First, note that we’re now importing the Paragraph component and then using it twice in our render() function – each time with a different `text` property, which is what is read by {this.props.text} in the Paragraph component itself. Finally, React requires that we return a single root element for each rendered Component, so we wrap our <h1> and <Paragraph> tags into an enclosing <div>.

By the time you hit save on those changes, webpack should already have built a new bundle.js for you, so head back to your browser, hit refresh and you’ll see this:

react-with-paragraphs

The final rendered output

That’s about as far as we’ll take things today. The purpose of this article was to get you to a point where you can start building a React application, instead of figuring out how to set up all the prerequisite plumbing; hopefully it’s clear enough how to continue from here.

You can find a starter repository containing all of the above over on GitHub. Feel free to clone it as the starting point for your own project, or just look through it to see how things fit together.

In the next article, we’ll look at how to add some unit testing to our project so that we can make sure our Components are behaving as they should. Until then, happy Reacting!

Jasmine and Jenkins Continuous Integration

I use Jasmine as my JavaScript unit/behavior testing framework of choice because it’s elegant and has a good community ecosystem around it. I recently wrote up how to get Jasmine-based autotesting set up with Guard, which is great for development time testing, but what about continuous integration?

Well, it turns out that it’s pretty difficult to get Jasmine integrated with Jenkins. This is not because of an inherent problem with either of those two, it’s just that no-one got around to writing an open source integration layer until now.

The main problem is that Jasmine tests usually expect to run in a browser, but Jenkins needs results to be exposed in .xml files. Clearly we need some bridge here to take the headless browser output and dump it into correctly formatted .xml files. Specifically, these xml files need to follow the JUnit XML file format for Jenkins to be able to process them. Enter guard-jasmine.

guard-jasmine

In my previous article on getting Jasmine and Guard set up, I was using the jasmine-headless-webkit and guard-jasmine-headless-webkit gems to provide the glue. Since then I’ve replaced those 2 gems with a single gem – guard-jasmine, written by Michael Kessler, the Guard master himself. This simplifies our dependencies a little, but doesn’t buy us the .xml file functionality we need.

For that, I had to hack on the gem itself (which involved writing coffeescript for the first time, which was not a horrible experience). The guard-jasmine gem now exposes 3 additional configurations:

  • junit – set to true to save output to xml files (false by default)
  • junit_consolidate – rolls nested describes up into their parent describe blocks (true by default)
  • junit_save_path – optional path to save the xml files to

The JUnit Xml reporter itself borrows heavily from larrymyers’ excellent jasmine-reporters project. Aside from a few changes to integrate it into guard-jasmine it’s the same code, so all credit goes to Larry and Michael.

Sample usage:

In your Guardfile:

guard :jasmine, :junit => true, :junit_save_path => 'reports' do
  watch(%r{^spec/javascripts/.+$}) { 'spec/javascripts' }
  watch(%r{^spec/javascripts/fixtures/.+$}) { 'spec/javascripts' }
  watch(%r{^app/assets/javascripts/(.+?)\.(js\.coffee|js|coffee)(?:\.\w+)*$}) { 'spec/javascripts' }
end

This will just run the full set of Jasmine tests inside your spec/javascripts directory whenever any test, source file or asset like CSS files change. This is generally the configuration I use because the tests execute so fast I can afford to have them all run every time.

In the example above we set the :junit_save_path to ‘reports’, which means it will save all of the .xml files into the reports directory. It is going to output 1 .xml file for each Jasmine spec file that is run. In each case the name of the .xml file created is based on the name of the top-level `describe` block in your spec file.
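For example, a hypothetical spec file like this one would end up as something like reports/Calculator.xml with the settings above:

//spec/javascripts/calculator_spec.js - a hypothetical example spec
describe("Calculator", function() {
  it("adds two numbers", function() {
    expect(1 + 2).toBe(3);
  });
});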

To test that everything’s working, just run `bundle exec guard` as you normally would, and check to see that your `reports` folder now contains a bunch of .xml files. If it does, everything went well.

Jenkins Settings

Once we’ve got the .xml files outputting correctly, we just need to tell Jenkins where to look. In your Jenkins project configuration screen, add a “Publish JUnit test result report” post-build action and enter ‘reports/*.xml’ in the `Test report XMLs` field.

If you’ve already got Jenkins running your test script then you’re all done. Next time a build is triggered the script should run the tests and export the .xml files. If you don’t already have Jenkins set up to run your tests, but you did already set up Guard as per my previous article, you can actually use the same command to run the tests on Jenkins.

After a little experimentation, people tend to come up with a build command like this:

bash -c ' bundle install --quiet \
&& bundle exec guard '

If you’re using rvm and need to guarantee a particular version you may need to prepend an `rvm install` command before `bundle install` is called. This should just run guard, which will dump the files out as expected for Jenkins to pick up.

To clean up, we’ll just add a second post-build action, this time choosing the “Execute a set of scripts” option and entering the following:

kill -9 `cat guard.pid`

This just kills the Guard process, which ordinarily stays running to power your autotest capabilities. Once you run a new build you should see a chart automatically appear on your Jenkins project page telling you full details of how many tests failed over time and in the current build.

Getting it

Update: The Pull Request is now merged into the main guard-jasmine repo so you can just use `gem ‘guard-jasmine’` in your Gemfile

This is hot off the presses but I wanted to write it up while it’s still fresh in my mind. At the time of writing the pull request is still outstanding on the guard-jasmine repository, so to use the new options you’ll need to temporarily use my guard-jasmine fork. In your Gemfile:

gem 'guard-jasmine'

Once the PR is merged and a new version issued you should switch back to the official release channel. It’s working well for me but it’s fresh code so may contain bugs – YMMV. Hopefully this helps save some folks a little pain!

Autotesting JavaScript with Jasmine and Guard

One of the things I really loved about Rails in the early days was that it introduced me to the concept of autotest – a script that would watch your file system for changes and then automatically execute your unit tests as soon as you change any file.

Because the unit test suite typically executes quickly, you’d tend to have your test results back within a second or two of hitting save, allowing you to remain in the editor the entire time and only break out the browser for deeper debugging – usually the command line output and OS notifications (growl at the time) would be enough to set you straight.

This was a fantastic way to work, and I wanted to get there again with JavaScript. Turns out it’s pretty easy to do this. Because I’ve used a lot of ruby I’m most comfortable using its ecosystem to achieve this, and as it happens there’s a great way to do this already.

Enter Guard

Guard is a simple ruby gem that scans your file system for changes and runs the code of your choice whenever a file you care about is saved. It has a great ecosystem around it which makes automating filesystem-based triggers both simple and powerful. Let’s start by making sure we have all the gems we need:


gem install jasmine jasmine-headless-webkit guard-jasmine-headless-webkit guard \
 guard-livereload terminal-notifier-guard --no-rdoc --no-ri

This just installs a few gems that we’re going to use for our tests. First we grab the excellent Jasmine JavaScript BDD test framework via its gem – you can use the framework of your choice, but I find Jasmine both pleasant to deal with and it generally Just Works. Next we’re going to add the ‘jasmine-headless-webkit’ gem and its guard twin, which use phantomjs to run your tests on the command line, without needing a browser window.

Next up we grab guard-livereload, which enables Guard to act as a livereload server, automatically running your full suite in the browser each time you save a file. This might sound redundant – our tests are already going to be executed in the headless webkit environment, so why bother running them in the browser too? Well, the browser Jasmine runner tends to give a lot more information when something goes wrong – stack traces and most importantly a live debugger.

Finally we add the terminal-notifier-guard gem, which just allows guard to give us a notification each time the tests finish executing. Now we’ve got our dependencies in line it’s time to set up our environment. Thankfully both jasmine and guard provide simple scripts to get started:


jasmine init

guard init

And we’re ready to go! Let’s test out our setup by running `guard`:


guard

What you should see at this point is something like this:

guard-screenshot

Terminal output after starting guard

We see guard starting up, telling us it’s going to use TerminalNotifier to give us an OS notification every time the tests finish running, and that it’s going to use JasmineHeadlessWebkit to run the tests without a browser. You’ll see that 5 tests were run in about 5ms, and you should have seen an OS notification flash up telling you the same thing. This is great for working on a laptop where you don’t have the screen real estate to keep a terminal window visible at all times.

What about those 5 tests? They’re just examples that were generated by `jasmine init`. You can find them inside the spec/javascripts directory and by default there’s just 1 – PlayerSpec.js.

Now try editing that file and hitting save – nothing happens. The reason for this is that the Guardfile generated by `guard init` isn’t quite compatible out of the box with the Jasmine folder structure. Thankfully this is trivial to fix – we just need to edit the Guardfile.

If you open up the Guardfile in your editor you’ll see it has about 30 lines of configuration. A large amount of the file is comments and optional configs, which you can delete if you like. Guard is expecting your spec files to have the format ‘my_spec.js’ – note the ‘_spec’ at the end.

To get it working the easiest way is to edit the ‘spec_location’ variable (on line 7 – just remove the ‘_spec’), and do the same to the last line of the `guard ‘jasmine-headless-webkit’ do` block. You should end up with something like this:


spec_location = "spec/javascripts/%s"

guard 'jasmine-headless-webkit' do
  watch(%r{^app/views/.*\.jst$})
  watch(%r{^public/javascripts/(.*)\.js$}) { |m| newest_js_file(spec_location % m[1]) }
  watch(%r{^app/assets/javascripts/(.*)\.(js|coffee)$}) { |m| newest_js_file(spec_location % m[1]) }
  watch(%r{^spec/javascripts/(.*)\..*}) { |m| newest_js_file(spec_location % m[1]) }
end

Once you save your Guardfile, there’s no need to restart guard, it’ll notice the change to the Guardfile and automatically restart itself. Now when you save PlayerSpec.js again you’ll see the terminal immediately run your tests and show your the notification that all is well (assuming your tests still pass!).

So what are those 4 lines inside the `guard ‘jasmine-headless-webkit’ do` block? As you’ve probably guessed they’re just the set of directories that guard should watch. Whenever any of the files matched by the patterns on those 4 lines change, guard will run its jasmine-headless-webkit command, which is what runs your tests. These are just the defaults, so if your JS files are not found inside those folders just update it to point to the right place.

Livereload

The final part of the stack that I use is livereload. Livereload consists of two things – a browser plugin (available for Chrome, Firefox and others), and a server, which have actually already set up with Guard. First you’ll need to install the livereload browser plugin, which is extremely simple.

Because the livereload server is already running inside guard, all we need to do is give our browser a place to load the tests from. Unfortunately the only way I’ve found to do this is to open up a second terminal tab and in the same directory run:


rake jasmine

This sets up a lightweight web server that runs on http://localhost:8888. If you go to that page in your browser now you should see something like this:

livereload-screenshot

livereload in the browser – the livereload plugin is immediately to the right of the address bar

Just hit the livereload button in your browser (once you’ve installed the plugin), edit your file again and you’ll see the browser automatically refreshes itself and runs your tests. This step is optional but I find it extremely useful to get a notification telling me my tests have started failing, then be able to immediately tab into the browser environment to get a full stack trace and debugging environment.

That just about wraps up getting autotest up and running. Next time you come back to your code just run `guard` and `rake jasmine` and you’ll get right back to your new autotesting setup. And if you have a way to have guard serve the browser without requiring the second tab window please share in the comments!

Building a data-driven image carousel with Sencha Touch 2

This evening I embarked on a little stellar voyage that I’d like to share with you all. Most people with great taste love astronomy and Sencha Touch 2, so why not combine them in a fun evening’s web app building?

NASA has been running a small site called APOD (Astronomy Picture Of the Day) for a long time now, as you can probably tell by the awesome web design of that page. Despite its 1998-era styling, this site incorporates some pretty stunning images of the universe and is begging for a mobile app interpretation.

We’re not going to go crazy, in fact this whole thing only took about an hour to create, but hopefully it’s a useful look at how to put something like this together. In this case, we’re just going to write a quick app that pulls down the last 20 pictures and shows them in a carousel with an optional title.

Here’s what it looks like live. You’ll need a webkit browser (Chrome or Safari) to see this; alternatively, load up http://code.edspencer.net/apod on a phone or tablet device.

The full source code for the app is up on github, and we’ll go through it bit by bit below.

The App

Our app consists of 5 files:

  • index.html, which includes our JavaScript files and a little CSS
  • app.js, which boots our application up
  • app/model/Picture.js, which represents a single APOD picture
  • app/view/Picture.js, which shows a picture on the page
  • app/store/Pictures.js, which fetches the pictures from the APOD RSS feed

The whole thing is up on github and you can see a live demo at http://code.edspencer.net/apod. To see what it’s doing tap that link on your phone or tablet, and to really feel it add it to your homescreen to get rid of that browser chrome.

The Code

Most of the action happens in app.js, which for your enjoyment is more documentation than code. Here’s the gist of it:
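Here’s a heavily simplified sketch of the idea (the component configs and names below are assumptions – the real, fully-commented file is in the repo):

//a simplified sketch of the approach app.js takes - see the github repo for the real thing
Ext.application({
    name: 'Apod',

    models: ['Picture'],
    stores: ['Pictures'],

    launch: function() {
        //full-screen carousel that will hold one image per APOD entry
        var carousel = Ext.create('Ext.Carousel', {
            fullscreen: true,
            direction: 'horizontal'
        });

        //simple overlay component used to display the current picture's title
        var info = Ext.create('Ext.Component', {
            cls: 'apod-title'
        });

        //once the Pictures store has loaded the RSS feed, add one image per record
        Ext.getStore('Pictures').on('load', function(store, records) {
            Ext.each(records, function(record) {
                carousel.add({
                    xtype: 'image',
                    src: record.get('image')
                });
            });
        });

        //the activeitemchange and tap listeners described below are also wired up here

        Ext.Viewport.add(info);
    }
});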

This is pretty simple stuff and you can probably just follow the comments to see what’s going on. Basically though the app.js is responsible for launching our application, creating the Carousel and info Components, and setting up a couple of convenient event listeners.

We also had a few other files:

Picture Model

Found in app/model/Picture.js, our model is mostly just a list of fields sent back in the RSS feed. There is one that’s somewhat more complicated than the rest though – the ‘image’ field. Ideally, the RSS feed would have sent back the url of the image in a separate field and we could just pull it out like any other, but alas it is embedded inside the main content.

To get around this, we just specify a convert function that grabs the content field, finds the first image url inside of it and pulls it out. To make sure it looks good on any device we also pass it through Sencha IO src, which resizes the image to fit the screen size of whatever device we happen to be viewing it on:
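Here’s roughly what that looks like (a sketch – the field list and regex are assumptions, so check app/model/Picture.js in the repo for the exact code):

//a rough sketch of the Picture model - field names and the regex are assumptions
Ext.define('Apod.model.Picture', {
    extend: 'Ext.data.Model',

    config: {
        fields: [
            'title',
            'link',
            'content',
            {
                name: 'image',
                //pull the first image url out of the RSS item's content...
                convert: function(value, record) {
                    var content = record.get('content'),
                        match = content && content.match(/<img[^>]+src="([^"]+)"/i);

                    //...and pass it through Sencha.io src so it fits the current screen
                    return match ? 'http://src.sencha.io/' + match[1] : null;
                }
            }
        ]
    }
});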

Pictures Store

Our Store is even simpler than our Model. All it does is load the APOD RSS feed over JSON-P (via Google’s RSS Feed API) and decode the data with a very simple JSON Reader. This automatically pulls down the images and runs them through our Model’s convert function:
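Again in sketch form (the Google Feed API url and reader root here are assumptions rather than the exact code):

//a sketch of the Pictures store - loads the APOD RSS feed over JSON-P via Google's Feed API
Ext.define('Apod.store.Pictures', {
    extend: 'Ext.data.Store',

    config: {
        model: 'Apod.model.Picture',
        autoLoad: true,

        proxy: {
            type: 'jsonp',
            url: 'https://ajax.googleapis.com/ajax/services/feed/load',
            extraParams: {
                q: 'http://apod.nasa.gov/apod.rss',
                num: 20,
                v: '1.0'
            },
            reader: {
                type: 'json',
                rootProperty: 'responseData.feed.entries'
            }
        }
    }
});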

Tying it all together

Our app.js loads our Model and Store, plus a really simple Picture view that is basically just an Ext.Img. All it does then is render the Carousel and Info Component to the screen and tie up a couple of listeners.

In case you weren’t paying attention before, the info component is just an Ext.Component that we rendered up in app.js as a place to render the title of the image you’re currently looking at. When you swipe between items in the carousel the activeitemchange event is fired, which we listen to near the top of app.js. All our activeitemchange listener does is update the HTML of the info component to the title of the image we just swiped to.

But what about the info component itself? Well at the bottom of app.js we added a tap listener on Ext.Viewport that hides or shows the info Component whenever you tap anywhere on the screen (except if you tap on the Carousel indicator icons). With a little CSS transition loveliness we get a nice fade in/out transition when we tap the screen to reveal the image title. Here’s that tap listener again:
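In sketch form, since the exact indicator selector and CSS class names here are assumptions:

//show or hide the info component on any tap that isn't on the carousel's indicator icons
Ext.Viewport.element.on('tap', function(e) {
    if (!e.getTarget('.x-carousel-indicator')) {
        if (info.element.hasCls('apod-hidden')) {
            info.element.removeCls('apod-hidden');
        } else {
            info.element.addCls('apod-hidden');
        }
    }
});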

The End of the Beginning

This was a really simple app that shows how easy it is to put these things together with Sencha Touch 2. Like with most stories though there’s more to come so keep an eye out for parts 2 and 3 of this intergalactic adventure.

Proxies in Ext JS 4

One of the classes that has a lot more prominence in Ext JS 4 is the data Proxy. Proxies are responsible for all of the loading and saving of data in an Ext JS 4 or Sencha Touch application. Whenever you’re creating, updating, deleting or loading any type of data in your app, you’re almost certainly doing it via an Ext.data.Proxy.

If you’ve seen January’s Sencha newsletter you may have read an article called Anatomy of a Model, which introduces the most commonly-used Proxies. All a Proxy really needs is four functions – create, read, update and destroy. For an AjaxProxy, each of these will result in an Ajax request being made. For a LocalStorageProxy, the functions will create, read, update or delete records from HTML5 localStorage.

Because Proxies all implement the same interface they’re completely interchangeable, so you can swap out your data source – at design time or run time – without changing any other code. Although the local Proxies like LocalStorageProxy and MemoryProxy are self-contained, the remote Proxies like AjaxProxy and ScriptTagProxy make use of Readers and Writers to encode and decode their data when communicating with the server.

Proxy-Reader-and-Writer

Whether we are reading data from a server or preparing data to be sent back, usually we format it as either JSON or XML. Both of our frameworks come with JSON and XML Readers and Writers which handle all of this for you with a very simple API.
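For instance, a Model whose Proxy spells out both its Reader and Writer might look like this (an illustrative config rather than one taken from the article):

//illustrative only: an AjaxProxy with both its Reader and Writer configured explicitly
Ext.regModel('User', {
    fields: ['id', 'name', 'email'],

    proxy: {
        type: 'ajax',
        url : '/users',
        reader: {
            type: 'json',
            root: 'users'   //responses look like {"users": [{...}, {...}]}
        },
        writer: {
            type: 'json',
            writeAllFields: false   //only send the fields that actually changed
        }
    }
});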

Using a Proxy with a Model

Proxies are usually used along with either a Model or a Store. The simplest setup is just with a model:

var User = Ext.regModel('User', {
    fields: ['id', 'name', 'email'],
    
    proxy: {
        type: 'rest',
        url : '/users',
        reader: {
            type: 'json',
            root: 'users'
        }
    }
});

Here we’ve created a User model with a RestProxy. RestProxy is a special form of AjaxProxy that can automatically figure out Restful urls for our models. The Proxy that we set up features a JsonReader to decode any server responses – check out the recent data package post on the Sencha blog to see Readers in action.

When we use the following functions on the new User model, the Proxy is called behind the scenes:

var user = new User({name: 'Ed Spencer'});

//CREATE: calls the RestProxy's create function because the user has never been saved
user.save();

//UPDATE: calls the RestProxy's update function because the user has been saved before
user.set('email', 'ed@sencha.com');
user.save();

//DESTROY: calls the RestProxy's destroy function
user.destroy();

//READ: calls the RestProxy's read function
User.load(123, {
    success: function(user) {
        console.log(user);
    }
});

We were able to perform all four CRUD operations just by specifying a Proxy for our Model. Notice that the first 3 calls are instance methods whereas the fourth (User.load) is static on the User model. Note also that you can create a Model without a Proxy, you just won’t be able to persist it.

Usage with Stores

In Ext JS 3.x, most of the data manipulation was done via Stores. A chief purpose of a Store is to be a local subset of some data plus delta. For example, you might have 1000 products in your database and have 25 of them loaded into a Store on the client side (the local subset). While operating on that subset, your user may have added, updated or deleted some of the Products. Until these changes are synchronized with the server they are known as a delta.

In order to read data from and sync to the server, Stores also need to be able to call those CRUD operations. We can give a Store a Proxy in the same way:

var store = new Ext.data.Store({
    model: 'User',
    proxy: {
        type: 'rest',
        url : '/users',
        reader: {
            type: 'json',
            root: 'users'
        }
    }
});

We created the exact same Proxy for the Store because that’s how our server side is set up to deliver data. Because we’ll usually want to use the same Proxy mechanism for all User manipulations, it’s usually best to just define the Proxy once on the Model and then simply tell the Store which Model to use. This automatically picks up the User model’s Proxy:

//no need to define proxy - this will reuse the User's Proxy
var store = new Ext.data.Store({
    model: 'User'
});

Store invokes the CRUD operations via its load and sync functions. Calling load uses the Proxy’s read operation, while sync utilizes one or more of create, update and destroy depending on the current Store delta.

//CREATE: calls the RestProxy's create function to create the Tommy record on the server
store.add({name: 'Tommy Maintz'});
store.sync();

//UPDATE: calls the RestProxy's update function to update the Tommy record on the server
store.getAt(1).set('email', 'tommy@sencha.com');
store.sync();

//DESTROY: calls the RestProxy's destroy function
store.remove(store.getAt(1));
store.sync();

//READ: calls the RestProxy's read function
store.load();

Store has used the exact same CRUD operations on the shared Proxy. In all of the examples above we have used the exact same RestProxy instance from three different places: statically on our Model (User.load), as a Model instance method (user.save, user.destroy) and via a Store instance (store.load, store.sync):

Proxy-reuse

Of course, most Proxies have their own private methods to do the actual work, but all a Proxy needs to do is implement those four functions to be usable with Ext JS 4 and Sencha Touch. This means it’s easy to create new Proxies, as James Pearce did in a recent Sencha Touch example where he needed to read address book data from a mobile phone. Everything he does to set up his Proxy in the article (about 1/3rd of the way down) works the same way for Ext JS 4 too.

Ext JS 4: The Class Definition Pipeline

Last time, we looked at some of the features of the new class system in Ext JS 4, and explored some of the code that makes it work. Today we’re going to dig a little deeper and look at the class definition pipeline – the framework responsible for creating every class in Ext JS 4.

As I mentioned last time, every class in Ext JS 4 is an instance of Ext.Class. When an Ext.Class is constructed, it hands itself off to a pipeline populated by small, focused processors, each of which handles one part of the class definition process. We ship a number of these processors out of the box – there are processors for handling mixins, setting up configuration functions and handling class extension.

The pipeline is probably best explained with a picture. Think of your class starting its definition journey at the bottom left, working its way up the preprocessors on the left hand side and then down the postprocessors on the right, until finally it reaches the end, where it signals its readiness to a callback function:

Processors

The distinction between preprocessors and postprocessors is that a class is considered ‘ready’ (e.g. can be instantiated) after the preprocessors have all been executed. Postprocessors typically perform functions like aliasing the class name to an xtype or back to a legacy class name – things that don’t affect the class’ behavior.

Each processor runs asynchronously, calling back to the Ext.Class constructor when it is ready – this is what enables us to extend classes that don’t exist on the page yet. The first preprocessor is the Loader, which checks to see if all of the new Class’ dependencies are available. If they are not, the Loader can dynamically load those dependencies before calling back to Ext.Class and allowing the next preprocessor to run. We’ll take another look at the Loader in another post.

After running the Loader, the new Class is set up to inherit from the declared superclass by the Extend preprocessor. The Mixins preprocessor takes care of copying all of the functions from each of our mixins, and the Config preprocessor handles the creation of the 4 config functions we saw last time (e.g. getTitle, setTitle, resetTitle, applyTitle – check out yesterday’s post to see how the Configs processor helps out).
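As a quick refresher on what that gives us (an illustrative example, not taken from that post):

//illustrative only: declaring a 'title' config generates getTitle, setTitle,
//resetTitle and applyTitle functions on the new class
Ext.define('MyPanel', {
    config: {
        title: 'Default Title'
    },

    constructor: function(config) {
        this.initConfig(config);
    },

    //applyTitle is called automatically whenever setTitle is invoked
    applyTitle: function(title) {
        return Ext.String.capitalize(title);
    }
});

var panel = Ext.create('MyPanel', {title: 'custom title'});

panel.getTitle();            //"Custom title"
panel.setTitle('another title');
panel.getTitle();            //"Another title"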

Finally, the Statics preprocessor looks for any static functions that we set up on our new class and makes them available statically on the class. The processors that are run are completely customizable, and it’s easy to add custom processors at any point. Let’s take a look at that Statics preprocessor as an example:

//Each processor is passed three arguments - the class under construction,
//the configuration for that class and a callback function to call when the processor has finished
Ext.Class.registerPreprocessor('statics', function(cls, data, callback) {
    if (Ext.isObject(data.statics)) {
        var statics = data.statics,
            name;
        
        //here we just copy each static function onto the new Class
        for (name in statics) {
            if (statics.hasOwnProperty(name)) {
                cls[name] = statics[name];
            }
        }
    }

    delete data.statics;

    //Once the processor's work is done, we just call the callback function to kick off the next processor
    if (callback) {
        callback.call(this, cls, data);
    }
});

//Changing the order that the preprocessors are called in is easy too - this is the default
Ext.Class.setDefaultPreprocessors(['extend', 'mixins', 'config', 'statics']);

What happens above is pretty straightforward. We’re registering a preprocessor called ‘statics’ with Ext.Class. The function we provide is called whenever the ‘statics’ preprocessor is invoked, and is passed the new Ext.Class instance, the configuration for that class, and a callback to call when the preprocessor has finished its work.

The actual work that this preprocessor does is trivial – it just looks to see if we declared a ‘statics’ property in our class configuration and if so copies it onto the new class. For example, let’s say we want to create a static getNextId function on a class:

Ext.define('MyClass', {
    statics: {
        idSeed: 1000,
        getNextId: function() {
            return this.idSeed++;
        }
    }
});

Because of the Statics preprocessor, we can now call the function statically on the Class (e.g. without creating an instance of MyClass):

MyClass.getNextId(); //1000
MyClass.getNextId(); //1001
MyClass.getNextId(); //1002
... etc

Finally, let’s come back to that callback at the bottom of the picture above. If we supply one, a callback function is run after all of the processors have run. At this point the new class is completely ready for use in your application. Here we create an instance of MyClass using the callback function, guaranteeing that the dependency on Ext.Window has been honored:

Ext.define('MyClass', {
    extend: 'Ext.Window'
}, function() {
   //this callback is called when MyClass is ready for use
   var cls = new MyClass();
   cls.setTitle('Everything is ready');
   cls.show();
});

That’s it for today. Next time we’ll look at some of the new features in the part of Ext JS 4 that is closest to my heart – the data package.

Sencha Touch tech talk at Pivotal Labs

I recently gave an introduction to Sencha Touch talk up at Pivotal Labs in San Francisco. The guys at Pivotal were kind enough to record this short talk and share it with the world – it’s under 30 minutes and serves as a nice, short introduction to Sencha Touch:

Ed Spencer Sencha Touch tech talk

The slides are available on slideshare and include the code snippets I presented. The Dribbble example used in the talk is very similar to the Kiva example that ships with the Sencha Touch SDK, so I recommend checking that out if you want to dive in further.

Using the Ext JS PivotGrid

One of the new components we just unveiled for the Ext JS 3.3 beta is PivotGrid. PivotGrid is a powerful new component that reduces and aggregates large datasets into a more understandable form.

A classic example of PivotGrid’s usefulness is in analyzing sales data. Companies often keep a database containing all the sales they have made and want to glean some insight into how well they are performing. PivotGrid gives the ability to rapidly summarize this large and unwieldy dataset – for example showing sales count broken down by city and salesperson.

A simple example

We created an example of this scenario in the 3.3 beta release. Here we have a fictional dataset containing 300 rows of sales data (see the raw data). We asked PivotGrid to break the data down by Salesperson and Product, showing us how they performed over time. Each cell contains the sum of sales made by the given salesperson/product combination in the given city and year.

Let’s see how we create this PivotGrid:

var SaleRecord = Ext.data.Record.create([
    {name: 'person',   type: 'string'},
    {name: 'product',  type: 'string'},
    {name: 'city',     type: 'string'},
    {name: 'state',    type: 'string'},
    {name: 'month',    type: 'int'},
    {name: 'quarter',  type: 'int'},
    {name: 'year',     type: 'int'},
    {name: 'quantity', type: 'int'},
    {name: 'value',    type: 'int'}
]);

var myStore = new Ext.data.Store({
    url: 'salesdata.json',
    autoLoad: true,
    reader: new Ext.data.JsonReader({
        root: 'rows',
        idProperty: 'id'
    }, SaleRecord)
});

var pivotGrid = new Ext.grid.PivotGrid({
    title     : 'Sales Performance',
    store     : myStore,
    aggregator: 'sum',
    measure   : 'value',
    
    leftAxis: [
        {dataIndex: 'person',  width: 80},
        {dataIndex: 'product', width: 90}
    ],
    
    topAxis: [
        {dataIndex: 'year'},
        {dataIndex: 'city'}
    ]
});

The first half of this ought to be very familiar – we just set up a normal Record and Store. This is all we need to load our sample data so that it’s ready for pivoting. This is all exactly the same code as for our other Store-bound components like Grid and DataView so it’s easy to take an existing Grid and turn it into a PivotGrid.

The second half of the code creates the PivotGrid itself. There are 5 main components to a PivotGrid – the store, the measure, the aggregator, the left axis and the top axis. Taking these in turn:

  • Store – the Store we created above
  • Measure – the field in the data that we want to aggregate (in this case the sale value)
  • Aggregator – the function we use to combine data into the cells. See the docs for full details
  • Left Axis – the fields to break data down by on the left axis
  • Top Axis – the fields to break data down by on the top axis

The measure and the items in the axes must all be fields from the Store. The aggregator function can usually be passed in as a string – there are 5 aggregator functions built in: sum, count, min, max and avg.

Renderers

This is all we need to create a simple PivotGrid; now it’s time to look at a few more advanced options. Let’s start with renderers. Once the data for each cell has been calculated, the value is passed to an optional renderer function, which takes each value in turn and returns the value to display. One of the PivotGrid examples shows average heights in feet and inches, but the calculated average is a decimal number of feet. Here’s the renderer we use in that example:

new Ext.grid.PivotGrid({
    store     : myStore,
    aggregator: 'avg',
    measure   : 'height',
    
    //turns a decimal number of feet into feet and inches
    renderer  : function(value) {
        var feet   = Math.floor(value),
            inches = Math.round((value - feet) * 12);
            
        return String.format("{0}' {1}\"", feet, inches);
    },
    //the rest of the config
});
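A renderer doesn’t have to do anything this involved. In the sales example, for instance, you could format each aggregated value as currency by passing in one of the standard formatting functions (a minimal sketch, assuming the usual Ext.util.Format helpers):

//format each aggregated sale value as US currency
new Ext.grid.PivotGrid({
    store     : myStore,
    aggregator: 'sum',
    measure   : 'value',
    renderer  : Ext.util.Format.usMoney,
    //the rest of the config
});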

Customising cell appearance

Another one of the PivotGrid examples uses a custom cell style. As with the renderer, each cell has the opportunity to alter itself with a custom function – here’s the one we use in the countries example:

new Ext.grid.PivotGrid({
    store     : myStore,
    aggregator: 'avg',
    measure   : 'height',
    
    viewConfig: {
        getCellCls: function(value) {
            if (value < 20) {
                return 'expense-low';
            } else if (value < 75) {
                return 'expense-medium';
            } else {
                return 'expense-high';
            }
        }
    },
    //the rest of the config
});

Reconfiguring at runtime

Much of PivotGrid’s power comes from the fact that the users of your application can use it to summarize datasets any way they want. This is made possible by PivotGrid’s ability to reconfigure itself at runtime, and we present one final example demonstrating it. Here’s how we perform the reconfiguration:

//the left axis can also be changed
pivot.topAxis.setDimensions([
    {dataIndex: 'city', direction: 'DESC'},
    {dataIndex: 'year', direction: 'ASC'}
]);

pivot.setMeasure('value');
pivot.setAggregator('avg');

pivot.view.refresh(true);

It’s easy to change the axes, dimensions, aggregator and measure at any time and then refresh the data. The calculations are all performed client side, so there is no need for another round trip to the server when reconfiguring. The example linked above provides one possible interface for updating a PivotGrid, though anything that can make the API calls above could be used.
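As the comment in the reconfiguration snippet notes, the left axis can be changed in exactly the same way. Here’s a sketch, assuming leftAxis exposes the same setDimensions method as topAxis:

//swap the order of the dimensions on the left axis
pivot.leftAxis.setDimensions([
    {dataIndex: 'product', width: 90},
    {dataIndex: 'person',  width: 80}
]);

pivot.view.refresh(true);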

I hope you enjoy the new components in the Ext JS 3.3 beta, and I look forward to your comments and suggestions. Although we’re only at the beta stage, I think the additions are already quite robust, so feel free to stress-test them.

Offline Apps with HTML5: A case study in Solitaire

One of my contributions to the newly-launched Sencha Touch mobile framework is the Touch Solitaire game. This is not the first time I have ventured into the dizzying excitement of Solitaire game development; you may remember the wonderful Ext JS Solitaire from 18 months ago. I’m sure you’ll agree that the new version is a small improvement.

Solitaire is a nice example of a fun application that can be written with Sencha Touch. It makes use of the provided Draggables and Droppables, CSS-based animations, the layout manager and the brand new data package. The great thing about a game like this, though, is that it can run entirely offline. That is obviously simple for a native application, but what about a web app? Our goal is not just to have the game run offline, but to save your game state locally too.

The answer comes in two parts:

Web Storage and the Sencha data package

HTML5 provides a brand new API called Web Storage for storing data locally. You can read all about it in my Web Storage post on Sencha’s blog, but the summary is that you can store string data locally in the browser and retrieve it later, even if the browser or the user’s computer has been restarted in the meantime.

The crucial part of the sentence above is that we can only store string data. In the case of a game of Solitaire we need to store data on the elapsed time and number of moves as well as the location and status of each card. This doesn’t sound like the kind of data we want to manually encode into a string, so thankfully the data package comes to the rescue.
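Just to illustrate the pain it saves us, here is roughly what storing even a small slice of game state with the raw Web Storage API alone might look like (a hand-rolled sketch; the state shape is invented and the real game doesn’t do this):

//manually serializing game state into a single string quickly gets unwieldy
var state = {
    moves  : 12,
    elapsed: 340,
    cards  : [{suit: 'hearts', value: 7, stack: 3}]
};

localStorage.setItem('solitaire-game', JSON.stringify(state));

//...and parsing it back out again later
var restored = JSON.parse(localStorage.getItem('solitaire-game'));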

The Sencha Touch data package is a complete rewrite of the package that has been so successful in powering Ext JS 3.x. It shares many of the same philosophies and builds on the lessons we have learned from developing Ext JS 3.x over the past year. One of the new capabilities it offers is a Local Storage proxy, which automatically marshals your model data into local storage and transparently restores it when you need it.

Using the new proxy is simple – all we need to do is set up a new Store, specifying the Proxy and the Model that will be saved to it. Models are the spiritual successor to Ext JS 3.x’s Records. Now whenever we add, remove or update model instances in the store they are automatically saved to localStorage for us. Loading the store again is equally easy:

//set the store up
var gameStore = new Ext.data.Store({
    proxy: new Ext.data.LocalStorageProxy({
        id: 'solitaire-games'
    }),
    model: 'Game'
});

//saves all outstanding modifications, deletions or creations to localStorage
gameStore.sync();

//load our saved games
gameStore.read({
    scope: this,
    callback: function(records) {
        //code to load the first record
    }
});
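For completeness, the 'Game' model referenced by the store needs to be registered with the data package first. Here’s an illustrative sketch (the field names are made up; the real game tracks rather more state than this):

//an illustrative Game model; field names are invented for this sketch
Ext.regModel('Game', {
    fields: [
        {name: 'moves',   type: 'int'},
        {name: 'elapsed', type: 'int'},
        {name: 'cards',   type: 'string'}
    ]
});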

And just like that we can save and restore games with Web Storage. We can visit our app’s webpage, start a game, then come back later and find it automatically restored. But we still can’t play offline; for that we need the application cache.

The HTML5 Application Cache Manifest

The application cache is one of the best features of HTML5. It provides a simple (though sometimes frustrating) way of telling the browser about all of the files your application relies on so that it can download them all ready for offline use. All you have to do is create what’s known as a manifest file which lists all of the files the application needs – the Solitaire manifest looks like this:

CACHE MANIFEST
#rev49

resources/icon.png
resources/loading.png

resources/themes/wood/board.jpg
resources/themes/wood/cards.png

resources/css/ext-touch.css
resources/solitaire-notheme.css
resources/themes/wood/wood.css
resources/themes/metal/metal.css

ext-touch-debug.js
solitaire-all-debug.js

We tell the browser about the manifest file by pointing to it in the html tag’s manifest attribute. When the browser finds this file it downloads each of the listed assets so that they are ready for offline consumption. Note that it does not automatically include them on the page; you still need to do that yourself via the usual link and script tags. Here’s a snippet of the Solitaire index.html file:

<!doctype html>
<html manifest="solitaire.manifest">
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">	
        <title>Solitaire</title>

        <link rel="stylesheet" href="resources/css/ext-touch.css" type="text/css">
        <link rel="stylesheet" href="resources/solitaire-notheme.css" type="text/css">
        <link rel="stylesheet" href="resources/themes/wood/wood.css" type="text/css">

        <script type="text/javascript" src="ext-touch-debug.js"></script>
        <script type="text/javascript" src="solitaire-all-debug.js"></script>

Note the manifest file definition in the html element at the top, and the fact that we still include our page resources the normal way. It sounds easy, but without a little setup first it can be a very frustrating experience. Usually your browser will try to cache as many files as possible, including the manifest file itself – we don’t want this. As soon as your browser has a long-term cache of the manifest file it is extremely difficult to update your application – all of the files are already offline and won’t be updated, and the browser won’t even ask the server for an updated manifest file.

Preventing this behaviour turns out to be fairly easy, and the solution in its simplest form comes in the shape of a .htaccess file with contents like the following:

<Files solitaire.manifest> 
    ExpiresActive On 
    ExpiresDefault "access" 
</Files>

This directs Apache to tell the browser not to cache the manifest file at all, instead requesting the file from the server on every page load. Note that if the device is currently offline it will use the last manifest file it received.

This is half the battle won, but let’s say you change one of your application files and reload – you’ll find nothing has happened. This is because when your browser asked the server for the manifest file, it actually asked whether the file had changed or not. As the manifest itself wasn’t updated, the server responds with a 304 (Not Modified) and your browser keeps the old files.

To make the browser pick up on the change to the application file you need to update the manifest file itself. This is where the mysterious “#rev49” comes in on the manifest example file above. This is a suggestion from the excellent diveintohtml5 article on the subject – whenever you change any application files just bump up the revision number in the manifest file and your browser will know to download the updated files.

One final detail is that your Apache server probably isn’t set up to serve manifest files with the correct MIME type, so be sure to add the following line to your Apache config and restart the server:

AddType text/cache-manifest .manifest  

Wrapping it up

Offline access is a big deal for mobile apps, and Sencha Touch makes offline apps much easier to write. The benefit is not so much that the apps can run without an internet connection (many modern touch devices have a near-permanent connection to the internet already), but that web apps can now be treated as first-class citizens alongside native apps.

The fact that many devices allow your users to save your app to their home screen and load it as though it were native is an important step – you keep all of the advantages of web app deployment while gaining some of the benefits of native apps. As more and more native hardware APIs become available to web apps their importance will only grow.

If you want to check out Solitaire’s offline support for yourself visit the application’s site and save it to your iPad’s home page. Try turning on airplane mode and loading the app and see how it behaves as though it were native. If you don’t have an iPad, you can load the app in up-to-date versions of Chrome or Safari and get a similar experience.
