Thoughts on communication

The purpose of directed communication is to cause some change in understanding in some audience. All communication results in some change in understanding, though it may not be the change intended.

By definition, successful communication of this kind requires clear identification of the change in understanding desired. So long as the goal state is understood, success then hinges upon simplicity and clarity. Simplicity of the message, and clarity of the delivery.

Communication

The onus to compose a simple message and deliver it with clarity rests solely with the sender. Each delivery of a message may involve multiple recipients, all of whom will understand it differently. Not all of them will matter.


I leave you with a question. What message do you believe I am trying to communicate by writing this post, and what actual change in understanding happened to you as a result? What is the connection between those two things?

Jasmine and Jenkins Continuous Integration

I use Jasmine as my JavaScript unit/behavior testing framework of choice because it’s elegant and has a good community ecosystem around it. I recently wrote up how to get Jasmine-based autotesting set up with Guard, which is great for development time testing, but what about continuous integration?

Well, it turns out that it’s pretty difficult to get Jasmine integrated with Jenkins. This is not because of an inherent problem with either of those two, it’s just that no-one got around to writing an open source integration layer until now.

The main problem is that Jasmine tests usually expect to run in a browser, but Jenkins needs results to be exposed in .xml files. Clearly we need some bridge here to take the headless browser output and dump it into correctly formatted .xml files. Specifically, these xml files need to follow the JUnit XML file format for Jenkins to be able to process them. Enter guard-jasmine.

guard-jasmine

In my previous article on getting Jasmine and Guard set up, I was using the jasmine-headless-webkit and guard-jasmine-headless-webkit gems to provide the glue. Since then I’ve replaced those 2 gems with a single gem – guard-jasmine, written by Michael Kessler, the Guard master himself. This simplifies our dependencies a little, but doesn’t buy us the .xml file functionality we need.

For that, I had to hack on the gem itself (which involved writing CoffeeScript for the first time – not a horrible experience). The guard-jasmine gem now exposes 3 additional configurations:

  • junit – set to true to save output to xml files (false by default)
  • junit_consolidate – rolls nested describes up into their parent describe blocks (true by default)
  • junit_save_path – optional path to save the xml files to

The JUnit XML reporter itself borrows heavily from larrymyers' excellent jasmine-reporters project. Aside from a few changes to integrate it into guard-jasmine it's the same code, so all credit goes to Larry and Michael.

Sample usage:

In your Guardfile:

guard :jasmine, :junit => true, :junit_save_path => 'reports' do
  watch(%r{^spec/javascripts/.+$}) { 'spec/javascripts' }
  watch(%r{^spec/javascripts/fixtures/.+$}) { 'spec/javascripts' }
  watch(%r{^app/assets/javascripts/(.+?)\.(js\.coffee|js|coffee)(?:\.\w+)*$}) { 'spec/javascripts' }
end

This will just run the full set of Jasmine tests inside your spec/javascripts directory whenever any test, source file or asset (such as a CSS file) changes. This is generally the configuration I use because the tests execute so fast I can afford to have them all run every time.

In the example above we set the :junit_save_path to ‘reports’, which means it will save all of the .xml files into the reports directory. It is going to output 1 .xml file for each Jasmine spec file that is run. In each case the name of the .xml file created is based on the name of the top-level `describe` block in your spec file.
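
For example, a spec file like this hypothetical one (not part of the gem itself) would produce a report named after its top-level describe block – something like reports/Calculator.xml:

describe('Calculator', function() {
    var calculator;

    beforeEach(function() {
        calculator = { add: function(a, b) { return a + b; } };
    });

    describe('addition', function() {
        it('adds two numbers', function() {
            expect(calculator.add(2, 2)).toBe(4);
        });
    });
});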

To test that everything’s working, just run `bundle exec guard` as you normally would, and check to see that your `reports` folder now contains a bunch of .xml files. If it does, everything went well.

Jenkins Settings

Once we’ve got the .xml files outputting correctly, we just need to tell Jenkins where to look. In your Jenkins project configuration screen, add a “Publish JUnit test result report” post-build action and enter 'reports/*.xml' in the `Test report XMLs` field.

If you’ve already got Jenkins running your test script then you’re all done. Next time a build is triggered the script should run the tests and export the .xml files. If you don’t already have Jenkins set up to run your tests, but you did already set up Guard as per my previous article, you can actually use the same command to run the tests on Jenkins.

After a little experimentation, people tend to come up with a build command like this:

bash -c ' bundle install --quiet \
&& bundle exec guard '

If you’re using rvm and need to guarantee a particular version you may need to prepend an `rvm install` command before `bundle install` is called. This should just run guard, which will dump the files out as expected for Jenkins to pick up.

To clean up, we’ll just add a second post-build action, this time choosing the “Execute a set of scripts” option and entering the following:

kill -9 `cat guard.pid`

This just kills the Guard process, which ordinarily stays running to power your autotest capabilities. Once you run a new build you should see a chart automatically appear on your Jenkins project page telling you full details of how many tests failed over time and in the current build.

Getting it

Update: The Pull Request is now merged into the main guard-jasmine repo so you can just use `gem ‘guard-jasmine’` in your Gemfile

This is hot off the presses but I wanted to write it up while it’s still fresh in my mind. At the time of writing the pull request is still outstanding on the guard-jasmine repository, so to use the new options you’ll need to temporarily use my guard-jasmine fork. In your Gemfile:

gem 'guard-jasmine'

Once the PR is merged and a new version issued you should switch back to the official release channel. It’s working well for me but it’s fresh code so may contain bugs – YMMV. Hopefully this helps save some folks a little pain!

Sencha Con 2013 Wrapup

So another great Sencha Con is over, and I’m left to reflect on everything that went on over the last few days. This time was easily the biggest and best Sencha Con that I’ve been to, with 800 people in attendance and a very high bar set by the speakers. The organization was excellent, the location fun (even if the bars don’t open until 5pm…), and the enthusiasm palpable.

I’ve made a few posts over the last few days so won’t repeat the content here – if you want to see what else happened check these out too:

What I will do though is repeat my invitation to take a look at what we’re doing with JavaScript at C3 Energy. I wrote up a quick post about it yesterday and would love to hear from you – whether you’re at Sencha Con or not.

Now on to some general thoughts.

Content

There was a large range in the technical difficulty of the content, with perhaps a slightly stronger skew up the difficulty chain compared to previous events. This is a good thing, though there’s probably still room for more advanced content. Having been there before though, I know how hard it is to pitch that right so that everyone enjoys it and gets value out of it.

The biggest challenge for me was the sheer number of tracks – at any one time there would be seven talks happening simultaneously, two or three of which I’d really want to watch. Personally I’d really love it if the hackathon was dropped in favor of a third day of sessions, with a shift down to 4-5 tracks. I’m sure there’s a cost implication to that, but it’s worth thinking about.

Videos

There were cameras set up in at least the main hall on the first day, but I didn’t see any on day 2. I did overhear that the video streams were being recorded directly from what was being shown on the projectors, with the audio recorded separately. If that’s true I’d guess it would make editing a bit easier, so maybe that’ll mean a quick release.

Naturally, take this with a pinch of salt until the official announcement comes out. In the meantime, there’s at least one video available so far:

Grgur lets off some steam


Fun Things

The community pavilion was a great idea, and served as the perfect space for attendees to hang out away from the other rascals running around the hotel. Coffee and snacks were available whenever I needed them, and there was plenty of seating to chill out in.

I missed out on the visit to the theme park, which I hear was by far the most fun part of the event. Having a theme park kick out everyone but Sencha Con attendees while serving copious amounts of alcohol seemed to go down very well with the attendees!

Unconference

I had been hoping to give a presentation on the new C3UI framework at the unconference, but unfortunately there were no projectors available at that part of the event. My outrageous presentation style tends to require a projector and a stage to stomp around on so that was a no-go for me.

Maybe next time a lightning talk track alongside the unconference would be a good addition. So long as there is a projector :)

All in all, what a fantastic event. Can’t wait for next year.

Sencha Con Attendees: I Need You

Love working with Sencha frameworks? Want to come work with me on the next generation? I moved on to C3 Energy about a year ago, where we are busily building the operating system for the largest machine ever conceived by humans – the Smart Grid.

The Smart Grid is an amazing concept that’s being rolled out right now. C3 Energy is the only company in existence that addresses the full stack of Smart Grid architecture – from generation through transmission and end-user consumption.

But what’s that got to do with JavaScript? Well, my team gets to work on building the UI that powers everything that happens on the smart grid. We have some unique requirements that have led us to write our own beautiful little framework, optimized for end-user performance and developer productivity. Naturally, this leaves me feeling like this:

Success Kid

We’re a small (70-person) company of exceptionally talented people, with a staggeringly successful group on both the board and the executive team.

We’d like to attract more people like us, and the Sencha community is the perfect place to look – especially given how much the framework has been inspired by what I helped create at Sencha.

If you’re intrigued but don’t know much about this space, I can’t recommend this video enough. This is a presentation our CEO Tom Siebel gave a few months back, introducing why the company exists, which problems it’s solving, and why we’re doing what we’re doing. If you can watch this without getting excited, this probably isn’t for you :)


You’ll get to work alongside people like this every day at C3. It’s really an incomparable feeling, and I’d love to introduce you to it. If you’re interested in finding out more in a low pressure way, drop me a comment or a tweet (@edspencer) or come grab me so I can buy you a beer.

Sencha Con 2013: Ext JS Performance tips

Just as with Jacky’s session, I didn’t plan on making a separate post about this, but again the content was so good and I ended up taking so many notes that it also warrants its own space. To save myself from early carpal tunnel syndrome I’m going to leave this one in more of a bullet point format.

Nige

Ext JS has been getting more flexible with each release. You can do many more things with it these days than you used to be able to, but there has been a performance cost associated with that. In many cases this performance degradation is down to the way the framework is being used, as opposed to a fundamental problem with the framework itself.

There’s a whole bunch of things that you can do to dramatically speed up the performance of an app you’re not happy with, and Nige “Animal” White took us through them this morning. Here’s what I was able to write down in time:

Slow things

Nige identified three of the top causes of sluggish apps, which we’ll go through one by one:

  • Network latency
  • JS execution
  • Layout activity

Network latency:

  • Bad UX – users are left staring at a blank screen for a while
  • Use Sencha Command to build the app – single file, minimized
  • 4810ms vs 352ms = dynamic loading vs built

JavaScript execution:

  • Avoid slow JS engines (he says with a wry smile)
  • Optimize repeated code – for loops should be tight, cache variables outside (see the sketch after this list)
  • Ideally, don’t do any processing at render time
  • Minimize function calls
  • Lazily instantiate items
  • Use the PageAnalyzer (in the Ext JS SDK examples folder) to benchmark your applications
  • Start Chrome with --enable-benchmarking to get much more accurate timing information out of the browser
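
As a hedged illustration of the "optimize repeated code" point – items, process and grid are made-up names here – caching lookups outside the loop avoids re-evaluating them on every iteration:

// slower: items.length and the store lookup are re-evaluated on every iteration
for (var i = 0; i < items.length; i++) {
    process(items[i], grid.getStore().getCount());
}

// tighter: cache the values outside the loop
var count = grid.getStore().getCount(),
    len   = items.length,
    j;

for (j = 0; j < len; j++) {
    process(items[j], count);
}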

Layouts

Suspend store events when adding/removing many records. Otherwise we’re going to get a full Ext JS layout pass for each modification

 grid.store.suspendEvents();
 //do lots of updating
 grid.store.resumeEvents();
 grid.view.refresh();

Ditto on trees (they’re the same as grids)
Coalesce multiple layouts. If you’re adding/removing a bunch of Components in a single go, do it like this:

 Ext.suspendLayouts();
 //do a bunch of UI updates
 Ext.resumeLayouts(true);

  • Container#add accepts an array of items, which is faster than iterating over that array yourself and calling .add for each one (see the sketch below)
  • Avoid layout constraints where possible – in box layouts, align: 'stretchmax' is slow because it has to do multiple layout runs
  • Avoid minHeight, maxHeight, minWidth and maxWidth if possible
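
A quick sketch of the Container#add point – the panel and its child configs here are invented for illustration:

// one add call with an array of child configs
panel.add([
    { xtype: 'button', text: 'One' },
    { xtype: 'button', text: 'Two' },
    { xtype: 'button', text: 'Three' }
]);

// rather than one call (and potentially one layout run) per child:
// panel.add({ xtype: 'button', text: 'One' });
// panel.add({ xtype: 'button', text: 'Two' });
// panel.add({ xtype: 'button', text: 'Three' });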

At startup:

  • Embed initialization data inside the HTML if possible – avoids AJAX requests
  • Configure the entire layout in one shot using that data
  • Do not make multiple Ajax requests, and build the layout in response
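
As a rough sketch of that startup pattern (assuming the server has embedded a window.initialData object into the page when rendering the HTML – the variable and field names are invented):

Ext.onReady(function () {
    // the data was written into the page at render time, so no Ajax round trip
    var store = Ext.create('Ext.data.Store', {
        fields: ['name', 'value'],
        data: window.initialData.results
    });

    // configure the entire layout in one shot using that data
    Ext.create('Ext.container.Viewport', {
        layout: 'fit',
        items: [{
            xtype: 'grid',
            title: 'Results',
            store: store,
            columns: [
                { text: 'Name',  dataIndex: 'name',  flex: 1 },
                { text: 'Value', dataIndex: 'value', width: 100 }
            ]
        }]
    });
});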

Use the ‘idle’ event

  • Similar to the AnimationQueue
  • Ext.globalEvents.on('idle', myFunction) – called once a big layout/repaint run has finished
  • Using the idle listener is sometimes preferable to setTimeout(myFunction, 1), because it’s synchronous in the same repaint cycle. The setTimeout approach means the repaint happens, then your code is called. If your code itself requires a repaint, that means you’ll have 2 repaints with setTimeout vs 1 with on('idle')
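
A sketch of the difference (selectFirstRow and grid are hypothetical):

function selectFirstRow() {
    grid.getSelectionModel().select(0); // follow-up work that needs the layout to be done
}

// fires once the current layout/repaint run has finished, in the same repaint cycle
Ext.globalEvents.on('idle', selectFirstRow, null, { single: true });

// the setTimeout alternative lets the browser repaint first, then runs your code,
// potentially costing a second repaint:
// setTimeout(selectFirstRow, 1);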

Reduce layout depth

Big problem – overnesting. People very often do this with grids:

{
    xtype: 'tabpanel',
    items: [
        {
            title: 'Results',
            items: {
                xtype: 'grid'
            }
        }
    ]
}

Better:

{
    xtype: 'tabpanel',
    items: {
        title: 'Results',
        xtype: 'grid'
    }
}

This is important because redundant components still cost CPU and memory. Everything is a Component now – panel headers, icons, etc – so you can end up constructing more Components than you realize. It’s much more flexible, but easy to abuse.

Lazy Instantiation

New plugin at https://gist.github.com/ExtAnimal/c93148f5194f2a232464

{
    xtype: 'tabpanel',
    ptype: 'lazyitems',
    items: {
        title: 'Results',
        xtype: 'grid'
    }
}

Overall impact

On a real-life, large example contributed by a Sencha customer:

Bad practices: 5187ms (IE8)
Good practices: 1813ms (IE8)
1300ms vs 550ms on Chrome (same example)

Colossal impact on the Ext.suspendLayouts example – 4700ms vs 100ms on Chrome

Summary

This is definitely a talk you’ll want to watch when they go online. It was absolutely brimming with content and the advice comes straight from the horse’s mouth. Nige did a great job presenting, and reminded us that performance is a shared responsibility – the framework is getting faster as time goes by, but we the developers need to do our share too to make sure it stays fast.

Sencha Con 2013: Fastbook

I didn’t plan on writing a post purely on Fastbook, but Jacky’s presentation just now was so good I felt it needed one. If you haven’t seen Fastbook yet, it is Sencha’s answer to the (over-reported) comments by Zuckerberg that using HTML5 for Facebook’s mobile app was a mistake.

Jacky on stage

After those comments there was a lot of debate around whether HTML5 is ready for the big time. Plenty of opinions were thrown around, but not all based on evidence. Jacky was curious about why Facebook’s old app was so slow, and wondered if he could use the same technologies to achieve a much better result. To say he was successful would be a spectacular understatement – Fastbook absolutely flies.

Performance can be hard to describe in words, so Sencha released this video that demonstrates the HTML5 Fastbook app against the new native Facebook apps. As you can see, not only is the HTML5 version at least as fast and fluid as the native versions, in several cases it’s actually significantly better (especially on Android).

Challenges

The biggest challenge here is dynamically loading and scrolling large quantities of data while presenting a 60fps experience to the user. 60fps means you have just 16.7ms per frame to do everything, which is a hugely tall order on a CPU and memory constrained mobile device.

The way to achieve this is to treat the page as an app rather than a traditional web page. This means we need to be a lot more proactive in managing how and when things are rendered – something that traditionally has been in the domain of the browser’s own rendering and layout engines. Thankfully, the framework will do all of this for you.

As an example, Jacky loaded up Gmail’s web app and showed what happens when you scroll a long way down your inbox. The more you scroll, the more divs are added to the document (one new div per message). Each div contains a bunch of child elements too, so we’re adding maybe a dozen or so nodes to our DOM tree per message.

The problem with this is that as the DOM tree gets larger and larger, everything slows down. You could see the inspector showing slower and slower layout recalculations, making the app sluggish.

The solution is to recycle DOM nodes once they’re no longer visible. In this way, a list that seems to have infinite content could contain only say 10 elements – just enough to fill the screen. Once you scroll down the list, DOM nodes that scrolled off the top are detached, updated with new data and placed at the bottom of the list. Simple. Ingenious. Beautiful.
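
A minimal sketch of the recycling idea in plain JavaScript – the list element and record shape are invented, and Sencha’s real implementation is far more sophisticated:

var ROWS = 10; // just enough row elements to fill the screen
var rows = [];
var list = document.getElementById('list'); // hypothetical scrolling container

for (var i = 0; i < ROWS; i++) {
    var row = document.createElement('div');
    row.className = 'row';
    list.appendChild(row);
    rows.push(row);
}

// as the top row scrolls out of view, detach it, fill it with the next
// record's data and reattach it at the bottom of the list
function recycle(nextRecord) {
    var row = rows.shift();
    row.textContent = nextRecord.text;
    list.appendChild(row); // appendChild moves the existing node to the end
    rows.push(row);
}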

Prioritization

There’s usually a lot more going on in an app than just animating a scrolling view though. There’s data to load via AJAX, images to load, compositing, processing, and whatever else your app needs to do. And then there are touch events, which need to feel perfectly responsive at all times, even while all of this is going on.

To make this sane and manageable, we have a new class called AnimationQueue. All of the jobs I just mentioned above – handling touch events, animation, network requests and so on – are dispatched through the AnimationQueue with a given priority. Touch event handling has the top priority, followed by animation, followed by everything else.

AnimationQueue does as much as it can in that 16.7ms window, then breaks execution again to allow the browser to reflow/repaint/whatever else it needs to do. What this means is that while scrolling down a large list, it’s likely that our CPU/GPU is being taxed so much that we don’t have any time to load images or other low priority jobs.

This is a Good Thing, because if we’re scrolling through a large list there’s a good chance we are going to skip right over those images anyway. In the end they’re loaded as soon as the AnimationQueue has some spare time, which is normally when your scrolling of the list has slowed down or stopped.
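
The real AnimationQueue ships with the framework, but the scheduling idea can be illustrated with a toy sketch like this (the priorities, frame budget and job shapes here are all invented):

var jobs = [];

function schedule(priority, fn) {
    // lower number = higher priority: 0 touch handling, 1 animation, 2 everything else
    jobs.push({ priority: priority, fn: fn });
    jobs.sort(function (a, b) { return a.priority - b.priority; });
}

function drain() {
    var start = Date.now();
    // do as much as possible without blowing the ~16.7ms frame budget
    while (jobs.length && (Date.now() - start) < 12) {
        jobs.shift().fn();
    }
    requestAnimationFrame(drain); // hand control back to the browser, try again next frame
}

requestAnimationFrame(drain);

// low-priority work like image loading only runs when there is spare time
schedule(2, function () { /* load an offscreen image */ });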

Sandboxing

The final, and most complex technique Jacky discussed was Sandboxing. The larger your application gets, the larger the DOM tree. Even if you are using best practices, there’s an expense to simply having so many components on the same page. The bottleneck here is in the browser itself – looks like we need another hack.

To get around this, we can dynamically create iframes that contain parts of our DOM tree. This way our main page DOM tree can remain small but we can still have a huge application. This not only speeds up browser repaint and reflow, it also improves compositing performance, DOM querying and more.
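
A bare-bones illustration of the iframe trick – this is not the Ext.Sandbox API, just the underlying browser technique:

// give a heavy subtree its own document so the main page's DOM tree stays small
var frame = document.createElement('iframe');
frame.style.border = '0';
document.body.appendChild(frame);

var doc = frame.contentDocument;
doc.open();
doc.write('<!DOCTYPE html><html><body></body></html>');
doc.close();

var heavyView = doc.createElement('div');
heavyView.textContent = 'This subtree lives in its own document';
doc.body.appendChild(heavyView);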

This all happens under the covers and Jacky is aiming to include Ext.Sandbox in Sencha Touch 2.3 so that all apps can take advantage of this huge improvement. He cautioned (rightly) that it’ll only make 2.3 if it’s up to his high standards though, so watch this space.

Sencha Con 2013 Day 1

Sencha Con 2013 kicked off today, with some stunning improvements demoed across the product set. I’m attending as an audience member for the first time so thought I’d share how things look from the cheap seats.

Keynote

The keynote was very well put together, with none of the AV issues that plagued us last year (maybe they seemed worse from behind the curtain!). It started off with a welcome from Paul Kopacki, followed by some insights into the current status of developers in the world of business (apparently we’re kingmakers – who knew!). One of Blackberry’s evangelists came up and made a pretty good pitch for giving them a second look (the free hardware probably helped a little…)

The meat, though, was in the second half of the presentation. We were treated to a succession of great new features across Ext JS, Sencha Touch and Sencha Architect, which I’ll go into in a little more detail below.

But it was Abe Elias and Jacky Nguyen who stole the show in the end. Unleashing a visionary new product, Sencha Space, they demonstrated a brand new way to enable businesses to elegantly solve the problem of BYOD (Bring Your Own Device).

Nobody wants to be given a mobile phone by their IT department when they’ve got a brand new iPhone in their pocket. But those IT guys have good reason for doing this – consumer browsers are currently inherently insecure. Sencha Space solves this problem by providing a single app that employees can install, log in to and gain access to all of the apps needed to be productive in the company.

I could write a lot more about it but the 2 minute video below can surely do a better job:

Ext JS upgrades

The keynote lasted most of the morning, but in the afternoon Don Griffin came back on stage to tell us more about what’s coming soon in Ext JS. Don heads up Ext JS these days, and is one of the most intelligent and experienced people I’ve had the joy of working with. I’m pretty sure he gained the largest amount of spontaneous applause of the day during the Ext JS talk, which is no surprise given the awesome stuff he showed us.

I forget which order things were revealed in, but these things stood out for me:

  • Touch Support – while this may seem anathema to the thinking behind Ext JS, it’s an undeniable fact that people try to use Ext JS applications on tablets. Whether they should or not is a different question, but in this next release it will be officially supported by the framework. Momentum scrolling, pinch to zoom and dragdrop resizing are all supported at your fingertips.
  • Grid Gadgets – quite likely the coolest new feature, Gadgets allow you to render any Component into each cell in a Grid, in an extremely CPU and memory efficient manner. Seeing a live grid updating with rich charts and other widgets at high frequency was a fantastic experience
  • Border Layout – allows your users to rearrange the border layouts used in your apps with drag and drop. Easy to switch between accordion layout, box layout or tabs
  • A shedload more. The enforced pub crawl has temporarily relieved me of a full memory. So impressed with everything that was demonstrated today.

Sencha Touch upgrades

Jacky came up and delivered a presentation on what’s coming up in Sencha Touch, using his idiosyncratic and inimitable style. Some of the things that stood out for me:

  • Touch gets a grid. It performs really well and looks great. Good for (sparing) use on tablet apps
  • XML configs. Not sure how I feel about this yet, but ST 2.3 will allow for views to be declared in XML, which is transformed into the normal JSON format under the covers. You end up writing fewer lines of code, but the overall file size probably doesn’t change too much. With a decent editor the syntax highlighting definitely makes the View code easier to read though
  • ViewModel. Just as we have Ext.data.Model for encapsulating data models, we now have ViewModel for encapsulating a view model, which includes things like state. Leads to a much improved API for updating Views in response to other changes
  • Theming. 2 additional themes were added, and the others have all been refactored to make theming even easier

Again there’s a lot more here and I couldn’t possibly do it all justice in a blog post. It’s genuinely thrilling to see these young frameworks mature into stellar products that are being used by literally millions of developers. Very exciting.

Architect upgrades

Architect has come a really long way since its inception a couple of years ago. The new features introduced today looked like some of the largest steps forward the product has ever taken. I’m finally getting close to actually thinking about using it in real life (I’m a glutton for editing code in Sublime Text). Some standout features:

  • New template apps to get you up and running with a new app in seconds
  • Integration with Appurify, which allows you to test your Architect apps on real devices hosted by their service
  • Allows you to install third party extensions into Architect, and have them seamlessly integrated into your project

Day 1 Summary

Although I worked with these people for years, somehow I’m still surprised when I see every single developer giving world class presentations. I don’t know how I was able to leave Sencha a year ago, but every time I interact with Abe, Don, Jacky, Tommy, Jamie, Rob, Nige, and all of the other rockstars at that place I’m reminded what a great and unique time that was. Really looking forward to what tomorrow brings!

Autotesting JavaScript with Jasmine and Guard

One of the things I really loved about Rails in the early days was that it introduced me to the concept of autotest – a script that would watch your file system for changes and then automatically execute your unit tests as soon as you change any file.

Because the unit test suite typically executes quickly, you’d tend to have your test results back within a second or two of hitting save, allowing you to remain in the editor the entire time and only break out the browser for deeper debugging – usually the command line output and OS notifications (growl at the time) would be enough to set you straight.

This was a fantastic way to work, and I wanted to get there again with JavaScript. It turns out to be pretty easy. Because I’ve used a lot of Ruby I’m most comfortable using its ecosystem to achieve this, and as it happens a great way to do it already exists.

Enter Guard

Guard is a simple ruby gem that scans your file system for changes and runs the code of your choice whenever a file you care about is saved. It has a great ecosystem around it which makes automating filesystem-based triggers both simple and powerful. Let’s start by making sure we have all the gems we need:


gem install jasmine jasmine-headless-webkit guard-jasmine-headless-webkit guard \
 guard-livereload terminal-notifier-guard --no-rdoc --no-ri

This just installs a few gems that we’re going to use for our tests. First we grab the excellent Jasmine JavaScript BDD test framework via its gem – you can use the framework of your choice, but I find Jasmine both pleasant to deal with and it generally Just Works. Next we’re going to add the ‘jasmine-headless-webkit’ gem and its guard twin, which use phantomjs to run your tests on the command line, without needing a browser window.

Next up we grab guard-livereload, which enables Guard to act as a livereload server, automatically running your full suite in the browser each time you save a file. This might sound redundant – our tests are already going to be executed in the headless webkit environment, so why bother running them in the browser too? Well, the browser Jasmine runner tends to give a lot more information when something goes wrong – stack traces and most importantly a live debugger.

Finally we add the terminal-notifier-guard gem, which just allows guard to give us a notification each time the tests finish executing. Now we’ve got our dependencies in line it’s time to set up our environment. Thankfully both jasmine and guard provide simple scripts to get started:


jasmine init

guard init

And we’re ready to go! Let’s test out our setup by running `guard`:


guard

What you should see at this point is something like this:


Terminal output after starting guard

We see guard starting up, telling us it’s going to use TerminalNotifier to give us an OS notification every time the tests finish running, and that it’s going to use JasmineHeadlessWebkit to run the tests without a browser. You’ll see that 5 tests were run in about 5ms, and you should have seen an OS notification flash up telling you the same thing. This is great for working on a laptop where you don’t have the screen real estate to keep a terminal window visible at all times.

What about those 5 tests? They’re just examples that were generated by `jasmine init`. You can find them inside the spec/javascripts directory and by default there’s just 1 – PlayerSpec.js.

Now try editing that file and hitting save – nothing happens. The reason for this is that the Guardfile generated by `guard init` isn’t quite compatible out of the box with the Jasmine folder structure. Thankfully this is trivial to fix – we just need to edit the Guardfile.

If you open up the Guardfile in your editor you’ll see it has about 30 lines of configuration. A large amount of the file is comments and optional configs, which you can delete if you like. Guard is expecting your spec files to have the format ‘my_spec.js’ – note the ‘_spec’ at the end.

To get it working the easiest way is to edit the ‘spec_location’ variable (on line 7 – just remove the ‘_spec’), and do the same to the last line of the `guard ‘jasmine-headless-webkit’ do` block. You should end up with something like this:


spec_location = "spec/javascripts/%s"

guard 'jasmine-headless-webkit' do
watch(%r{^app/views/.*\.jst$})
watch(%r{^public/javascripts/(.*)\.js$}) { |m| newest_js_file(spec_location % m[1]) }
watch(%r{^app/assets/javascripts/(.*)\.(js|coffee)$}) { |m| newest_js_file(spec_location % m[1]) }
watch(%r{^spec/javascripts/(.*)\..*}) { |m| newest_js_file(spec_location % m[1]) }
end

Once you save your Guardfile there’s no need to restart guard – it’ll notice the change to the Guardfile and automatically restart itself. Now when you save PlayerSpec.js again you’ll see the terminal immediately run your tests and show you the notification that all is well (assuming your tests still pass!).

So what are those 4 lines inside the `guard 'jasmine-headless-webkit' do` block? As you’ve probably guessed they’re just the set of directories that guard should watch. Whenever any of the files matched by the patterns on those 4 lines change, guard will run its jasmine-headless-webkit command, which is what runs your tests. These are just the defaults, so if your JS files are not found inside those folders just update it to point to the right place.

Livereload

The final part of the stack that I use is livereload. Livereload consists of two things – a browser plugin (available for Chrome, Firefox and others) and a server, which we have actually already set up with Guard. First you’ll need to install the livereload browser plugin, which is extremely simple.

Because the livereload server is already running inside guard, all we need to do is give our browser a place to load the tests from. Unfortunately the only way I’ve found to do this is to open up a second terminal tab and in the same directory run:


rake jasmine

This sets up a lightweight web server that runs on http://localhost:8888. If you go to that page in your browser now you should see something like this:


livereload in the browser – the livereload plugin is immediately to the right of the address bar

Just hit the livereload button in your browser (once you’ve installed the plugin), edit your file again and you’ll see the browser automatically refreshes itself and runs your tests. This step is optional but I find it extremely useful to get a notification telling me my tests have started failing, then be able to immediately tab into the browser environment to get a full stack trace and debugging environment.

That just about wraps up getting autotest up and running. Next time you come back to your code just run `guard` and `rake jasmine` and you’ll get right back to your new autotesting setup. And if you have a way to have guard serve the browser without requiring the second tab window please share in the comments!

On Leaving Sencha

As some of you may know, I left Sencha last week to move to another startup just up the road in San Mateo. Leaving the company was a hugely difficult thing to do for lots of reasons, some obvious, some less so. I’d like to share a few thoughts on my time there and look forward a little to the future.

I first came across Sencha’s products when I saw an early preview of Ext JS 2 way back in 2007. I thought it was amazing stuff, and I started using it all over the place despite being a Ruby guy at the time. As time went by and I got deeper into the language and the framework, it became clear that JavaScript was the future, even though most people at the time still thought that was a little crazy.

I didn’t really intend to join the company. I was having fun writing components and exploring the framework from the outside already, but a chance meeting in San Francisco with the team changed all that. What I found was a small but immensely talented group of people who loved what they did – writing awesome frameworks all day. Underqualified though I felt, being invited into that group was an honor I couldn’t really refuse.

Early Days

When I started back in late 2009, Ext JS 3.1 was just being wrapped up for release so I leapt straight into creating 3.2. Having only ever consumed the framework before, making the leap to creating brand new components was quite a challenge. Thankfully Sencha can count many veterans in its ranks, and Jamie in particular demonstrated his saintly patience in bringing me up to speed.

Ext JS 3.2 saw the addition of animated DataView transitions, composite fields and a few Toolbar plugins. It also required some upgrades to Store, which was a horrifying enough experience that I’d spend a few weeks rewriting the entire data package for Sencha Touch and Ext JS 4. 3.2 also saw the first of my allegedly bombastic blog posts (I’m just enthusiastic…)!

All this time we were a very small group working out of a picturesque little office on University Avenue in Palo Alto. During that first year we grew to maybe 25 people and all fit happily into the one big open plan room, descending en masse upon one of the many restaurants along the strip or bringing food back to eat in the sunny courtyard outside the office.

The original Palo Alto office. My desk was at the lower left


I think of that time as the happiest part of my Sencha experience. Somehow I’d found myself in the heart of Silicon Valley surrounded by unbelievably talented people, creating groundbreaking products – some of which we were even allowed to give away for free! We worked like crazy, often well into the early hours of the morning, but it was a lot of fun and I think we created a lot we can be proud of in that time.

Creating Sencha Touch, Learning how to Conference

Not long after Ext JS 3.2 went final, and in parallel with Ext JS 3.3, we started creating Sencha Touch. The initial work was all from Tommy and Dave, before I got a chance to jump in and start writing the new data package. Over time most of the team got a chance to put their name on Touch as we raced to create the world’s first HTML5 mobile app framework. Creating a new product from scratch like that was an awesome experience, and the final product was pretty good (though nowhere near as good as we’d get it with 2.0).

SenchaCon 2010 was scheduled for mid-November and we’d decided we wanted to make a big splash by releasing Sencha Touch – for free. Naturally, this meant a lot of work in a very short period of time in the months running up to the conference. I have vivid memories of a particular evening (read: 3am) in the office just before an imminent release. That can be stressful enough at the best of times but this particular evening our fire alarm would not stop going off. I don’t know whether it was the people, the project or the pressure, but what should have been a dreadful night was a really fun experience. And I think it paid off – we shipped on time at the conference, but only just.

This would be a pattern we’d repeat more than once – working night and day to create both products and presentations that have an immovable deadline. Once more it amazes me how talented my friends at Sencha really are: how many developers do you know who can write great code and deliver world-class conference presentations? That all came from a lot of hard work but it’s one more reason why it was so hard to leave that group of people behind.

This happened a lot in the line of duty


Later Days

Later on, our time was dominated first by Ext JS 4, then by Sencha Touch 2. I was able to make a couple of contributions to Ext JS 4 – chiefly the new data package plus an evolution of the MVC architecture that debuted in Sencha Touch 1. I probably spent as much time writing documentation as I did writing code though, which is a pattern I’d later repeat on Sencha Touch 2. For whatever reason there’s a misalignment in my brain that makes me pretty passionate about docs, so if you’re reading the guides and class docs from those projects and none of it makes sense, well, sorry! (but you should see how it was before…)

By this time we’d outgrown our little office in Palo Alto and moved to a much bigger space in Redwood City. With 5x the floor space at our disposal the company started growing like crazy, easily expanding by a factor of 10 during the time I was there. That transition was harder than I expected – at 10 people it was like a large family, at 100 it was definitely a Company. I think a lot of that is down to Sencha’s success, but it still caught me off guard having never been through that before.

I think the thing I’m proudest of during my time at Sencha was the release of Sencha Touch 2. This was the first release where we got (almost) everything right – the quality was high, the performance was great, and we finally cracked MVC. We even launched with relatively good docs and examples from day one, though I’ve learned by now that you can never have enough of that stuff.

People/Future

As well as getting to work with so many talented people inside the company, I’ve also been lucky enough to meet a huge number of people from the Sencha community. If anything you guys seem even more passionate about our stuff than we are. Until SenchaCon I could honestly say I’d never been mobbed, but for those few days a year you make us all feel like rockstars. We may not say it at the time but I know everyone involved gets a huge high from those interactions, so thanks.

While I’m at a new company now I expect to stay active in the Sencha community – I’m far too attached to what we created together to leave that behind any time soon. I’ll stay active on the forums and maybe even blog once in a while – if you want to get in touch feel free to reach out here, on twitter or linkedin, or if you’re near Palo Alto maybe I’ll buy you a beer.

Sencha’s best days are ahead of it and they have a great team there to deliver on the mission. I remain a big fan of the company, its people, its products and especially its community and can’t wait to see what happens next.

Anatomy of a Sencha Touch 2 App

At its simplest, a Sencha Touch 2 application is just a small collection of text files – html, css and javascript. But applications often grow over time so to keep things organized and maintainable we have a set of simple conventions around how to structure and manage your application’s code.

A little while back we introduced a technology called Sencha Command. Command got a big overhaul for 2.0 and today it can generate all of the files your application needs for you. To get Sencha Command you’ll need to install the SDK Tools and then open up your terminal. To run the app generator you’ll need to make sure you’ve got a copy of the Sencha Touch 2 SDK, cd into it in your terminal and run the app generate command:

sencha generate app MyApp ../MyApp

This creates an application called MyApp with all of the files and folders you’ll need to get started generated for you. You end up with a folder structure that looks like this:

(Screenshot: the generated MyApp directory structure)

This looks like a fair number of files and folders because I’ve expanded the app folder in the image above but really there are only 4 files and 3 folders at the top level. Let’s look at the files first:

  • index.html: simplest HTML file ever, just includes the app JS and CSS, plus a loading spinner
  • app.js: this is the heart of your app, sets up app name, dependencies and a launch function (see the sketch after this list)
  • app.json: used by the microloader to cache your app files in localStorage so it boots up faster
  • packager.json: configuration file used to package your app for native app stores
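
For reference, a minimal app.js looks roughly like this – a sketch only, as the generated file contains a few more comments and configs:

Ext.application({
    name: 'MyApp',

    views: ['Main'],

    launch: function() {
        // create and show the app's first view
        Ext.Viewport.add(Ext.create('MyApp.view.Main'));
    }
});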

To begin with you’ll only really need to edit app.js – the others come in useful later on. Now let’s take a look at the folders:

  • app: contains all of your application’s source files – models, views, controllers etc
  • resources: contains the images and CSS used by your app, including the source SASS files
  • sdk: contains the important parts of the Touch SDK, including Sencha Command

The app folder

You’ll spend 90%+ of your time inside the app folder, so let’s drill down and take a look at what’s inside that. We’ve got 5 subfolders, all of which are empty except one – the view folder. This just contains a template view file that renders a tab panel when you first boot the app up. Let’s look at each:

Easy stuff. There’s a bunch of documentation on what each of those things are at the Touch 2 docs site, plus of course the Getting Started video with awesome narration by some British guy.

The resources folder

Moving on, let’s take a look at the resources folder:

(Screenshot: the resources folder contents)

Five folders this time – in turn:

  • icons: the set of icons used when your app is added to the home screen. We create some nice default ones for you
  • loading: the loading/startup screen images to use when your app’s on a home screen or natively packaged
  • images: this is where you should put any app images that are not icons or loading images
  • sass: the source SASS files for your app. This is the place to alter the theming for your app, remove any CSS you’re not using and add your own styles
  • css: the compiled SASS files – these are the CSS files your app will use in production and are automatically minified for you

There are quite a few icon and loading images needed to cover all of the different sizes and resolutions of the devices that Sencha Touch 2 supports. We’ve included all of the different formats with the conventional file names as a guide – you can just replace the contents of resources/icons and resources/loading with your own images.

The sdk folder

Finally there’s the SDK directory, which contains the SDK’s source code and all of the dependencies used by Sencha Command. This includes Node.js, Phantom JS and others so it can start to add up. Of course, none of this goes into your production builds, which we keep as tiny and fast-loading as possible, but if you’re not going to use the SDK Tools (bad move, but your call!) you can remove the sdk/command directory to keep things leaner.

By vendoring all third-party dependencies like Node.js into your application directory we can be confident that there are no system-specific dependencies required, so you can zip up your app, send it to a friend and so long as she has the SDK Tools installed, everything should just work.

Hopefully that lays out the large-scale structure of what goes where and why – feel free to ask questions!
