Everything tagged examples (15 posts)

Using ChatGPT to generate ChatGPT Assistants

OpenAI dropped a ton of cool stuff in their Dev Day presentations, including some updates to function calling. There are a few function-call-like things that currently exist within the OpenAI ecosystem, so let's take a moment to disambiguate:

  • Plugins: introduced in March 2023, allowed GPT to understand and call your HTTP APIs
  • Actions: an evolution of Plugins, makes it easier but still calls your HTTP APIs
  • Function Calling: ChatGPT understands your functions, tells you how to call them, but does not actually call them

It seems like Plugins are likely to be superseded by Actions, so we end up with two ways to have GPT call your functions: Actions for automatically calling HTTP APIs, and Function Calling for indirectly calling anything else. We could call the latter Guided Invocation - despite the name, it doesn't actually call the function; it just tells you how to.

That second category of calls is going to include anything that isn't an HTTP endpoint, so it gives you a lot of flexibility to call internal APIs that never learned how to speak HTTP. Think legacy systems, private APIs that you don't want to expose to the internet, and other places where this can act as a highly adaptable glue.

I've put all the source code for this article up at https://github.com/edspencer/gpt-functions-example, so check that out if you want to follow along. It should just be a matter of following the steps in the README, but YMMV. We are, of course, going to use a task management app as a playground.

Creating Function definitions

In order for OpenAI Assistants to be able to call your code, you need to provide them with signatures for all of your functions, in the format OpenAI expects, which looks like this:

{
  "type": "function",
  "function": {
    "name": "addTask",
    "description": "Adds a new task to the database.",
    "parameters": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "The name of the task."
        },
        "priority": {
          "type": "number",
          "description": "The priority of the task, lower numbers indicating higher priority."
        },
        "completed": {
          "type": "boolean",
          "description": "Whether the task is marked as completed."
        }
      },
      "required": ["name"]
    }
  }
}

That's pretty self-explanatory. It's also a pain in the ass to keep tweaking and updating as you evolve your app, so let's use the OpenAI Chat Completions API with the json_object setting enabled and see if we can have this done for us.

Our Internal API

Let's build a basic Task management app. We'll just use a super-naive implementation of Todos written in TypeScript. My little API.ts has functions like addTask, updateTask, removeTask, getTasks, etc. All the stuff you'd expect. Some of them take a bunch of different inputs.

Here's a snippet of our API.ts file. It's very basic but functional, using a sqlite database driven by Prisma:

interface TaskInput {
name: string;
priority?: number;
completed?: boolean;
deleted?: boolean;
}

/**
* Adds a new task to the database.
* @param taskInput - An object containing the details of the task to be added.
* @param taskInput.name - The name of the task.
* @param taskInput.priority - The priority of the task.
* @returns A Promise that resolves when the task has been added to the database.
*/
async function addTask(taskInput: TaskInput): Promise<Task | void> {
try {
const task = await prisma.task.create({
data: taskInput
})
console.log(`Task ${task.id} created with name ${task.name} and priority ${task.priority}.`)

return task;
} catch (e) {
console.error(e)
}
}

/**
* Updates a task in the database.
* @param id - The ID of the task to update.
* @param updates - An object containing the updates to apply to the task.
* @param updates.name - The updated name of the task.
* @param updates.priority - The updated priority of the task.
* @param updates.completed - The updated completed status of the task.
* @returns A Promise that resolves when the task has been updated in the database.
*/
async function updateTask(id: string, updates: Partial<TaskInput>): Promise<void> {
try {
const task = await prisma.task.update({
where: { id },
data: updates,
})
console.log(`Task ${task.id} updated with name ${task.name} and priority ${task.priority}.`)
} catch (e) {
console.error(e)
}
}

It goes on from there. You get the picture. No, it's not production-grade code - don't use this as a launchpad for your Todo list manager app. GitHub Copilot actually wrote most of that code (and most of the documentation) for me.

Side note on documentation: it took me more years than I care to admit to figure out that the primary consumer of source code is humans, not machines. The machine doesn't care about your language, formatting, awfulness of your algorithms, weird variable names, etc; algorithmic complexity aside it'll do exactly the same thing regardless of how you craft your code. Humans are a different matter though, and benefit enormously from a little context written in a human language.

Ironically, that same documentation that benefitted human code consumers all this time is now what enables these new machine consumers to grok and invoke your code, saving you the work of coming up with a translation layer to integrate with AI agents. So writing documentation really does help you after all. Also, write tests and eat your vegetables.

Generating the OpenAI translation layer

The code to translate our internal API into something OpenAI can use is fairly simple and reusable. All we do is read in a file as text, stuff the contents of that file into a GPT prompt, send that off to OpenAI, stream the results back to the terminal and save it to a file when done:

/**
* This file uses the OpenAI Chat Completions API to automatically generate OpenAI Function Call
* JSON objects for an arbitrary code file. It takes a source file, reads it and passes it into
* OpenAI with a simple prompt, then writes the output to another file. Extend as needed.
*/

import OpenAI from 'openai';
import fs from 'fs';
import path from 'path';

import { OptionValues, program } from 'commander';

//takes an input file, and generates a new tools.json file based on the input file
program.option('-s, --sourceFile <file>', 'The source file to use for the prompt', './API.ts');
program.option('-o, --outputFile <file>', 'The output file to write the tools.json to (defaults to your input + .tools.json)');

const openai = new OpenAI();

/**
* Takes an input file, and generates a new tools.json file based on the input file.
* @param sourceFile - The source file to use for the prompt.
* @param outputFile - The output file to write the tools.json to. Defaults to
* @returns Promise<void>
*/
async function build({ sourceFile, outputFile = `${sourceFile}.tools.json` }: OptionValues) {
console.log(`Reading ${sourceFile}...`);
const sourceFileText = fs.readFileSync(path.join(__dirname, sourceFile), 'utf-8');

const prompt = `
This is the implementation of my ${sourceFile} file:

${sourceFileText}

Please give me a JSON object that contains a single key called "tools", which is an array of the functions in this file.
This is an example of what I expect (one element of the array):

{
"type": "function",
"function": {
"name": "addTask",
"description": "Adds a new task to the database.",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "The name of the task."
},
"priority": {
"type": "number",
"description": "The priority of the task, with lower numbers indicating higher priority."
},
"completed": {
"type": "boolean",
"description": "Whether the task is marked as completed."
}
},
"required": ["name"]
}
}
},

`
//Call the OpenAI API to generate the function definition, and stream the results back
const stream = await openai.chat.completions.create({
model: 'gpt-4-1106-preview',
response_format: { type: 'json_object' },
messages: [{ role: 'user', content: prompt }],
stream: true,
});

//Keep the new tools.json in memory until we have it all
let newToolsJson = "";

for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content || ''
process.stdout.write(content);
newToolsJson += content;
}

console.log(`Updating ${outputFile}...`);

// Write the tools JSON to ../tools.json
fs.writeFileSync(path.join(__dirname, outputFile), newToolsJson);
}

build(program.parse(process.argv).opts());

I've made a simple little repo with this file, the API.ts file, and a little demo that shows it all integrated. Run it like this:

ts-node rebuildTools.ts -s API.ts

Which will give you some output like this, and then update your API.ts.tools.json file:

ts-node rebuildTools.ts -s API.ts
Reading API.ts...
{
"tools": [
{
"type": "function",
"function": {
"name": "addTask",
"description": "Adds a new task to the database.",
"parameters": {
"type": "object",
"properties": {
"name": {

..........truncated...
full output at https://github.com/edspencer/gpt-functions-example/blob/main/API.ts.tools.json
.............................

"returns": {
"type": "Promise<void>",
"description": "A Promise that resolves when all tasks have been deleted from the database."
}
}
}
]
}
Updating ./API.ts.tools.json...
Done

Creating an OpenAI Assistant and talking to it

We've had OpenAI generate our tools JSON file, so now let's see if an Assistant can actually use it, via a simple demo.ts script.
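In broad strokes, demo.ts does something like the sketch below. This is not the exact repo code - it assumes the openai Node SDK's beta Assistants API as it existed in late 2023, plus the getTasks function from API.ts; the instructions string and other details are illustrative:

// Rough sketch of the demo.ts flow - not the exact repo code
import OpenAI from 'openai';
import fs from 'fs';
import { getTasks } from './API';

const openai = new OpenAI();
const { tools } = JSON.parse(fs.readFileSync('./API.ts.tools.json', 'utf-8'));

async function runDemo(userMessage: string) {
  // 1. Create an Assistant that knows about our generated function definitions
  const assistant = await openai.beta.assistants.create({
    name: 'Task Planner',
    instructions: "Manage the user's task list by calling the provided functions.",
    model: 'gpt-4-1106-preview',
    tools,
  });

  // 2. Create a Thread and add the user's message, with any existing tasks as context
  const thread = await openai.beta.threads.create();
  const existingTasks = await getTasks();
  await openai.beta.threads.messages.create(thread.id, {
    role: 'user',
    content: `Existing tasks: ${JSON.stringify(existingTasks)}\n\n${userMessage}`,
  });

  // 3. Start a Run and poll until it finishes or asks us to call functions
  let run = await openai.beta.threads.runs.create(thread.id, { assistant_id: assistant.id });
  while (run.status === 'queued' || run.status === 'in_progress') {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    run = await openai.beta.threads.runs.retrieve(thread.id, run.id);
  }

  // 4. If it needs functions called, it hands back the calls it wants us to make
  if (run.status === 'requires_action') {
    const toolCalls = run.required_action?.submit_tool_outputs.tool_calls ?? [];
    console.log('Actions:', toolCalls);
    // dispatching these calls to API.ts is shown further down
  }
}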

The code is all up on GitHub, and I won't do a blow-by-blow here but let's have a look at the output when we run it:

ts-node ./demo.ts -m "I need to go buy bread from the store, then go to \
the gym. I also need to do my taxes, which is a P1."

And the output:

Creating assistant...
Created assistant asst_hkT3BFQsNf3HSmJpE8KytiX9 with name Task Planner.
Created thread thread_AigYi0oFrytu3aO5k0mRacIV
Retrieved 0 tasks from the database.
Created message
msg_uLpR3UpQB3pX62wVIA7TcqIl
Polling thread
Current status: queued
Trying again in 2 seconds...
Polling thread
Current status: in_progress
Trying again in 2 seconds...
Polling thread
Current status: in_progress
Trying again in 2 seconds...
Polling thread
Current status: requires_action
Actions:
[
{
id: 'call_8JX5ffKFpxIhYmJeZYYilpv3',
type: 'function',
function: {
name: 'addTask',
arguments: '{"name": "Buy bread from the store", "priority": 2}'
}
},
{
id: 'call_GC4axxSB6Oso0tiolDLr900X',
type: 'function',
function: {
name: 'addTask',
arguments: '{"name": "Go to the gym", "priority": 2}'
}
},
{
id: 'call_7c5mWt1I5Ff3h5Lvb0Hfw2L7',
type: 'function',
function: {
name: 'addTask',
arguments: '{"name": "Do taxes", "priority": 1}'
}
}
]
Adding task
Task cloyl2gxs0000c3a7hxe6hupc created with name Buy bread from the store and priority 2.
Adding task
Task cloyl2gxv0001c3a7zi4hqt8z created with name Go to the gym and priority 2.
Adding task
Task cloyl2gxx0002c3a7l0gv7f07 created with name Do taxes and priority 1.

You can see all of the steps in the console output: we create the Assistant and the Thread, check whether our sqlite database has any existing Tasks (if so, we send those along as context too), then pass all of that plus the user's message to OpenAI and get back its function invocations (three in this case). Finally, we iterate over them and call our internal addTask function for each one - a sketch of that dispatch step is below - and at the bottom of the output we see that our tasks were created successfully.
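To make that dispatch step concrete, it boils down to something like the following sketch. The handler map is illustrative - function names such as completeTask come from the output shown in this article, and the exact code in the repo may differ:

// Hypothetical dispatcher: map each tool call the Assistant requested onto
// the matching function in our internal API.ts
import * as API from './API';

type ToolCall = {
  id: string;
  function: { name: string; arguments: string };
};

async function executeToolCalls(toolCalls: ToolCall[]) {
  const handlers: Record<string, (args: any) => Promise<any>> = {
    addTask: (args) => API.addTask(args),
    updateTask: (args) => API.updateTask(args.id, args.updates),
    completeTask: (args) => API.completeTask(args.id),
  };

  return Promise.all(
    toolCalls.map(async (call) => {
      const handler = handlers[call.function.name];
      if (!handler) throw new Error(`Unknown function: ${call.function.name}`);

      // The arguments arrive as a JSON string, exactly as shown in the output above
      const args = JSON.parse(call.function.arguments);
      const output = await handler(args);

      return { tool_call_id: call.id, output: JSON.stringify(output ?? 'ok') };
    })
  );
}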

Let's go call it again, updating the tasks that we just made:

ts-node demo.ts -m "I finished the laundry, please mark it complete. Also the gym is a P1"

Output:

Creating assistant...
Created assistant asst_WbTXKoXWL1yTWs4zvcVkDIDT with name Task Planner.
Created thread thread_mLvr7acahXbnmoe217f0gMRF
Retrieved 3 tasks from the database.
Created message
msg_iYYkAeuxRPNmJZ5vAKwiI8S7
Polling thread
Current status: queued
Trying again in 2 seconds...
Polling thread
Current status: in_progress
Trying again in 2 seconds...
Polling thread
Current status: in_progress
Trying again in 2 seconds...
Polling thread
Current status: requires_action
Actions:
[
{
id: 'call_W4UKGadROhaJJFZym7vQocP7',
type: 'function',
function: {
name: 'completeTask',
arguments: '{"id": "cloyl2gxs0000c3a7hxe6hupc"}'
}
},
{
id: 'call_KzaYk1x4sIRFWeKlvgOk37qf',
type: 'function',
function: {
name: 'updateTask',
arguments: '{"id": "cloyl2gxv0001c3a7zi4hqt8z", "updates": {"priority": 1}}'
}
}
]
Completing task
Task cloyl2gxs0000c3a7hxe6hupc marked as completed.
Updating task
Task cloyl2gxv0001c3a7zi4hqt8z updated with name Go to the gym and priority 1.

That's kinda amazing. All that any of this really does is assemble blobs of text and send them to the OpenAI API, which figures it all out - even with the context of our existing data - and correctly drives both create and update APIs that exist only internally within your system, without exposing anything to the internet at large.

Here it correctly figured out the IDs of the Tasks to update (because I passed that data in with the prompt - it's tiny), which functions to call, and that they should be run in parallel. That means your user can speak or type as much as they like, making a lot of demands in a single submission, and the Assistant will batch it all up into a set of function calls that, from its perspective at least, it wants you to run in parallel.

After executing the functions you can send another request to tell the Assistant the outcome - this article is long enough already, but you can see how to close that loop in the OpenAI Function Calling docs, and there's a minimal sketch below.
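Assuming the executeToolCalls helper from the dispatch sketch above, closing that loop looks roughly like this (an illustration, not code from the repo):

// Report each local function's result back to the Run so the Assistant can
// continue and produce its final reply to the user
const toolOutputs = await executeToolCalls(toolCalls);

await openai.beta.threads.runs.submitToolOutputs(thread.id, run.id, {
  tool_outputs: toolOutputs,
});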

Closing Thoughts

This stuff is all very new, and there are some pros and cons here. While all looks rosy in the end, it did take a few iterations to get GPT to reliably and consistently output the JSON format expected in the translation stage - occasionally it would innovate and restructure things a little, which causes things to break. That's probably just something that time will take care of as this stuff gets polished up, both on OpenAI's end and on everyone else's, but it's something to be aware of.

This technology requires a considered approach to testing too: GPT is a big old black box floating off in the internet somewhere, it's semi-magical, and it doesn't always give the right answer. Bit rot seems a serious risk here - both due to the newness of the tech and the fact that most of us don't really understand it very well. It seems sensible to mock/stub out expected responses from OpenAI's APIs to do unit testing, but when it comes to integration testing, you probably need your tests to do something like what our demo.ts does, and then verify the database was updated correctly at the end.
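For example, an integration test might look something like this sketch, assuming a Jest-style runner, the Prisma client from the example repo, and the runDemo wrapper from the earlier sketch:

// Hypothetical integration test: run the real Assistant flow, then assert
// against the database rather than against the model's exact behavior
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

test('creates a task from a natural language request', async () => {
  await runDemo('I need to buy bread from the store');

  const tasks = await prisma.task.findMany();
  expect(tasks.some((task) => /bread/i.test(task.name))).toBe(true);
}, 60_000); // generous timeout: the run polls OpenAI for several seconds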

It can be the case that you make no changes to your code or environment but still get different outcomes, due to the non-determinism of GPT. Temperature control and fine-tuning can help mitigate this, but you're probably going to need to be less than 100% trustful that your Assistant is doing what you think it is.
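As a small illustration (not something the example repo does), you could pin down the temperature on the tools-generation call to make its output more repeatable between runs:

const stream = await openai.chat.completions.create({
  model: 'gpt-4-1106-preview',
  response_format: { type: 'json_object' },
  temperature: 0, // more deterministic output between runs
  messages: [{ role: 'user', content: prompt }],
  stream: true,
});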

Finally, there's obviously a huge security consideration here. Fundamentally, we're taking user input (text, speech, images, whatever) and calling code on our own systems as a result. This always involves peril, and one can imagine all kinds of SQL injection-style attacks against Agent systems that inadvertently run malicious actions the developer didn't intend. For example, my API.ts contains a deleteAllTasks function that does what you think it does. Because it's part of API.ts, the Assistant knows about it and could inadvertently call it, whether the user was trying to do that or not.

It would be extremely easy to mix up public and private code in this way and accidentally expose it to the Assistant, so in reality you probably want a sanity check to run each time the tools JSON is rebuilt, telling you what changed - something like the sketch below seems a good thing to have in your CI/CD.
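A minimal version of that check might look like the following - the file names and the failure policy here are assumptions, so adapt them to your own pipeline:

// Hypothetical CI sanity check: diff the function names exposed in the newly
// generated tools JSON against the previous version and fail if anything new
// (like deleteAllTasks) has crept in
import fs from 'fs';

const names = (file: string): Set<string> =>
  new Set(JSON.parse(fs.readFileSync(file, 'utf-8')).tools.map((t: any) => t.function.name));

const before = names('./API.ts.tools.json');     // the committed version
const after = names('./API.ts.tools.json.new');  // the freshly generated version

const added = [...after].filter((name) => !before.has(name));
if (added.length > 0) {
  console.error(`New functions exposed to the Assistant: ${added.join(', ')}`);
  process.exit(1); // force a human to approve the change
}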

Continue reading

Distributed Tracing with Node JS

The microservice architecture pattern solves many of the problems inherent with monolithic applications. But microservices also bring challenges of their own, one of which is figuring out what went wrong when something breaks. There are at least 3 related challenges here:

  • Log collection
  • Metric collection
  • Distributed tracing

Log and metric collection is fairly straightforward (we'll cover these in a separate post), but only gets you so far.

Let's say your 20 microservice application starts behaving badly - you start getting timeouts on a particular API and want to find out why. The first place you look may be your centralized metrics service. This will likely confirm to you that you have a problem, as hopefully you have one or more metrics that are now showing out-of-band numbers.

But what if the issue only affects part of your user population, or worse, a single (but important) customer? In these cases your metrics - assuming you have the right ones in the first place - probably won't tell you much.

In cases like these, where you have minimal or no guidance from your configured metrics, you start trying to figure out where the problem may be. You know your system architecture, and you're pretty sure you've narrowed the issue down to three or four of your services.

So what's next? Well, you've got your centrally aggregated service logs, right? So you open up three or four windows and try to find an example of a request that fails, and trace it through to the other 2-3 services in the mix. Of course, if your problem only manifests in production then you'll be sifting through a large number of logs.

How good are your logs anyway? You're in prod, so you've probably disabled debug logs, but even if you hadn't, logs usually only get you so far. After some digging, you might be able to narrow things down to a function or two, but you're likely not logging all the information you need to proceed from there. Time to start sifting through code...

But maybe there's a better way.

Enter Distributed Tracing

Distributed Tracing is a method of tracking a request as it traverses multiple services. Let's say you have a simple e-commerce app, which looks a little like this (simplified for clarity):

Now, your user has made an order and wants to track the order's status. In order for this to happen the user makes a request that hits your API Gateway, which needs to authenticate the request and then send it on to your Orders service. This fetches Order details, then consults your Shipping service to discover shipping status, which in turn calls an external API belonging to your shipping partner.

There are quite a few things that can go wrong here. Your Auth service could be down, your Orders service could be unable to reach its database, your Shipping service could be unable to access the external API, and so on. All you know, though, is that your customer is complaining that they can't access their Order details and they're getting aggravated.

We can solve this by tracing a request as it traverses your architecture, with each step surfacing details about what is going on and what (if anything) went wrong. We can then use the Jaeger UI to visualize the trace as it happened, allowing us to debug problems as well as identify bottlenecks.

An example distributed application

To demonstrate how this works I've created a distributed tracing example app on Github. The repo is pretty basic, containing a packages directory that contains 4 extremely simple apps: gateway, auth, orders and shipping, corresponding to 4 of the services in our service architecture diagram.

The easiest way to play with this yourself is to simply clone the repo and start the services using docker-compose:

git clone git@github.com:edspencer/tracing-example.git
cd tracing-example
docker-compose up

This will spin up 5 docker containers - one for each of our 4 services plus Jaeger. Now go to http://localhost:5000/orders/12345 and hit refresh a few times. I've set the services up to sometimes work and sometimes cause errors - there's a 20% chance that the auth app will return an error and a 30% chance that the simulated call to the external shipping service API will fail.

After refreshing http://localhost:5000/orders/12345 a few times, open up the Jaeger UI at http://localhost:16686/search and you'll see something like this:

http://localhost:5000/orders/12345 serves up the Gateway service, which is a pretty simple one-file express app that will call the Auth service on every request, then make calls to the Orders service. The Orders service in turn calls the Shipping service, which makes a simulated call to the external shipping API.

Clicking into one of the traces will show you something like this:

This view shows us that the request took 44ms to complete, and has a nice breakdown of where that time was spent. The services are color coded automatically so you can see at a glance how the 44ms was distributed across them. In this case we can see that there was an error in the shipping service. Clicking into the row with the error yields additional information useful for debugging:

The contents of this row are highly customizable. It's easy to tag the request with whatever information you like. So let's see how this works.

The Code

Let's look at the Gateway service. First we set up the Jaeger integration:

const express = require('express')
const superagent = require('superagent')
const opentracing = require('opentracing')
const {initTracer} = require('jaeger-client')

const port = process.env.PORT || 80
const authHost = process.env.AUTH_HOST || "auth"
const ordersHost = process.env.ORDERS_HOST || "orders"
const app = express()

//set up our tracer
const config = {
serviceName: 'gateway',
reporter: {
logSpans: true,
collectorEndpoint: 'http://jaeger:14268/api/traces',
},
sampler: {
type: 'const',
param: 1
}
};

const options = {
tags: {
'gateway.version': '1.0.0'
}
};

const tracer = initTracer(config, options);

The most interesting stuff here is where we declare our config. Here we're telling the Jaeger client tracer to post its traces to http://jaeger:14268/api/traces (this is set up in our docker-compose file), and to sample all requests - as specified in the sampler config. In production, you won't want to sample every request - one in a thousand is probably enough - so you can switch to type: 'probabilistic' and param: 0.001 to achieve this.
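For example, a production-leaning sampler config might look like this (the values are illustrative):

//report roughly one trace per thousand requests
const config = {
  serviceName: 'gateway',
  reporter: {
    collectorEndpoint: 'http://jaeger:14268/api/traces',
  },
  sampler: {
    type: 'probabilistic',
    param: 0.001
  }
};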

Now that we have our tracer, let's tell Express to instrument each request that it serves:

//create a root span for every request
app.use((req, res, next) => {
req.rootSpan = tracer.startSpan(req.originalUrl)
tracer.inject(req.rootSpan, "http_headers", req.headers)

res.on("finish", () => {
req.rootSpan.finish()
})

next()
})

Here we're setting up our outer span and giving it a title matching the request url. We encounter 3 of the 4 simple concepts we need to understand:

  • startSpan - creates a new "span" in our distributed trace; this corresponds to one of the rows we see in the Jaeger UI. This span is given a unique span ID and may have a parent span ID
  • inject - adds the span ID somewhere else - usually into HTTP headers for a downstream request - we'll see more of this in a moment
  • finishing the span - we hook into Express' "finish" event on the response to make sure we call .finish() on the span. This is what sends it to Jaeger.

Now let's see how we call the Auth service, passing along the span ID:

//use the auth service to see if the request is authenticated
const checkAuth = async (req, res, next) => {
const span = tracer.startSpan("check auth", {
childOf: tracer.extract(opentracing.FORMAT_HTTP_HEADERS, req.headers)
})

try {
const headers = {}
tracer.inject(span, "http_headers", headers)
const authRes = await superagent.get(`http://${authHost}/auth`).set(headers)

if (authRes && authRes.body.valid) {
span.setTag(opentracing.Tags.HTTP_STATUS_CODE, 200)
next()
} else {
span.setTag(opentracing.Tags.HTTP_STATUS_CODE, 401)
res.status(401).send("Unauthorized")
}
} catch(e) {
res.status(503).send("Auth Service gave an error")
}

span.finish()
}

There are 2 important things happening here:

  • We create a new span representing the "check auth" operation, and set it to be the childOf the parent span we created previously
  • When we send the superagent request to the Auth service, we inject the new child span into the HTTP request headers

We're also showing how to add tags to a span via setTag. In this case we're appending the HTTP status code that we return to the client.

Let's examine the final piece of the Gateway service - the actual proxying to the Orders service:

//proxy to the Orders service to return Order details
app.all('/orders/:orderId', checkAuth, async (req, res) => {
const span = tracer.startSpan("get order details", {
childOf: tracer.extract(opentracing.FORMAT_HTTP_HEADERS, req.headers)
})
try {
const headers = {}
tracer.inject(span, "http_headers", headers)
const order = await superagent.get(`http://${ordersHost}/order/${req.params.orderId}`).set(headers)
if (order && order.body) {
span.finish()
res.json(order.body)
} else {
span.setTag(opentracing.Tags.HTTP_STATUS_CODE, 500)
span.finish()
res.status(500).send("Could not fetch order")
}
} catch(e) {
res.status(503).send("Error contacting Orders service")
}
})

app.listen(port, () => console.log(`API Gateway app listening on port ${port}`))

This looks pretty similar to what we just did for the Auth service - we're creating a new span that represents the call to the Orders service, setting its parent to our outer span, and injecting it into the superagent call we make to Orders. Pretty simple stuff.

Finally, let's look at the other side of this - how to pick up the trace in another service - in this case the Auth service:

//simulate our auth service being flaky with a 20% chance of 500 internal server error
app.get('/auth', (req, res) => {
const parentSpan = tracer.extract(opentracing.FORMAT_HTTP_HEADERS, req.headers)
const span = tracer.startSpan("checking user", {
childOf: parentSpan, tags: {
[opentracing.Tags.COMPONENT]: "database"
}
})

if (Math.random() > 0.2) {
span.finish()
res.json({valid: true, userId: 123})
} else {
span.setTag(opentracing.Tags.ERROR, true)
span.finish()
res.status(500).send("Internal Auth Service error")
}
})

Here we see the 4th and final concept involved in distributed tracing:

  • extract - pulls the span context injected by the upstream service out of the incoming HTTP headers

This is how the trace is able to traverse our services - in service A we create a span and inject it into calls to service B. Service B picks it up and creates a new span with the extracted span as its parent. We can then pass this span ID on to service C.

Jaeger is even nice enough to automatically create a system architecture diagram for you:

Conclusion

Distributed tracing is immensely powerful when it comes to understanding why distributed systems behave the way they do. There is a lot more to distributed tracing than we covered above, but at its core it really comes down to those 4 key concepts: starting spans, finishing them, injecting them into downstream requests and extracting them from the upstream.

One nice attribute of open tracing standards is that they work across technologies. In this example we saw how to hook up 4 Node JS microservices with it, but there's nothing special about Node JS here - this stuff is well supported in other languages like Go and can be added pretty much anywhere - it's just basic UDP and (usually) HTTP.

For further reading I recommend you check out the Jaeger intro docs, as well as the architecture. The Node JS Jaeger client repo is a good place to poke around, and has links to more resources. Actual example code for Node JS was a little hard to come by, which is why I wrote this post. I hope it helps you in your microservice applications.

Continue reading

A New Stack for 2016: Getting Started with React, ES6 and Webpack

A lot has changed in the last few years when it comes to implementing applications using JavaScript. Node JS has revolutionized how many of us create backend apps, React has become a widely-used standard for creating the frontend, and ES6 has come along and completely transformed JavaScript itself, largely for the better.

All of this brings new capabilities and opportunities, but also new challenges when it comes to figuring out what's worth paying attention to, and how to learn it. Today we'll look at how to set up my personal take on a sensible stack in this new world, starting from scratch and building it up as we go. We'll focus on getting to the point where everything is set up and ready for you to create the app.

The stack we'll be setting up today is as follows:

  • React - to power the frontend
  • Babel - allows us to use ES6 syntax in our app
  • Webpack - builds our application files and dependencies into a single build

Although we won't be setting up a Node JS server in this article, we'll use npm to put everything else in place, so adding a Node JS server using Express or any other backend framework is trivial. We're also going to omit setting up a testing infrastructure in this post - this will be the subject of the next article.

If you want to get straight in without reading all the verbiage, you can clone this github repo that contains all of the files we're about to create.

Let's go

The only prerequisite here is that your system has Node JS already installed. If that isn't the case, go install it now from http://nodejs.org. Once you have Node, we'll start by creating a new directory for our project and setting up NPM:

mkdir myproject
npm init

The npm init command takes you through a short series of prompts asking for information about your new project - author name, description, etc. Most of this doesn't really matter at this stage - you can easily change it later. Once that's done you'll find a new file called package.json in your project directory.

Before we take a look at this file, we already know that we need to bring in some dependencies, so we'll do that now with the following terminal commands:

npm install react --save
npm install react-dom --save
npm install webpack --save-dev

Note that for the react dependency we use --save, whereas for webpack we use --save-dev. This indicates that react is required when running our app in production, whereas webpack is only needed while developing (as once webpack has created your production build, its role is finished). Opening our package.json file now yields this:

{
"name": "myproject",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {
"react": "^0.14.7",
"react-dom": "^0.14.7"
},
"devDependencies": {
"webpack": "^1.12.14"
}
}

This is pretty straightforward. Note the separate dependencies and devDependencies objects in line with our --save vs --save-dev above. Depending on when you created your app the version numbers for the dependencies will be different, but the overall shape should be the same.

We're not done installing npm packages yet, but before we get started with React and ES6 we're going to get set up with Webpack.

Setting up Webpack

We'll be using Webpack to turn our many application files into a single file that can be loaded into the browser. As it stands, though, we don't have any application files at all. So let's start by creating those:

mkdir src
touch src/index.js
touch src/App.js

Now we have a src directory with two empty files. Into App.js, we'll place the following trivial component rendering code:

var App = function() {
return "<h1>Woop</h1>";
};

module.exports = App;

All we're doing here is returning an HTML string when you call the App function. Once we bring React into the picture we'll change the approach a little, but this is good enough for now. Into our src/index.js, we'll use:

var app = require('./App');
document.write(app());

So we're simply importing our App, running it and then writing the resulting HTML string into the DOM. Webpack will be responsible for figuring out how to combine index.js and App.js and building them into a single file. In order to use Webpack, we'll create a new file called webpack.config.js (in the root directory of our project) with the following contents:

var path = require('path');
var webpack = require('webpack');

module.exports = {
output: {
filename: 'bundle.js'
},
entry: [
'./src/index.js'
]
};

This really couldn't be much simpler - it's just saying take the entry point (our src/index.js file) as input, and save the output into a file called bundle.js. Webpack takes those entry file inputs, figures out all of the require('...') statements and fetches all of the dependencies as required, outputting our bundle.js file.

To run Webpack, we simply use the webpack command in our terminal, which will do something like this:

As we can see, we now have a 1.75kb file called bundle.js that we can serve up in our project. That's a little heavier than our index.js and App.js files combined, because there is a little Webpack plumbing that gets included into the file too.

Now finally we'll create a very simple index.html file that loads our bundle.js and renders our app:

<html>
<head>
<meta charset="utf-8">
</head>
<body>
<div id="main"></div>
<script type="text/javascript" src="bundle.js" charset="utf-8"></script>
</body>
</html>

Can't get much simpler than that. We don't have a web server set up yet, but we don't actually need one. As we have no backend we can just load the index.html file directly into the browser, either by dragging it in from your OS's file explorer program, or entering the address manually. For me, I can enter file:///Users/ed/Code/myproject/index.html into my browser's address bar, and be greeted with the following:

Great! That's our component being rendered and output into the DOM as desired. Now we're ready to move onto using React and ES6.

React and ES6

React can be used either with or without ES6. Because this is the future, we desire to use the capabilities of ES6, but we can't do that directly because most browsers currently don't support it. This is where babel comes in.

Babel (which you'll often hear pronounced "babble" instead of the traditional "baybel") is a transpiler, which takes one version of the JavaScript language and translates it into another. In our case, it will be translating the ES6 version of JavaScript into an earlier version that is guaranteed to run in browsers. We'll start by adding a few new npm package dependencies:

npm install babel-core --save-dev
npm install babel-loader --save-dev
npm install babel-preset-es2015 --save-dev
npm install babel-preset-react --save-dev
npm install babel-plugin-transform-runtime --save-dev

npm install babel-polyfill --save
npm install babel-runtime --save

This is quite a substantial number of new dependencies. Because babel can convert between many different flavors of JS, once we've specified the babel-core and babel-loader packages, we also need to specify babel-preset-es2015 to enable ES6 support, and babel-preset-react to enable React's JSX syntax. We also bring in a polyfill that makes available new APIs like Object.assign that babel would not usually bring to the browser as it requires some manipulation of the browser APIs, which is something one has to opt in to.

Once we have these all installed, however, we're ready to go. The first thing we'll need to do is update our webpack.config.js file to enable babel support:

var path = require('path');
var webpack = require('webpack');

module.exports = {
module: {
loaders: [
{
loader: "babel-loader",
// Skip any files outside of your project's `src` directory
include: [
path.resolve(__dirname, "src"),
],
// Only run `.js` and `.jsx` files through Babel
test: /\.jsx?$/,
// Options to configure babel with
query: {
plugins: ['transform-runtime'],
presets: ['es2015', 'react'],
}
}
]
},
output: {
filename: 'bundle.js'
},
entry: [
'./src/index.js'
]
};

Hopefully the above is clear enough - it's the same as last time, with the exception of the new module object, which contains a loader configuration that we've configured to convert any file that ends in .js or .jsx in our src directory into browser-executable JavaScript.

Next we'll update our App.js to look like this:

import React, {Component} from 'react';

class App extends Component {
render() {
return (<h1>This is React!</h1>);
}
}
export default App;

Cool - new syntax! We've switched from require('') to import, though this does essentially the same thing. We've also switched from module.exports = to export default, which is again doing the same thing (though we can export multiple things this way).

We're also using the ES6 class syntax, in this case creating a class called App that extends React's Component class. It only implements a single method - render - which returns a very similar HTML string to our earlier component, but this time using inline JSX syntax instead of just returning a string.

Now all that remains is to update our index.js file to use the new Component:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById("main"));

Again we're using the import syntax to our advantage here, and this time we're using ReactDOM.render instead of document.write to place the rendered HTML into the DOM. Once we run the webpack command again and refresh our browser window, we'll see a screen like this:

Next Steps

We'll round out by doing a few small things to improve our workflow. First off, it's annoying to have to switch back to the terminal to run webpack every time we change any code, so let's update our webpack.config.js with a few new options:

module.exports = {
//these remain unchanged
module: {...},
output: {...},
entry: [...],

//these are new
watch: true,
colors: true,
progress: true
};

Now we just run webpack once and it'll stay running, rebuilding whenever we save changes to our source files. This is generally much faster - on my 2 year old MacBook Air it takes about 5 seconds to run webpack a single time, but when using watch mode each successive build is on the order of 100ms. Usually this means that I can save my change in my text editor, and by the time I've switched to the browser the new bundle.js has already been created so I can immediately refresh to see the results of my changes.

The last thing we'll do is add a second React component to be consumed by the first. This one we'll call src/Paragraph.js, and it contains the following:

import React, {Component} from 'react';

export default class Paragraph extends Component {
render() {
return (<p>{this.props.text}</p>);
}
}

This is almost identical to our App, with a couple of small tweaks. First, notice that we've moved the export default inline with the class declaration to save on space, and then secondly this time we're using {this.props} to access a configured property of the Paragraph component. Now, to use the new component we'll update App.js to look like the following:

import React, {Component} from 'react';
import Paragraph from './Paragraph';

export default class App extends Component {
render() {
return (
<div className="my-app">
<h1>This is React!!!</h1>
<Paragraph text="First Paragraph" />
<Paragraph text="Second Paragraph" />
</div>
);
}
}

Again a few small changes here. First, note that we're now importing the Paragraph component and then using it twice in our render() function - each time with a different text property, which is what is read by {this.props.text} in the Paragraph component itself. Finally, React requires that we return a single root element for each rendered Component, so we wrap our <h1> and <Paragraph> tags into an enclosing <div>.

By the time you hit save on those changes, webpack should already have built a new bundle.js for you, so head back to your browser, hit refresh and you'll see this:

That's about as far as we'll take things today. The purpose of this article was to get you to a point where you can start building a React application, instead of figuring out how to set up all the prerequisite plumbing; hopefully it's clear enough how to continue from here.

You can find a starter repository containing all of the above over on GitHub. Feel free to clone it as the starting point for your own project, or just look through it to see how things fit together.

In the next article, we'll look at how to add some unit testing to our project so that we can make sure our Components are behaving as they should. Until then, happy Reacting!

Continue reading

Jasmine and Jenkins Continuous Integration

I use Jasmine as my JavaScript unit/behavior testing framework of choice because it's elegant and has a good community ecosystem around it. I recently wrote up how to get Jasmine-based autotesting set up with Guard, which is great for development time testing, but what about continuous integration?

Well, it turns out that it's pretty difficult to get Jasmine integrated with Jenkins. This is not because of an inherent problem with either of those two, it's just that no-one got around to writing an open source integration layer until now.

The main problem is that Jasmine tests usually expect to run in a browser, but Jenkins needs results to be exposed in .xml files. Clearly we need some bridge here to take the headless browser output and dump it into correctly formatted .xml files. Specifically, these xml files need to follow the JUnit XML file format for Jenkins to be able to process them. Enter guard-jasmine.

guard-jasmine

In my previous article on getting Jasmine and Guard set up, I was using the jasmine-headless-webkit and guard-jasmine-headless-webkit gems to provide the glue. Since then I've replaced those 2 gems with a single gem - guard-jasmine, written by Michael Kessler, the Guard master himself. This simplifies our dependencies a little, but doesn't buy us the .xml file functionality we need.

For that, I had to hack on the gem itself (which involved writing coffeescript for the first time, which was not a horrible experience). The guard-jasmine gem now exposes 3 additional configurations:

  • junit - set to true to save output to xml files (false by default)
  • junit_consolidate - rolls nested describes up into their parent describe blocks (true by default)
  • junit_save_path - optional path to save the xml files to

The JUnit Xml reporter itself borrows heavily from larrymyers' excellent jasmine-reporters project. Aside from a few changes to integrate it into guard-jasmine it's the same code, so all credit goes to Larry and Michael.

Sample usage:

In your Guardfile:

guard :jasmine, :junit => true, :junit_save_path => 'reports' do
watch(%r{^spec/javascripts/.+$}) { 'spec/javascripts' }
watch(%r{^spec/javascripts/fixtures/.+$}) { 'spec/javascripts' }
watch(%r{^app/assets/javascripts/(.+?)\.(js\.coffee|js|coffee)(?:\.\w+)*$}) { 'spec/javascripts' }
end

This will just run the full set of Jasmine tests inside your spec/javascripts directory whenever any test, source file or asset like CSS files change. This is generally the configuration I use because the tests execute so fast I can afford to have them all run every time.

In the example above we set the :junit_save_path to 'reports', which means it will save all of the .xml files into the reports directory. It is going to output 1 .xml file for each Jasmine spec file that is run. In each case the name of the .xml file created is based on the name of the top-level describe block in your spec file.
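Each of those files contains results in the JUnit XML format mentioned earlier, which is shaped roughly like this (a hand-written illustration, not actual guard-jasmine output):

<testsuite name="TaskList" tests="2" failures="1" time="0.042">
  <testcase classname="TaskList" name="adds a task" time="0.021"/>
  <testcase classname="TaskList" name="completes a task" time="0.021">
    <failure type="expect">Expected true to be false.</failure>
  </testcase>
</testsuite>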

To test that everything's working, just run bundle exec guard as you normally would, and check to see that your reports folder now contains a bunch of .xml files. If it does, everything went well.

Jenkins Settings

Once we've got the .xml files outputting correctly, we just need to tell Jenkins where to look. In your Jenkins project configuration screen, add a "Publish JUnit test result report" post-build action and enter 'reports/*.xml' in the Test report XMLs field.

If you've already got Jenkins running your test script then you're all done. Next time a build is triggered the script should run the tests and export the .xml files. If you don't already have Jenkins set up to run your tests, but you did already set up Guard as per my previous article, you can actually use the same command to run the tests on Jenkins.

After a little experimentation, you'll probably end up with a build command something like this:

bash -c ' bundle install --quiet \
&& bundle exec guard '

If you're using rvm and need to guarantee a particular version you may need to prepend an rvm install command before bundle install is called. This should just run guard, which will dump the files out as expected for Jenkins to pick up.

To clean up, we'll just add a second post-build action, this time choosing the "Execute a set of scripts" option and entering the following:

kill -9 `cat guard.pid`

This just kills the Guard process, which ordinarily stays running to power your autotest capabilities. Once you run a new build you should see a chart automatically appear on your Jenkins project page telling you full details of how many tests failed over time and in the current build.

Getting it

Update: The Pull Request is now merged into the main guard-jasmine repo so you can just use gem 'guard-jasmine' in your Gemfile

This is hot off the presses but I wanted to write it up while it's still fresh in my mind. At the time of writing the pull request is still outstanding on the guard-jasmine repository, so to use the new options you'll need to temporarily use my guard-jasmine fork. In your Gemfile:
gem 'guard-jasmine'
Once the PR is merged and a new version issued you should switch back to the official release channel. It's working well for me, but it's fresh code so it may contain bugs - YMMV. Hopefully this helps save some folks a little pain!
Continue reading

Autotesting JavaScript with Jasmine and Guard

One of the things I really loved about Rails in the early days was that it introduced me to the concept of autotest - a script that would watch your file system for changes and then automatically execute your unit tests as soon as you change any file.

Because the unit test suite typically executes quickly, you'd tend to have your test results back within a second or two of hitting save, allowing you to remain in the editor the entire time and only break out the browser for deeper debugging - usually the command line output and OS notifications (growl at the time) would be enough to set you straight.

This was a fantastic way to work, and I wanted to get there again with JavaScript. Turns out it's pretty easy to do. Because I've used a lot of Ruby I'm most comfortable using its ecosystem to achieve this, and as it happens there's already a great way to do it.

Enter Guard

Guard is a simple ruby gem that scans your file system for changes and runs the code of your choice whenever a file you care about is saved. It has a great ecosystem around it which makes automating filesystem-based triggers both simple and powerful. Let's start by making sure we have all the gems we need:

gem install jasmine jasmine-headless-webkit guard-jasmine-headless-webkit guard \
guard-livereload terminal-notifier-guard --no-rdoc --no-ri

This just installs a few gems that we're going to use for our tests. First we grab the excellent Jasmine JavaScript BDD test framework via its gem - you can use the framework of your choice, but I find Jasmine both pleasant to deal with and it generally Just Works. Next we're going to add the 'jasmine-headless-webkit' gem and its guard twin, which use phantomjs to run your tests on the command line, without needing a browser window.

Next up we grab guard-livereload, which enables Guard to act as a livereload server, automatically running your full suite in the browser each time you save a file. This might sound redundant - our tests are already going to be executed in the headless webkit environment, so why bother running them in the browser too? Well, the browser Jasmine runner tends to give a lot more information when something goes wrong - stack traces and most importantly a live debugger.

Finally we add the terminal-notifier-guard gem, which just allows guard to give us a notification each time the tests finish executing. Now that we've got our dependencies in line it's time to set up our environment. Thankfully both jasmine and guard provide simple scripts to get started:

jasmine init
guard init

And we're ready to go! Let's test out our setup by running guard:

guard

What you should see at this point is something like this:

We see guard starting up, telling us it's going to use TerminalNotifier to give us an OS notification every time the tests finish running, and that it's going to use JasmineHeadlessWebkit to run the tests without a browser. You'll see that 5 tests were run in about 5ms, and you should have seen an OS notification flash up telling you the same thing. This is great for working on a laptop where you don't have the screen real estate to keep a terminal window visible at all times.

What about those 5 tests? They're just examples that were generated by jasmine init. You can find them inside the spec/javascripts directory and by default there's just 1 - PlayerSpec.js.

Now try editing that file and hitting save - nothing happens. The reason for this is that the Guardfile generated by guard init isn't quite compatible out of the box with the Jasmine folder structure. Thankfully this is trivial to fix - we just need to edit the Guardfile.

If you open up the Guardfile in your editor you'll see it has about 30 lines of configuration. A large amount of the file is comments and optional configs, which you can delete if you like. Guard is expecting your spec files to have the format 'my_spec.js' - note the '_spec' at the end.

To get it working the easiest way is to edit the 'spec_location' variable (on line 7 - just remove the '_spec'), and do the same to the last line of the guard 'jasmine-headless-webkit' do block. You should end up with something like this:


spec_location = "spec/javascripts/%s"

guard 'jasmine-headless-webkit' do
watch(%r{^app/views/.*\.jst$})
watch(%r{^public/javascripts/(.*)\.js$}) { |m| newest_js_file(spec_location % m[1]) }
watch(%r{^app/assets/javascripts/(.*)\.(js|coffee)$}) { |m| newest_js_file(spec_location % m[1]) }
watch(%r{^spec/javascripts/(.*)\..*}) { |m| newest_js_file(spec_location % m[1]) }
end

Once you save your Guardfile, there's no need to restart guard - it'll notice the change to the Guardfile and automatically restart itself. Now when you save PlayerSpec.js again you'll see the terminal immediately run your tests and show you the notification that all is well (assuming your tests still pass!).

So what are those 4 lines inside the guard 'jasmine-headless-webkit' do block? As you've probably guessed they're just the set of directories that guard should watch. Whenever any of the files matched by the patterns on those 4 lines change, guard will run its jasmine-headless-webkit command, which is what runs your tests. These are just the defaults, so if your JS files are not found inside those folders just update the patterns to point to the right place.

Livereload

The final part of the stack that I use is livereload. Livereload consists of two things - a browser plugin (available for Chrome, Firefox and others), and a server, which we have actually already set up with Guard. First you'll need to install the livereload browser plugin, which is extremely simple.

Because the livereload server is already running inside guard, all we need to do is give our browser a place to load the tests from. Unfortunately the only way I've found to do this is to open up a second terminal tab and in the same directory run:

rake jasmine

This sets up a lightweight web server that runs on http://localhost:8888. If you go to that page in your browser now you should see something like this:

Just hit the livereload button in your browser (once you've installed the plugin), edit your file again and you'll see the browser automatically refreshes itself and runs your tests. This step is optional but I find it extremely useful to get a notification telling me my tests have started failing, then be able to immediately tab into the browser environment to get a full stack trace and debugging environment.

That just about wraps up getting autotest up and running. Next time you come back to your code just run guard and rake jasmine and you'll get right back to your new autotesting setup. And if you have a way to have guard serve the browser without requiring the second terminal tab, please share in the comments!

Continue reading

Building a data-driven image carousel with Sencha Touch 2

This evening I embarked on a little stellar voyage that I'd like to share with you all. Most people with great taste love astronomy and Sencha Touch 2, so why not combine them in a fun evening's web app building?

NASA has been running a small site called APOD (Astronomy Picture Of the Day) for a long time now, as you can probably tell by the awesome web design of that page. Despite its 1998-era styling, this site incorporates some pretty stunning images of the universe and is begging for a mobile app interpretation.

We're not going to go crazy, in fact this whole thing only took about an hour to create, but hopefully it's a useful look at how to put something like this together. In this case, we're just going to write a quick app that pulls down the last 20 pictures and shows them in a carousel with an optional title.

Here's what it looks like live. You'll need a webkit browser (Chrome or Safari) to see this, alternatively load up http://code.edspencer.net/apod on a phone or tablet device:

The full source code for the app is up on github, and we'll go through it bit by bit below.

The App

Our app consists of 5 files:

  • index.html, which includes our JavaScript files and a little CSS
  • app.js, which boots our application up
  • app/model/Picture.js, which represents a single APOD picture
  • app/view/Picture.js, which shows a picture on the page
  • app/store/Pictures.js, which fetches the pictures from the APOD RSS feed

The whole thing is up on github and you can see a live demo at http://code.edspencer.net/apod. To see what it's doing tap that link on your phone or tablet, and to really feel it add it to your homescreen to get rid of that browser chrome.

The Code

Most of the action happens in app.js, which for your enjoyment is more documentation than code. Here's the gist of it:

/*
* This app uses a Carousel and a JSON-P proxy so make sure they're loaded first
*/
Ext.require([
'Ext.carousel.Carousel',
'Ext.data.proxy.JsonP'
]);

/**
* Our app is pretty simple - it just grabs the latest images from NASA's Astronomy Picture Of the Day
* (http://apod.nasa.gov/apod/astropix.html) and displays them in a Carousel. This file drives most of
* the application, but there's also:
*
* * A Store - app/store/Pictures.js - that fetches the data from the APOD RSS feed
* * A Model - app/model/Picture.js - that represents a single image from the feed
* * A View - app/view/Picture.js - that displays each image
*
* Our application's launch function is called automatically when everything is loaded.
*/
Ext.application({
name: 'apod',

models: ['Picture'],
stores: ['Pictures'],
views: ['Picture'],

launch: function() {
var titleVisible = false,
info, carousel;

/**
* The main carousel that drives our app. We're just telling it to use the Pictures store and
* to update the info bar whenever a new image is swiped to
*/
carousel = Ext.create('Ext.Carousel', {
store: 'Pictures',
direction: 'horizontal',

listeners: {
activeitemchange: function(carousel, item) {
info.setHtml(item.getPicture().get('title'));
}
}
});

/**
* This is just a reusable Component that we pin to the top of the page. This is hidden by default
* and appears when the user taps on the screen. The activeitemchange listener above updates the
* content of this Component whenever a new image is swiped to
*/
info = Ext.create('Ext.Component', {
cls: 'apod-title',
top: 0,
left: 0,
right: 0
});

//add both of our views to the Viewport so they're rendered and visible
Ext.Viewport.add(carousel);
Ext.Viewport.add(info);

/**
* The Pictures store (see app/store/Pictures.js) is set to not load automatically, so we load it
* manually now. This loads data from the APOD RSS feed and calls our callback function once it's
* loaded.
*
* All we do here is iterate over all of the data, creating an apodimage Component for each item.
* Then we just add those items to the Carousel and set the first item active.
*/
Ext.getStore('Pictures').load(function(pictures) {
var items = [];

Ext.each(pictures, function(picture) {
if (!picture.get('image')) {
return;
}

items.push({
xtype: 'apodimage',
picture: picture
});
});

carousel.setItems(items);
carousel.setActiveItem(0);
});

/**
* The final thing is to add a tap listener that is called whenever the user taps on the screen.
* We do a quick check to make sure they're not tapping on the carousel indicators (tapping on
* those indicators moves you between items so we don't want to override that), then either hide
* or show the info Component.
*
* Note that to hide or show this Component we're adding or removing the apod-title-visible class.
* If you look at index.html you'll see the CSS rules style the info bar and also cause it to fade
* in and out when you tap.
*/
Ext.Viewport.element.on('tap', function(e) {
if (!e.getTarget('.x-carousel-indicator')) {
if (titleVisible) {
info.element.removeCls('apod-title-visible');
titleVisible = false;
} else {
info.element.addCls('apod-title-visible');
titleVisible = true;
}
}
});
}
});

This is pretty simple stuff and you can probably just follow the comments to see what's going on. Basically though the app.js is responsible for launching our application, creating the Carousel and info Components, and setting up a couple of convenient event listeners.

We also had a few other files:

Picture Model

Found in app/model/Picture.js, our model is mostly just a list of fields sent back in the RSS feed. There is one that's somewhat more complicated than the rest though - the 'image' field. Ideally, the RSS feed would have sent back the url of the image in a separate field and we could just pull it out like any other, but alas it is embedded inside the main content.

To get around this, we just specify a convert function that grabs the content field, finds the first image url inside of it and pulls it out. To make sure it looks good on any device we also pass it through Sencha IO src, which resizes the image to fit the screen size of whatever device we happen to be viewing it on:

/**
* Simple Model that represents an image from NASA's Astronomy Picture Of the Day. The only remarkable
* thing about this model is the 'image' field, which uses a regular expression to pull its value out
* of the main content of the RSS feed. Ideally the image url would have been presented in its own field
* in the RSS response, but as it wasn't we had to use this approach to parse it out
*/
Ext.define('apod.model.Picture', {
extend: 'Ext.data.Model',

config: {
fields: [
'id', 'title', 'link', 'author', 'content',
{
name: 'image',
type: 'string',
convert: function(value, record) {
var content = record.get('content'),
regex = /img src=\"([a-zA-Z0-9\_\.\/\:]*)\"/,
match = content.match(regex),
src = match[1];

if (src != "" && !src.match(/\.gif$/)) {
src = "http://src.sencha.io/screen.width/" + src;
}

return src;
}
}
]
}
});

Pictures Store

Our Store is even simpler than our Model. All it does is load the APOD RSS feed over JSON-P (via Google's RSS Feed API) and decode the data with a very simple JSON Reader. This automatically pulls down the images and runs them through our Model's convert function:

/**
* Grabs the APOD RSS feed from Google's Feed API, passes the data to our Model to decode
*/
Ext.define('apod.store.Pictures', {
extend: 'Ext.data.Store',

config: {
model: 'apod.model.Picture',

proxy: {
type: 'jsonp',
url: 'https://ajax.googleapis.com/ajax/services/feed/load?v=1.0&q=http://www.acme.com/jef/apod/rss.xml&num=20',

reader: {
type: 'json',
rootProperty: 'responseData.feed.entries'
}
}
}
});

Tying it all together

Our app.js loads our Model and Store, plus a really simple Picture view that is basically just an Ext.Img. All it does then is render the Carousel and Info Component to the screen and tie up a couple of listeners.

In case you weren't paying attention before, the info component is just an Ext.Component that we rendered up in app.js as a place to render the title of the image you're currently looking at. When you swipe between items in the carousel the activeitemchange event is fired, which we listen to near the top of app.js. All our activeitemchange listener does is update the HTML of the info component to the title of the image we just swiped to.

But what about the info component itself? Well at the bottom of app.js we added a tap listener on Ext.Viewport that hides or shows the info Component whenever you tap anywhere on the screen (except if you tap on the Carousel indicator icons). With a little CSS transition loveliness we get a nice fade in/out transition when we tap the screen to reveal the image title. Here's that tap listener again:

/**
* The final thing is to add a tap listener that is called whenever the user taps on the screen.
* We do a quick check to make sure they're not tapping on the carousel indicators (tapping on
* those indicators moves you between items so we don't want to override that), then either hide
* or show the info Component.
*/
Ext.Viewport.element.on('tap', function(e) {
if (!e.getTarget('.x-carousel-indicator')) {
if (titleVisible) {
info.element.removeCls('apod-title-visible');
titleVisible = false;
} else {
info.element.addCls('apod-title-visible');
titleVisible = true;
}
}
});

The End of the Beginning

This was a really simple app that shows how easy it is to put these things together with Sencha Touch 2. Like with most stories though there's more to come so keep an eye out for parts 2 and 3 of this intergalactic adventure.

Continue reading

Proxies in Ext JS 4

One of the classes that has a lot more prominence in Ext JS 4 is the data Proxy. Proxies are responsible for all of the loading and saving of data in an Ext JS 4 or Sencha Touch application. Whenever you're creating, updating, deleting or loading any type of data in your app, you're almost certainly doing it via an Ext.data.Proxy.

If you've seen January's Sencha newsletter you may have read an article called Anatomy of a Model, which introduces the most commonly-used Proxies. All a Proxy really needs is four functions - create, read, update and destroy. For an AjaxProxy, each of these will result in an Ajax request being made. For a LocalStorageProxy, the functions will create, read, update or delete records from HTML5 localStorage.

Because Proxies all implement the same interface they're completely interchangeable, so you can swap out your data source - at design time or run time - without changing any other code. Although the local Proxies like LocalStorageProxy and MemoryProxy are self-contained, the remote Proxies like AjaxProxy and ScriptTagProxy make use of Readers and Writers to encode and decode their data when communicating with the server.
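
For instance - a minimal sketch using the same Ext.regModel syntax as the example below, with made-up model names - moving from an Ajax-backed Proxy to a localStorage-backed one is just a change to the proxy config:

//loads and saves Accounts over Ajax
var Account = Ext.regModel('Account', {
    fields: ['id', 'name'],

    proxy: {
        type: 'ajax',
        url : '/accounts'
    }
});

//the same model persisted to HTML5 localStorage instead - nothing else needs to change
var OfflineAccount = Ext.regModel('OfflineAccount', {
    fields: ['id', 'name'],

    proxy: {
        type: 'localstorage',
        id  : 'accounts'
    }
});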

Ext.data.Proxy Reader and Writer

Whether we are reading data from a server or preparing data to be sent back, usually we format it as either JSON or XML. Both of our frameworks come with JSON and XML Readers and Writers which handle all of this for you with a very simple API.

Using a Proxy with a Model

Proxies are usually used along with either a Model or a Store. The simplest setup is just with a model:

var User = Ext.regModel('User', {
fields: ['id', 'name', 'email'],

proxy: {
type: 'rest',
url : '/users',
reader: {
type: 'json',
root: 'users'
}
}
});

Here we've created a User model with a RestProxy. RestProxy is a special form of AjaxProxy that can automatically figure out Restful urls for our models. The Proxy that we set up features a JsonReader to decode any server responses - check out the recent data package post on the Sencha blog to see Readers in action.
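
For that Reader configuration to work, the server just needs to wrap its records in a 'users' root property. A response along these lines would do (the values here are made up):

{
    "users": [
        {"id": 1, "name": "Ed Spencer", "email": "ed@sencha.com"},
        {"id": 2, "name": "Tommy Maintz", "email": "tommy@sencha.com"}
    ]
}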

When we use the following functions on the new User model, the Proxy is called behind the scenes:

var user = new User({name: 'Ed Spencer'});

//CREATE: calls the RestProxy's create function because the user has never been saved
user.save();

//UPDATE: calls the RestProxy's update function because it has been saved before
user.set('email', 'ed@sencha.com');

//DESTROY: calls the RestProxy's destroy function
user.destroy();

//READ: calls the RestProxy's read function
User.load(123, {
success: function(user) {
console.log(user);
}
});

We were able to perform all four CRUD operations just by specifying a Proxy for our Model. Notice that the first 3 calls are instance methods whereas the fourth (User.load) is static on the User model. Note also that you can create a Model without a Proxy, you just won't be able to persist it.

Usage with Stores

In Ext JS 3.x, most of the data manipulation was done via Stores. A chief purpose of a Store is to be a local subset of some data plus delta. For example, you might have 1000 products in your database and have 25 of them loaded into a Store on the client side (the local subset). While operating on that subset, your user may have added, updated or deleted some of the Products. Until these changes are synchronized with the server they are known as a delta.

In order to read data from and sync to the server, Stores also need to be able to call those CRUD operations. We can give a Store a Proxy in the same way:

var store = new Ext.data.Store({
model: 'User',
proxy: {
type: 'rest',
url : '/users',
reader: {
type: 'json',
root: 'users'
}
}
});

We created the exact same Proxy for the Store because that's how our server side is set up to deliver data. Because we'll usually want to use the same Proxy mechanism for all User manipulations, it's usually best to just define the Proxy once on the Model and then simply tell the Store which Model to use. This automatically picks up the User model's Proxy:

//no need to define proxy - this will reuse the User's Proxy
var store = new Ext.data.Store({
model: 'User'
});

Store invokes the CRUD operations via its load and sync functions. Calling load uses the Proxy's read operation, while sync utilizes one or more of create, update and destroy depending on the current Store delta.

//CREATE: calls the RestProxy's create function to create the Tommy record on the server
store.add({name: 'Tommy Maintz'});
store.sync();

//UPDATE: calls the RestProxy's update function to update the Tommy record on the server
store.getAt(1).set('email', 'tommy@sencha.com');
store.sync();

//DESTROY: calls the RestProxy's destroy function
store.remove(store.getAt(1));
store.sync();

//READ: calls the RestProxy's read function
store.load();

Store has used the exact same CRUD operations on the shared Proxy. In all of the examples above we have used the exact same RestProxy instance from three different places: statically on our Model (User.load), as a Model instance method (user.save, user.destroy) and via a Store instance (store.load, store.sync):

Data Proxy Reuse

Of course, most Proxies have their own private methods to do the actual work, but all a Proxy needs to do is implement those four functions to be usable with Ext JS 4 and Sencha Touch. This means it's easy to create new Proxies, as James Pearce did in a recent Sencha Touch example where he needed to read address book data from a mobile phone. Everything he does to set up his Proxy in the article (about 1/3rd of the way down) works the same way for Ext JS 4 too.
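
Conceptually, then, a minimal Proxy is just an object that knows how to perform those four operations. The sketch below is only an illustration of that shape - it is not the real Ext.data.Proxy API, and the exact method signatures differ between framework versions - but it shows how little an address book Proxy would actually have to provide:

//purely illustrative - not the framework's actual Proxy base class or signatures
var addressBookProxy = {
    create : function(records, callback) { /* save new contacts to the phone */ callback(); },
    read   : function(params, callback)  { /* load contacts and pass them back */ callback([]); },
    update : function(records, callback) { /* push edits back to the address book */ callback(); },
    destroy: function(records, callback) { /* remove the given contacts */ callback(); }
};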

Continue reading

Ext JS 4: The Class Definition Pipeline

Last time, we looked at some of the features of the new class system in Ext JS 4, and explored some of the code that makes it work. Today we're going to dig a little deeper and look at the class definition pipeline - the framework responsible for creating every class in Ext JS 4.

As I mentioned last time, every class in Ext JS 4 is an instance of Ext.Class. When an Ext.Class is constructed, it hands itself off to a pipeline populated by small, focused processors, each of which handles one part of the class definition process. We ship a number of these processors out of the box - there are processors for handling mixins, setting up configuration functions and handling class extension.

The pipeline is probably best explained with a picture. Think of your class starting its definition journey at the bottom left, working its way up the preprocessors on the left hand side and then down the postprocessors on the right, until finally it reaches the end, where it signals its readiness to a callback function:

The distinction between preprocessors and postprocessors is that a class is considered ‘ready’ (e.g. can be instantiated) after the preprocessors have all been executed. Postprocessors typically perform functions like aliasing the class name to an xtype or back to a legacy class name - things that don't affect the class' behavior.
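
A typical postprocessor-powered feature is the alias system. In the sketch below (the class and alias names are made up), registering a widget alias doesn't change how the class behaves at all - it just lets us refer to it by xtype later:

Ext.define('MyApp.StatusPanel', {
    extend: 'Ext.panel.Panel',
    alias : 'widget.statuspanel',

    html: 'All systems go'
});

//elsewhere in the app - the alias is resolved back to MyApp.StatusPanel
Ext.create('Ext.container.Viewport', {
    items: [{ xtype: 'statuspanel' }]
});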

Each processor runs asynchronously, calling back to the Ext.Class constructor when it is ready - this is what enables us to extend classes that don’t exist on the page yet. The first preprocessor is the Loader, which checks to see if all of the new Class’ dependencies are available. If they are not, the Loader can dynamically load those dependencies before calling back to Ext.Class and allowing the next preprocessor to run. We'll take another look at the Loader in another post.
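
To take advantage of that, we just declare our dependencies up front (the class names here are illustrative) and let the Loader pull in anything that isn't already on the page:

Ext.define('MyApp.ReportWindow', {
    extend  : 'Ext.Window',
    requires: ['Ext.grid.Panel'],

    title: 'Reports'
});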

After running the Loader, the new Class is set up to inherit from the declared superclass by the Extend preprocessor. The Mixins preprocessor takes care of copying all of the functions from each of our mixins, and the Config preprocessor handles the creation of the 4 config functions we saw last time (e.g. getTitle, setTitle, resetTitle, applyTitle - check out yesterday's post to see how the Configs processor helps out).
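
As a quick reminder of what the Config preprocessor buys us, a class that declares a title config gets those accessor functions generated for it. A minimal sketch (assuming we call initConfig in our constructor, as the class system expects):

Ext.define('MyApp.Message', {
    config: {
        title: 'Hello'
    },

    constructor: function(config) {
        this.initConfig(config);
    }
});

var msg = Ext.create('MyApp.Message');
msg.getTitle();          //"Hello" - generated getter
msg.setTitle('Goodbye'); //generated setter, which also runs applyTitle if we define one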

Finally, the Statics preprocessor looks for any static functions that we set up on our new class and makes them available statically on the class. The processors that are run are completely customizable, and it’s easy to add custom processors at any point. Let's take a look at that Statics preprocessor as an example:

//Each processor is passed three arguments - the class under construction,
//the configuration for that class and a callback function to call when the processor has finished
Ext.Class.registerPreprocessor('statics', function(cls, data, callback) {
if (Ext.isObject(data.statics)) {
var statics = data.statics,
name;

//here we just copy each static function onto the new Class
for (name in statics) {
if (statics.hasOwnProperty(name)) {
cls[name] = statics[name];
}
}
}

delete data.statics;

//Once the processor's work is done, we just call the callback function to kick off the next processor
if (callback) {
callback.call(this, cls, data);
}
});

//Changing the order that the preprocessors are called in is easy too - this is the default
Ext.Class.setDefaultPreprocessors(['extend', 'mixins', 'config', 'statics']);

What happens above is pretty straightforward. We're registering a preprocessor called 'statics' with Ext.Class. The function we provide is called whenever the 'statics' preprocessor is invoked, and is passed the new Ext.Class instance, the configuration for that class, and a callback to call when the preprocessor has finished its work.

The actual work that this preprocessor does is trivial - it just looks to see if we declared a 'statics' property in our class configuration and if so copies it onto the new class. For example, let's say we want to create a static getNextId function on a class:

Ext.define('MyClass', {
statics: {
idSeed: 1000,
getNextId: function() {
return this.idSeed++;
}
}
});

Because of the Statics preprocessor, we can now call the function statically on the Class (e.g. without creating an instance of MyClass):

MyClass.getNextId(); //1000
MyClass.getNextId(); //1001
MyClass.getNextId(); //1002
... etc

Finally, let's come back to that callback at the bottom of the picture above. If we supply one, a callback function is run after all of the processors have run. At this point the new class is completely ready for use in your application. Here we create an instance of MyClass using the callback function, guaranteeing that the dependency on Ext.Window has been honored:

Ext.define('MyClass', {
extend: 'Ext.Window'
}, function() {
//this callback is called when MyClass is ready for use
var cls = new MyClass();
cls.setTitle('Everything is ready');
cls.show();
});

That's it for today. Next time we'll look at some of the new features in the part of Ext JS 4 that is closest to my heart - the data package.

Continue reading

Sencha Touch tech talk at Pivotal Labs

I recently gave an introduction to Sencha Touch talk up at Pivotal Labs in San Francisco. The guys at Pivotal were kind enough to record this short talk and share it with the world - it's under 30 minutes and serves as a nice, short introduction to Sencha Touch:

UPDATE: Pivotal got acquired, this link broke. The world moved on.

The slides are available on slideshare and include the code snippets I presented. The Dribbble example used in the talk is very similar to the Kiva example that ships with the Sencha Touch SDK, so I recommend checking that out if you want to dive in further.

Continue reading

Using the Ext JS PivotGrid

One of the new components we just unveiled for the Ext JS 3.3 beta is PivotGrid. PivotGrid is a powerful new component that reduces and aggregates large datasets into a more understandable form.

A classic example of PivotGrid's usefulness is in analyzing sales data. Companies often keep a database containing all the sales they have made and want to glean some insight into how well they are performing. PivotGrid gives the ability to rapidly summarize this large and unwieldy dataset - for example showing sales count broken down by city and salesperson.

A simple example

We created an example of this scenario in the 3.3 beta release. Here we have a fictional dataset containing 300 rows of sales data (see the raw data). We asked PivotGrid to break the data down by Salesperson and Product, showing us how they performed over time. Each cell contains the sum of sales made by the given salesperson/product combination in the given city and year.

Let's see how we create this PivotGrid:

var SaleRecord = Ext.data.Record.create([
{name: 'person', type: 'string'},
{name: 'product', type: 'string'},
{name: 'city', type: 'string'},
{name: 'state', type: 'string'},
{name: 'month', type: 'int'},
{name: 'quarter', type: 'int'},
{name: 'year', type: 'int'},
{name: 'quantity', type: 'int'},
{name: 'value', type: 'int'}
]);

var myStore = new Ext.data.Store({
url: 'salesdata.json',
autoLoad: true,
reader: new Ext.data.JsonReader({
root: 'rows',
idProperty: 'id'
}, SaleRecord)
});

var pivotGrid = new Ext.grid.PivotGrid({
title : 'Sales Performance',
store : myStore,
aggregator: 'sum',
measure : 'value',

leftAxis: [
{dataIndex: 'person', width: 80},
{dataIndex: 'product', width: 90}
],

topAxis: [
{dataIndex: 'year'},
{dataIndex: 'city'}
]
});

The first half of this ought to be very familiar - we just set up a normal Record and Store. This is all we need to load our sample data so that it's ready for pivoting. This is all exactly the same code as for our other Store-bound components like Grid and DataView so it's easy to take an existing Grid and turn it into a PivotGrid.

The second half of the code creates the PivotGrid itself. There are 5 main components to a PivotGrid - the store, the measure, the aggregator, the left axis and the top axis. Taking these in turn:

  • Store - the Store we created above
  • Measure - the field in the data that we want to aggregate (in this case the sale value)
  • Aggregator - the function we use to combine data into the cells. See the docs for full details
  • Left Axis - the fields to break data down by on the left axis
  • Top Axis - the fields to break data down by on the top axis

The measure and the items in the axes must all be fields from the Store. The aggregator function can usually be passed in as a string - there are 5 aggregator functions built in: sum, count, min, max and avg.

Renderers

This is all we need to create a simple PivotGrid; now it's time to look at a few more advanced options. Let's start with renderers. Once the data for each cell has been calculated, the value is passed to an optional renderer function, which takes each value in turn and returns another value. One of the PivotGrid examples shows average heights in feet and inches but the calculated data is in decimal. Here's the renderer we use in that example:

new Ext.grid.PivotGrid({
store : myStore,
aggregator: 'avg',
measure : 'height',

//turns a decimal number of feet into feet and inches
renderer : function(value) {
var feet = Math.floor(value),
inches = Math.round((value - feet) * 12);

return String.format("{0}' {1}\"", feet, inches);
},
//the rest of the config
});

Customising cell appearance

Another one of the PivotGrid examples uses a custom cell style. As with the renderer, each cell has the opportunity to alter itself with a custom function - here's the one we use in the countries example:

new Ext.grid.PivotGrid({
store : myStore,
aggregator: 'avg',
measure : 'height',

viewConfig: {
getCellCls: function(value) {
if (value < 20) {
return 'expense-low';
} else if (value < 75) {
return 'expense-medium';
} else {
return 'expense-high';
}
}
},
//the rest of the config
});

Reconfiguring at runtime

A lot of the power of PivotGrid is that it can be used by users of your application to summarize datasets any way they want. This is made possible by PivotGrid's ability to reconfigure itself at runtime. We present one final example of a PivotGrid that can be reconfigured at runtime. Here's how we perform the reconfiguration:

//the left axis can also be changed
pivot.topAxis.setDimensions([
{dataIndex: 'city', direction: 'DESC'},
{dataIndex: 'year', direction: 'ASC'}
]);

pivot.setMeasure('value');
pivot.setAggregator('avg');

pivot.view.refresh(true);

It's easy to change the axes, dimension, aggregator and measure at any time and then refresh the data. The calculations are all performed client side so there is no need for another round-trip to the server when reconfiguring. The example linked above gives an example interface for updating a PivotGrid, though anything that can make the API calls above could be used.

I hope you enjoy the new components in this Ext JS 3.3 beta and look forward to comments and suggestions. Although we're only at beta stage I think the additions are already quite robust so feel free to stress-test them.

Continue reading

Offline Apps with HTML5: A case study in Solitaire

One of my contributions to the newly-launched Sencha Touch mobile framework is the Touch Solitaire game. This is not the first time I have ventured into the dizzying excitement of Solitaire game development; you may remember the wonderful Ext JS Solitaire from 18 months ago. I'm sure you'll agree that the new version is a small improvement.

Solitaire

Solitaire is a nice example of a fun application that can be written with Sencha Touch. It makes use of the provided Draggables and Droppables, CSS-based animations, the layout manager and the brand new data package. The great thing about a game like this though is that it can be run entirely offline. Obviously this is simple with a native application, but what about a web app? Our goal is not just having the game able to run offline, but to save your game state locally too.

The answer comes in two parts:

Web Storage and the Sencha data package

HTML5 provides a brand new API called Web Storage for storing data locally. You can read all about it on my Web Storage post on Sencha's blog but the summary is that you can store string data locally in the browser and retrieve it later, even if the browser or the user's computer had been restarted in the meantime.

The crucial part of the sentence above is that we can only store string data. In the case of a game of Solitaire we need to store data on the elapsed time and number of moves as well as the location and status of each card. This doesn't sound like the kind of data we want to manually encode into a string, so thankfully the data package comes to the rescue.
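
Hand-rolling that with the raw Web Storage API would look roughly like this (the key name and data shape here are made up) - workable, but it gets tedious fast as the game state grows:

//localStorage only deals in strings, so everything has to be serialized by hand
var game = {moves: 42, elapsed: 315, cards: [/* position and status of each card */]};

localStorage.setItem('solitaire-game', JSON.stringify(game));

//...later, even after the browser has been restarted...
var saved = JSON.parse(localStorage.getItem('solitaire-game'));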

The Sencha Touch data package is a complete rewrite of the package that has been so successful in powering Ext JS 3.x. It shares many of the same philosophies and adds the learning we have gained from developing Ext JS 3.x over the past year. One of the new capabilities it offers us is a Local Storage proxy, which automatically marshalls your model data into local storage and transparently restores it when you need it.

Using the new proxy is simple - all we need to do is set up a new Store, specifying the Proxy and the Model that will be saved to it. Models are the spiritual successor to Ext JS 3.x's Records. Now whenever we add, remove or update model instances in the store they are automatically saved to localStorage for us. Loading the store again is equally easy:

//set the store up
var gameStore = new Ext.data.Store({
proxy: new Ext.data.LocalStorageProxy({
id: 'solitaire-games'
}),
model: 'Game'
});

//saves all outstanding modifications, deletions or creations to localStorage
gameStore.sync();

//load our saved games
gameStore.read({
scope: this,
callback: function(records) {
//code to load the first record
}
});

And just like that we can save and restore games with Web Storage. We can visit our app's webpage and start a game, then come back later and find it automatically restored. But we still can't play offline - for that we need the application cache.

The HTML5 Application Cache Manifest

The application cache is one of the best features of HTML5. It provides a simple (though sometimes frustrating) way of telling the browser about all of the files your application relies on so that it can download them all ready for offline use. All you have to do is create what's known as a manifest file which lists all of the files the application needs - the Solitaire manifest looks like this:

CACHE MANIFEST
#rev49

resources/icon.png
resources/loading.png

resources/themes/wood/board.jpg
resources/themes/wood/cards.png

resources/css/ext-touch.css
resources/solitaire-notheme.css
resources/themes/wood/wood.css
resources/themes/metal/metal.css

ext-touch-debug.js
solitaire-all-debug.js

We tell the browser about the manifest file by pointing to it in the html tag's manifest attribute. When the browser finds this file it downloads each of the listed assets so that they are ready for offline consumption. Note that it does not automatically include them on the page - you still need to do that yourself via the usual link and script tags. Here's a snippet of the Solitaire index.html file:

<!doctype html>
<html manifest="solitaire.manifest">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Solitaire</title>

<link rel="stylesheet" href="resources/css/ext-touch.css" type="text/css">
<link rel="stylesheet" href="resources/solitaire-notheme.css" type="text/css">
<link rel="stylesheet" href="resources/themes/wood/wood.css" type="text/css">

<script type="text/javascript" src="ext-touch-debug.js"></script>
<script type="text/javascript" src="solitaire-all-debug.js"></script>

Note the manifest file definition in the html element at the top, and the fact that we still include our page resources the normal way. It sounds easy, but without a little setup first it can be a very frustrating experience. Usually your browser will try to cache as many files as possible, including the manifest file itself - we don't want this. As soon as your browser has a long-term cache of the manifest file it is extremely difficult to update your application - all of the files are already offline and won't be updated, and the browser won't even ask the server for an updated manifest file.

Preventing this behaviour turns out to be fairly easy, and the solution in its simplest form comes in the shape of a .htaccess file with contents like the following:

<Files solitaire.manifest>
ExpiresActive On
ExpiresDefault "access"
</Files>

This directs Apache to tell the browser not to cache the manifest file at all, instead requesting the file from the server on every page load. Note that if the device is currently offline it will use the last manifest file it received.

This is half the battle won, but let's say you change one of your application files and reload - you'll find nothing happened. This is because when your browser asked the server for the manifest file it actually asked if the file had changed or not. As the manifest itself wasn't updated, the server responds with a 304 (Not Modified) and your browser keeps the old file.

To make the browser pick up on the change to the application file you need to update the manifest file itself. This is where the mysterious "#rev49" comes in on the manifest example file above. This is a suggestion from the excellent diveintohtml5 article on the subject - whenever you change any application files just bump up the revision number in the manifest file and your browser will know to download the updated files.

One final detail is that your Apache server probably isn't set up to serve manifest files with the correct mime type, so be sure to add the following line to your Apache config and restart the server:

AddType text/cache-manifest .manifest

Wrapping it up

Offline access is a big deal for mobile apps and Sencha Touch makes them much easier to write. The benefit is not so much that the apps can run without an internet connection (many modern touch devices have a near-permanent connection to the internet already), but that web apps can now be treated as first-class citizens alongside native apps.

The fact that many devices allow your users to save your app to their home screen and load it as though it were native is an important step - you keep all of the advantages of web app deployment while gaining some of the benefits of native apps. As more and more native hardware APIs become available to web apps their importance will only grow.

If you want to check out Solitaire's offline support for yourself visit the application's site and save it to your iPad's home page. Try turning on airplane mode and loading the app and see how it behaves as though it were native. If you don't have an iPad, you can load the app in up-to-date versions of Chrome or Safari and get a similar experience.

Continue reading

Ext.ux.Exporter - export any Grid to Excel or CSV

Sometimes we want to print things, like grids or trees. The Ext JS printing plugin is pretty good for that. But what if we want to export them instead? Enter Ext.ux.Exporter.

Ext.ux.Exporter allows any store-based component (such as grids) to be exported, locally, to Excel or any other format. It does not require any server side programming - the export document is generated on the fly, entirely in JavaScript.

The extension serves as a base for exporting any kind of data, but comes bundled with a .xls export formatter suitable for exporting any Grid straight to Excel. Here's how to do that:

var grid = new Ext.grid.GridPanel({
store: someStore,
tbar : [
{
xtype: 'exportbutton',
store: someStore
}
],
//your normal grid config goes here
});

Clicking the Download button in the top toolbar iterates over the data in the store and creates an Excel file locally, before Base64 encoding it and redirecting the browser via a data url. If you have Excel or a similar program installed your browser should ask you to save the file or open it with Excel.
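
The core of that flow is small enough to sketch out - the names below are illustrative rather than the extension's actual internals, but the mechanism is just a Base64-encoded data url:

//build the .xls markup from the store's records, encode it, then redirect to a data url
var excelData = someFormatter.format(someStore);
var encoded   = btoa(excelData);

document.location.href = 'data:application/vnd.ms-excel;base64,' + encoded;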

I put together a quick example of the plugin in action inside the repository, just clone or download the code and drag the examples/index.html file into your browser to run it.

The Exporter will work with any store or store-based component. It also allows export to any format - for example CSV or PDF. Although the Excel Formatter is probably the most useful, implementing a CSV or other Formatter should be trivial - check out the Excel Formatter example in the ExcelFormatter directory.
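
To give a feel for how little work that is, here's a rough standalone sketch of the CSV case - it just walks the store's records and joins the field values together. It isn't the extension's actual Formatter interface, just the gist of what one would do:

function storeToCsv(store) {
    var fields = store.fields.items,
        header = [],
        rows   = [];

    Ext.each(fields, function(field) {
        header.push(field.name);
    });

    store.each(function(record) {
        var row = [];
        Ext.each(fields, function(field) {
            //quote each value and escape any embedded quotes
            row.push('"' + String(record.get(field.name)).replace(/"/g, '""') + '"');
        });
        rows.push(row.join(','));
    });

    return header.join(',') + '\n' + rows.join('\n');
}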

Continue reading

Jaml: beautiful HTML generation for JavaScript

Generating HTML with JavaScript has always been ugly. Hella ugly. It usually involves writing streams of hard-to-maintain code which just concatenates a bunch of strings together and spits them out in an ugly mess.

Wouldn't it be awesome if we could do something pretty like this:

div(
h1("Some title"),
p("Some exciting paragraph text"),
br(),

ul(
li("First item"),
li("Second item"),
li("Third item")
)
);

And have it output something beautiful like this:

<div>
<h1>Some title</h1>
<p>Some exciting paragraph text</p>
<br />
<ul>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ul>
</div>

With Jaml, we can do exactly that. Jaml is a simple library inspired by the excellent Haml library for Ruby. It works by first defining a template using an intuitive set of tag functions, then rendering it into nicely formatted HTML. Here's how we'd do that with the template above:

Jaml.register('simple', function() {
  div(
    h1("Some title"),
    p("Some exciting paragraph text"),
    br(),

    ul(
      li("First item"),
      li("Second item"),
      li("Third item")
    )
  );
});

Jaml.render('simple');

All we need to do is call Jaml.register with a template name and the template source. Jaml stores this for later use, allowing us to render it at any point with Jaml.render(). Rendering with Jaml gives us the nicely formatted, indented HTML displayed above.

So we've got a nice way of specifying reusable templates and then rendering them prettily, but we can do more. Usually we want to inject some data into our template before rendering it - like this:

Jaml.register('product', function(product) {
  div({cls: 'product'},
    h1(product.title),

    p(product.description),

    img({src: product.thumbUrl}),
    a({href: product.imageUrl}, 'View larger image'),

    form(
      label({'for': 'quantity'}, "Quantity"),
      input({type: 'text', name: 'quantity', id: 'quantity', value: 1}),

      input({type: 'submit', value: 'Add to Cart'})
    )
  );
});

In this example our template takes an argument, which we've called product. We could have called this anything, but in this case the template is for a product in an ecommerce store so product makes sense. Inside our template we have access to the product variable, and can output data from it.

Let's render it with a Product from our database:

//this is the product we will be rendering
var bsg = {
  title      : 'Battlestar Galactica DVDs',
  thumbUrl   : 'thumbnail.png',
  imageUrl   : 'image.png',
  description: 'Best. Show. Evar.'
};

Jaml.render('product', bsg);

The output from rendering this template with the product looks like this:

<div class="product">
  <h1>Battlestar Galactica DVDs</h1>
  <p>Best. Show. Evar.</p>
  <img src="thumbnail.png" />
  <a href="image.png">View larger image</a>
  <form>
    <label for="quantity">Quantity</label>
    <input type="text" name="quantity" id="quantity" value="1"></input>
    <input type="submit" value="Add to Cart"></input>
  </form>
</div>

Cool - we've got an object-oriented declaration of an HTML template which is cleanly separated from our data. How about we define another template, this time for a category which will contain our products:

Jaml.register('category', function(category) {
  div({cls: 'category'},
    h1(category.name),
    p(category.products.length + " products in this category:"),

    div({cls: 'products'},
      Jaml.render('product', category.products)
    )
  );
});

Our category template references our product template, achieving something rather like a partial in Ruby on Rails. This obviously allows us to keep our templates DRY and to easily render a hypothetical Category page like this:

//here's a second product
var snowWhite = {
  title      : 'Snow White',
  description: 'not so great actually',
  thumbUrl   : 'thumbnail.png',
  imageUrl   : 'image.png'
};

//and a category
var category = {
  name    : 'Doovde',
  products: [bsg, snowWhite]
};

Jaml.render('category', category);

All we've done is render the 'category' template with our 'Doovde' category, which contains an array of products. These were passed into the 'product' template to produce the following output:

<div class="category">
  <h1>Doovde</h1>
  <p>2 products in this category:</p>
  <div class="products"><div class="product">
    <h1>Battlestar Galactica DVDs</h1>
    <p>Best. Show. Evar.</p>
    <img src="thumbnail.png" />
    <a href="image.png">View larger image</a>
    <form>
      <label for="quantity">Quantity</label>
      <input type="text" name="quantity" id="quantity" value="1"></input>
      <input type="submit" value="Add to Cart"></input>
    </form>
  </div>
  <div class="product">
    <h1>Snow White</h1>
    <p>not so great actually</p>
    <img src="thumbnail.png" />
    <a href="image.png">View larger image</a>
    <form>
      <label for="quantity">Quantity</label>
      <input type="text" name="quantity" id="quantity" value="1"></input>
      <input type="submit" value="Add to Cart"></input>
    </form>
  </div>
  </div>
</div>

You can see live examples of all of the above at http://edspencer.github.com/jaml.

Jaml currently sports a few hacks and is not particularly efficient. It is presented as a proof of concept, though all the output above is true output from the library. As always, all of the code is up on Github, and contributions are welcome :)

Jaml would be suitable for emulating a Rails-style directory structure inside a server-side JavaScript framework - each Jaml template could occupy its own file, with the template name coming from the file name. This is roughly how Rails and other MVC frameworks work today, and it eliminates the need for the Jaml.register lines. Alternatively, the templates could still be stored server-side and simply pulled down and evaluated for client-side rendering.
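
Here's a rough sketch of how that file-per-template idea could work in a Node.js-style server environment. The views directory, the .js extension and the eval-based loading are all hypothetical - the point is just that the file name can stand in for the Jaml.register call:

//hypothetical server-side loader: one Jaml template per file, registered under the file's name
var fs   = require('fs'),
    path = require('path');

fs.readdirSync('./views').forEach(function(file) {
  var name   = path.basename(file, '.js'),
      source = fs.readFileSync(path.join('./views', file), 'utf8');

  //each file is assumed to contain a single function expression, e.g. "function(product) { ... }"
  Jaml.register(name, eval('(' + source + ')'));
});

Assuming a views/product.js file exists, Jaml.render('product', bsg) would then work exactly as in the browser examples above.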

Happy rendering!


Making RowEditor use your column renderers

The RowEditor plugin is one of my favourite Ext JS components. It allows any row in a grid to be turned into an ad hoc form on the fly, saving you the effort of defining additional form components.

Recently I had a grid with a few fields that didn't have an editor, something like this:

var myGrid = new Ext.grid.GridPanel({
  plugins: [new Ext.ux.grid.RowEditor()],
  columns: [
    {
      header   : "Username",
      dataIndex: 'username',
      editor   : new Ext.form.TextField()
    },
    {
      header   : "Signup date",
      dataIndex: 'created_at',
      renderer : Ext.util.Format.dateRenderer('m/d/Y')
    }
  ]
});

Simple stuff - we just show a username and a signup date, the latter formatted by a renderer. When we double-click a row it turns into an editable row, and we get a textfield allowing us to edit the username. Unfortunately, while in edit mode our date renderer is ignored and the raw value is displayed instead.

Thankfully, we can fix this by altering RowEditor's source code. The method we need to change is startEditing, which sadly suffers from long method syndrome. About halfway into that method there's a for loop, which we're going to alter to look like this:

for (var i = 0, len = cm.getColumnCount(); i < len; i++){
  val = this.preEditValue(record, cm.getDataIndex(i));
  f   = fields[i];

  //our changes start here
  var column = cm.getColumnById(cm.getColumnId(i));

  val = column.renderer.call(column, val, {}, record);
  //our changes end here

  f.setValue(val);
  this.values[f.id] = Ext.isEmpty(val) ? '' : val;
}

We didn't really have to do much - we just grab the renderer for the column and pass it the default value and the record that were found earlier in the method.

For the curious, the empty object we pass in as the second argument to the renderer is what would usually be the 'meta' object (see the renderer documentation on the Column class). Under the covers, RowEditor actually creates an Ext.form.DisplayField instance for each column that you don't specify an editor for. This is why we use f.setValue(val); above. DisplayField doesn't have the same meta stuff as a normal cell would, so if you're looking to customise CSS via the metadata you'll have to do something like this instead:

columns: [
  {
    ...
    editor: new Ext.form.DisplayField({
      cls  : 'myCustomCSSClass',
      style: 'border: 10px solid red;'
    })
  }
]

Pretty easy. It's a shame we have to overwrite the source code, as this makes the solution less future-proof, but if you look at RowEditor's source code you'll see why a 45-line override would be equally unpleasant.


git: what to do if you commit to no branch

Using git, you'll sometimes find that you're not on any branch. This usually happens when you're using a submodule inside another project. Sometimes you'll make some changes to this submodule, commit them and then try to push them up to a remote repository:

ed$ git commit -m "My excellent commit"
[detached HEAD d2bdb98] My excellent commit
3 files changed, 3 insertions(+), 3 deletions(-)
ed$ git push origin master
Everything up-to-date

Er, what? Everything is not up to date - I just made changes! The clue is in the first part of the commit response - [detached HEAD d2bdb98]. This just means that we've made a commit without actually being on any branch.

Luckily, this is easy to solve - all we need to do is check out the branch we should have been on and merge in that commit SHA:

ed$ git checkout master
Previous HEAD position was d2bdb98... My excellent commit
Switched to branch 'master'
ed$ git merge d2bdb98
Updating 88f218b..d2bdb98
Fast forward
ext-mvc-all-min.js | 2 +-
ext-mvc-all.js | 2 +-
view/FormWindow.js | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)

Once we got onto the master branch, we just called git merge with the SHA reference for the commit we just made (d2bdb98), which applied our commit to the master branch. The output tells us that the commit was applied, and now we can push up to our remote repository as normal:

ed$ git push origin master
Counting objects: 11, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 692 bytes, done.
Total 6 (delta 4), reused 0 (delta 0)
To git@github.com:extmvc/extmvc.git
88f218b..d2bdb98 master -> master

This had me puzzled for a while, so hopefully it'll save someone banging their head against a nearby wall.
