Tuesday, June 13, 2017

Parallelizing JavaScript Workloads - Building High Performance Graphs with Plotly.js and webpack-worker

Recently, we’ve been playing around with plotting various graphs in a React application with Plotly.js. It became clear pretty quickly that working with large data sets on the main application thread is simply not an option. Dragging a date range slider to filter graphs is jarring, not the smooth experience we’re after.


Enter WebWorkers.

WebWorkers allow offloading CPU-intensive tasks to other threads and are supported by pretty much all modern web browsers. However, they are a very low level mechanism, exposing only a simple message passing API for communication. The webpack-worker package provides a cleaner, more intuitive abstraction based on promises.
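To get a feel for what webpack-worker saves you from, here is a rough sketch of the request/response plumbing you would otherwise write over postMessage yourself. The channel object and its shape are illustrative only - in a browser it would be the Worker instance, and a real onmessage handler receives an event with the message on its data property:

```javascript
// Minimal promise-based RPC over a postMessage-style channel (sketch).
// `channel` is any object with postMessage(msg) and an onmessage handler.
function createRpcClient(channel) {
  var nextId = 0
  var pending = {}
  channel.onmessage = function (msg) {
    // correlate the reply with the original request by id
    var reply = pending[msg.id]
    delete pending[msg.id]
    if (msg.error) reply.reject(new Error(msg.error))
    else reply.resolve(msg.result)
  }
  return function call(method, arg) {
    return new Promise(function (resolve, reject) {
      var id = nextId++
      pending[id] = { resolve: resolve, reject: reject }
      channel.postMessage({ id: id, method: method, arg: arg })
    })
  }
}
```

Every call needs an id, a pending map and error marshalling - and this sketch doesn't even cover initialization. webpack-worker handles all of this for us.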

Setting the Scene

To demonstrate, we’re going to build a graph that shows the top 10 movers from some historical stock market data available at http://pages.swcp.com/stocks/. We’re also going to add a date range slider to dynamically pick the date range to analyze.

The data consists of a year of stock prices for 242 stocks in CSV format, one line per stock, per day. For example:

20090916,AMZN,85.97,90.98,85.9,90.7,131142

That’s the date, stock symbol, opening, high, low, and closing prices; I'm not sure what the last column refers to. We’re going to duplicate the data so we effectively have five years' worth. This destroys any integrity in the results, but we’re not here to analyze stocks!

There is a considerable amount of work for the app to do to convert this style of data into the format we need, particularly when working over many years' worth of data. The details of the implementation aren’t relevant to this post, but the complete sample is available in the GitHub repository.
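Just to make the data shape concrete, parsing a single CSV line might look something like this (a sketch - the field names are my own, not necessarily those used in the sample):

```javascript
// Parse one line of the CSV into a record. Field names are assumed
// for illustration; the last column's meaning is unclear, so it is
// kept raw.
function parseLine(line) {
  var fields = line.split(',')
  return {
    date: fields[0],
    symbol: fields[1],
    open: parseFloat(fields[2]),
    high: parseFloat(fields[3]),
    low: parseFloat(fields[4]),
    close: parseFloat(fields[5]),
    unknown: fields[6]
  }
}
```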

You can see a working example that compares using WebWorkers with running everything on the main UI thread here.

Defining Our Worker Code

The webpack-worker package provides simple, promise-based APIs for declaring and consuming worker code. It provides models for both single, long-running processes and APIs that can be called multiple times. We’re going to use the API model, as we will need to recalculate the graph data as the user drags the slider.

Based on our requirements above, our worker might look something like:

import api from 'webpack-worker/api'

// initialization arguments can be passed from the client
api(url => {
  // perform any async initialization required by returning a promise
  return fetch(url)
    .then(response => response.text())
    .then(text => {
      var data = parseFile(text)

      // at the end of the promise chain, return the api we want to expose
      return {
        topTenMovers: filter => {
          var filteredData = filterData(data, filter)
          var stocks = aggregateStocks(filteredData)
          var topTen = findTopTen(stocks)
          return mapToVectors(topTen)
        },
        // add other API functions here
      }
    })
})

You’ll notice that this module doesn’t export anything. That’s because WebWorkers run in an isolated thread - this module is the entry point for the WebWorker. webpack-worker wraps the API returned and handles the low level communication for us. We’ll talk about how to use webpack to create a bundle for our worker a little bit later.

Consuming Our Worker

webpack-worker generates a client API for us, based off the API we specify in the worker. Note that currently only a single argument can be provided to API calls and initialization.

Let’s take a look at how our graph component might look without filters. We’ll add those next.

import React, { Component } from 'react'
import createClient from 'webpack-worker/client'
import Plotly from 'plotly.js/dist/plotly-basic.js'

export default class Graph extends Component {
  componentDidMount = () => {
    // create a client for our worker API, passing in the URL
    // the promise resolves once initialization has finished
    createClient(new Worker('/static/js/worker.bundle.js'), '/sp500hst.txt')
      .then(worker => {
        this.worker = worker
        this.renderGraph()
      })
  }

  renderGraph = (filter = {}) => {
    this.setState({ filter })
    // call our topTenMovers API function
    this.worker.topTenMovers(filter)
      .then(data => Plotly.newPlot(this.element, [data]))
  }

  // render a container for our graph and create a reference to it
  render = () => <div ref={element => this.element = element}></div>
}

Pretty simple stuff. You’ll notice we kept graph rendering logic separate in the renderGraph function. We’ll reuse this when we add a date filter. This ends up looking something like:

[screenshot: the rendered top 10 movers graph]

Dynamically Filtering

For a simple solution to our UI needs, we’ll use the react-input-range NPM package. Let’s add it to our component:

// ...
import InputRange from 'react-input-range'

export default class Graph extends Component {
  // set the default filter value - we are hard coding this range for simplicity
  state = {
    filter: {
      min: new Date(2009, 7, 21).getTime(),
      max: new Date(2014, 7, 21).getTime()
    }
  }
// ...
  render = () => (
    <div>
      <div className="filter">
        <InputRange  
          minValue={new Date(2009, 7, 21).getTime()}
          maxValue={new Date(2014, 7, 21).getTime()}
          value={this.state.filter}
          step={86400000} // one day
          onChange={filter => this.renderGraph(filter)}
          formatLabel={value => new Date(value).toLocaleDateString()} />
      </div>
      <div ref={element => this.element = element}></div>
    </div>
  )
}

This ends up looking something like:

[screenshot: the graph with the date range slider]

Things are starting to look pretty good! Now, as we drag the date slider, the graph updates.

There’s a problem, though. As we drag the slider, it queues up a filter operation for every mouse move event onto our worker, which tirelessly attempts to fulfill all of our requests… long after we’ve finished dragging the slider.

Throttling

Fortunately, webpack-worker gives us a simple way to throttle our requests to produce the most responsive behavior. Applying it is simple:

import throttle from 'webpack-worker/throttle.mostRecent'
// ...
  componentDidMount = () => {
    createClient(new Worker('/static/js/worker.bundle.js'), '/sp500hst.txt')
      .then(client => {
        this.worker = throttle.applyTo(client) // apply throttling to API requests
        this.renderGraph(this.state.filter)
      })
  }
// ...

The applyTo function attaches a throttle to all function members of the supplied object. When throttled, if requests are made while the worker is already busy, only the most recent request is queued; all others are dropped.
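The behavior described above can be sketched in plain JavaScript - this is an illustrative implementation of a "most recent" throttle, not webpack-worker's actual code:

```javascript
// Sketch of a "most recent" throttle: while a call is in flight,
// keep only the latest queued request and reject dropped ones
// with an error flagged `dropped`.
function throttleMostRecent(fn) {
  var busy = false
  var pending = null

  function finish() {
    busy = false
    if (pending) {
      var next = pending
      pending = null
      invoke(next.arg).then(next.resolve, next.reject)
    }
  }

  function invoke(arg) {
    busy = true
    var result = Promise.resolve(fn(arg))
    result.then(finish, finish)
    return result
  }

  return function (arg) {
    if (!busy) return invoke(arg)
    if (pending) {
      // a newer request supersedes the queued one - drop it
      var error = new Error('request dropped')
      error.dropped = true
      pending.reject(error)
    }
    return new Promise(function (resolve, reject) {
      pending = { arg: arg, resolve: resolve, reject: reject }
    })
  }
}
```

The `dropped` flag on rejected requests is what lets calling code distinguish dropped requests from real errors.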

When requests are dropped, the returned promise is rejected, so we need to handle the rejection in our renderGraph function:

  renderGraph = filter => {
    this.setState({ filter })
    this.worker.topTenMovers(filter)
      .then(data => Plotly.newPlot(this.element, [data]))
      // ignore dropped requests; any real errors coming
      // out of the worker still need to be handled here
      .catch(error => error.dropped || console.error(error))
  }

Configuring webpack

As discussed, worker processes are isolated threads that require their own entry point. Correspondingly, we can create entry points in our webpack configuration. This is a simple matter of creating a worker node in the entry hash, for example:

module.exports = {
  entry: {
    app: [/* app entry point here */],
    worker: require.resolve('../src/worker')
  },
  output: {
    filename: 'static/js/[name].bundle.js',
  }
}

If your app was created with create-react-app, there are a couple of additional steps that are described here.

Wrapping Up

You can see a working example of what we’ve discussed here contrasted with running everything on the main UI thread here.

The use case discussed here is purely to demonstrate the impact of using WebWorkers - webpack-worker can be used for executing any JavaScript code within a WebWorker, not just calculating graph data. It’s also worth noting that webpack-worker will work with other bundling systems like Browserify.

The full source to this sample is available in the webpack-worker repository here.


Friday, March 11, 2016

Development Workflows for Azure Mobile Apps

Azure Mobile Apps offers a great development experience for all levels of development expertise and application complexity. The online editing experience works exceptionally well with a responsive UI, immediate updates, comprehensive intellisense and great git integration.

For more experienced developers, a (mostly) comprehensive local development experience is provided, with a simple transition between online and local development.

This article assumes a basic familiarity with creating Mobile Apps in the Azure management portal (https://portal.azure.com/) and with git repositories.

The Online Experience

The online development experience is centered around VS Online and is enabled for you when you access Easy Tables in the settings section of a Mobile App. If you're playing with Mobile Apps for the first time, we recommend that you complete the Quickstart section and create a Node.js backend for the Mobile App.

VS Online can be accessed by clicking the "Edit Script" button in Easy Tables or by navigating to http://your_mobile_app.scm.azurewebsites.net/dev/. VS Online provides a complete IDE with full intellisense for Azure Mobile Apps. Changes made to the application code are effective immediately without requiring a save or rebuild. The IDE is effective for small to medium complexity apps, even with multiple developers.

Using a Github Repository

It's important to note that these workflows do not involve using a local git repository on the mobile app server (as set up in continuous deployment in the portal). Changes are made directly to the wwwroot on the app server. For this reason, it's important to have separate development instances of your mobile app - we don't want to be making changes directly to production!

Here, we'll set up a remote git repository to push your changes to, allowing multiple developers to work on the same code, enabling a simple production deployment scenario and keeping your code safe, secure and versioned.

Configuring VS Online

  1. Create your github repository. Copy the git URL to the clipboard.
  2. Return to VS Online and open the console by pressing ctrl-shift-c.
  3. Type the following commands into the console:
    git init
    git remote add origin <repository url>
    git config --global user.email <your email>
    git config --global user.name <your name>
    git add -A
    git commit -m "Initial commit"
    git push -u origin master
You are now set up to use the git user interface built in to VS online, or standard git commands like add, commit and push in the console.

Adding Other Developers

Each developer will have their own mobile app instance and will sync changes using the git repository we created.

For this step, we recommend enabling the VS Online extension separately. Open the new mobile app in the portal, click the Tools icon and open the Extensions section. Follow the prompts to install VS Online.

Open VS Online for the app and type the following commands into the console:
git clone <repository url> .
git config --global user.email <your email>
git config --global user.name <your name>
npm install
The . at the end of the first command is important - it tells git to clone into the current directory. Note that the directory must be empty.

Configure your data connection, and you are ready to go. Again, the VS Online git user interface works great, or standard git commands can be used in the console. git pull can be used to sync changes that other developers have pushed to the repository.

Transitioning to Local Development

Transitioning to editing and running your project locally is much the same as the "Adding Other Developers" section above, except running the commands in a console on your local computer.
Ensure Node.js is installed on your computer and run the following commands:
git clone <repository url>
cd <repository folder>
npm install
Starting the mobile app backend can then be done by running
node --debug ./server.js
The server runs on port 3000 by default - you must configure the mobile app URL in clients to be http://<your machine's address>:3000/

For full intellisense and integrated debugging, we recommend using VS Code to edit your project.

Configuring a Production Environment

In this part, we are going to create our production mobile app server and set it up to automatically pick up changes that we want to publish from our development environment.

Setting Up Publishing (Continuous Deployment)

To set up automatic publishing, we will use a great feature of Azure Web Apps, continuous deployment. This monitors a git repository for changes and syncs automatically.

First, we want to create a new branch in our git repository - we don't want to publish every change to production, only when we pull the trigger. Open a VS Online console on a developer instance and enter the following commands:
git checkout -b publish
git push origin publish
git checkout master
Don't forget to enter the last line! It switches you back to working on your development branch.

Next, create your production mobile app server and configure the data connection. Once created, open the continuous deployment section in publish settings, select GitHub as the source and follow instructions to point at the new publish branch of your github project.

After clicking OK, watch your project sync and be ready for use!

Deploying

Deploying to production is now a simple matter of merging the master branch into the publish branch. For small changes, you can do this by opening the console of a developer instance and entering the following commands:
git checkout publish
git merge master
git push
git checkout master
However, for most changes, we highly recommend taking advantage of github's pull request functionality, allowing you to see your changes line by line and discuss them with other developers before you publish to production.

Reviewing Changes Before Publishing

After you have pushed your changes to the master branch, open your github repository in a browser and click the "New pull request" button. In the base branch dropdown, select our publish branch and then click "Create pull request".

Once the review is complete, click the "Merge pull request" button and the changes will be deployed to production. You can watch the deployment take place in the continuous deployment section of the production instance in the portal.

Final Thoughts

It's important to note that this guidance is intended to serve as a basic framework for a development workflow. It is by no means comprehensive - it does not include testing (more on this later) and does not take advantage of other great Azure features like staging slots.

Comments? Questions? Join the conversation on gitter!


Saturday, May 17, 2014

test-studio – Unit Testing for nodejs, How it Should Be

It’s been a while! Freakin’ awesome to see you again.

We’ve been working on some cool stuff! The next evolution of tribe is coming and it’s going to change how we think about developing applications on the HTML/JS/CSS stack. But that’s for later.

What’s test-studio?

We were going to prepare a video showing the cool stuff it can do but quickly realised that was going to take a while, and we’re busting to show you. So, here’s a nice pretty picture instead.

[screenshot: the test-studio UI]

Looks kinda like a web based front end for nodejs unit tests, complete with sweet debugging.

Why Would I Want to Use it?

It

  • can run tests individually, in groups or whole suites,
  • integrates sweetly with node-inspector and can step the debugger into individual tests,
  • keeps your tests in sync automatically as you work,
  • captures console output and error details for each test,
  • installs in seconds using this npm package.

Basically the sorts of things you would expect from a good test client.

It currently only supports mocha, but we will add support for other test frameworks given demand. There are also a few other useful things we’ve added.

Other Useful Things?

To show you some of the goodies we’ve added, we’ll use some examples. The tests below are all focused around the following simple module.

    var i = 0;

    module.exports = {
        next: function (increment) {
            return i += (increment || 1);
        }
    }

Pretty basic stuff: increment a variable by the argument if supplied, or by 1 if not, and return the result.

sinon and chai Integration

We’ve integrated sinon and chai and exposed them as pseudo-globals so you don’t need to require them in every file.

    suite('counter', function () {
        test('counter increments by one if no argument is supplied', function () {
            var counter = require('./counter');
            expect(counter.next()).to.equal(1);
            expect(counter.next()).to.equal(2);
        });
    });

The chai assert function is also available. We’ll show you an example of sinon in a sec. You can, of course, still use whatever libraries you like.

Refreshing Modules

A common issue with testing nodejs modules is that they are cached and return the same instance for each request. When we add another test for our counter, we don’t want to depend on the first test we wrote being executed first. Enter require.refresh.

    test('counter increments by the supplied argument', function () {
        require.refresh('./counter');
        var counter = require('./counter');
        expect(counter.next(2)).to.equal(2);
        expect(counter.next(2)).to.equal(4);
    });

We would also add the refresh to the first test, or use a test setup method to refresh it for each test.
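Under the hood, a refresh amounts to evicting the module from Node's require cache so the next require() re-executes the file. A plain Node.js sketch (not test-studio's actual implementation):

```javascript
// Evict a module from the require cache so the next require()
// re-executes the file from scratch. The path is resolved relative
// to this module, so absolute paths are safest.
function refresh(modulePath) {
  delete require.cache[require.resolve(modulePath)]
}
```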

Stubbing Modules

Substituting test stubs for dependencies is fundamental to unit testing. test-studio gives you a simple way of achieving this.

    test('modules can be easily stubbed', function () {
        require.stub('./counter', {
            next: sinon.stub().returns(42)
        });

        var counter = require('./counter');
        expect(counter.next()).to.equal(42);
        expect(counter.next.callCount).to.equal(1);
    });

You can even stub out native node modules like fs and http. Stubbed modules are refreshed at the beginning of each test and last for the lifetime of each test.
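Library internals aside, a similar effect can be approximated in plain Node.js by planting a fake entry in require.cache - a sketch only; test-studio's require.stub also takes care of refreshing stubs between tests:

```javascript
// Approximate module stubbing in plain Node.js: plant a fake cache
// entry so subsequent require() calls return the stub instead of
// executing the real file. (Sketch - not test-studio's implementation.)
function stubModule(modulePath, stub) {
  var resolved = require.resolve(modulePath)
  require.cache[resolved] = { id: resolved, exports: stub, loaded: true }
}
```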

Suite Hierarchies

test-studio breaks down your suite names at each dot character and arranges them in a hierarchy, even across separate files.

We haven’t quite finished our tree view for the UI, but it will allow much better organisation of tests and the ability to focus on individual branches of your test suite as you develop.

Known Issues

The current release of test-studio is a very “alpha” release and has not undergone extensive use in many real world applications. We’ve run it against existing suites of tests that use mocha, like mocha itself, express and connect, with an almost perfect pass rate for express and connect.

Occasionally, things can get tied in knots when you are running asynchronous tests that throw exceptions (i.e. fail) with the debugger attached. The “Restart Process” button restarts the process in which the tests actually execute and should restore normal operation. Failing this, restart test-studio.

To see the complete list of issues and planned enhancements, head over to the github issues page for test-studio. Feel free to log new issues with the test-studio label.

Why Did We Create test-studio?

In a nutshell, running and debugging unit tests from the command line sucks.

Having used qunit for unit testing in the browser for some time, we became used to at least a little bit of UI sweetness. An improved UI even just for browser testing was on the radar, and when we saw the state of affairs with nodejs unit testing, it seemed like a perfect application for our new platform, tribe. Much more on this soon!

What’s Coming

We’re always working on test-studio for our own projects and there is some sweet stuff in the pipeline.

Vastly Improved UI

As well as the hierarchical view of tests we mentioned before, we’re working on improving the UI to make working with your tests as easy as possible.

Client Side (Browser) Testing

Get the same experience and debugging power with your client tests as well.

A full functional testing framework is on the horizon with a simple, extensible domain specific language for programmatically driving and making assertions about your UI.

Assertion Counting

Exception based assertions are by far the easiest to implement, but they suffer from a few problems when compared with assertion counting.

  • Execution stops when an exception is raised, meaning the rest of the assertions in a test are not verified.
  • When testing asynchronous code, tests pass if your assertions are never reached. Assertion counting addresses this by expecting at least one assertion by default.
  • Assertion counting gives you better reporting, showing the number of assertions as well as the type and result of each assertion. This can lead to improved flexibility around how you structure your tests.

Cloud9 Integration

We’ve only dabbled in the world of developing in the cloud, but the idea of a cloud hosted IDE for nodejs with all the functionality of a desktop IDE, including integrated testing and debugging, makes us froth at the mouth.

Dependency Graphs

We have a release scheduled for January 2132 that gives you a visual mapping of the dependency tree of your modules. But seriously, this is on the radar.

Other Various Goodies

Lots of other minor improvements and functionality enhancements are on our to do list, and we’re humbly listening to you, our users. If there’s something that would make your development life easier that we might be able to add, or you just want to say hi, give us a holler on twitter @tribejs.

Until Next Time…

This is as much for you as it is for making our own development experience better. Try it out, let us know what we need to take it to the next level.

Thursday, May 02, 2013

knockout.composite v0.2 – Introducing Tribe

*UPDATE* Tribe is here! Check out http://tribejs.com/ for guides and API reference.

We’d love to spend time giving you the juice on the freakin’ amazing new features we’ve been working on, but for now, you’ll have to make do with just a glimpse…

What is Tribe?

Tribe is a platform for building rich, highly interactive applications targeting desktop and mobile devices. It is comprised of a number of components:

  • Composite – The composite JavaScript framework you know and love, completely rewritten and supercharged with new features.
  • MessageHub – A SignalR based message gateway that seamlessly broadcasts messages to any device and publishes commands to internal services via message buses like NServiceBus.
  • Mobile – A set of skinnable panes and utilities for creating high performance, professional quality mobile device apps in minutes
  • Desktop – A set of reusable panes for common requirements such as dialog, grid, graph, tab panel, expander and more.
  • PubSub – A powerful, lightweight publish / subscribe engine with built-in lifetime management.

 

Composite – Concepts Crystallised

Tribe.Composite takes the concepts explored in knockout.composite, formalises the best of them and adds new, much-needed features:

  • Memory management – memory management is fully automated. Subscribe to messages or document events in your models, when the pane is removed from the DOM, everything is cleaned up for you without needing a line of code.
  • Extensible rendering pipeline – each step of the rendering process is now expressed in a separate, loosely coupled unit. Panes have an extensible, formally defined life cycle from initial resource loading through to disposal.
  • Extensible resource loading strategies – when and how panes and other resources are loaded is now completely customisable. Out of the box, you can load panes completely on demand, completely preloaded or by component. We’ll even provide a mechanism for loading sets of panes as AMD components.
  • Hierarchical page structure – Composite tracks the creation and destruction of panes and maintains a hierarchy of “nodes”.
  • Transitions – transitions are now a first class citizen of Composite. Transition any node, pane or element in or out using powerful, hardware accelerated CSS transitions.
  • Sagas – we’re working on creating a neat abstraction over user processes, vaguely modelled around the concept of message bus “sagas”. We’re still working on the best abstraction, but it’s looking sweet with simple syntax and semantics, and automatic state persistence.
  • Much improved everything else – API, testing, testability, extensibility. Everything is 100% better than v0.1!

 

MessageHub – Seamless, Secure Message Distribution

This is the really exciting stuff. MessageHub is a gateway built on Microsoft’s SignalR technology, giving you seamless, scalable and secure channel based communications between any connected device.

MessageHub also allows you to use simple conventions to map client messages to internal message classes and seamlessly authorise and send messages to internal services. We’ll give you an NServiceBus adapter out of the box, and building adapters for other buses is trivial.

MessageHub has built in message record and replay semantics, with both client and server side persistence. Let’s talk later about powerful event sourcing with built in client and server side fault tolerance.

 

End to End, Integrated Mobile and Desktop Platform

Our aim is to make building integrated systems with real time information sharing easy – not just apps. There are a bazillion frameworks out there for building “apps”, but this is just one piece of the puzzle.

Mobile devices are changing everything. The explosion of cheap devices has opened a world of opportunity for capturing data like never before, but the mobile device form factor is not appropriate for every task. Does anyone seriously ever use a word processor on their mobile phone?

The HTML / JS / CSS technology stack has evolved to the point where “traditional” desktop apps built with proprietary technologies offer very few advantages. There is little you can do with a desktop app that you can’t do with a web app. Combine this with the ability to maintain a single code base that targets browsers, iOS, Android and virtually any other device, and it’s unbeatable.

Hybrid mobile apps have a bit of a bad reputation on the performance front, mostly thanks to Facebook’s botched effort at a hybrid app. We are going to bring you a native looking app that is virtually indistinguishable from a native app, even on old hardware. There are already plenty of examples out there, like Touralot (by knockout’s very own Steve Sanderson). Seriously, learning or acquiring skills in Objective C, Java, as well as the HTML/JS/CSS stack is a massive and entirely unnecessary investment.

Tribe is built from the ground up with scalability and performance in mind. 100 users or 1,000,000 users with consistently responsive user experience. We’ll give you some guidance on choosing server side technologies that will support this sort of scale.

But I Ramble…

Less talk, more create.

Details, samples, documentation, all coming soon. We can’t wait to show you.

If you haven’t seen knockout.composite v0.1, check it out here.

Monday, January 21, 2013

PackScript – Better. Because…

*UPDATE* Check out http://packscript.com/ for a complete reference and examples.

What is this PackScript Jiggery?

PackScript is a powerful open source (MIT license) resource build tool designed specifically for combining and minifying JavaScript, HTML and CSS files in modern, single page web applications. It was built from real need, specifically to support knockout.composite, and is focused on taking all of the pain out of managing the resources for your web app. You can find it on GitHub.

PackScript is the next generation of build tools for the web and handles every aspect of building and optimising your web resources. Say goodbye to painful optimisation and to managing code for your debug, test and production environments, and say hello to the optimum debugging experience in your browser. PackScript leaves you to do what you do best – code.

Next Generation Build? What makes PackScript better than everything else out there?

First of all, PackScript is much more than a minification tool. It actually delegates the task of compressing resources to separate tools. Microsoft AjaxMin is used by default, but adding support for others is simple. YUICompressor, Closure Compiler, uglifyjs, jsMin and minify are other tools that fall into this category.

So PackScript handles “bundling” together a bunch of files and compresses them. There are a number of existing tools out there that do this…

Server-side Solutions

A significant number of these tools rely on a server side component to dynamically combine and minify your resources. ASP.NET bundling, Carabiner, SquishIt and bundlejs are all examples of this kind of build tool. Generally, they do the job quite well, but they have some drawbacks.

The most obvious issue is that these solutions require a server side component, adding a dependency, tying you to a platform and consuming your valuable CPU cycles. Modern, single page web applications perform rendering tasks on the client, relying on server side processes purely for process and data access. Adding a dependency like this just adds unnecessary complexity.

PackScript produces static resources, as you code and as part of your build process, that can be deployed to any high-performance web server, anywhere, even to a global CDN.

The other not so obvious issue is one of debugging. These tools generally present your debugger with one behemoth file; a less than desirable experience. PackScript allows your files to be packaged in a way that splits your files back up and gives you the debugging experience you need. Even if you are using a server side rendering framework, you can still benefit from PackScript.

Client-side Solutions

Client side tools are not really combining and minifying tools, they optimise the loading of multiple files asynchronously. headjs and labjs are examples of these tools. PackScript will happily coexist with these sorts of tools if you really want to optimise your loading times.

At some point I intend to create an extension that will split your resources into a number of equal size chunks, load them with one of the above tools and piece them together on the client. I’m expecting this to have a pretty significant impact on load times!

JavaScript Frameworks

If you’re using a module system like CommonJS or a framework like Dojo, they often come with their own set of resource management tools. Use them!

Having said this, PackScript can still offer you some benefits. Read on, and check out the features that PackScript offers.

RequireJS / AMD

RequireJS is arguably the best resource management build tool out there (until now…), popular enough and similar enough to PackScript that it deserves its own section.

RequireJS essentially allows you to declaratively define dependencies in your code. Dependencies can consist of individual JavaScript files or you can define named modules that can themselves have dependencies. Modules implement the Asynchronous Module Definition API. RequireJS will ensure that all dependencies (including nested dependencies) have been fulfilled.

AMD is a good standard that is gaining popularity. It takes a lot of the pain out of managing complex dependency trees, and RequireJS minifies your resources along the way. We think PackScript is better for many situations, and can actually coexist with an AMD approach.

RequireJS has two main problems: it’s intrusive, and it introduces a degree of friction.

Intrusive

By intrusive, I mean that you have to write code in a specific way to satisfy RequireJS; modules must be defined and dependencies must be declared using the AMD API. This is not all bad, enforcing a consistent approach to this sort of stuff is not a bad idea, but it introduces a direct dependency on a third party product and unnecessarily ties you to an API.

PackScript manages your dependencies externally to your code and builds static, optimised files at build time. None of the resource management code ever makes it into your running application.

Friction

Friction is anything that consumes your time, most often repetitive and error-prone tasks – a “force resisting relative movement”. As an example, creating an optimised set of resources with RequireJS involves creating a configuration file and executing the optimiser against it. We also need to make sure the build process includes the creation of this resource set, which means updating the build configuration for each environment.

This sort of friction can (and should) usually be eliminated with a custom solution: create an executable that scans our project folder for build profiles that conform to a specific convention and executes the optimiser against them with a common set of arguments.

PackScript is focused around eliminating friction. It builds your application in the background as you work and is designed to be easily integrated with any automated build process. In most cases, you can just add your resources to the project and move on. If not, add a simple configuration file and PackScript does the rest, even as you code.

Flexibility and Extensibility

PackScript also offers a great deal more flexibility and extensibility than RequireJS.

Don’t like the API? Change it.

Doing the same configuration over and over? Abstract it out.

Optimising the buggery out of your resources? Load multiple modules from one combined script at key points in the application lifecycle, or in the background after loading a set of bootstrapper resources.

Got a stack of icons that are loaded individually? Combine them into a single file and generate a CSS map for them. Add an icon to your project and it’s there. Frictionless.

PackScript works with any static resource and is easily extensible with either JavaScript or .NET. Combine this with the declarative dependency management of knockout.composite, and you get the best of it all.

What Are You Waiting For?

Head over to the feature overview for PackScript, check out an example, have a look at the core unit tests and integration tests, download it from github and have a play! It’s worth noting there are a number of known issues logged on github.

What do you think? Have I missed anything? Is there another solution that’s not mentioned here?

Thursday, January 17, 2013

PackScript – An Example - Freedom From Script Tags!

*UPDATE* Check out http://packscript.com/ for a complete reference and examples.

We’re guessing you got here from the last post. If not, it’s probably worth reading for a quick intro, but hey, you’re smart, you’ll probably pick it up as we go along.

This example demonstrates how to use configuration files to define script and stylesheet outputs in both debug and production forms, create reusable functions and use templates.

As promised, we’re going to show you how we can work on our code in separate files and be able to edit, create, move etc. these files on the fly and have full debugging experience without needing to ever manage script or stylesheet references.

Chrome is currently the only browser that supports this technique using the sourceURL tag. FireBug claims to, but I haven’t been successful. You do use Chrome for web development, right?

The Sample

We’re going to build a silly little text animation thing. Head on over to the github repository and have a quick squiz at what’s there… Oh. Couldn’t be bothered? OK, well, it looks something like this.

PackScriptSample-files

At least that’s what it looks like in VS2012.

The Build folder is where our combined files are going to end up. They’re not included in the solution; otherwise, search operations in Visual Studio return duplicate results. It will get generated as part of your build process, anyway!

The Tools folder contains a copy of the PackScript assemblies.

The Source folder isn’t really that interesting either. It’s got jQuery, some CSS, a controller file that handles a button click and a simple animate function. Enough to demonstrate what we’re trying to get at.

The HTML files contain a simple page from which to trigger our animation. They contain duplicate markup, the solution to which lies with our friend knockout.composite.

The Interesting Bits

Firstly, let’s have a look at the PackScript config file, pack.js.

pack({
    to: 'Build/site.js',
    include: 'Source/*.js',
    prioritise: 'jquery.js',
    template: 'debug'
});

pack({
    to: 'Build/site.min.js',
    include: 'Source/*.js',
    prioritise: 'jquery.js',
    minify: true
});

pack({
    to: 'Build/site.css',
    include: 'Source/*.css'
});

A little bit going on here, but pretty easy to follow.

First up, build site.js from all the JavaScript in the Source folder, put jQuery first and apply some template called ‘debug’ to them. Didn’t I see some debug template thingy before?

The production version, site.min.js, is pretty similar, except that rather than applying the template, we minify it all. We could have minified the CSS too, but you get the idea.

The Template

So it looks like there are two parts to the template. Since you’ve read the first part, you can already guess what each file is for. debug.template.js:

window.eval("<%= prepareContent(content, pathRelativeToConfig) %>");

Looks suspiciously like a very short template. What’s this prepareContent stuff?

this.prepareContent = function (content, path) {
    return content
        .replace(/\r/g, "")                 // strip windows carriage returns
        .replace(/\\/g, "\\\\")             // escape backslashes first
        .replace(/\n/g, "\\n")              // replace literal newlines with escape sequences
        .replace(/\"/g, "\\\"")             // escape double quotes
        + "\\n\/\/@ sourceURL="             // append the sourceURL tag
        + path.toString().replace(/\\/g, '/');  // normalise path separators
}

Ahh. Cool, we can put reusable functions into these “configuration” files, like debug.pack.js.

Note we attach our prepareContent function to the “this” object. PackScript restricts the scope of execution of each file to help avoid conflicts, so to make something available globally, we need to explicitly attach it to the global object. Having said this, when you write these sorts of functions, organise them into pseudo-namespaces. Do it. Thank me later.

So what is actually going on here? The template wraps the script content in a call to window.eval. This gives us the chance to inject some content – in this case, a neat little tag that Chrome will interpret. The rest of the escaping is just there to make the content play nicely inside an eval statement.
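
To see exactly what that escaping does, here is a standalone copy of the same logic applied to a made-up one-line source file (the content and path are purely illustrative):

```javascript
// Standalone version of the prepareContent logic from the template above.
function prepareContent(content, path) {
    return content
        .replace(/\r/g, "")          // strip windows carriage returns
        .replace(/\\/g, "\\\\")      // escape backslashes first
        .replace(/\n/g, "\\n")       // turn real newlines into \n escape sequences
        .replace(/"/g, '\\"')        // escape double quotes
        + "\\n//@ sourceURL="        // append the sourceURL tag
        + path.replace(/\\/g, '/');  // normalise path separators
}

var escaped = prepareContent('var greeting = "hi";\n', 'Source\\controller.js');
// escaped is now a single line, safe to embed inside: window.eval("<escaped>")
```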

The other magic variables used in the template are passed to the template engine by PackScript. Check out the github readme for an up to date list of the built in variables.

The Result?

You can see the “debug” version here and the “production” version here. Open DevTools for the debug version and you can see our scripts nicely in the source list.

PackScriptSample-debugger

The lack of an ‘S’ on Source is a tad conspicuous (DevTools bug?), but other than that, everything looks as expected. This might seem trivial with only three files (one of which you should probably load from a CDN anyway), but when you have hundreds of source files, it is sweetness to see them arrayed out in neat organisation.

Let’s add a file without adding a script reference.

PackScriptSample-debugger2

Cool. Looks like it works. Breakpoints even play the game. The production version works pretty much as you would expect.

So there you have it! Nicely separated code that you can mess with to your heart’s content, debug and production versions, combined CSS, and all without ever having to touch a script tag again*. Feels good.

* - OK, maybe occasionally.

Just To Make It Perfectly Clear…

None of this sample is necessarily “best practice”. It is presented purely to demonstrate some of the power of PackScript. knockout.composite does use a similar technique for this issue, but has a relatively sophisticated resource management system that takes all sorts of pain away.

PackScript is fully functional and under active development. I already have a list of planned enhancements as long as my arm (did someone say “source maps”?), so dip your toes in, give it a try, we think you’ll love it!

In the not-too-distant future, we will cobble together some sort of reference site that shows how to get some serious power out of knockout.composite and PackScript and how it fits very nicely with a SOA flavoured system.

In the meantime, we will be putting together some more posts on how it’s built (JavaScript core running in a Windows console), extensibility and some other advanced stuff.

What would you like to hear about most?


Tuesday, January 15, 2013

PackScript – Next Generation Build for the Web

*UPDATE* Check out http://packscript.com/ for a complete reference and examples.

What?

PackScript is a powerful open source (MIT license) resource build tool designed specifically for combining and minifying JavaScript, HTML and CSS files in modern, single page web applications. It was built from real need, specifically to support knockout.composite, and is focused around taking all of the pain out of managing all of the resources for your web app. You can find it on github.

If you’re pretty familiar with these sorts of tools, you might want to have a look at the post on a few reasons why we believe PackScript is better than all current alternatives.

The Story

Too much of my life as a web developer has been spent managing the multitude of JavaScript, HTML and CSS for whatever current project. Keeping track of where that JavaScript function is. Making sure script and link tags reference the right files from multitudes of scripts and stylesheets. Or worse… Keeping all of the JavaScript for a project in one behemoth file in a team environment.

Now remember we need to have different sets of files for each of our development environments. Facilitate debugging in the dev environment, minify but include some debugging or tracing stuff in the test environment, supercharge everything in production. Ok, that’s not so bad.

But wait… Our mobile app uses some (but not all) of the stuff from our desktop site, but needs some custom code to do all that “responsive” bollocks.

To top all this off, our JavaScript codebase is getting pretty big. We could combine it all into one massive file and load it up front, but it would be even better if we could load largish chunks of it as the user hits various parts of the app.

My head hurts. No more!

Powerful and Flexible Configuration API

PackScript is written primarily in JavaScript and runs in an instance of the Google V8 engine, courtesy of the excellent Noesis.Javascript library. It exposes a powerful and flexible JavaScript API for specifying input and output files and other options.

pack({
    to: 'site.min.js',
    include: '*.js',
    recursive: true,
    minify: true
});

PackScript scans the target folder recursively and, by default, picks up any file that ends with ‘pack.js’ and treats it as a configuration file. These files are then executed within the V8 engine.

The fact that the configuration is live JavaScript and not just static text allows us to do some pretty cool things.

Convention Over Configuration

PackScript allows you to define powerful conventions about the location and naming of your files. Tell your devs where to put and what to call your script files and forget about those script tags and AJAX calls.

GlobalExcludes = ['*.debug.js', 'admin.js'];
pack({
    to: 'site.js',
    include: '*.js',
    exclude: GlobalExcludes
});

The GlobalExcludes variable can be defined in a configuration file in another location and used throughout your project. You can also create reusable functions and extend the API to your liking by creating custom “transforms”; each property of the object passed to the pack method is a transform, like ‘include’ and ‘minify’.
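
As a concrete illustration, a shared configuration file could define a helper that packages up common options. The `scripts` helper below is a made-up name for this sketch, not part of the PackScript API:

```javascript
// Hypothetical helper in a shared config file; attached to "this" so it
// is visible to the other configuration files PackScript executes.
this.scripts = function (name) {
    return {
        to: name + '.min.js',
        include: '*.js',
        exclude: GlobalExcludes,  // defined in the example above
        minify: true
    };
};

// An individual pack.js file elsewhere in the project then reduces to:
// pack(scripts('site'));
```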

We will have a reference site for knockout.composite and PackScript released soon that will show you some much more complex examples.

Painless, Seamless Debugging

PackScript is not fussy about your IDE and provides a memory-resident mode where file changes are detected and the appropriate outputs updated automatically. Keep your code neatly split up in separate files, edit and restructure them, refresh your browser and each file is there in your debugger, no messing with script references, all loaded using a single HTTP connection. Isn’t this how it should work?

The example in the next post will show you how you can keep your code in separate files and retain full debugging support (at least in Chrome) without needing to maintain script references. A production-ready script is produced at the same time.

Templates

PackScript gives you powerful and extensible templating for your content files. Out of the box, it provides support for underscore.js templates, but you can easily plug in support for other engines like handlebars, mustache or jsRender. More on this later.

By default, the target folder is scanned recursively for files with an extension of ‘.template.*’. A template for wrapping script files in HTML script tags might be called wrapScript.template.html and might look something like:

<script type="text/javascript">
<%=content%>
</script>

These templates can then be referenced in your pack configurations using the filename without extension.

pack({
    to: 'scripts.htm',
    include: '*.js',
    template: 'wrapScript'
});

Templates can be applied to any file type. A much more useful application of templates is explored in the next post.

Minification

Minification is provided out of the box courtesy of the Microsoft AjaxMin library and can be applied to any or all of your script and stylesheet files by setting the minify option to true. AjaxMin gives quite good compression for zero effort and exposes a .NET interface, so it was a good choice for the first supported minifier. Support for Closure Compiler and others is coming, and plugging in your own is easy.

Prioritisation

Simple prioritisation is offered by passing a file name or array of file names to the prioritise option. The files passed will be combined first in the order specified. This is enough to satisfy most scenarios, but will be expanded to allow for wildcards, etc.
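
For example, a configuration that guarantees the core libraries are combined first might look like this (the file names are illustrative):

```javascript
pack({
    to: 'Build/site.js',
    include: '*.js',
    recursive: true,
    prioritise: ['jquery.js', 'knockout.js']  // combined first, in this order
});
```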

Extensibility

PackScript offers a great deal of extensibility. Reusable functions will cater for many situations, and custom transforms can easily be created to extend the configuration API.

APIs implemented in .NET can also be exposed within JavaScript, again thanks to the awesome Noesis.Javascript library. The file access, minification and logging functions are implemented in this way. A simple marker interface is provided for this, more in a future post! If you’re interested in learning more, leave a comment or drop me a tweet.

What’s Coming?

Some of the extensions coming include:

  • SASS integration,
  • source map integration,
  • build information (build configuration, version, etc),
  • automatic checkout of files from TFS (though it’s arguable that it would be a better option to switch to git than to use it),
  • combining images and icons into a single file with CSS map generation,
  • HTML minification.

A Cool Trick

The console PackScript application also serves as a JavaScript console where you can interrogate configuration, test functions, etc. This feature is provided by a small wrapper for the Noesis.Javascript library - Noesis.Javascript.Extensions. The wrapper also adds support for dynamic invocation and return types to the JavaScript engine.

> Pack.options
{
  "configurationFileFilter": "*pack.config.js",
  "packFileFilter": "*pack.js",
  "templateFileExtension": ".template.*",
  "logLevel": "debug"
}
>

Until Next Time…

I hope this gives you a reasonable idea of the capabilities of PackScript. It’s worth noting at this point that this is a very early release of the product; while it is well tested, there may be some reliability issues, and many more features are on the way.

You can see a more detailed list of configuration options at the github repository. The unit tests and integration tests are also good sources of information.

Head on over to the next post for a neat practical application!
