Building AIM, Part 3: JavaScript Architecture

We needed to build an inventory system, one that was free from the restrictions of our legacy system. We wanted to build a system that could describe any piece of inventory: from cars to carpets, from houses to job listings. We needed an interface for our sellers to actually manage that inventory. That interface is the AutoConX Inventory Manager, which we call AIM.

This is the story of the JavaScript architecture for AIM.

I’m a strong proponent of progressive enhancement. AIM was built with that principle in mind, so none of the core functionality on the site relies on JavaScript to work. Some of the features, like the Camera, get major enhancements with JavaScript, but we’ve provided reasonable ground-floor experiences for devices lacking proper support.

How is the JS organized? Our scripts are broken into modules that (optimally) fulfill one task. We use RequireJS to asynchronously load those JavaScript modules. Some pages in AIM require no JavaScript at all, and on those pages we don’t even include a script tag. On pages that do get enhancements, a small “handler” script runs and loads the modules that page needs. The general pattern is to search for a CSS class prefixed with `js-` and to require the additional modules only if the selector exists.
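
As a rough illustration of that handler pattern (the module names and selectors here are hypothetical, not AIM’s actual ones), it might look something like this:

```js
// Page handler: load enhancements only when the page opts in with a js- hook.
// Module names and selectors are illustrative.
require(['jquery'], function ($) {
  if ($('.js-camera').length) {
    require(['modules/camera'], function (Camera) {
      Camera.init($('.js-camera'));
    });
  }

  if ($('.js-autocomplete').length) {
    require(['modules/autocomplete'], function (Autocomplete) {
      Autocomplete.init($('.js-autocomplete'));
    });
  }
});
```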

One of the most complex bits of functionality is the Camera. We wanted to give our users a first-class experience when adding photos to their inventory. The baseline experience of Camera is a file input field. The enhanced Camera ties together three other modules: Support, ViewFinder and CameraRoll. Support checks whether the browser supports APIs like FileReader and Canvas. The Camera then watches for changes to the file input. When a photo is taken or selected, the data is passed to the ViewFinder module, which figures out how to render the image onto a canvas element on the page and ensures that its orientation is correct. The CameraRoll lets multiple images be previewed and deleted, and determines whether additional photos can be added (based on a max-images property). CameraRoll also determines where the data for the processed images should be posted when the form is submitted. The page would be far too large to post in the traditional way, so we make multiple XHR POSTs instead. The Camera orchestrates all of these parts and then submits the page once every image has been posted.
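
To make that flow concrete, here is a much-simplified sketch of the orchestration. The module APIs shown (`Support.has`, `viewFinder.render`, `cameraRoll.add`, `cameraRoll.canAddMore`) are assumptions for illustration, not AIM’s actual interfaces:

```js
// A condensed sketch of the enhanced Camera module. The Support, ViewFinder
// and CameraRoll APIs used here are assumed, not the real AIM interfaces.
define(['jquery', 'modules/support', 'modules/viewfinder', 'modules/cameraroll'],
function ($, Support, ViewFinder, CameraRoll) {

  function Camera($input, options) {
    // Bail out to the baseline file input if the needed APIs are missing.
    if (!Support.has('filereader') || !Support.has('canvas')) { return; }

    var viewFinder = new ViewFinder($('.js-viewfinder'));
    var cameraRoll = new CameraRoll({ maxImages: options.maxImages });

    $input.on('change', function (e) {
      var file = e.target.files && e.target.files[0];
      if (!file || !cameraRoll.canAddMore()) { return; }

      var reader = new FileReader();
      reader.onload = function (evt) {
        // ViewFinder draws the photo onto a canvas and corrects orientation.
        viewFinder.render(evt.target.result, function (processed) {
          cameraRoll.add(processed);
        });
      };
      reader.readAsDataURL(file);
    });
  }

  return Camera;
});
```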

How do you test your code? Our JavaScript testing story is a bit more harrowing than our CFML one. In some respects, we didn’t need many tests. Many modules simply added DOM-manipulation enhancements for a better user interface. Those items didn’t need any testing, and I didn’t give it much thought. That changed as the Camera grew in complexity and importance. I knew I needed a better idea of how well the parts worked, especially as I made enhancements. I chose QUnit because jQuery was so much a part of our codebase. The tricky thing in scaffolding the tests was getting everything to load in the right order: RequireJS, the spec, jQuery, QUnit and then the individual modules to test against. Once in place, there are very few things as psychically gratifying as a large number of tests turning green.
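
A rough sketch of that scaffolding, assuming hypothetical paths and an example assertion rather than our actual spec, might look like this:

```js
// Test bootstrap sketch: configure RequireJS, load jQuery and QUnit first,
// then the modules under test, then start the run. Paths are illustrative.
require.config({
  baseUrl: '/scripts',
  paths: {
    jquery: 'vendor/jquery',
    qunit: 'vendor/qunit'
  }
});

require(['jquery', 'qunit'], function ($, QUnit) {
  // Hold the run until the modules and specs are in place.
  QUnit.config.autostart = false;

  require(['modules/cameraroll'], function (CameraRoll) {
    QUnit.module('CameraRoll');

    QUnit.test('stops accepting photos at the max-images limit', function (assert) {
      var roll = new CameraRoll({ maxImages: 2 });
      roll.add('photo-1.jpg');
      roll.add('photo-2.jpg');
      assert.strictEqual(roll.canAddMore(), false);
    });

    QUnit.start();
  });
});
```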

How do you process your JS files? We write browser-compatible JavaScript. The Gulp file minifies the JavaScript files, but we have no need of transpiling. When possible, we specify CDN versions of vendor scripts (like jQuery and various plugins), but when those links are unavailable, we fall back to locally-hosted versions in a `_backup` scripts folder. This hopefully adds up to the best JavaScript experience for our users.
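
One way to express that fallback is RequireJS’s array-of-paths feature, roughly like the sketch below; the specific CDN URL, version and local path are illustrative:

```js
// Vendor scripts come from a CDN first; if that request fails, RequireJS
// tries the next path, which points at the locally-hosted _backup copy.
require.config({
  paths: {
    jquery: [
      'https://code.jquery.com/jquery-1.11.3.min',
      '_backup/jquery-1.11.3.min'
    ]
  }
});
```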