We needed to build an inventory system, one free from the restrictions of our legacy system, which could only describe automotive, agricultural, and recreational inventory for dealerships across the United States. We wanted to build a system that could describe any piece of inventory: from cars to carpets, from houses to job listings. The process started with our database structure and maintenance areas. Then came a REST API to give us a clean separation of concerns. Once that was in place, we needed an interface for our sellers to actually manage that inventory. That interface is the AutoConX Inventory Manager, which we call AIM.
This is the story of the project architecture for AIM.
The beginning of most new projects is an empty IDE or text editor. It can be as intimidating as the blank sheet of paper you use to start a novel. The first choice I had to make was how to organize the project. A common tradition prior to then was for the project root to equal the web root, with some files excluded from deployment. I knew that this project would benefit from a processing step. Our source folder would house all our project assets and code in a form that was developer-friendly, and our distribution folder would house the production-friendly versions of those files. I decided to create a `src` folder and a `dist` folder.

Should we commit our `dist` folder? After thinking long on it, I decided “yes.” We have a small team, and this was a new project architecture for us. Our deployment process is far from automatic, and I worried about how others would handle deployments if I wasn’t there. What if they don’t have any of the tooling installed? If the distribution folder is in source control, we’ll always have versions of production-ready code that can be deployed as long as someone can get to our Bitbucket server. While the decision has produced headaches (especially when minified files are involved in complicated rebases), I still think it was the best choice for our team at the time.
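Under that split, the top of the repository looks roughly like this (everything beyond `src` and `dist` is illustrative, based on the tooling mentioned elsewhere in this piece):

```
aim/
├── src/            # developer-friendly source: unminified assets and CFML
├── dist/           # production-ready build output, committed to source control
├── package.json    # npm project definition and task shortcuts
└── gulpfile.js     # build and deploy tasks
```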
Why an npm project as a CFML app? I’ve fallen in love with Node-based tooling, so setting the project up with npm seemed a no-brainer. We’d definitely be using a Node-based task runner (see below), and we would need a `package.json` file to list all the plugins. There is a CFML CLI called CommandBox that has a sort of package definition component to it, but that project is still maturing; it wasn’t quite right as the de facto package manager for AIM. The two products work very well in tandem, though, and using `npm run-script` has simplified the CommandBox startup process by giving us shortcuts for launching and configuring our servers.
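As a sketch of what those shortcuts might look like, here is a hypothetical `scripts` section (the script names and server config filenames are my illustration, not the actual project’s):

```json
{
  "scripts": {
    "start": "box server start serverConfigFile=server.json",
    "start:tests": "box server start serverConfigFile=server-tests.json"
  }
}
```

With this in place, `npm start` boots the main CommandBox server without anyone needing to remember the underlying `box` invocation.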
How do you handle deployment? Our deployment process relies on one guy copying files from development to production. It’s antiquated, but it’s what works for us. Pull requests are essential to that process being frictionless, but there is one caveat: the development server still needs the correct files. It was easy to forget to deploy the project to dev, and I knew it was time to figure out how to make Git hooks work for us. After playing with traditional techniques for a while, I settled on using Husky. It was easy to add to the project, and I could define my hooks in Gulp and commit them. The `precommit` hook runs CFML and visual regression tests, and the `prepush` hook also calls the deploy task.
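Older versions of Husky wire hooks up through `package.json` scripts, so a setup like the one described might look something like this (the Gulp task names are assumptions for illustration):

```json
{
  "scripts": {
    "precommit": "gulp test",
    "prepush": "gulp test && gulp deploy"
  },
  "devDependencies": {
    "husky": "^0.14.0",
    "gulp": "^3.9.0"
  }
}
```

Because the hook definitions live in `package.json` and the tasks live in the committed Gulpfile, every developer who runs `npm install` gets the same hooks automatically.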
What other project-wide tools are in use? We use CommandBox’s server.json specification to define CFML dependencies and server configuration details. There is one for the web root (the `dist` folder) and one for the project root that serves as a test-runner server. We also use EditorConfig to keep coding style consistent across developers, which is better than our previous technique of simply yelling a lot during code reviews.
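For the web-root server, a minimal server.json pointing CommandBox at the `dist` folder could look like this (the name and port are placeholder values, not the project’s actual settings):

```json
{
  "name": "aim",
  "web": {
    "webroot": "dist",
    "http": {
      "port": 8080
    }
  }
}
```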