Once AIM was built, we needed a product to display the inventory it managed. The AutoConX Vertical (Responsive), or AVR, is a white-label product that lets publishers (newspapers or magazines) list inventory from sellers in their area. It’s a digital classifieds system that offers a lot of customization and flexibility.
This project was trickier than AIM because AIM was a brand-new product. AVR, however, had to be a modern, responsive site that met all the publishers’ expectations from the legacy product. Publishers wouldn’t switch unless they saw real value in the new system. We had to build a product we would put our own products on.
This is the story of the project architecture for AVR.
Note! If you’ve read my post about AIM Project Architecture, this will all be a repeat. Feel free to skip ahead to the other sections of the AVR architecture.
The beginning of most new projects is an empty IDE or text editor. It can be as intimidating as the blank sheet of paper you use to start a novel. The first choice I had to make was how to organize the project. The common tradition up to that point was for the project root to equal the web root, with some files excluded from deployment. I knew this project would benefit from a processing step, so I decided to create a `src` folder and a `dist` folder: the source folder would house all our project assets and code in a developer-friendly form, and the distribution folder would house the production-friendly versions of those files.

Should we commit our `dist` folder? After thinking long on it, I decided “yes.” We have a small team, and this was a new project architecture for us. Our deployment process is far from automatic, and I worried about how others would handle deployments if I wasn’t there. What if they didn’t have any of the tooling installed? If the distribution folder is in source control, we’ll always have production-ready code that can be deployed as long as someone can get to our Bitbucket server. While the decision has produced headaches (especially when minified files are involved in complicated rebases), I still think it was the best choice for our team at the time.
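The resulting split might look something like this (a hypothetical sketch of the layout; the actual folder contents surely varied):

```
project-root/
├── src/            # developer-friendly assets and code (committed)
├── dist/           # processed, production-ready output (also committed)
└── package.json    # npm dependencies and run-scripts
```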
Why set up an npm project for a CFML app? I’ve fallen in love with Node-based tooling, so setting the project up with npm seemed a no-brainer. We’d definitely be using a Node-based task runner (see below), and we would need a `package.json` file to list all the plugins. There is a CFML CLI called CommandBox that has a sort of package-definition component to it, but that project is still maturing; it wasn’t quite right as the de facto package manager for AIM. The two products work very well in tandem, and `npm run-script` has simplified the CommandBox startup process by giving us shortcuts for launching and configuring our servers.
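As a sketch of how `npm run-script` can wrap CommandBox startup (the script names and config filenames here are my assumptions, not the project’s actual values):

```json
{
  "scripts": {
    "server:web": "box server start serverConfigFile=server-web.json",
    "server:tests": "box server start serverConfigFile=server-tests.json"
  }
}
```

With something like this in place, `npm run server:web` launches a fully configured server without anyone needing to remember CommandBox flags.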
How do you handle deployment? Our deployment process relies on one guy copying files from development to production. It’s antiquated, but it’s what works for us. Pull requests are essential to keeping that process frictionless, but there is one caveat: the development server still needs the correct files. It was easy to forget to deploy the project to dev, and I knew it was time to figure out how to make git hooks work for us. After playing with traditional techniques for a while, I settled on Husky. It was easy to add to the project, and I could define my hooks in Gulp and commit them. The precommit hook runs CFML and visual regression tests, and the prepush hook also calls the deploy task.
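At the time, Husky (pre-1.0) read its hooks straight from `package.json` scripts, so wiring this up could look roughly like the following (the Gulp task names are assumptions based on the description above, not the project’s real tasks):

```json
{
  "scripts": {
    "precommit": "gulp test",
    "prepush": "gulp test && gulp deploy"
  },
  "devDependencies": {
    "husky": "^0.14.0"
  }
}
```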
What other project-wide tools are in use? We use CommandBox’s `server.json` specification to define CFML dependencies and server configuration details. There is one for the web root (the `dist` folder) and one for the project root that serves as a test-runner server. We also use EditorConfig to keep coding style consistent across developers, which beats the previous technique of simply yelling a lot during code reviews.
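A minimal `server.json` for the web-root server might look like this (the name, port, and engine are illustrative guesses, not the project’s real settings):

```json
{
  "name": "avr-web",
  "web": {
    "webroot": "dist",
    "http": { "port": 8080 }
  },
  "app": {
    "cfengine": "lucee"
  }
}
```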