Do you work on a project where building, testing, and deploying code are all done manually? Why is that? In my experience, teams don't automate because they lack either the incentive or the knowledge. Lacking incentive is a poor excuse: an honest look at the return on investment of a CI/CD process shows plenty of incentive. If your DevOps process is not automated with full integration testing, it is not really complete.
There is a wise saying: "You have to slow down to speed up." In other words, a little investment up front pays dividends over and over. Centrally coordinating CI/CD helps an agile team iterate more rapidly. Productivity goes up because of the automation, and team downtime shrinks because integration bugs no longer spread across the rest of the team. Catching bugs in production is far more expensive in time and money, so there are huge savings to be had here.
Ok, now that you have the incentive, let's assume it really comes down to knowledge. After reading this, you will be informed and will have no more excuses preventing you from implementing CI/CD.
Infinite number of paths
Doing CI/CD can of course be a massively complex undertaking. No one article can tell you everything you need to know, and no one solution works for all projects. We will only scratch the surface, and only for a Node.js project in a very narrow niche. We also narrow our focus by relying on PaaS rather than IaaS for our full-stack hosting approach.
The nice thing about PaaS is that you will not need to worry about low-level machine provisioning and teardown. PaaS is really convenient for testing and deployment: many PaaS resources support staging, and others are easy to spin up and take down at a small cost.
If you choose not to set up a complete CI/CD tool, you can still automate all of this on your own and launch things manually. For example, you could implement a series of Gulp tasks that do much of what a CI/CD system does. A future post will explore using Gulp for smaller teams that don't need the overhead of a bigger CI/CD system.
You must have developer tested code
As a preparatory step, a developer will have prepared some code and privately tested the new feature before submitting it to the CI/CD process. The testing should be as thorough as possible in the context of the rest of the integrated system. If that is not possible, the feature can be proven as an isolated unit, perhaps with stubbed-out or mocked functionality injected.
Once the developer has completed their own private verification, they can check in their code, which is then staged to go into the pipeline for consideration. If you are using Git and GitHub, the code would be pushed to the branch you designate for triggering CI/CD. You would not push it into the master branch; master should hold only the code that is running in production.
CI/CD is all about increasing your iteration speed and the quality of everything written. Of course, you have to provide high-quality, comprehensive test suites to achieve this. Once your code is ready to commit, the CI/CD pipeline workflow takes place. So what does the cycle look like? Let's go through each of the steps. Here is a simple diagram showing the parts and steps that make up a simple Node.js CI/CD system:
Step One: Build
Once the code is committed, the code to be verified needs to be staged into the overall application. CI/CD tools can listen to GitHub and kick off immediately upon seeing a code push. The idea is that the CI/CD system sees the push and does a local clone of the branch, so you have a freshly integrated code version to work from in the CI/CD system.
Step Two: Deploy to staging and run test suites
Comprehensive testing is central to making this whole CI/CD flow work. This step runs a thorough suite of tests at all levels: unit, integration, feature, load, performance, and UI automation testing. A perfect pass is expected, and the run should generate reports showing that all went well, along with code coverage reporting and memory leak detection.
A deployable package for the Node.js app needs to be created first in order to test it. For AWS, this means creating a zip file and uploading it through the AWS CLI. For Azure, you could use the MSBuild executable to package and deploy from the command line. In either case, the deployment goes to a staging slot that is identical to production, except that it might not have all the load balancing and scaling capability. If your team favors TIP (Testing in Production), you could deploy to a reserved hidden portion of your production environment.
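As a sketch of the AWS path, a small helper can build the zip-and-upload commands that the CI/CD step would shell out to. The bucket, application, and file names are hypothetical placeholders, and this assumes an Elastic Beanstalk-style deployment via the AWS CLI:

```javascript
// Builds the shell commands a CI/CD step could run to package a Node.js
// app and register it with AWS Elastic Beanstalk via the AWS CLI.
// All names here are hypothetical placeholders.
function buildDeployCommands({ appDir, bucket, appName, versionLabel }) {
  const zipFile = `${versionLabel}.zip`;
  return [
    // Zip the application source (excluding node_modules is common,
    // since dependencies are reinstalled on the target).
    `cd ${appDir} && zip -r ${zipFile} . -x "node_modules/*"`,
    // Upload the bundle to S3.
    `aws s3 cp ${appDir}/${zipFile} s3://${bucket}/${zipFile}`,
    // Register the new application version.
    `aws elasticbeanstalk create-application-version ` +
      `--application-name ${appName} ` +
      `--version-label ${versionLabel} ` +
      `--source-bundle S3Bucket=${bucket},S3Key=${zipFile}`,
  ];
}
```

Each command would be run in sequence with something like `execSync`, halting the pipeline on the first failure.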
For my Node.js project, I used the following to build and deploy to my Azure Web App:
msbuild <solution file name>.sln ^
  /t:<name of project in your solution> ^
  /p:PublishProfile=<name of publishing profile> ^
  /p:Password=<password from your .PublishSettings file>
With the Node.js application staged, the CI/CD can proceed with the testing. One thing to remember is to also have a dedicated, always-available test database hosted in PaaS. You do not need to continually redeploy it if it simply holds the data and nothing needs to be preconfigured. You can run a cleanup script as part of this CI/CD step if necessary, along with a pre-population script if you require certain documents to be in place. PaaS offerings for MongoDB, DocumentDB, and many other document-based databases are available.
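For the cleanup and pre-population step, a small seed script is often all that is needed. This sketch only builds the seed documents (the collection, IDs, and fields are hypothetical); the comment shows where a real call through the `mongodb` driver would go:

```javascript
// Hypothetical seed documents the test suite expects to find.
function buildSeedDocuments() {
  return [
    { _id: 'user-1', name: 'Test User', role: 'admin' },
    { _id: 'user-2', name: 'Another User', role: 'member' },
  ];
}

// In a real cleanup/seed step you would connect with the mongodb driver:
//   const { MongoClient } = require('mongodb');
//   const db = (await MongoClient.connect(uri)).db('ci_test');
//   await db.collection('users').deleteMany({});      // cleanup
//   await db.collection('users').insertMany(buildSeedDocuments());
```

Keeping the seed data in a function like this means the same fixtures can be reused by local test runs and by the pipeline.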
As part of the test process, test failures need to be captured and debuggable. The CI/CD system can send out status emails at any point to tell you what is happening. If the testing fails, you are notified and the CI/CD process proceeds no further. You would also want failures automatically entered into an issue tracking system so they are officially tracked and resolved.
Step Three: Deployment to Production
Once the testing stage is complete, everything is ready to go into production. You can set up the CI/CD to do this automatically, or have it halt there and require someone to manually approve the release in the CI/CD tool. The promotion itself need not involve much: your staging slot can simply be swapped into production.
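On Azure, that swap is a single CLI call. This helper builds the command a pipeline could shell out to; the resource group, app, and slot names are hypothetical placeholders, and it assumes the `az` CLI is available:

```javascript
// Builds the Azure CLI command that swaps a staging slot into production.
// Resource group, app, and slot names are hypothetical placeholders.
function buildSwapCommand(resourceGroup, appName, slot) {
  return `az webapp deployment slot swap ` +
    `--resource-group ${resourceGroup} ` +
    `--name ${appName} ` +
    `--slot ${slot} --target-slot production`;
}

// Example: buildSwapCommand('my-group', 'my-app', 'staging')
```

Because a swap is symmetric, the same command also serves as a fast rollback path if problems surface after release.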
Once in production, the CI/CD could run a simpler version of the test suites to "smoke test" everything. Remember that you need a way to roll back to the previous version if your production monitoring and alerting system detects problems. That is a whole topic of its own for a future post.
The tools to carry this out
I hesitate to even mention any tools, as there are so many to choose from and I would inevitably leave out someone's favorite. One thing you need to decide is whether you want a tool you manage on your own, or a PaaS-hosted CI/CD tool. For example, Atlassian Bamboo has a PaaS solution where you pay a monthly fee to use their preconfigured machines in the cloud. You can also pay a one-time price and install and manage Bamboo yourself; to do that, you would get your own cloud-hosted virtual machine and keep it running there to do your CI/CD.
Here is an image of what Atlassian Bamboo looks like:
Here are some CI tools that work with Node.js GitHub projects that you could explore further. I have included a few brief notes on things specific to each that might be important to know. For a fuller list, look up the Wikipedia page titled "Comparison of continuous integration software".
Atlassian Bamboo (Has self-hosted install, and PaaS)
Jenkins (Self-managed only, Slave instances can be installed to do distributed build and testing)
Travis-CI (It is hosted for you. It has test machines supporting Node and even MongoDB)
AWS CodePipeline (Integrates with a lot of these other tools)
AWS CodeDeploy (Automates code deployments to any instance)
CircleCI (builds an image, starts a new container, and then runs tests inside that container)
CloudBees Jenkins (PaaS Platform version of Jenkins for AWS and Azure with extra things installed)
Docker Hub (As the name implies, for Docker container usage)
Sauce Labs (for running Selenium tests)
Strider (Only works with Heroku deployments)
You may have also heard about the tools Puppet and Chef. These are not really pure CI/CD tools, but are much broader in their capabilities, focusing on machine configuration management. They deal more with IaaS, on-premises, and hybrid machine management, and even support things like storage and networking devices.
As was mentioned, this is a very large and complicated topic that we have only briefly touched on. For example, some CI/CD systems can automate code branch testing and merge automatically for you once the CI tests pass. To choose the CI/CD tool that is right for you, study and compare the features of each. Then, once you have chosen a tool, study it thoroughly to understand what it can do and what add-ons exist to augment its capabilities.
I will also point out that you might want to differentiate your normal feature development CI/CD process from what is necessary to rush a "hot fix" out to production. Make accommodations for how to best release just a small targeted patch, which can be something as simple as a configuration file change. Do not be fooled, however, into thinking there is no risk with even the smallest of changes.