Continuous Integration & Continuous Deployment. What is it? How Do We Plan to Use it at Jet?

BY NIKHIL BARTHWAL AND NORA JONES

What is Continuous Integration & Continuous Deployment?
Let’s start by understanding what Continuous Integration (CI) and Continuous Deployment (CD) are. CI is a development practice that requires developers to integrate code into a shared repository continuously as it is being developed. Each check-in is verified by an automated build (including tests) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly fewer integration problems and allows them to develop cohesive software more rapidly.
Continuous Deployment is a software development discipline in which software is released to production continuously as it is being developed. This is achieved by continuously integrating the work done by the development team, building executables, and running automated tests on those executables to detect problems. With this approach, anyone can get fast, automated feedback on the production-readiness of their systems any time a change is made. The executables are then deployed to production once all the tests pass.

How is it Typically Done?
One of the challenges of an automated build and test environment is that you want your build to be fast so that you get quick feedback, but comprehensive tests take a long time to run. Creating separate build and deployment pipelines helps break the process into stages. The build pipeline is concerned with building and testing the code (CI), while the deployment pipeline is responsible for deploying binaries to the production environment (CD). Each stage provides increasing confidence, usually at the cost of extra time. Early stages find most problems and yield fast feedback, while later stages provide slower but more thorough probing. Build and deployment pipelines are a central part of the CI/CD process.

Usually the first stage of a deployment pipeline does any compilation and provides binaries for later stages. The following stages carry out a series of unit and integration tests to verify the build. Stages can be automatic or require human authorization to proceed, and they may be parallelized over many machines to speed up the build. Deploying into production is usually the final stage in a pipeline.
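As a rough sketch, the staged flow described above could be modeled as a small script in which each stage runs in order, and the pipeline stops at the first failure so that fast, early stages give feedback before slow, late ones run. The stage names and commands below are hypothetical and only illustrative, not an actual pipeline configuration.

```python
import subprocess

# Illustrative stages, ordered from fast to slow. The commands here
# are hypothetical placeholders, not a real pipeline's configuration.
STAGES = [
    ("compile",           ["dotnet", "build"]),
    ("unit-tests",        ["dotnet", "test", "--filter", "Category=Unit"]),
    ("integration-tests", ["dotnet", "test", "--filter", "Category=Integration"]),
    ("deploy",            ["./deploy.sh", "production"]),
]

def run_pipeline(stages, runner=subprocess.call):
    """Run each stage in order; stop at the first non-zero exit code
    so later (slower) stages only run once earlier ones pass."""
    for name, command in stages:
        if runner(command) != 0:
            return f"failed at stage: {name}"
    return "pipeline succeeded"
```

A human-authorization gate would simply be another stage whose "command" waits for approval before returning success; parallelization would fan a stage out across multiple agents and join on their combined result.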

More broadly, the pipeline’s job is to detect any changes that would lead to problems in production, including performance, security, or usability issues. A build and deployment pipeline should enable collaboration between the various groups involved in delivering software, and should give everyone visibility into the flow of changes in the system as well as a thorough audit trail.

Why is it Done?
There are several benefits to using a fully automated CI & CD pipeline. It lets you ship features faster and collaboratively close the gap between business expectations and DevTest activities. This enables better trade-off decisions that optimize the business value of a release candidate, and it establishes a feedback loop that promotes incremental, continuous process improvement.
It also establishes a safety net that helps you bring new features to market faster. While individual developers could (and should) run tests in their local environments, there is no way to enforce this, and local results cannot be fully relied on because each environment may be set up differently. An automated pipeline ensures that any code deployed to production has passed all tests, and a common test environment that mimics production eliminates the false positives that environment differences cause.

How Do We Do it at Jet?
Our CI & CD pipeline is built using Jenkins, although we plan to move to TFS in the future. We have unit tests that run on the build agents and integration tests that run in the staging environment, and we are building dashboards that will give us insight into the test results. What is different about our build pipeline is the static analysis of code and the checks for coding standards built into it.

We have sophisticated tools for static analysis of code, including code scanners that parse the code to find anti-patterns that can potentially cause an outage. We plan to build this entire toolchain into our build pipeline to ensure that the code passing through it is as free of defects as possible.
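As a simplified illustration of what such a scanner does, the sketch below flags source lines matching a few anti-pattern rules. The rules here are invented for the example, and a production-grade analyzer would inspect the parsed syntax tree rather than matching raw text.

```python
import re

# Hypothetical anti-pattern rules for illustration only; a real
# analyzer would work on the compiler's syntax tree, not regexes.
ANTI_PATTERNS = [
    (re.compile(r"\.Result\b"),      "blocking on a task can deadlock"),
    (re.compile(r"catch\s*\{\s*\}"), "empty catch block swallows errors"),
    (re.compile(r"Thread\.Sleep"),   "sleeping in production code"),
]

def scan(source: str):
    """Return (line number, message) pairs for every anti-pattern hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in ANTI_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings
```

Wired into a build pipeline, a non-empty findings list would fail the stage, keeping the flagged code from progressing toward production.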

On top of this, we have code parsers that scan the code to ensure it conforms to our standards. Our team (Internal Tools & Productivity) has defined an internal coding standard: a set of F# recommendations covering style, practices, and methods for code written here at Jet.
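A style checker in this spirit might enforce rules such as a maximum line length or a naming convention for let-bindings. The sketch below uses those two example rules; both are illustrative assumptions, not taken from any actual coding standard.

```python
import re

MAX_LINE_LENGTH = 100  # example limit; an assumed value, not a real standard

# Hypothetical rule: F#-style let-bindings should be lowerCamelCase.
LET_BINDING = re.compile(r"^\s*let\s+([A-Za-z_]\w*)")

def check_style(source: str):
    """Return style violations as (line number, message) pairs."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:
            violations.append((lineno, "line exceeds maximum length"))
        match = LET_BINDING.match(line)
        if match and match.group(1)[0].isupper():
            violations.append((lineno, "let-binding should be lowerCamelCase"))
    return violations
```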

In conclusion, our build and deployment pipelines are designed to ensure that only code of the highest quality passes through them.

2 comments

  1. Hi Nikhil,

    Interesting approach and fantastic write up on it.

    I was wondering if it was possible to get more information about the continuous delivery aspect of your pipeline and the toolset you're using to achieve it.

    What happens in your deploy stage? Are you executing ARM templates to spin up new virtual machines on Azure and using Ansible to bootstrap them?

    1. Yes, we do use ARM templates, and we also use Ansible for bootstrapping. There is some custom tooling for the CD part that the DevOps team has developed. Our static analyzers are all custom-made on top of F# Compiler Services (our backend is mostly in F#). We plan to bundle the analyzers as SonarQube plugins, integrating them with our pipeline.

      At some point in the future, we will be open sourcing much of our analyzer code. Hope that answers your questions!
