What is a CI/CD pipeline?
If you don’t know the answer to this question, don’t feel bad; engineers and IT professionals at all levels sometimes don’t know the answer either. In my daily job I often get asked, “What is a pipeline?” The follow-up question, nine times out of ten, is “How do I create a pipeline?” Today I would like to shed some light on the pipeline topic, mainly focusing on the first question, but also on why it is important to application development.
This article was originally published on Medium.
In simple terms, a pipeline is a workflow: one that application development teams use to release software.
Note: pipelines are not limited to application development teams.
In order to understand “what is a pipeline?” we have to go back in time and understand how application development used to be done. In the past, at least in larger organizations, software developers tended to focus only on writing the code for the application. Once the application was coded, it was handed over to a team that tested the code and conducted quality assurance (QA). If the testers discovered “bugs,” they would document the errors and send them back to the developers for correction. Once the application passed testing, it was handed over to an operations team. The operations team was responsible for standing up the application and making sure it was highly available and fault tolerant — through the use of enterprise datacenter infrastructure solutions.
Let’s rewind here a bit. The operations team is traditionally composed of infrastructure analysts whose responsibility is to keep the gears of the data center turning. These fine men and women, the unsung heroes of IT, would often be handed an application and, through standing up physical server racks and cabling, would bring it to life. This also included configuring the operating system, databases, the network, the storage — oh, and let’s not forget disaster recovery responsibilities.
As you can quickly identify, in the past (and unfortunately still in some organizations) there was a segregation of duties and responsibilities. This approach negatively affects business agility and operational efficiency. From the developers’ perspective, a lot of time was spent waiting on other teams to complete work. From the non-development teams’ perspective, they lacked context and simply kept another team’s “baby” alive. Troubleshooting was also difficult, as the operations teams often lacked understanding of the code. The flip side to this is the developers, who likewise often did not understand the infrastructure keeping the application alive. When things broke, as they always do, perfect little storms were created by the knowledge gap between the various teams.
Now, at this point you might be asking yourself, “how does a pipeline improve the situation?” The answer, it doesn’t, at least not by itself! Enter DevOps!
In order to address the challenges that arise from the workflow of the past, serious changes have to be implemented. These changes include behavioral shifts, redefined responsibilities and duties, and upskilling. This is the goal of DevOps.
DevOps is the art of integrating all the required steps of an application development workflow and application lifecycle management under a single small team. This means the same team that writes the application is also responsible for testing the code, deploying the application, and maintaining it while in production! This is often described as “from cradle to grave.” If you add the security lens into the equation, you get DevSecOps; unfortunately, security is often an afterthought when it should be part of the entire software development lifecycle.
This is not as easy as it sounds; a lot of unique skills and knowledge are required for a team to truly practice DevSecOps. The ideal DevSecOps team is highly cross-trained, meaning every member of the team can assume any responsibility. In reality, human behavior will always prevail, and team members will be drawn to tasks that cater to their strengths (think UI work, infrastructure, security, database, backend).
The adoption of the DevSecOps framework has benefited many teams and reduced the time it takes to deploy code to production. From personal experience, I’ve seen application deployments that used to take six months now be completed in minutes! YES, MINUTES! This is great from a business perspective, as it allows new features to become available to the customer quickly. Time to market is often the deciding factor between failure and success.
Note: The practice of agile principles is also a large contributor to the time reduction mentioned above, but that’s a different rabbit hole we won’t go down today.
Okay, but how does this tie into pipelines? Glad you asked 😄
Continuous Integration/Continuous Delivery (CI/CD)
The pipeline is what allows application development teams to release software quickly. The pipeline includes every step required to release software to a production environment, executed in an automated fashion (or at least, it should be automated). A typical software development pipeline defines several stages. Let’s use the image below to help explain this. The pipeline illustrated there has four stages: build, test, staging, and production.
Each step plays a critical role in the application development workflow for releasing software to production. In order for the software or any changes to the software code to be deployed to a production environment, every step must pass successfully.
The first step builds the application binary needed to stand up the application in an environment. The test stage is where automated tests are executed against the software code. These tests should touch individual components of the code as well as the holistic purpose of the application. The goal here is to identify bugs and prevent break-fixes in production. The other benefit of an automated test suite is that the feedback loop is rapid compared to waiting for a team of testers to provide feedback.
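As a toy illustration of what the test stage runs (the `slugify` helper and its test are hypothetical, not from any real codebase), an automated unit test might look like this:

```python
# Minimal sketch of an automated test. slugify() stands in for any
# small piece of application code; the pipeline's test stage runs
# checks like this automatically on every code change.

def slugify(title: str) -> str:
    """Turn an article title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_hyphenates():
    # A unit test touches one component in isolation; failures here
    # stop the pipeline long before the code reaches production.
    assert slugify("What Is a Pipeline") == "what-is-a-pipeline"
```

A test runner such as pytest would discover and execute every `test_*` function like this one, giving the team feedback within seconds rather than days.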
The staging stage is where the infrastructure is stood up for the application to be hosted in production. This could also be a step where the code is further massaged into a format expected by the infrastructure and/or environment that will host the application. Sometimes, this task warrants its own pipeline! In public cloud environments, infrastructure can be stood up in a matter of minutes through Application Programming Interfaces (APIs). However, if the infrastructure required by the application cannot be automated through APIs, it most certainly complicates things — requiring manual intervention. This challenge is a big reason why infrastructure vendors are starting to provide APIs for their products, and also because customers are demanding it.
Lastly, the final step is to deploy to production. Although this pipeline example is overly simplified, the true nature of pipelines is represented. The workflow for releasing software is automated, including testing, code scanning, and operational deployment responsibilities.
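The four stages described above can be sketched as a toy model (the stage functions here are hypothetical placeholders, not a real CI/CD tool’s API; real pipelines are declared in the tool’s own configuration):

```python
# Toy model of a four-stage pipeline: build, test, staging, production.
# The key property is the control flow — every stage must succeed
# before the next one runs, and a failure halts the release.

def build():   return True   # compile/package the application binary
def test():    return True   # run the automated test suite
def staging(): return True   # stand up infrastructure, prepare artifacts
def deploy():  return True   # release to the production environment

def run_pipeline():
    stages = [("build", build), ("test", test),
              ("staging", staging), ("production", deploy)]
    for name, stage in stages:
        if not stage():
            print(f"Pipeline failed at stage: {name}")
            return False  # a failing stage stops the release cold
        print(f"Stage passed: {name}")
    return True
```

In a real CI/CD system each stage would run builds, test suites, and deployment scripts rather than return a boolean, but the gating behavior is the same.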
A typical pipeline will trigger the test suite whenever a code change is introduced; if the change passes all test cases, the code is allowed to be merged into the master branch (the source of truth) — this is continuous integration (CI). Continuous Delivery (CD) is what the pipeline as a whole provides — an automated workflow for building, testing, and deploying software in a repeatable and rapid manner.
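The CI merge gate boils down to a simple rule, sketched here purely as an illustration (the `can_merge` function is hypothetical, not part of any CI product):

```python
# The continuous-integration gate in miniature: a change may only be
# merged to the master branch (source of truth) if every test passes.

def can_merge(test_results: list) -> bool:
    # all() is True only when no test in the suite failed.
    return all(test_results)
```

Every CI server implements some version of this rule; the value it adds is running the checks automatically on each change so no human has to remember to.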
Creating a pipeline is not easy
This can be an arduous process depending on various factors: the type of application, the type of infrastructure required, the type of environment (public cloud, hybrid cloud, private cloud), the team’s skill level, the tools available to the team, the APIs available, and many others. The art of creating pipelines is in high demand across organizations. The market is full of products and services that aim to simplify the challenges of creating a pipeline. In addition, organizations are all competing for talent that can help teams create pipelines — if you don’t believe me, take a peek at the salaries offered for DevOps-related roles.
There is no one-size-fits-all pipeline, though there are general patterns to follow. Some pipelines are declarative in nature, while others are scripted.
Declarative pipelines are quickly growing in popularity due to their simplicity and the many out-of-the-box features they provide, versus the challenge and time spent building those same features into a scripted pipeline.
So what can you do to simplify the pipeline creation process? The best advice I can provide is to find talented individuals that can coach and teach other teams in the organization how to get up to speed quickly. Find individuals that are passionate about automation and are willing to share knowledge with everyone around them.
The CI/CD tools your organization chooses are equally important in reducing the friction and challenges of creating and maintaining pipelines. There is no easy button, but there are services available that simplify the pipeline process significantly. Of course, at the end of the day it really comes down to technical leaders who can get everyone around them to embrace DevOps principles and teach others how to create pipelines — that’s just my 2¢.
A pipeline is nothing more than a workflow for releasing software in a repeatable and automated fashion. It’s what allows teams to practice DevSecOps principles which can result in business solutions being released to market quickly. Creating a pipeline does not equal DevOps, and DevOps does not equal pipelines. The two are simply concepts that complement one another, though one could make the argument they depend on one another.
The IT landscape can be equally as exciting as it can be scary and overwhelming. Surround yourself with technical role models that understand CI/CD and learn from them. Take a stab at it, and before you know it you’ll understand how all the pieces fit together and you’ll be teaching others.