In the previous post we created the Helm chart for our hello world blog-helm
application. The Helm chart contains all the information we need to deploy our application to a Kubernetes cluster. But so far, we always deploy the latest version. In a CD setup, we’d like to be able to deploy any version, from any feature branch, at any given point in time, to any environment (DTAP). Let’s start by looking at versioning.
We have various versions that we need to control in our application:
- The application version, as defined in `package.json`.
- The version of the Docker image (or tag in Docker terms). We haven't used this so far.
- The version of the Helm chart, as defined in `Chart.yaml`. One important point to remember is that the Helm chart references the desired Docker image version in `values.yaml`.
To keep our sanity, it makes sense to use a single version number to describe all three versioned components. We’ll try out the following setup:
- The version defined in `package.json` is leading. This way, we allow the version to come from Git and go through the same code review process, just like everything else.
- The version of the Docker image and the Helm chart will be the same. It will be equal to the application version for the master branch. For feature branches, we'll use the application version with the git SHA as a suffix.
So for the master branch we’ll have something like this:
With a small difference for feature branches:
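To sketch both cases (the version number and SHA below are made up for illustration):

app version 1.0.1, master branch   →  Docker image blog-helm:1.0.1          →  Helm chart 1.0.1
app version 1.0.1, feature branch  →  Docker image blog-helm:1.0.1-0123abc…  →  Helm chart 1.0.1-0123abc…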
To make all this happen, we need to write a bit of code in the commit stage.
We will need these commands:
- `git rev-parse HEAD` will output the git SHA ID
- `git rev-parse --abbrev-ref HEAD` will output the name of the current branch
- and finally, `cat package.json | grep version | cut -d\" -f 4` is a little bit of bash kung-fu that will give us the version out of `package.json`
Let’s put these commands together in a small bash script:
GIT_SHA=$(git rev-parse HEAD)
GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
APP_VERSION=$(cat package.json | grep version | cut -d\" -f 4)
if [ "$GIT_BRANCH" = "master" ]; then
IMAGE_TAG="$APP_VERSION"
else
IMAGE_TAG="$APP_VERSION-$GIT_SHA"
fi
echo "Docker image tag will be $IMAGE_TAG"
It’s a good idea to commit this script to our source code, so that we don’t have large inline scripts within the build server. We can create a new folder, e.g. `ci-scripts`, and store it there as `version.sh`.
We’ll need to share this unique version with the next steps of the build, but also with the deployment stage (which we haven’t seen yet). For the deployment stage, we can write the image tag in a small text file and publish it as an artifact. For the commit stage, it’s possible to create an environment variable from within a build step. Each build server does this differently, e.g. Bamboo has an “Inject Variables” build step and Jenkins has something similar. TeamCity, which I’m using in this example, supports a special `echo` message (a service message) during the build.
The final script looks like this (browse code at this point):
#!/bin/sh
set -x
set -e
GIT_SHA=$(git rev-parse HEAD)
GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
APP_VERSION=$(cat package.json | grep version | cut -d\" -f 4)
if [ "$GIT_BRANCH" = "master" ]; then
IMAGE_TAG="$APP_VERSION"
else
IMAGE_TAG="$APP_VERSION-$GIT_SHA"
fi
echo "Docker image tag will be $IMAGE_TAG"
# store image tag into a text file (artifact for deployment)
echo "$IMAGE_TAG" > image-tag.txt
# inject environment variable for next steps
echo "##teamcity[setParameter name='env.IMAGE_TAG' value='$IMAGE_TAG']"
Let’s go over the TeamCity configuration at this point. We have one extra build step, 4 in total:
- Determine version. This runs the new script, `ci-scripts/version.sh`, which figures out the Docker image version (and Helm chart version) we will use.
- Build CI image. This builds the Docker image which includes all dependencies, including devDependencies. The only change here is that I'm now using TeamCity's Docker Build native step, instead of using a Command Line step.
- Run linting. No changes here, it uses the image built in the previous step to run linting. Notice that this image does not need to be versioned.
- Build production Docker image. Here we're using the environment variable `IMAGE_TAG` that is injected by the first build step. It looks like this:
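Conceptually, that last step boils down to something like the command below; the image name is illustrative, and the actual values are filled into the fields of TeamCity's Docker Build step:

docker build -t blog-helm:$IMAGE_TAG .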
Now, we need to package the Helm chart. That’s done with a simple command (which will become our fifth build step):
helm package --version $IMAGE_TAG ./helm/blog-helm
The `helm` command-line utility will not be present on the build agent. As we discussed in Dockerize the build plan, we need to wrap helm in a Docker image so that we can use it. Luckily, someone else has already created a Docker image with helm. The build step in TeamCity looks like this:
The reason this works so seamlessly is that TeamCity mounts the current directory as a volume inside the Docker container. The integration is very nicely done by TeamCity, but it’s still important to understand what happens under the hood.
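To make "under the hood" concrete, the step is roughly equivalent to a command like the one below; the image name is a placeholder for whichever helm image you pick, and depending on its entrypoint you may need to drop the leading `helm`:

docker run --rm \
  -v "$(pwd)":/workdir \
  -w /workdir \
  some-helm-image \
  helm package --version "$IMAGE_TAG" ./helm/blog-helm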
One last touch is to configure our artifacts in TeamCity:
The tgz file is the Helm chart and the txt file is the small text file that specifies the image version.
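For reference, the artifact paths in that configuration boil down to two patterns, along these lines:

blog-helm-*.tgz
image-tag.txt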
Small but important note: at this point, it’s not possible to override the Docker image tag during packaging of the Helm chart. There is an open issue for that on GitHub. We will be setting the correct image during deployment, but ideally the Helm chart should already be tied to the Docker image.
Demo time! Let’s see if everything works fine. First, I’ll bump the version on the master branch to 1.0.1 to trigger a build. The build creates these artifacts:
If we download them, we’ll see that `image-tag.txt` just contains “1.0.1”. The tgz file can be unzipped and there we’ll see that `Chart.yaml` has the correct version:
apiVersion: v1
description: A Helm chart for Kubernetes
name: blog-helm
version: 1.0.1
We can try with a feature branch too. Our hello world page is serving plain text so far. It would be great to change it into an HTML page with a large heading. That’s an easy change in `index.js`:
app.get('/', (req, res) => res.send(`
  <html>
    <body>
      <h1>Hello, world!</h1>
    </body>
  </html>
`));
I’ll also bump the version to 1.0.2. Here’s the result in TeamCity:
This time, we’re on a feature branch, so both the Docker image and the Helm chart will have the git SHA in their version.
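For illustration, the packaged Chart.yaml would then look something like this (the SHA below is made up; the real suffix is the full SHA, since the script uses `git rev-parse HEAD`):

apiVersion: v1
description: A Helm chart for Kubernetes
name: blog-helm
version: 1.0.2-0123456789abcdef0123456789abcdef01234567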
We made it this far, so let’s take one more step and deploy our app into Kubernetes using Helm from TeamCity. The TeamCity Agent will run a helm command, which talks to Tiller, which tells Kubernetes to do its job and run our dockerized app inside a pod.
For this to happen, Kubernetes needs to be able to find the Docker images that TeamCity builds during the commit stage. The best way would be to set up a Docker registry, which we’ll do in a future post. I’ve taken a shortcut however: I’m running TeamCity inside Kubernetes and the TeamCity Agent is using Docker on Docker. This means that the TeamCity Agent is a dockerized application itself, but it’s using the Docker daemon of Kubernetes when it needs to run docker (remember, we’ve dockerized the build plan). It’s very confusing, especially when volumes come into play. It feels a bit like the movie Inception, wondering which level you’re currently running on:
We start by creating a new build configuration called Deploy Stage. We’d like to consume the artifacts from the Commit Stage:
We don’t have any dependency on the source code. All we need is the artifacts. We can deploy everything with one command:
helm upgrade --install blog-helm \
./blog-helm-${IMAGE_TAG}.tgz \
--set image.tag=$IMAGE_TAG \
--wait
We just need the `IMAGE_TAG` environment variable, which we’ll populate from the `image-tag.txt` artifact:
IMAGE_TAG=$(cat image-tag.txt)
echo "Using version $IMAGE_TAG"
helm upgrade --install blog-helm \
./blog-helm-${IMAGE_TAG}.tgz \
--set image.tag=$IMAGE_TAG \
--wait
Here’s how it looks in TeamCity:
Let’s break the command down a bit:
- `upgrade --install blog-helm` specifies that we're interested in the Helm release named `blog-helm`. Helm uses its server-side component, Tiller, to keep tabs on releases. The `upgrade --install` part is equivalent to create or update: if the release is already there, it will upgrade it, otherwise it will create it.
- The next parameter points to the Helm chart, which in this case is the tgz artifact.
- `--set image.tag=$IMAGE_TAG` overrides the Docker image tag defined in `values.yaml` with our environment variable (see the sketch after this list). Since this is also an artifact of the Commit Stage, we're certain we're deploying the correct version.
- The last part, `--wait`, is a nice feature of Helm: it waits until the new version is up and running.
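As a reminder of what `--set image.tag` actually overrides, here is a sketch of the relevant pieces of the chart; the field names are assumed to match the chart we built in the previous posts:

# values.yaml
image:
  repository: blog-helm
  tag: latest

# templates/deployment.yaml (excerpt)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

Because --set takes precedence over values.yaml, every deployment ends up pinned to the exact image the Commit Stage built.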
One more networking shortcut: for helm to be able to reach Tiller (from within the Docker container from within TeamCity from within Kubernetes) I had to punch a hole in the cluster using a NodePort service. We’ll revisit this in a future post. Remember, Inception.
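For the curious, that shortcut is a service along these lines; the labels and port are assumed from a stock Helm 2 Tiller install, the nodePort is arbitrary, and the helm client is then pointed at it (e.g. via HELM_HOST). A rough sketch, not something you would want in production:

apiVersion: v1
kind: Service
metadata:
  name: tiller-nodeport
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: helm
    name: tiller
  ports:
    - port: 44134
      targetPort: 44134
      nodePort: 30134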
I can trigger a custom deployment to use my feature branch:
After the deployment finishes, we can see the results in the Kubernetes dashboard:
Notice how both the Helm chart version (indicated by the label “chart” in the top) and the Docker image tag (indicated in the replica set area) are aligned.
And, of course, the app is now sporting an H1 header:
Perhaps it’s worth mentioning that you can use any other versioning strategy that makes sense. In this case, git leads. You can also turn it around and have the build server lead, ignoring what is specified in the code. Or you can mix and match, using for example the major.minor parts of semver from git and the patch from the build server. The important thing is to make sure you have one unique version identifier that you can use to link everything together.
To summarize, we’ve created a versioning scheme which allows us to deploy any feature branch we want. We also created a deployment stage in TeamCity that deploys based solely on the build artifacts, which means we can deploy any older version we want. So far, we have only one environment to deploy to. We’ll see in the next post how to support multiple deployment environments, moving towards a DTAP.