Reducing our Deployment times by 87%
You can streamline your deployment process to cut deploy times dramatically and roll back with confidence.
In this blog you will learn how we optimized the deployment process for our healthcare EMR application running on Kubernetes by cutting out time-consuming steps, such as reinstalling every dependency each time a Docker image is built and uploaded.
We run a healthcare EMR application on k8s. We used to auto-deploy to prod/uat via GitHub Actions whenever a new commit was merged to the main/uat branch. Our deployment script performed the following set of steps (sketched below):
- Build and tag the Docker image.
- Upload the image to ECR.
- Update the k8s deployments with the new image.
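In simplified form, the script looked roughly like this; the registry URL, repository, and deployment/container names below are placeholders rather than our real values:

#!/usr/bin/env bash
# Simplified sketch of the original deploy steps (names are placeholders).
set -euo pipefail

REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
IMAGE="$REGISTRY/emr-app:$GITHUB_SHA"

# 1. Build and tag the Docker image.
docker build -t "$IMAGE" .

# 2. Upload the image to ECR.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$REGISTRY"
docker push "$IMAGE"

# 3. Update the k8s deployment with the new image.
kubectl set image deployment/emr-app emr-app="$IMAGE"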
Of the above steps, 1 and 2 took almost 6-7 minutes to finish, because on every trigger all requirements and dependencies were compiled and installed from scratch:
....
RUN apt-get install -y pgbouncer \
    && pip install awscli
...
COPY ./requirements /requirements
...
RUN pip install --no-cache-dir -r /requirements/uat.txt && rm -rf /requirements
...
We decided to add a base Dockerfile that would be rebuilt only when the requirements changed. The actual image used for deployment would build on top of this base image and contain only the quick steps, such as copying the application code.
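Roughly, the split looks like this; the Python tag, the ECR registry address, the emr-base repository name, and the final CMD are illustrative placeholders, not our actual values:

# base.Dockerfile -- rebuilt only when requirements change
# (Python tag and requirements file are illustrative)
FROM python:3.9-slim

RUN apt-get update && apt-get install -y pgbouncer \
    && pip install awscli

COPY ./requirements /requirements
RUN pip install --no-cache-dir -r /requirements/uat.txt && rm -rf /requirements

# Dockerfile -- built on every deploy; only the fast steps remain
# (the base image reference below is a placeholder for the ECR repository)
FROM 123456789012.dkr.ecr.us-east-1.amazonaws.com/emr-base:latest

WORKDIR /app
COPY . /app
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"]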
We needed to figure out a way to detect if the requirements/dependencies changed, which would trigger the base image build.
We added jorgebg/stateful-action@v0.1 to our deploy workflow. This allows us to persist state between GitHub Actions runs; any state modification is committed to the state branch by default.
name: UAT deployment
on:
  push:
    branches:
      - uat
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: jorgebg/stateful-action@v0.1
      - name: Check and build base docker image
        run: |
          echo "get existing hash"
          FILE=.state/requirements-hash.txt
          if [ -f "$FILE" ]; then
            echo "$FILE exists.";
            read -r existing_hash<$FILE;
            echo "Existing hash $existing_hash";
            new_base_hash=($(md5sum requirements/base.txt))
            new_production_hash=($(md5sum requirements/production.txt))
            echo "$new_base_hash$new_production_hash" > "$FILE"
            read -r new_hash<$FILE;
            echo "Current hash $new_hash";
            if [ "$new_hash" != "$existing_hash" ]; then
              echo "Hash does not match, rebuilding docker image"
              bash ./deploy_scripts/base.sh -I …
            else
              echo "Hash matches no rebuilding required"
            fi
          else
            echo "File does not exist in state"
            new_base_hash=($(md5sum requirements/base.txt))
            new_production_hash=($(md5sum requirements/production.txt))
            echo "$new_base_hash$new_production_hash" > "$FILE"
            bash ./deploy_scripts/base.sh -I …
          fi
On every trigger, we compare the hash of the requirements files with the hash stored in the state and trigger a build of the base Docker image if there is a mismatch. The new hash value is written to the state once the build finishes.
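A script like deploy_scripts/base.sh essentially just rebuilds the base image and pushes it to ECR. A minimal, hypothetical sketch of such a script follows; the registry URL, repository name, and Dockerfile path are placeholders:

#!/usr/bin/env bash
# Hypothetical sketch of a base-image build script: rebuild the base image and push it to ECR.
# Registry URL, repository name, and Dockerfile path are placeholders.
set -euo pipefail

REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
BASE_IMAGE="$REGISTRY/emr-base:latest"

aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$REGISTRY"

docker build -f base.Dockerfile -t "$BASE_IMAGE" .
docker push "$BASE_IMAGE"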
Since we do not add or modify dependencies in our Django application frequently, it made sense to have a base Docker image with all the dependencies already installed and use it as the base for the actual application image.
We reduced our deploy times from 6-7 minutes to less than a minute. This allows us to be more confident while pushing to production since we know rolling back/releasing a hotfix would take less than a minute.