Minimal GitOps-like Deployment Tool

If you have more than one server and a need to deploy multiple applications 'somewhere' on that cluster without having to micromanage - don't reinvent the wheel, just use Kubernetes. It's the industry standard for a reason, and you're not going to have a hard time finding help when you need it.

However, it's easy to forget that there's a class of service/business where this doesn't make sense. Many small businesses (if they're not using a PaaS like Fly) don't have the resources to run a Kubernetes cluster and, 95% of the time, just need an app to run on a single VPS.

Sometimes, maybe during an event or a promotional period, it might make sense to add another server to the mix and load balance traffic across them (DigitalOcean's load balancers are pretty effective).

Assuming we have some nice way to spin up more servers running the app (a nice Ansible Playbook perhaps?), and assuming we have some nice CD pipeline to deploy the app, we need some way to make sure all the servers are running the latest version.

One pattern I like for this is a simplified GitOps pull model (think ArgoCD or Flux but held together with duct tape) where we:

  1. Have a repo which indicates what app releases should currently be deployed.
  2. Get your servers to query this repo to determine what they should be running, and automatically update the app on each server to match.
  3. Make your deployment pipeline update the repo with a reference to the new release.

This way, your deployment pipeline doesn't need to know anything about the servers it's deploying to - it just needs to know how to update the repo with a reference to the new release.
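Concretely, the pipeline's "update the repo" step can be tiny. Here's one possible sketch - the repo URL, paths, and the `set_image` helper are all invented for illustration:

```shell
#!/bin/bash
set -euo pipefail

# Rewrite the "image:" line in an env.yaml to point at a new release.
set_image() {
  local file=$1 image=$2
  sed -i -E "s@^image: .*@image: $image@" "$file"
}

# In the CI job this would be wrapped in a clone/commit/push, roughly:
#   git clone --depth 1 git@example.com:me/ops.git /tmp/ops
#   set_image /tmp/ops/config/my-app/env.yaml "my-registry/my-app:v1.2.4"
#   git -C /tmp/ops commit -am "Deploy my-app v1.2.4" && git -C /tmp/ops push

# Demo on a throwaway file:
tmp=$(mktemp)
printf 'replicas: 1\nimage: my-registry/my-app:v1.2.3\n' > "$tmp"
set_image "$tmp" "my-registry/my-app:v1.2.4"
grep '^image:' "$tmp"   # image: my-registry/my-app:v1.2.4
rm -f "$tmp"
```

The pipeline never touches a server; pushing the commit is the deployment.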

In one case, I have a repo called 'ops' which, along with all the Ansible playbooks, has a set of env.yaml files, one per app, each containing (along with other settings) a line like:

image: my-registry/my-app:v1.2.3
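For illustration, such an env.yaml might look like the following. Every key other than `image` is invented here - only `image` matters to the update script, and the tag should uniquely identify a release (a floating tag like `latest` would never differ from what's already deployed, so the comparison below would never trigger an update):

```yaml
# config/my-app/env.yaml (illustrative)
image: my-registry/my-app:v1.2.3
port: 8000
env:
  DJANGO_SETTINGS_MODULE: myapp.settings.production
```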

I can then run a script like the following on each server, which pulls down the ops repo, checks each app, and updates its systemd service file to point to the latest release.

#!/bin/bash
set -euo pipefail

cd /srv/ops
git pull

# Each file in /srv/deploy contains a line: "<config-path> <service-name>"
for file in /srv/deploy/*; do
  if [ -f "$file" ]; then
    # Split the line into the config path and the systemd service name
    IFS=' ' read -r CONFIG_PATH SERVICE <<< "$(cat "$file")"

    echo "Checking $SERVICE in $CONFIG_PATH for updates..."

    IMAGE=$(grep '^image:' /srv/ops/config/"$CONFIG_PATH"/env.yaml | awk '{print $2}')

    if [ -z "$IMAGE" ]; then
        echo "No image found in env.yaml"
        continue
    fi

    # Take everything after "Environment=IMAGE=" (cut keeps any '=' inside the value)
    CURRENT_IMAGE=$(grep '^Environment=IMAGE=' /etc/systemd/system/"$SERVICE".service | cut -d= -f3-)

    if [ "$IMAGE" == "$CURRENT_IMAGE" ]; then
        echo "Image is already up to date"
        continue
    fi

    echo "Updating $SERVICE to $IMAGE"

    docker pull "$IMAGE"

    # Update the systemd service file
    sed -i -E 's@^Environment=IMAGE=.+$@Environment=IMAGE='"$IMAGE"'@' /etc/systemd/system/"$SERVICE".service

    # Reload the systemd service
    systemctl daemon-reload

    # If the systemd service is running
    if systemctl is-active --quiet "$SERVICE"; then
        # Restart the systemd service
        echo "Restarting $SERVICE"
        systemctl restart "$SERVICE"

        # Wait for the container to report as running (give up after 2 minutes)
        timeout 2m bash -c "until [ \"\$(docker inspect -f '{{.State.Running}}' '$SERVICE' 2>/dev/null)\" = true ]; do sleep 2; done"

        # Run migrate
        echo "Running migrate"
        docker exec "$SERVICE" ./app/manage.py migrate
    fi
  fi
done
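The script only converges the server when it runs, so something needs to invoke it regularly. A systemd timer is one option - the unit names and script path below are illustrative:

```ini
# /etc/systemd/system/ops-sync.service
[Unit]
Description=Sync apps with the ops repo

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ops-sync.sh

# /etc/systemd/system/ops-sync.timer
[Unit]
Description=Run ops-sync every minute

[Timer]
OnCalendar=*-*-* *:*:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now ops-sync.timer`; a plain cron entry works just as well.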

For cases where Kubernetes and a full GitOps solution are overkill, this is a nice way to get a simple deployment pipeline set up.

โ† Back to Posts

Enjoyed this post?

Check out more of my articles or get in touch if you have any questions!

Explore More Posts