Minimal GitOps-like Deployment Tool
If you have more than one server and need to deploy multiple applications "somewhere" on that cluster without having to micromanage, don't reinvent the wheel: just use Kubernetes. It's the industry standard for a reason, and you're not going to have a hard time finding help when you need it.
However, it's easy to forget that there's a class of service/business where this doesn't make sense. Many small businesses (if they're not using a PaaS like Fly) don't have the resources to run a Kubernetes cluster and, 95% of the time, just need an app to run on a single VPS.
Sometimes, maybe during an event or a promotional period, it might make sense to add another server to the mix and load-balance traffic across them (DigitalOcean's load balancers are pretty effective for this).
Assuming we have some nice way to spin up more servers running the app (an Ansible playbook, perhaps?), and a CD pipeline to deploy the app, we still need some way to make sure all the servers are running the latest version.
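Provisioning can stay boring. As a rough sketch, and assuming an entirely hypothetical playbook and inventory, bringing a new box into service is one command:

# Provision and configure a new app server.
# site.yml and inventory/production are placeholder names.
ansible-playbook -i inventory/production site.yml --limit new-app-server

Keeping those servers up to date is the part that needs a pattern.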
One pattern I like for this is a simplified GitOps pull model (think ArgoCD or Flux but held together with duct tape) where we:
- Have a repo which indicates what app releases should currently be deployed.
- Have each server query this repo to determine what it should be running, and automatically update the app to match.
- Make the deployment pipeline update the repo with a reference to the new release.
This way, your deployment pipeline doesn't need to know anything about the servers it's deploying to; all it needs to know is how to update the repo.
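The pipeline side can be a few lines tacked onto the end of the CI job. Here's a rough sketch, assuming the repo layout described below (one env.yaml per app under config/) and two hypothetical pipeline variables, NEW_IMAGE and APP; the clone URL is a placeholder:

#!/bin/bash
set -euo pipefail
# Hypothetical final CI step: point the app's env.yaml at the freshly built image
git clone git@example.com:me/ops.git
cd ops
sed -i -r "s@^image:.*@image: ${NEW_IMAGE}@" "config/${APP}/env.yaml"
git commit -am "Deploy ${APP}: ${NEW_IMAGE}"
git push

The servers pick the change up on their next poll; nothing in the pipeline knows or cares where the app actually runs.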
In one case, I have a repo called "ops" which, along with all the Ansible playbooks, has a set of env.yaml files, one per app, each containing (among other settings) a line like:
image: my-registry-image:latest

I can then run a script like the following on each server, which pulls down the ops repo, checks each app, and updates the systemd service file to point to the latest release.
#!/bin/bash
set -euo pipefail
cd /srv/ops
git pull
# Each file in /srv/deploy maps an app to its service,
# one line of "<config-path> <service-name>"
for file in /srv/deploy/*; do
  if [ -f "$file" ]; then
    # Split the deploy file into its two fields
    read -r CONFIG_PATH SERVICE <<< "$(cat "$file")"
    echo "Checking $SERVICE in $CONFIG_PATH for updates..."
    # || true: under pipefail, a missing image line would otherwise abort the script
    IMAGE=$(grep '^image:' /srv/ops/config/"$CONFIG_PATH"/env.yaml | awk '{print $2}' || true)
    if [ -z "$IMAGE" ]; then
        echo "No image found in env.yaml"
        continue
    fi
    CURRENT_IMAGE=$(grep '^Environment=IMAGE=' /etc/systemd/system/"$SERVICE".service | cut -d= -f3- || true)
    if [ "$IMAGE" == "$CURRENT_IMAGE" ]; then
        echo "Image is already up to date"
        continue
    fi
    echo "Updating $SERVICE to $IMAGE"
    docker pull "$IMAGE"
    # Update the systemd service file
    sed -i'' -r 's@Environment=IMAGE=.+$@Environment=IMAGE='"$IMAGE"'@' /etc/systemd/system/"$SERVICE".service
    # Reload the systemd service
    systemctl daemon-reload
    # Only restart (and migrate) if the service is currently running
    if systemctl is-active --quiet "$SERVICE"; then
        # Restart the systemd service
        echo "Restarting $SERVICE"
        systemctl restart "$SERVICE"
        # Wait (up to 2 minutes) for the container to report as running;
        # assumes the container is named after the service
        timeout 2m bash -c 'until [ "$(docker inspect -f "{{.State.Running}}" "$1" 2>/dev/null)" = "true" ]; do sleep 2; done' _ "$SERVICE"
        # Run database migrations inside the container (a Django manage.py migrate)
        echo "Running migrate"
        docker exec "$SERVICE" ./app/manage.py migrate
    fi
  fi
done

For cases where Kubernetes and a full GitOps solution are overkill, this is a nice way to get a simple deployment pipeline set up.
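For completeness: the script assumes each app's unit file keeps the image reference in an Environment= line and runs the container under the service's name. A minimal sketch of such a unit, with illustrative names rather than anything from my actual setup:

[Unit]
Description=myapp
After=docker.service
Requires=docker.service

[Service]
Environment=IMAGE=my-registry-image:latest
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp ${IMAGE}
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target

To make the servers converge on their own, run the update script on a schedule, for example with a (again hypothetical) /etc/cron.d entry:

# Poll the ops repo for new releases every two minutes
*/2 * * * * root /srv/ops/update-apps.sh >> /var/log/update-apps.log 2>&1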