Enhancements: Using GitHub Actions to Deploy

Integrating a version control system into your development cycle is just kind of one of those things that you do, right? I use GitHub for my version control, and GitHub Actions to help with my deployment process.

There are 3 YAML files involved in getting my local code deployed to my production server:

  • django.yaml
  • dev.yaml
  • prod.yaml

Each one serves its own purpose.


The django.yaml file is used to run my tests and other actions on a GitHub runner. It does this in 9 distinct steps, with one Postgres service.

The steps are:

  1. Set up Python 3.8 - sets up Python 3.8 on the Docker image provided by GitHub
  2. psycopg2 prerequisites - sets up psycopg2 to use the Postgres service created
  3. graphviz prerequisites - sets up the requirements for graphviz, which creates an image of the relationships between the various models
  4. Install dependencies - installs all of my Python package requirements via pip
  5. Run migrations - runs the migrations for the Django app
  6. Load Fixtures - loads data into the database
  7. Lint - runs black on my code
  8. Flake8 - runs flake8 on my code
  9. Run Tests - runs all of the tests to ensure they pass
Here is django.yaml in full:

name: Django CI

on:
  push:
    branches:
      - main
      - dev

jobs:
  build:
    runs-on: ubuntu-18.04

    services:
      postgres:
        image: postgres:12.2
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: github_actions
        ports:
          - 5432:5432
        # needed because the postgres container does not provide a healthcheck
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5

    steps:
    - uses: actions/checkout@v1
    - name: Set up Python 3.8
      uses: actions/setup-python@v1
      with:
        python-version: 3.8
    - uses: actions/cache@v1
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-
    - name: psycopg2 prerequisites
      run: sudo apt-get install python-dev libpq-dev
    - name: graphviz prerequisites
      run: sudo apt-get install graphviz libgraphviz-dev pkg-config
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install psycopg2
        pip install -r requirements/local.txt
    - name: Run migrations
      run: python manage.py migrate
    - name: Load Fixtures
      run: |
        python manage.py loaddata fixtures/User.json
        python manage.py loaddata fixtures/Sport.json
        python manage.py loaddata fixtures/League.json
        python manage.py loaddata fixtures/Conference.json
        python manage.py loaddata fixtures/Division.json
        python manage.py loaddata fixtures/Venue.json
        python manage.py loaddata fixtures/Team.json
    - name: Lint
      run: black . --check
    - name: Flake8
      uses: cclauss/GitHub-Action-for-Flake8@v0.5.0
    - name: Run tests
      run: coverage run -m pytest


The dev.yaml file does essentially the same thing that is done in the deploy.sh from my earlier post, Automating the Deployment, except that it pulls code from my dev branch on GitHub onto the server. The other difference is that it targets my UAT server, not my production server, so if something goes off the rails, I don’t hose production.

name: Dev CI

on:
  push:
    branches:
      - dev

jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - name: deploy code
        uses: appleboy/ssh-action@v0.1.2
        with:
          host: ${{ secrets.SSH_HOST_TEST }}
          key: ${{ secrets.SSH_KEY_TEST }}
          username: ${{ secrets.SSH_USERNAME }}

          script: |
            rm -rf StadiaTracker
            git clone --branch dev git@github.com:ryancheley/StadiaTracker.git

            source /home/stadiatracker/venv/bin/activate

            cd /home/stadiatracker/

            rm -rf /home/stadiatracker/StadiaTracker

            cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker

            cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env

            pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt

            python /home/stadiatracker/StadiaTracker/manage.py migrate

            mkdir /home/stadiatracker/StadiaTracker/static
            mkdir /home/stadiatracker/StadiaTracker/staticfiles

            python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0

            systemctl daemon-reload
            systemctl restart stadiatracker


Again, the prod.yaml file does essentially the same thing that is done in the deploy.sh from my earlier post, Automating the Deployment, except that it pulls code from my main branch on GitHub onto my production server.

name: Prod CI

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - name: deploy code
        uses: appleboy/ssh-action@v0.1.2
        with:
          host: ${{ secrets.SSH_HOST }}
          key: ${{ secrets.SSH_KEY }}
          username: ${{ secrets.SSH_USERNAME }}

          script: |
            rm -rf StadiaTracker
            git clone git@github.com:ryancheley/StadiaTracker.git

            source /home/stadiatracker/venv/bin/activate

            cd /home/stadiatracker/

            rm -rf /home/stadiatracker/StadiaTracker

            cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker

            cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env

            pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt

            python /home/stadiatracker/StadiaTracker/manage.py migrate

            mkdir /home/stadiatracker/StadiaTracker/static
            mkdir /home/stadiatracker/StadiaTracker/staticfiles

            python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0

            systemctl daemon-reload
            systemctl restart stadiatracker

The general workflow is:

  1. Create a branch on my local computer with git switch -c branch_name
  2. Push the code changes to GitHub, which kicks off the django.yaml workflow.
  3. If everything passes, I do a pull request from branch_name into dev.
  4. The pull request kicks off the dev.yaml workflow, which updates UAT.
  5. I check UAT to make sure that everything works like I expect it to (it almost always does … and when it doesn’t, it’s because I’ve mucked around with a server configuration; the configuration is the problem, not my code).
  6. I do a pull request from dev into main, which kicks off the prod.yaml workflow and updates my production server.
My next enhancement is to kick off the dev.yaml process if the tests from django.yaml all pass, i.e. do an auto-merge from branch_name into dev, but I haven’t done that yet.
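
If I were to sketch it, GitHub’s REST API has a “merge a branch” endpoint that could run as a final job once the tests pass. This is hypothetical, not something in my workflows yet; the BRANCH_NAME variable and the job wiring are assumptions:

# auto_merge.py - hypothetical sketch of the planned auto-merge step
import os

import requests

token = os.environ["GITHUB_TOKEN"]  # assumed: passed in from the workflow's secrets.GITHUB_TOKEN
branch = os.environ["BRANCH_NAME"]  # assumed: the branch that just passed CI

# GitHub's "merge a branch" endpoint merges head into base
response = requests.post(
    "https://api.github.com/repos/ryancheley/StadiaTracker/merges",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={"base": "dev", "head": branch},
)
response.raise_for_status()  # 201 = merged, 204 = already up to date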

Automating the deployment

We got everything set up, and now we want to automate the deployment.

Why would we want to do this, you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server (at some point I’ll write something up about multiple Django sites on the same server, and part of this will still apply then). How can you do it?

Well, you’ll want to write yourself some scripts!

I have a mix of Python and shell scripts set up to do this. They are a bit piecemeal, but they also allow me to run specific parts of the process without having to execute a script with ‘commented out’ pieces.

Python Scripts

  • create_server.py - creates the Droplet on Digital Ocean and kicks off the server setup

Shell Scripts

  • server-setup.sh - installs the system packages, creates the app user, and creates the needed directories
  • setup_nginx.sh - copies the gunicorn and nginx configuration files and restarts nginx
  • deploy_env_variables.sh - pushes the required environment variables to the server
  • deploy.sh - orchestrates the upload and install of the code
  • upload-code.sh - collects the code locally and copies it to the server
  • install-code.sh - moves the files into place on the server and restarts the services
The Python script create_server.py looks like this:

# create_server.py

import requests
import os
from collections import namedtuple
from operator import attrgetter
from time import sleep

Server = namedtuple('Server', 'created ip_address name')

doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN']

# Create Droplet
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {doat}',
}

data = <data_keys>
print('>>> Creating Server')
requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data)
print('>>> Server Created')
print('>>> Waiting for Server Stand up')
sleep(90)

print('>>> Getting Droplet Data')
params = (
    ('page', '1'),
    ('per_page', '10'),
)

get_droplets = requests.get('https://api.digitalocean.com/v2/droplets', headers=headers, params=params)

server_list = []

for d in get_droplets.json()['droplets']:
    server_list.append(Server(d['created_at'], d['networks']['v4'][0]['ip_address'], d['name']))

# sort the droplets so the newest one is first
server_list = sorted(server_list, key=attrgetter('created'), reverse=True)

server_ip_address = server_list[0].ip_address
db_name = os.environ['DJANGO_PG_DB_NAME']
db_username = os.environ['DJANGO_PG_USER_NAME']

# guard against accidentally re-running setup against production
if server_ip_address != <production_server_id>:
    print('>>> Run server setup')
    os.system(f'./server-setup.sh {server_ip_address} {db_name} {db_username}')
    print(f'>>> Server setup complete. You need to add {server_ip_address} to the ALLOWED_HOSTS section of your settings.py file')
else:
    print('WARNING: Running Server set up will destroy your current production server. Aborting process')

Earlier I said that I liked Digital Ocean because of its nice API for interacting with its servers (i.e. Droplets). Here we start to see some of that.

The first part of the script uses my Digital Ocean token and some input parameters to create a Droplet from the command line. The sleep(90) gives the process time to complete before I try to get the IP address. Ninety seconds is a bit longer than is needed but, I figure, better safe than sorry … I’m sure that there’s a way to ask Digital Ocean whether the just-created Droplet has an IP address, but I haven’t figured it out yet.
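
In principle the API can return a single Droplet by its id (which the create call responds with), so you could poll until a v4 address appears instead of sleeping. A minimal sketch only, assuming a droplet_id captured from the create response; the headers are the same ones the script above builds:

# Hypothetical replacement for sleep(90): poll the Droplet by id until
# Digital Ocean reports a v4 IP address. droplet_id is assumed to come
# from the JSON returned by the create call.
import requests
from time import sleep

def wait_for_ip(droplet_id, headers, timeout=180):
    for _ in range(timeout // 5):
        droplet = requests.get(
            f'https://api.digitalocean.com/v2/droplets/{droplet_id}',
            headers=headers,
        ).json()['droplet']
        v4_networks = droplet['networks'].get('v4', [])
        if v4_networks:
            return v4_networks[0]['ip_address']
        sleep(5)  # check again in five seconds
    raise TimeoutError(f'No IP address for droplet {droplet_id} after {timeout}s')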

After we create the Droplet AND it has an IP address, we pass that IP address to the bash script server-setup.sh.

# server-setup.sh

# Create the server on Digital Ocean
export SERVER=$1

# Exit if no server ip address was passed
if [[ -z "$1" ]]
then
    echo "ERROR: No value set for server ip address"
    exit 1
fi

echo -e "\n>>> Setting up $SERVER"
ssh root@$SERVER /bin/bash << EOF
    set -e

    echo -e "\n>>> Updating apt sources"
    apt-get -qq update

    echo -e "\n>>> Upgrading apt packages"
    apt-get -qq upgrade

    echo -e "\n>>> Installing apt packages"
    apt-get -qq install python3 python3-pip python3-venv tree supervisor postgresql postgresql-contrib nginx

    echo -e "\n>>> Create User to Run Web App"
    if getent passwd burningfiddle
    then
        echo ">>> User already present"
    else
        adduser --disabled-password --gecos "" burningfiddle
        echo -e "\n>>> Add newly created user to www-data"
        adduser burningfiddle www-data
    fi

    echo -e "\n>>> Make directory for code to be deployed to"
    if [[ ! -d "/home/burningfiddle/BurningFiddle" ]]
    then
        mkdir /home/burningfiddle/BurningFiddle
    else
        echo ">>> Skipping Deploy Folder creation - already present"
    fi

    echo -e "\n>>> Create VirtualEnv in this directory"
    if [[ ! -d "/home/burningfiddle/venv" ]]
    then
        python3 -m venv /home/burningfiddle/venv
    else
        echo ">>> Skipping virtualenv creation - already present"
    fi

    # I don't think i need this anymore
    echo ">>> Start and Enable gunicorn"
    systemctl start gunicorn.socket
    systemctl enable gunicorn.socket
EOF

./setup_nginx.sh $SERVER
./deploy_env_variables.sh $SERVER
./deploy.sh $SERVER

All of that stuff we did before, logging into the server and running commands, we’re now doing via a script. The script tries to keep the server in an idempotent state; that is to say, you can run it as many times as you want and you don’t get weird artifacts. (If you’re a math nerd you may have heard idempotent in Linear Algebra, describing a matrix that, when multiplied by itself, returns the original matrix. Same idea here!)
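
The check-before-create pattern is what buys that idempotence. The same idea in a few lines of Python, purely as an illustration:

# An idempotent setup step: running it twice leaves the system in the
# same state as running it once.
import os

def ensure_directory(path):
    if not os.path.isdir(path):  # only create when missing
        os.makedirs(path)
        print(f'>>> Created {path}')
    else:
        print(f'>>> Skipping {path} - already present')

ensure_directory('/home/burningfiddle/BurningFiddle')
ensure_directory('/home/burningfiddle/BurningFiddle')  # second call is a no-op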

The one thing that is new here is this part:

ssh root@$SERVER /bin/bash << EOF

A block like that says, “take everything in between here and EOF and run it, using bash, on the server I just ssh’d into.”
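
If the heredoc syntax is unfamiliar, the same pattern can be spelled out in Python: pipe a script into the standard input of a remote bash. A sketch with a hypothetical server address, not part of my actual tooling:

# A sketch of the heredoc pattern: send a script to a remote bash over
# ssh's standard input.
import subprocess

remote_script = """
set -e
echo "running on $(hostname)"
"""

subprocess.run(
    ['ssh', 'root@203.0.113.10', '/bin/bash'],  # hypothetical server IP
    input=remote_script,
    text=True,
    check=True,
)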

At the end we run 3 shell scripts:

  • setup_nginx.sh
  • deploy_env_variables.sh
  • deploy.sh

Let’s review these scripts.

The script setup_nginx.sh copies several files needed for the nginx and gunicorn services:

  • gunicorn.service
  • gunicorn.socket
  • nginx.conf

It then sets up a link between sites-available and sites-enabled for nginx and finally restarts nginx:

# setup_nginx.sh

export SERVER=$1
export sitename=burningfiddle
scp -r ../config/gunicorn.service root@$SERVER:/etc/systemd/system/
scp -r ../config/gunicorn.socket root@$SERVER:/etc/systemd/system/
scp -r ../config/nginx.conf root@$SERVER:/etc/nginx/sites-available/$sitename

ssh root@$SERVER /bin/bash << EOF

  echo -e ">>> Set up site to be linked in Nginx"
  ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled
  echo -e ">>> Restart Nginx"
  systemctl restart nginx
  echo -e ">>> Allow Nginx Full access"
  ufw allow 'Nginx Full'
EOF


The script deploy_env_variables.sh copies environment variables to the server. There are packages (and other methods) that help to manage environment variables better than this, and that is one of the enhancements I’ll be looking at (see the sketch after the script below).

This script captures the values of various environment variables (one at a time) and then passes them through to the server. It checks whether each environment variable already exists on the server and, if it doesn’t, places it in the /etc/environment file:

# deploy_env_variables.sh

export SERVER=$1

ssh root@$SERVER /bin/bash << EOF
    if [[ "\$DJANGO_SECRET_KEY" != "$DJANGO_SECRET_KEY" ]]; then
        echo "DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY" >> /etc/environment
    else
        echo ">>> Skipping DJANGO_SECRET_KEY - already present"
    fi

    if [[ "\$DJANGO_PG_PASSWORD" != "$DJANGO_PG_PASSWORD" ]]; then
        echo "DJANGO_PG_PASSWORD=$DJANGO_PG_PASSWORD" >> /etc/environment
    else
        echo ">>> Skipping DJANGO_PG_PASSWORD - already present"
    fi

    if [[ "\$DJANGO_PG_USER_NAME" != "$DJANGO_PG_USER_NAME" ]]; then
        echo "DJANGO_PG_USER_NAME=$DJANGO_PG_USER_NAME" >> /etc/environment
    else
        echo ">>> Skipping DJANGO_PG_USER_NAME - already present"
    fi

    if [[ "\$DJANGO_PG_DB_NAME" != "$DJANGO_PG_DB_NAME" ]]; then
        echo "DJANGO_PG_DB_NAME=$DJANGO_PG_DB_NAME" >> /etc/environment
    else
        echo ">>> Skipping DJANGO_PG_DB_NAME - already present"
    fi

    if [[ "\$DJANGO_DEBUG" != "$DJANGO_DEBUG" ]]; then
        echo "DJANGO_DEBUG=$DJANGO_DEBUG" >> /etc/environment
    else
        echo ">>> Skipping DJANGO_DEBUG - already present"
    fi
EOF

The deploy.sh script just sets things up and calls the final two scripts itself:

# deploy.sh

set -e
# Deploy Django project.
export SERVER=$1

./upload-code.sh
./install-code.sh

The final two scripts!

The upload-code.sh script uploads the files to the deploy folder of the server, while the install-code.sh script moves all of the files to where they need to be on the server and restarts the needed services.

# upload-code.sh

set -e

echo -e "\n>>> Copying Django project files to server."
if [[ -z "$SERVER" ]]
    echo "ERROR: No value set for SERVER."
    exit 1
echo -e "\n>>> Preparing scripts locally."
rm -rf ../../deploy/*
rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc'  ../../BurningFiddle/* ../../deploy

echo -e "\n>>> Copying files to the server."
ssh root@$SERVER "rm -rf /root/deploy/"
scp -r ../../deploy root@$SERVER:/root/

echo -e "\n>>> Finished copying Django project files to server."

And finally,

# install-code.sh

# Install Django app on server.
set -e
echo -e "\n>>> Installing Django project on server."
if [[ -z "$SERVER" ]]
then
    echo "ERROR: No value set for SERVER."
    exit 1
fi
echo $SERVER
ssh root@$SERVER /bin/bash << EOF
  set -e

  echo -e "\n>>> Activate the Virtual Environment"
  source /home/burningfiddle/venv/bin/activate

  cd /home/burningfiddle/

  echo -e "\n>>> Deleting old files"
  rm -rf /home/burningfiddle/BurningFiddle

  echo -e "\n>>> Copying new files"
  cp -r /root/deploy/ /home/burningfiddle/BurningFiddle

  echo -e "\n>>> Installing Python packages"
  pip install -r /home/burningfiddle/BurningFiddle/requirements.txt

  echo -e "\n>>> Running Django migrations"
  python /home/burningfiddle/BurningFiddle/manage.py migrate

  echo -e "\n>>> Creating Superuser"
  python /home/burningfiddle/BurningFiddle/manage.py createsuperuser --noinput --username bfadmin --email rcheley@gmail.com || true

  echo -e "\n>>> Load Initial Data"
  python /home/burningfiddle/BurningFiddle/manage.py loaddata /home/burningfiddle/BurningFiddle/fixtures/pages.json

  echo -e "\n>>> Collecting static files"
  python /home/burningfiddle/BurningFiddle/manage.py collectstatic

  echo -e "\n>>> Reloading Gunicorn"
  systemctl daemon-reload
  systemctl restart gunicorn
EOF


echo -e "\n>>> Finished installing Django project on server."

Preparing the code for deployment to Digital Ocean

OK, we’ve got our server ready for our Django App. We set up Gunicorn and Nginx. We created the user which will run our app and set up all of the folders that will be needed.

Now, we work on deploying the code!

Deploying the Code

There are 3 parts to deploying our code:

  1. Collect Locally
  2. Copy to Server
  3. Place in correct directory

Why don’t we just copy straight to the spot on the server where we want the code to finally be? Because we’ll need to restart Nginx once we’re fully deployed, and it’s easier to have that done in 2 steps than in 1.

Collect the Code Locally

My project is structured such that there is a deploy folder on the same level as my Django project folder. That is to say:

Project Structure
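
Roughly, assuming the project is called yoursite to match the commands below:

.
├── deploy/
└── yoursite/
    ├── manage.py
    └── ...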

We want to clear out any old code. To do this, we run the following from the same level that the Django project folder is in:

rm -rf deploy/*

This will remove ALL of the files and folders that were present in deploy. Next, we want to copy the data from the yoursite folder to the deploy folder:

rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc'  yoursite/* deploy

Again, we run this from the same folder. I’m using rsync here as it has a really good interface for excluding items (I’m sure the above could be done with a mix of regular expressions, but this gets the job done).
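
If you’d rather keep this step in Python, the standard library can express the same excludes. A sketch only; my scripts use the rsync above:

# Python equivalent of the rm -rf / rsync pair above, standard library only.
import shutil

shutil.rmtree('deploy', ignore_errors=True)  # clear out the old code; copytree recreates the folder
shutil.copytree(
    'yoursite',
    'deploy',
    ignore=shutil.ignore_patterns(
        'htmlcov', 'venv', '__pycache__', 'staticfiles', '*.pyc'
    ),
)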

Copy to the Server

We have the files collected, now we need to copy them to the server.

This is done in two steps. First, we remove ALL of the files in the deploy folder on the server (see the rationale above):

ssh root@$SERVER "rm -rf /root/deploy/"

Next, we use scp to securely copy the files to the server:

scp -r deploy root@$SERVER:/root/

Our files are now on the server!

Installing the Code

We have several steps to get through in order to install the code. They are:

  1. Activate the virtual environment
  2. Delete the old files
  3. Copy the new files
  4. Install the Python packages
  5. Run the Django migrations
  6. Collect the static files
  7. Reload Gunicorn

Before we can do any of this we’ll need to ssh into our server. Once that’s done, we can proceed with the steps below.

Above, we created our virtual environment in a folder called venv located in /home/yoursite/. We’ll want to activate it now (1):

source /home/yoursite/venv/bin/activate

Next, we change directory into the yoursite home directory:

cd /home/yoursite/

Now, we delete the old files from the last install (2):

rm -rf /home/yoursite/yoursite

Copy our new files (3):

cp -r /root/deploy/ /home/yoursite/yoursite

Install our Python packages (4):

pip install -r /home/yoursite/yoursite/requirements.txt

Run any migrations (5):

python /home/yoursite/yoursite/manage.py migrate

Collect static files (6):

python /home/yoursite/yoursite/manage.py collectstatic

Finally, reload Gunicorn (7):

systemctl daemon-reload
systemctl restart gunicorn

When we visit our domain, we should see our Django site.

Setting up the Server (on Digital Ocean)

The initial setup

Digital Ocean has a pretty nice API which makes it easy to automate the creation of their servers (which they call Droplets). This is nice when you’re trying to work towards automating the entire process (like I was).

I won’t jump into the automation piece just yet, but once you have your DO account set up (sign up here if you don’t have one), it’s a simple interface to set up your Droplet.

I chose the Ubuntu 18.04 LTS image with a $5 server (1GB RAM, 1 CPU, 25GB SSD space, 1000GB transfer) hosted in their San Francisco data center (SFO2)¹.

We’ve got a server … now what?

We’re going to want to update, upgrade, and install all of the (non-Python) packages for the server. For my case, that meant running the following:

apt-get update
apt-get upgrade
apt-get install python3 python3-pip python3-venv tree postgresql postgresql-contrib nginx

That’s it! We’ve now got a server that is ready to be set up for our Django project.

In the next post, I’ll walk through how to get your Domain Name to point to the Digital Ocean Server.

  1. SFO2 is disabled for new customers, and you will now need to use SFO3 unless you already have resources in SFO2; but if you’re following along, you probably don’t. What’s the difference between the two? Nothing 😁

Deploying a Django Site to Digital Ocean - A Series

Previous Efforts

When I first heard of Django, I thought it looked like a really interesting, and Pythonic, way to get a website up and running. I spent a whole weekend putting together a site locally and then, using Digital Ocean, decided to push my idea up onto a live site.

One problem that I ran into, which EVERY new Django developer will run into, was static files. I couldn’t get static files to work. No matter what I did, they were just … missing. I proceeded to spend the next few weekends trying to figure out why but, alas, I was not very good (or patient) with reading documentation, and I gave up.

Fast forward a few years: while taking the 100 Days of Code on the Web Python course from Talk Python to Me, I was able to follow along with a part of the course that pushed a Django app up to Heroku.

I wrote about that effort here. Needless to say, I was pretty pumped. But I was wondering: is there a way I can actually get a Django site to work on non-Heroku (i.e. non-PaaS) infrastructure?


While going through my Twitter timeline, I came across a retweet from TestDriven.io of Matt Segal. He has an amazing walkthrough of deploying a Django site on the hard level (i.e. using Windows). It’s a mix of blog posts and YouTube videos, and I highly recommend it. There is some NSFW language, BUT if you can get past that (and I can), it’s a great resource.

This series is meant to be a written record of what I did to implement these recommendations and suggestions, and then to push myself a bit further to expand the complexity of the app.


A list of the articles will go here. For now, here’s a rough outline of the planned posts:

The ‘Enhancements’ will be multiple follow-up posts (hopefully) as I catalog improvements made to the site. My currently planned enhancements are: