Auto Tweeting New Post
Each time I write something for this site there are several steps that I go through to make sure that the post makes its way to where people can see it.
- Run `make html` to generate the SQLite database that powers my site's search tool1
- Run `make vercel` to deploy the SQLite database to Vercel
- Run `git add <filename>` to add the post to be committed to GitHub
- Run `git commit -m <message>` to commit to GitHub
- Post to Twitter with a link to my new post

If there are more than two things to do, I'm totally going to forget to do one of them.
The above steps are all automatable, but the one I wanted to tackle first was the automated tweet. Last night I figured out how to tweet with a GitHub Action.
There were a few things to do to get the auto tweet to work:
- Find a GitHub Action in the Marketplace that did the auto tweet (or try to write one if I couldn't find one)
- Set up a Twitter app with Read and Write privileges
- Set the necessary secrets for the repo (API Key, API Key Secret, Access Token, Access Token Secret, Bearer)
- Test the GitHub Action
 
The action I chose was send-tweet-action. It's got easy-to-read documentation on what is needed. Honestly, the hardest part was getting a Twitter app set up with Read and Write privileges.
I'm still not sure how to do it, honestly. I was lucky enough that I already had an app sitting around with Read and Write from the WordPress blog I had previously, so I just regenerated the keys for that one and used them.
The last bit was just testing the action and seeing that it worked as expected. It was pretty cool running an action and then seeing a tweet in my timeline.
The TIL for this was that GitHub Actions can have conditionals. This is important because I don't want to generate a new tweet each time I commit to main. I only want that to happen when I have a new post.
To do that, you just need this in the GitHub Action:
    if: "contains(github.event.head_commit.message, '<String to Filter on>')"
In my case, the `<String to Filter on>` is `New Post:`.
The send-tweet-action has a `status` field, which is the text tweeted. I can use `github.event.head_commit.message` in the action like this:
    ${{ github.event.head_commit.message }}
Now when I have a commit message that starts with 'New Post:' against main, I'll have a tweet get sent out too!
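Putting the conditional and the `status` field together, a minimal sketch of the workflow step might look like the following. The step name and secret names are my assumptions for illustration, and I'm taking the input names from the send-tweet-action README rather than from my actual workflow file:

```yaml
# Hypothetical workflow step; secret names are assumptions
- name: Send tweet for new post
  if: "contains(github.event.head_commit.message, 'New Post:')"
  uses: ethomson/send-tweet-action@v1
  with:
    status: ${{ github.event.head_commit.message }}
    consumer-key: ${{ secrets.TWITTER_CONSUMER_API_KEY }}
    consumer-secret: ${{ secrets.TWITTER_CONSUMER_API_SECRET }}
    access-token: ${{ secrets.TWITTER_ACCESS_TOKEN }}
    access-token-secret: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
```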
This got me to thinking that I can/should automate all of these steps.
With that in mind, I'm going to work on getting the process down to just having to run a single command. Something like:
    make publish "New Post: Title of my Post https://www.ryancheley.com/yyyy/mm/dd/slug/"
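A sketch of what that `publish` target could look like is below. The target name, the `MSG` variable, and the `content/` path are all assumptions, not my actual Makefile; `make` doesn't take positional arguments directly, so passing the message as a variable is one workaround (note that recipe lines must be indented with tabs):

```makefile
# Hypothetical target: usage would be something like
#   make publish MSG="New Post: Title of my Post https://www.ryancheley.com/yyyy/mm/dd/slug/"
publish:
	$(MAKE) vercel                 # also runs `make html` under the hood
	git add content/
	git commit -m "$(MSG)"
	git push origin main           # the push fires the send-tweet GitHub Action
```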
`make vercel` actually runs `make html`, so this isn't really a step that I need to do. ↩︎
Enhancements: Using GitHub Actions to Deploy
Integrating a version control system into your development cycle is just kind of one of those things that you do, right? I use GitHub for my version control, and GitHub Actions to help with my deployment process.
There are 3 YAML files I use to get my local code deployed to my production server:
- django.yaml
 - dev.yaml
 - prod.yaml
 
Each one serves its own purpose.
django.yaml
The django.yaml file is used to run my tests and other actions on a GitHub runner. It does this in 9 distinct steps and one Postgres service.
The steps are:
- Set up Python 3.8 - setting up Python 3.8 on the Docker image provided by GitHub
- psycopg2 prerequisites - setting up `psycopg2` to use the Postgres service created
- graphviz prerequisites - setting up the requirements for graphviz, which creates an image of the relationships between the various models
- Install dependencies - installs all of my Python package requirements via pip
- Run migrations - runs the migrations for the Django app
- Load Fixtures - loads data into the database
- Lint - runs `black` on my code
- Flake8 - runs `flake8` on my code
- Run Tests - runs all of the tests to ensure they pass
 
name: Django CI
on:
  push:
    branches-ignore:
      - main
      - dev
jobs:
  build:
    runs-on: ubuntu-18.04
    services:
      postgres:
        image: postgres:12.2
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: github_actions
        ports:
          - 5432:5432
        # needed because the postgres container does not provide a healthcheck
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
    - uses: actions/checkout@v1
    - name: Set up Python 3.8
      uses: actions/setup-python@v1
      with:
        python-version: 3.8
    - uses: actions/cache@v1
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-
    - name: psycopg2 prerequisites
      run: sudo apt-get install python-dev libpq-dev
    - name: graphviz prerequisites
      run: sudo apt-get install graphviz libgraphviz-dev pkg-config
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install psycopg2
        pip install -r requirements/local.txt
    - name: Run migrations
      run: python manage.py migrate
    - name: Load Fixtures
      run: |
        python manage.py loaddata fixtures/User.json
        python manage.py loaddata fixtures/Sport.json
        python manage.py loaddata fixtures/League.json
        python manage.py loaddata fixtures/Conference.json
        python manage.py loaddata fixtures/Division.json
        python manage.py loaddata fixtures/Venue.json
        python manage.py loaddata fixtures/Team.json
    - name: Lint
      run: black . --check
    - name: Flake8
      uses: cclauss/GitHub-Action-for-Flake8@v0.5.0
    - name: Run tests
      run: coverage run -m pytest
dev.yaml
The code here does essentially the same thing that is done in the deploy.sh in my earlier post Automating the Deployment, except that it pulls code from my dev branch on GitHub onto the server. The other difference is that this is on my UAT server, not my production server, so if something goes off the rails, I don’t hose production.
name: Dev CI
on:
  pull_request:
    branches:
      - dev
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - name: deploy code
        uses: appleboy/ssh-action@v0.1.2
        with:
          host: ${{ secrets.SSH_HOST_TEST }}
          key: ${{ secrets.SSH_KEY_TEST }}
          username: ${{ secrets.SSH_USERNAME }}
          script: |
            rm -rf StadiaTracker
            git clone --branch dev git@github.com:ryancheley/StadiaTracker.git
            source /home/stadiatracker/venv/bin/activate
            cd /home/stadiatracker/
            rm -rf /home/stadiatracker/StadiaTracker
            cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker
            cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env
            pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt
            python /home/stadiatracker/StadiaTracker/manage.py migrate
            mkdir /home/stadiatracker/StadiaTracker/static
            mkdir /home/stadiatracker/StadiaTracker/staticfiles
            python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0
            systemctl daemon-reload
            systemctl restart stadiatracker
prod.yaml
Again, the code here does essentially the same thing that is done in the deploy.sh in my earlier post Automating the Deployment, except that it pulls code from my main branch on GitHub onto the server.
name: Prod CI
on:
  pull_request:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - name: deploy code
        uses: appleboy/ssh-action@v0.1.2
        with:
          host: ${{ secrets.SSH_HOST }}
          key: ${{ secrets.SSH_KEY }}
          username: ${{ secrets.SSH_USERNAME }}
          script: |
            rm -rf StadiaTracker
            git clone git@github.com:ryancheley/StadiaTracker.git
            source /home/stadiatracker/venv/bin/activate
            cd /home/stadiatracker/
            rm -rf /home/stadiatracker/StadiaTracker
            cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker
            cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env
            pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt
            python /home/stadiatracker/StadiaTracker/manage.py migrate
            mkdir /home/stadiatracker/StadiaTracker/static
            mkdir /home/stadiatracker/StadiaTracker/staticfiles
            python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0
            systemctl daemon-reload
            systemctl restart stadiatracker
The general workflow is:
- Create a branch on my local computer with `git switch -c branch_name`
- Push the code changes to GitHub, which kicks off the `django.yaml` workflow
- If everything passes, then I do a pull request from `branch_name` into `dev`
- This kicks off the `dev.yaml` workflow, which will update UAT
- I check UAT to make sure that everything works like I expect it to (it almost always does … and when it doesn’t it’s because I’ve mucked around with a server configuration, which is the problem, not my code)
- I do a pull request from `dev` to `main`, which updates my production server
My next enhancement is to kick off the `dev.yaml` process if the tests from `django.yaml` all pass, i.e. do an auto merge from `branch_name` to `dev`, but I haven’t done that yet.
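One possible shape for that enhancement is a workflow that fires on the `workflow_run` event after Django CI finishes. This is only a sketch of an approach I haven't built; the workflow name, bot identity, and merge commands are assumptions, and pushing from a runner needs credentials that this snippet glosses over:

```yaml
# Hypothetical: run after "Django CI" completes and, if it passed,
# merge the tested branch into dev (names and auth are assumptions)
name: Auto Merge to Dev
on:
  workflow_run:
    workflows: ["Django CI"]
    types: [completed]
jobs:
  merge:
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
        with:
          ref: dev
      - name: Merge tested branch into dev
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git fetch origin ${{ github.event.workflow_run.head_branch }}
          git merge --no-edit FETCH_HEAD
          git push origin dev
```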