Django and Legacy Databases
I work at a place that is heavily invested in the Microsoft tech stack: Windows Servers, C#/.NET, Angular, VB.NET, Windows workstations, Microsoft SQL Server ... etc.
When not at work, I really like working with Python and Django. I never really thought I'd be able to combine the two until I discovered the package mssql-django, which was released in alpha on Feb 18, 2021 and as a full-fledged version 1 in late July of that same year.
Ever since then I've been trying to figure out how to incorporate Django into my work life.
I'm going to use this series as an outline of how I'm working through the process of getting Django to be useful at work: the issues I run into, and the solutions I'm (hopefully) able to find.
I'm also going to use this as a more in-depth analysis of an accompanying talk I'm hoping to give at DjangoCon 2022 later this year.
I'm going to break this down into a several-part series that will roughly align with the talk I'm hoping to give. The parts will be:
- Introduction/Background
- Overview of the Project
- Wiring up the Project Models
- Database Routers
- Django Admin Customization
- Admin Documentation
- Review & Resources
My intention is to publish one part every week or so. Sometimes the posts will come fast, and other times not. This will mostly be due to how well I'm doing with writing up my findings and/or getting screenshots that will work.
The tool set I'll be using is:
- docker
- docker-compose
- Django
- MS SQL
- SQLite
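To give a flavor of what the MS SQL side of that tool set looks like, here is a rough sketch of a DATABASES entry using mssql-django. The server, database, user, password variable, and driver values are placeholders for illustration; check the mssql-django documentation for the options your environment needs.
import os

DATABASES = {
    "default": {
        "ENGINE": "mssql",
        "NAME": "legacy_db",                      # placeholder database name
        "USER": "django_user",                    # placeholder login
        "PASSWORD": os.environ["MSSQL_PASSWORD"],  # placeholder env var
        "HOST": "sqlserver.example.com",          # placeholder server
        "PORT": "1433",
        "OPTIONS": {"driver": "ODBC Driver 17 for SQL Server"},
    },
}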
I made a Slackbot!
Building my first Slack Bot
In November of 2021, after watching a video by Mason Egger, I had added a project to my OmniFocus database called "Build a Slackbot". I had hoped that I would be able to spend some time on it over the holidays, but I was never able to really find the time.
A few weeks ago, Bob Belderbos tweeted:
If you were to build a Slack bot, what would it do?
— Bob Belderbos (@bbelderbos) February 2, 2022
And I responded
I work in US Healthcare where there are a lot of Acronyms (many of which are used in tech but have different meaning), so my slack bot would allow a user to enter an acronym and return what it means, i.e., CMS = Centers for Medicare and Medicaid Services.
— The B Is Silent (@ryancheley) February 2, 2022
I didn't really have any more time now than I'd had over the holidays, but Bob asking and me answering pushed me to actually write the darned thing.
I think one of the problems I encountered was what backend / tech stack to use. I'm familiar with Django, but going from 0 to something in production has a few steps and although I know how to do them ... I just felt ~overwhelmed~ by the prospect.
I felt equally ~overwhelmed~ by the prospect of trying FastAPI or Flask to create the API, because I'm not as familiar with their deployment stories.
Another thing that was different this time around was that I had worked on a Django cookiecutter of my own, and it was 'good enough' to try out. So I did.
I ran into a few problems while working with my Django cookiecutter, but I fixed them and then dove head-first into writing the Slack Bot.
The model
The initial implementation of the model was very simple ... just 2 fields:
class Acronym(models.Model):
    acronym = models.CharField(max_length=8)
    definition = models.TextField()

    def save(self, *args, **kwargs):
        self.acronym = self.acronym.lower()
        super(Acronym, self).save(*args, **kwargs)

    class Meta:
        unique_together = ("acronym", "definition")
        ordering = ["acronym"]

    def __str__(self) -> str:
        return self.acronym
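One detail worth noting: save() lowercases the acronym before storing it, and the API view shown later looks it up with acronym__iexact, so lookups end up case-insensitive. A quick, hypothetical illustration from python manage.py shell (the app name here is an assumption):
from acronyms.models import Acronym  # assumption: the app is called "acronyms"

Acronym.objects.create(acronym="CMS", definition="Centers for Medicare and Medicaid Services")

# the overridden save() stored it as "cms"
Acronym.objects.filter(acronym="cms").exists()          # True
Acronym.objects.filter(acronym__iexact="CMS").exists()  # True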
Next I created the API using Django REST Framework, with a single serializer:
class AcronymSerializer(serializers.ModelSerializer):
    class Meta:
        model = Acronym
        fields = [
            "id",
            "acronym",
            "definition",
        ]
which is used by a single view
class AcronymViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = AcronymSerializer
    queryset = Acronym.objects.all()

    def get_object(self):
        queryset = self.filter_queryset(self.get_queryset())
        print(self.kwargs["acronym"])
        acronym = self.kwargs["acronym"]
        obj = get_object_or_404(queryset, acronym__iexact=acronym)
        return obj
and exposed it on two endpoints:
from django.urls import include, path

from .views import AcronymViewSet, AddAcronym, CountAcronyms, Events

app_name = "api"

user_list = AcronymViewSet.as_view({"get": "list"})
user_detail = AcronymViewSet.as_view({"get": "retrieve"})

urlpatterns = [
    path("", AcronymViewSet.as_view({"get": "list"}), name="acronym-list"),
    path("<acronym>/", AcronymViewSet.as_view({"get": "retrieve"}), name="acronym-detail"),
    path("api-auth/", include("rest_framework.urls", namespace="rest_framework")),
]
Getting the data
At my joby-job we use Jira and Confluence. In one of our Confluence spaces we have a Glossary page which includes nearly 200 acronyms. I had two choices:
- Copy and Paste the acronym and definition for each item
- Use Python to get the data
I used Python to get the data, via a Jupyter Notebook, but I didn't seem to save the code anywhere (🤦🏻), so I can't include it here. But trust me, it was 💯.
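Since that notebook is gone, here is a hypothetical sketch of how the glossary could be pulled with the Confluence REST API and BeautifulSoup; the base URL, page ID, credentials, and two-column table layout are all assumptions.
import os

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.atlassian.net/wiki"  # placeholder Confluence site
PAGE_ID = "123456"                               # placeholder glossary page id
auth = (os.environ["CONFLUENCE_USER"], os.environ["CONFLUENCE_API_TOKEN"])

# fetch the page body in Confluence "storage" (HTML) format
resp = requests.get(
    f"{BASE_URL}/rest/api/content/{PAGE_ID}",
    params={"expand": "body.storage"},
    auth=auth,
)
html = resp.json()["body"]["storage"]["value"]

# assume a two-column table of acronym / definition rows
acronyms = []
for row in BeautifulSoup(html, "html.parser").find_all("tr"):
    cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
    if len(cells) >= 2:
        acronyms.append({"acronym": cells[0], "definition": cells[1]})
From there, each row could be handed to Acronym.objects.get_or_create() to load the table.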
Setting up the Slack Bot
Although I had watched Mason's video, since I was building this with Django I used this article as a guide in the development of the code below.
The code from my views.py is below:
# imports assumed for this excerpt, based on what the code below uses
import ssl

import requests
import slack
from django.conf import settings
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE

SLACK_VERIFICATION_TOKEN = getattr(settings, "SLACK_VERIFICATION_TOKEN", None)
SLACK_BOT_USER_TOKEN = getattr(settings, "SLACK_BOT_USER_TOKEN", None)
CONFLUENCE_LINK = getattr(settings, "CONFLUENCE_LINK", None)

client = slack.WebClient(SLACK_BOT_USER_TOKEN, ssl=ssl_context)


class Events(APIView):
    def post(self, request, *args, **kwargs):
        slack_message = request.data
        if slack_message.get("token") != SLACK_VERIFICATION_TOKEN:
            return Response(status=status.HTTP_403_FORBIDDEN)

        # verification challenge
        if slack_message.get("type") == "url_verification":
            return Response(data=slack_message, status=status.HTTP_200_OK)

        # greet bot
        if "event" in slack_message:
            event_message = slack_message.get("event")

            # ignore bot's own message
            if event_message.get("subtype"):
                return Response(status=status.HTTP_200_OK)

            # process user's message
            user = event_message.get("user")
            text = event_message.get("text")
            channel = event_message.get("channel")
            url = f"https://slackbot.ryancheley.com/api/{text}/"
            response = requests.get(url).json()
            definition = response.get("definition")
            if definition:
                message = f"The acronym '{text.upper()}' means: {definition}"
            else:
                confluence = CONFLUENCE_LINK + f'/dosearchsite.action?cql=siteSearch+~+"{text}"'
                confluence_link = f"<{confluence}|Confluence>"
                message = f"I'm sorry <@{user}> I don't know what *{text.upper()}* is :shrug:. Try checking {confluence_link}."

            if user != "U031T0UHLH1":
                client.chat_postMessage(
                    blocks=[{"type": "section", "text": {"type": "mrkdwn", "text": message}}], channel=channel
                )
            return Response(status=status.HTTP_200_OK)
        return Response(status=status.HTTP_200_OK)
Essentially what the Slack Bot does is take the request.data['text'] and check it against the DRF API endpoint to see if there is a matching Acronym.
If there is, then it returns the acronym and its definition.
If there isn't, you get a message saying the bot isn't sure what you're looking for, but that maybe Confluence1 can help, along with a link to our Confluence search page.
The last thing you'll notice is that if the User has a specific ID it won't respond with a message. That's because in my initial testing I just had the Slack Bot replying to the user saying 'Hi' with a 'Hi' back to the user.
I had a missing bit of logic though, so once you said hi to the Slack Bot, it would reply back 'Hi' and then keep replying 'Hi' because it was talking to itself. It was comical to see in real time 😂.
Using ngrok to test it locally
ngrok is a great tool for taking a local URL, like localhost:8000/api/endpoint, and exposing it on the internet with a URL like https://a123-45-678-901-234.ngrok.io/api/endpoint. This allows you to test your local code and see any issues that might arise when it's pushed to production.
As I mentioned above, the Slack Bot continually said "Hi" to itself in my initial testing. Since I was running ngrok to serve up my local server, I was able to stop the infinite loop by stopping my local web server. This would have been a little more challenging if I had had to push my code to an actual web server first and then test.
Conclusion
This was such a fun project to work on, and I'm really glad that Bob tweeted asking what Slack Bot we would build.
That gave me the final push to actually build it.
- You'll notice that I'm using an environment variable to define the Confluence link and you may wonder why. It's mostly to keep the actual Confluence link used at work non-public and not for any other reason 🤷🏻 ↩
djhtml and justfile
I had read about a project called djhtml and wanted to use it on one of my projects. The documentation is really good for adding it to a pre-commit config, but I wasn't sure what I needed to do to just run it on the command line.
It took a bit of googling, but I was finally able to get the right incantation to run it on my templates:
djhtml -i $(find templates -name '*.html' -print)
But of course, because I have the memory of a goldfish and this is more than three commands to remember to string together, instead of telling myself I would remember it I simply added it to a justfile and now have this recipe:
# applies djhtml linting to templates
djhtml:
    djhtml -i $(find templates -name '*.html' -print)
This means that I can now run just djhtml and apply djhtml's linting to my templates.
Pretty darn cool if you ask me. But then I got to thinking, I can make this a bit more general for 'linting' type activities. I include all of these in my pre-commit config, but I figured, what the heck, might as well have a just recipe for all of them!
So I refactored the recipe to be this:
# applies linting to project (black, djhtml, flake8)
lint:
    djhtml -i $(find templates -name '*.html' -print)
    black .
    flake8 .
And now I can run all of these linting-style tools with a single command: just lint.
Contributing to django-sql-dashboard
Last Saturday (July 3rd), while on vacation, I dubbed the day “Security Update Saturday”. I took the opportunity to review all of the GitHub bot alerts about out-of-date packages and make the updates I needed to.
This included updating django-sql-dashboard to version 1.0 … which I was really excited about. It included two things I was eager to see:
- Implemented a new column cog menu, with options for sorting, counting distinct items and counting by values. #57
- Admin change list view now only shows dashboards the user has permission to edit. Thanks, Atul Varma. #130
I made the updates on my site StadiaTracker.com using my normal workflow:
- Make the change locally on my MacBook Pro
- Run the tests
- Push to UAT
- Push to PROD
The next day, on July 4th, I got the following error message via my error logging:
Internal Server Error: /dashboard/games-seen-in-person/
ProgrammingError at /dashboard/games-seen-in-person/
could not find array type for data type information_schema.sql_identifier
So I copied the url /dashboard/games-seen-in-person/ to see if I could replicate the issue as an authenticated user and sure enough, I got a 500 server error.
Troubleshooting process
The first thing I did was to fire up the local version and check the url there. Oddly enough, it worked without issue.
OK … well that’s odd. What are the differences between the local version and the uat / prod version?
The local version is running on macOS 10.15.7 while the uat / prod versions are running Ubuntu 18.04. That could be one source of the issue.
The local version is running Postgres 13.2 while the uat / prod versions are running Postgres 10.17
OK, two differences. Since the error is could not find array type for data type information_schema.sql_identifier, I'm going to start by taking a look at the differences between the Postgres versions.
First, I looked at the Change Log to see what changed between version 0.16 and version 1.0. Nothing jumped out at me, so I looked at the diff of several files between the two versions, looking specifically for information_schema.sql_identifier, which didn't bring up anything.
Next I checked for either information_schema or sql_identifier and found a change in the views.py file. On line 151 (version 0.16) this change was made:
string_agg(column_name, ', ' order by ordinal_position) as columns
to this:
array_to_json(array_agg(column_name order by ordinal_position)) as columns
Next, I extracted the entire SQL statement from the views.py file to run in Postgres on the UAT server:
with visible_tables as (
    select table_name
    from information_schema.tables
    where table_schema = 'public'
    order by table_name
),
reserved_keywords as (
    select word
    from pg_get_keywords()
    where catcode = 'R'
)
select
    information_schema.columns.table_name,
    array_to_json(array_agg(column_name order by ordinal_position)) as columns
from
    information_schema.columns
join
    visible_tables on
        information_schema.columns.table_name = visible_tables.table_name
where
    information_schema.columns.table_schema = 'public'
group by
    information_schema.columns.table_name
order by
    information_schema.columns.table_name
Running this generated the same error I was seeing from the logs!
Next, I picked apart the various select statements, testing each one to see what failed, and ended on this one:
select information_schema.columns.table_name,
array_to_json(array_agg(column_name order by ordinal_position)) as columns
from information_schema.columns
Which generated the same error message. Great!
In order to determine how to proceed next I googled sql_identifier to see what it was. Turns out it’s a field type in Postgres! (I’ve been working in MSSQL for more than 10 years and, as far as I know, this isn’t a field type over there, so I learned something.)
Further, there were changes made to that field type in Postgres 12!
OK, since there were changes made to that field type in Postgres 12, I’ll probably need to cast the field to another type that won’t fail.
That led me to try this:
select information_schema.columns.table_name,
array_to_json(array_agg(cast(column_name as text) order by ordinal_position)) as columns
from information_schema.columns
Which returned a value without error!
Submitting the updated code
With the solution in hand, I read the Contribution Guide and submitted my patch. And the most awesome part? In less than an hour Simon Willison (the project’s maintainer) had replied back and merged my code!
And then, the icing on the cake was getting a shout out in a post that Simon wrote up about the update that I submitted!
Holy smokes that was sooo cool.
I love solving problems, and I love writing code, so this kind of stuff just really makes my day.
Now, I’ve contributed to an open source project (that makes 3 now!) and the issue with the /dashboard/ has been fixed.
How does my Django site connect to the internet anyway?
I created a Django site to troll my cousin Barry who is a big San Diego Padres fan. Their Shortstop is a guy called Fernando Tatis Jr. and he’s really good. Like really good. He’s also young, and arrogant, and is everything an old dude like me doesn’t like about the ‘new generation’ of ball players that are changing the way the game is played.
In all honesty though, it’s fun to watch him play (anyone but the Dodgers).
The thing about him though, is that while he’s really good at the plate, he’s less good at playing defense. He currently leads the league in errors. Not just for all shortstops, but for ALL players!
Anyway, back to the point. I made this Django site called Does Tatis Jr Have an Error Today? It is a simple site that only does one thing ... tells you if Tatis Jr has made an error today. If he hasn’t, then it says No, and if he has, then it says Yes.
It’s a dumb site that doesn’t do anything else. At all.
But, what it did do was lead me down a path to answer the question, “How does my site connect to the internet anyway?”
Seems like a simple enough question to answer, and it is, but it wasn’t really what I thought when I started.
How it works
I use a MacBook Pro to work on the code. I then deploy it to a Digital Ocean server using GitHub Actions. But they say, a picture is worth a thousand words, so here's a chart of the workflow:
This shows the development cycle, but that doesn’t answer the question, how does the site connect to the internet!
How is it that when I go to the site, I see anything? I thought I understood it, and when I tried to actually draw it out, turns out I didn't!
After a bit of Googling, I found this and it helped me to create this:
My site runs on an Ubuntu 18.04 server using Nginx as a proxy server. Nginx determines if the request is for a static asset (a CSS file, for example) or a dynamic one (something served up by the Django app, like answering whether Tatis Jr. has an error today).
If the request is static, then Nginx just gets the static file and serves it. If it’s a dynamic request, it hands it off to Gunicorn, which then interacts with the Django app.
So, what actually handles the HTTP request? From the serverfault.com answer above:
[T]he simple answer is Gunicorn. The complete answer is both Nginx and Gunicorn handle the request. Basically, Nginx will receive the request and if it's a dynamic request (generally based on URL patterns) then it will give that request to Gunicorn, which will process it, and then return a response to Nginx which then forwards the response back to the original client.
In my head, I thought that Nginx was ONLY there to handle the static requests (and it is), but I wasn’t clear on how dynamic requests were handled ... drawing this out really made me stop and ask, “Wait, how DOES that actually work?”
Now I know, and hopefully you do too!
Notes:
These diagrams are generated using the amazing library Diagrams. The code used to generate them is here.
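As a rough sketch (not the exact code linked above), a Diagrams script for the request flow might look something like this:
from diagrams import Diagram
from diagrams.onprem.client import User
from diagrams.onprem.network import Nginx
from diagrams.programming.framework import Django

# renders request_flow.png in the current directory
with Diagram("Request flow", show=False):
    User("Browser") >> Nginx("nginx") >> Django("Gunicorn + Django app")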
Automating the deployment
We got everything set up, and now we want to automate the deployment.
Why would we want to do this you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server (at some point I’ll write something up about multiple Django sites on the same server, and part of this will still apply then). How can you do it?
Well you’ll want to write yourself some scripts!
I have a mix of Python and shell scripts set up to do this. They are a bit piecemeal, but they also allow me to run specific parts of the process without having to try and execute a script with ‘commented-out’ pieces.
Python Scripts
create_server.py
destroy_droplet.py
Shell Scripts
copy_for_deploy.sh
create_db.sh
create_server.sh
deploy.sh
deploy_env_variables.sh
install-code.sh
setup-server.sh
setup_nginx.sh
setup_ssl.sh
super.sh
upload-code.sh
The Python script create_server.py looks like this:
# create_server.py
import requests
import os
from collections import namedtuple
from operator import attrgetter
from time import sleep

Server = namedtuple('Server', 'created ip_address name')

doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN']

# Create Droplet
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {doat}',
}

data = <data_keys>

print('>>> Creating Server')
requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data)
print('>>> Server Created')

print('>>> Waiting for Server Stand up')
sleep(90)

print('>>> Getting Droplet Data')
params = (
    ('page', '1'),
    ('per_page', '10'),
)
get_droplets = requests.get('https://api.digitalocean.com/v2/droplets', headers=headers, params=params)

server_list = []
for d in get_droplets.json()['droplets']:
    server_list.append(Server(d['created_at'], d['networks']['v4'][0]['ip_address'], d['name']))

server_list = sorted(server_list, key=attrgetter('created'), reverse=True)
server_ip_address = server_list[0].ip_address

db_name = os.environ['DJANGO_PG_DB_NAME']
db_username = os.environ['DJANGO_PG_USER_NAME']

if server_ip_address != <production_server_id>:
    print('>>> Run server setup')
    os.system(f'./setup-server.sh {server_ip_address} {db_name} {db_username}')
    print(f'>>> Server setup complete. You need to add {server_ip_address} to the ALLOWED_HOSTS section of your settings.py file ')
else:
    print('WARNING: Running Server set up will destroy your current production server. Aborting process')
Earlier I said that I liked Digital Ocean because of its nice API for interacting with its servers (i.e., Droplets). Here we start to see some of that.
The first part of the script uses my Digital Ocean token and some input parameters to create a Droplet via the command line. The sleep(90) allows the process to complete before I try to get the IP address. Ninety seconds is a bit longer than needed, but I figure better safe than sorry … I’m sure that there’s a way to ask DO whether the just-created droplet has an IP address, but I haven’t figured it out yet.
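For what it's worth, one way to avoid the fixed sleep(90) would be to poll the droplet until DigitalOcean reports an IP address. This is a sketch based on my reading of the v2 API (the endpoint shape, status values, and field names should be double-checked against their docs), with an illustrative payload standing in for the <data_keys> placeholder above:
import os
import time

import requests

headers = {"Authorization": f"Bearer {os.environ['DIGITAL_OCEAN_ACCESS_TOKEN']}"}

# illustrative create payload; stands in for the <data_keys> placeholder above
data = {"name": "my-droplet", "region": "sfo3", "size": "s-1vcpu-1gb", "image": "ubuntu-18-04-x64"}

# the create response includes the new droplet's id
create = requests.post("https://api.digitalocean.com/v2/droplets", headers=headers, json=data)
droplet_id = create.json()["droplet"]["id"]

ip_address = None
while ip_address is None:
    time.sleep(10)
    droplet = requests.get(
        f"https://api.digitalocean.com/v2/droplets/{droplet_id}", headers=headers
    ).json()["droplet"]
    v4_networks = droplet["networks"]["v4"]
    if droplet["status"] == "active" and v4_networks:
        ip_address = v4_networks[0]["ip_address"]

print(f">>> Droplet is up at {ip_address}")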
After we create the droplet AND it has an IP address, we pass it to the bash script server-setup.sh.
# server-setup.sh
#!/bin/bash
# Create the server on Digital Ocean
export SERVER=$1
# Make sure a server IP address was passed as the 1st argument
if [[ -z "$1" ]]
then
echo "ERROR: No value set for server ip address"
exit 1
fi
echo -e "\n>>> Setting up $SERVER"
ssh root@$SERVER /bin/bash << EOF
set -e
echo -e "\n>>> Updating apt sources"
apt-get -qq update
echo -e "\n>>> Upgrading apt packages"
apt-get -qq upgrade
echo -e "\n>>> Installing apt packages"
apt-get -qq install python3 python3-pip python3-venv tree supervisor postgresql postgresql-contrib nginx
echo -e "\n>>> Create User to Run Web App"
if getent passwd burningfiddle
then
echo ">>> User already present"
else
adduser --disabled-password --gecos "" burningfiddle
echo -e "\n>>> Add newly created user to www-data"
adduser burningfiddle www-data
fi
echo -e "\n>>> Make directory for code to be deployed to"
if [[ ! -d "/home/burningfiddle/BurningFiddle" ]]
then
mkdir /home/burningfiddle/BurningFiddle
else
echo ">>> Skipping Deploy Folder creation - already present"
fi
echo -e "\n>>> Create VirtualEnv in this directory"
if [[ ! -d "/home/burningfiddle/venv" ]]
then
python3 -m venv /home/burningfiddle/venv
else
echo ">>> Skipping virtualenv creation - already present"
fi
# I don't think i need this anymore
echo ">>> Start and Enable gunicorn"
systemctl start gunicorn.socket
systemctl enable gunicorn.socket
EOF
./setup_nginx.sh $SERVER
./deploy_env_variables.sh $SERVER
./deploy.sh $SERVER
All of that stuff we did before by logging into the server and running commands, we’re now doing via a script. What the above does is attempt to keep the server in an idempotent state (that is to say, you can run it as many times as you want and you don’t get weird artifacts … if you’re a math nerd you may have heard idempotent in linear algebra, describing a matrix that, multiplied by itself, returns the original matrix … same idea here!).
The one thing that is new here is this part:
ssh root@$SERVER /bin/bash << EOF
...
EOF
A block like that says, “take everything between the EOF markers and run it on the server I just ssh’d into, using bash.”
At the end we run 3 shell scripts:
setup_nginx.sh
deploy_env_variables.sh
deploy.sh
Let’s review these scripts.
The script setup_nginx.sh copies several files needed for the nginx service:
gunicorn.service
gunicorn.socket
nginx.conf
It then sets up a link between sites-available and sites-enabled for nginx and finally restarts nginx:
# setup_nginx.sh
export SERVER=$1
export sitename=burningfiddle
scp -r ../config/gunicorn.service root@$SERVER:/etc/systemd/system/
scp -r ../config/gunicorn.socket root@$SERVER:/etc/systemd/system/
scp -r ../config/nginx.conf root@$SERVER:/etc/nginx/sites-available/$sitename
ssh root@$SERVER /bin/bash << EOF
echo -e ">>> Set up site to be linked in Nginx"
ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled
echo -e ">>> Restart Nginx"
systemctl restart nginx
echo -e ">>> Allow Nginx Full access"
ufw allow 'Nginx Full'
EOF
The script deploy_env_variables.sh copies environment variables to the server. There are packages (and other methods) that help to manage environment variables better than this, and that is one of the enhancements I’ll be looking at (a rough sketch of one option follows the script below).
This script captures the values of various environment variables (one at a time) and then passes them through to the server. It then checks to see whether these environment variables already exist on the server and, if not, places them in the /etc/environment file:
export SERVER=$1
# capture the local values of the environment variables to push to the server
DJANGO_SECRET_KEY=$(printenv DJANGO_SECRET_KEY)
DJANGO_PG_PASSWORD=$(printenv DJANGO_PG_PASSWORD)
DJANGO_PG_USER_NAME=$(printenv DJANGO_PG_USER_NAME)
DJANGO_PG_DB_NAME=$(printenv DJANGO_PG_DB_NAME)
DJANGO_SUPERUSER_PASSWORD=$(printenv DJANGO_SUPERUSER_PASSWORD)
DJANGO_DEBUG=False
ssh root@$SERVER /bin/bash << EOF
if [[ "\$DJANGO_SECRET_KEY" != "$DJANGO_SECRET_KEY" ]]
then
echo "DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY" >> /etc/environment
else
echo ">>> Skipping DJANGO_SECRET_KEY - already present"
fi
if [[ "\$DJANGO_PG_PASSWORD" != "$DJANGO_PG_PASSWORD" ]]
then
echo "DJANGO_PG_PASSWORD=$DJANGO_PG_PASSWORD" >> /etc/environment
else
echo ">>> Skipping DJANGO_PG_PASSWORD - already present"
fi
if [[ "\$DJANGO_PG_USER_NAME" != "$DJANGO_PG_USER_NAME" ]]
then
echo "DJANGO_PG_USER_NAME=$DJANGO_PG_USER_NAME" >> /etc/environment
else
echo ">>> Skipping DJANGO_PG_USER_NAME - already present"
fi
if [[ "\$DJANGO_PG_DB_NAME" != "$DJANGO_PG_DB_NAME" ]]
then
echo "DJANGO_PG_DB_NAME=$DJANGO_PG_DB_NAME" >> /etc/environment
else
echo ">>> Skipping DJANGO_PG_DB_NAME - already present"
fi
if [[ "\$DJANGO_DEBUG" != "$DJANGO_DEBUG" ]]
then
echo "DJANGO_DEBUG=$DJANGO_DEBUG" >> /etc/environment
else
echo ">>> Skipping DJANGO_DEBUG - already present"
fi
EOF
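As for the “better way” mentioned above, here is a rough sketch using django-environ (my assumption about which package; any similar tool would do), where settings.py reads values from a .env file instead of /etc/environment:
# settings.py (sketch)
import environ

env = environ.Env(DJANGO_DEBUG=(bool, False))
environ.Env.read_env()  # reads a .env file; a path can be passed explicitly

SECRET_KEY = env("DJANGO_SECRET_KEY")
DEBUG = env("DJANGO_DEBUG")
DATABASES = {"default": env.db("DATABASE_URL")}  # e.g. postgres://user:pass@host/dbname
Deployment would then only need to copy a .env file alongside the code instead of appending to /etc/environment.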
The deploy.sh script calls two scripts itself:
# deploy.sh
#!/bin/bash
set -e
# Deploy Django project.
export SERVER=$1
#./scripts/backup-database.sh
./upload-code.sh
./install-code.sh
The final two scripts!
The upload-code.sh script uploads the files to the deploy folder on the server, while the install-code.sh script moves all of the files to where they need to be on the server and restarts any services.
# upload-code.sh
#!/bin/bash
set -e
echo -e "\n>>> Copying Django project files to server."
if [[ -z "$SERVER" ]]
then
echo "ERROR: No value set for SERVER."
exit 1
fi
echo -e "\n>>> Preparing scripts locally."
rm -rf ../../deploy/*
rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc' ../../BurningFiddle/* ../../deploy
echo -e "\n>>> Copying files to the server."
ssh root@$SERVER "rm -rf /root/deploy/"
scp -r ../../deploy root@$SERVER:/root/
echo -e "\n>>> Finished copying Django project files to server."
And finally,
# install-code.sh
#!/bin/bash
# Install Django app on server.
set -e
echo -e "\n>>> Installing Django project on server."
if [[ -z "$SERVER" ]]
then
echo "ERROR: No value set for SERVER."
exit 1
fi
echo $SERVER
ssh root@$SERVER /bin/bash << EOF
set -e
echo -e "\n>>> Activate the Virtual Environment"
source /home/burningfiddle/venv/bin/activate
cd /home/burningfiddle/
echo -e "\n>>> Deleting old files"
rm -rf /home/burningfiddle/BurningFiddle
echo -e "\n>>> Copying new files"
cp -r /root/deploy/ /home/burningfiddle/BurningFiddle
echo -e "\n>>> Installing Python packages"
pip install -r /home/burningfiddle/BurningFiddle/requirements.txt
echo -e "\n>>> Running Django migrations"
python /home/burningfiddle/BurningFiddle/manage.py migrate
echo -e "\n>>> Creating Superuser"
python /home/burningfiddle/BurningFiddle/manage.py createsuperuser --noinput --username bfadmin --email rcheley@gmail.com || true
echo -e "\n>>> Load Initial Data"
python /home/burningfiddle/BurningFiddle/manage.py loaddata /home/burningfiddle/BurningFiddle/fixtures/pages.json
echo -e "\n>>> Collecting static files"
python /home/burningfiddle/BurningFiddle/manage.py collectstatic
echo -e "\n>>> Reloading Gunicorn"
systemctl daemon-reload
systemctl restart gunicorn
EOF
echo -e "\n>>> Finished installing Django project on server."
Logging in a Django App
Per the Django Documentation you can set up
A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle.
In order to set this up you need to include something like this in your settings.py file:
ADMINS = [
    ('John', 'john@example.com'),
    ('Mary', 'mary@example.com'),
]
The difficulties I always ran into were:
- How to set up the AdminEmailHandler
- How to set up a way to actually email from the Django Server
Again, per the Django Documentation:
Django provides one log handler in addition to those provided by the Python logging module
Reading through the documentation didn’t really help me all that much. The docs show the following example:
'handlers': {
    'mail_admins': {
        'level': 'ERROR',
        'class': 'django.utils.log.AdminEmailHandler',
        'include_html': True,
    }
},
That’s great, but there’s not a direct link (that I could find) to the example of how to configure the logging in that section. It is instead at the VERY bottom of the documentation page in the Contents section in the Configured logging > Examples section ... and you really need to know that you have to look for it!
The important thing to do is to include the above in the appropriate LOGGING setting, like this:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
        }
    },
}
Sending an email with Logging information
We’ve got the logging and it will be sent via email, but there’s no way for the email to get sent out yet!
In order to accomplish this I use SendGrid. No real reason other than that’s what I’ve used in the past.
There are great tutorials online for how to get SendGrid integrated with Django, so I won’t rehash that here. I’ll just drop in the settings I used in my settings.py:
SENDGRID_API_KEY = env("SENDGRID_API_KEY")
EMAIL_HOST = "smtp.sendgrid.net"
EMAIL_HOST_USER = "apikey"
EMAIL_HOST_PASSWORD = SENDGRID_API_KEY
EMAIL_PORT = 587
EMAIL_USE_TLS = True
One final thing I needed to do was to update the email address that was being used to send the email. By default it uses root@localhost, which isn’t ideal.
You can override this by setting:
SERVER_EMAIL = "myemail@mydomain.tld"
With those three settings, everything should just work.
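One quick sanity check I like (not something the docs prescribe) is to send a test message from python manage.py shell to confirm that ADMINS and the SendGrid settings are wired up:
from django.core.mail import mail_admins

# goes to everyone listed in ADMINS using the EMAIL_* settings above
mail_admins("Logging test", "If this arrives, AdminEmailHandler emails should work too.")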
CBV - PasswordChangeDoneView
From Classy Class Based Views PasswordChangeDoneView
Render a template. Pass keyword arguments from the URLconf to the context.
Attributes
- template_name: Much like the LogoutView, the default view is the Django skin. Create your own password_change_done.html file to keep the user experience consistent across the site.
- title: the default uses the function gettext_lazy() and passes the string ‘Password change successful’. The function gettext_lazy() will translate the text into the local language if a translation is available. I’d just keep the default on this.
Example
views.py
class myPasswordChangeDoneView(PasswordChangeDoneView):
    pass
urls.py
path('password_change_done_view/', views.myPasswordChangeDoneView.as_view(), name='password_change_done_view'),
password_change_done.html
{% extends "base.html" %}
{% load i18n %}
{% block content %}
<h1>
{% block title %}
{{ title }}
{% endblock %}
</h1>
<p>{% trans "Password changed" %}</p>
{% endblock %}
settings.py
LOGIN_URL = '/<app_name>/login_view/'
The above assumes that you have this set up in your urls.py.
Special Notes
You need to set the LOGIN_URL value in your settings.py. It defaults to /accounts/login/. If that path isn’t valid you’ll get a 404 error.
Diagram
A visual representation of how PasswordChangeDoneView
is derived can be seen here:
Conclusion
Again, not much to do here. Let Django do all of the heavy lifting, but be mindful of the needed work in settings.py and the new template you’ll need/want to create.
CBV - PasswordChangeView
From Classy Class Based Views PasswordChangeView
A view for displaying a form and rendering a template response.
Attributes
- form_class: The form that will be used by the template created. Defaults to Django’s PasswordChangeForm.
- success_url: If you’ve created your own custom PasswordChangeDoneView then you’ll need to update this. The default is to use Django’s, but unless your top-level urls.py has a URL named password_change_done you’ll get an error.
- title: defaults to ‘Password Change’ and is translated into the local language.
Example
views.py
class myPasswordChangeView(PasswordChangeView):
    success_url = reverse_lazy('rango:password_change_done_view')
urls.py
path('password_change_view/', views.myPasswordChangeView.as_view(), name='password_change_view'),
password_change_form.html
{% extends "base.html" %}
{% load i18n %}
{% block content %}
<h1>
{% block title %}
{{ title }}
{% endblock %}
</h1>
<p>{% trans "Password changed" %}</p>
{% endblock %}
Diagram
A visual representation of how PasswordChangeView
is derived can be seen here:
Conclusion
The only thing to keep in mind here is the success_url, which will most likely need to be set based on the application you’ve written. If you get an error about not being able to use reverse to find your template, that’s the issue.
CBV - LoginView
From Classy Class Based Views LoginView
Display the login form and handle the login action.
Attributes
- authentication_form: Allows you to subclass AuthenticationForm if needed. You would want to do this IF you need other fields besides username and password for login OR you want to implement other logic than just account creation, i.e. account verification must be done as well. See the example by Vitor Freitas for more details.
- form_class: The form that will be used by the template created. Defaults to Django’s AuthenticationForm.
- redirect_authenticated_user: If the user is logged in, then when they attempt to go to your login page it will redirect them to the LOGIN_REDIRECT_URL configured in your settings.py.
- redirect_field_name: similar idea to updating what the next field will be from the DetailView. If this is specified then you’ll most likely need to create a custom login template.
- template_name: The default value for this is registration/login.html, i.e. a file called login.html in the registration directory of the templates directory.
There are no required attributes for this view, which is nice because you can just add pass to the view and you’re set (for the view anyway; you still need an HTML file).
You’ll also need to update settings.py to include a value for LOGIN_REDIRECT_URL.
Note on redirect_field_name
Per the Django Documentation:
If the user isn’t logged in, redirect to settings.LOGIN_URL, passing the current absolute path in the query string. Example: /accounts/login/?next=/polls/3/.
If redirect_field_name is set then the URL would be:
/accounts/login/?<redirect_field_name>=/polls/3
Basically, you only use this if you have a pretty good reason.
Example
views.py
class myLoginView(LoginView):
    pass
urls.py
path('login_view/', views.myLoginView.as_view(), name='login_view'),
registration/login.html
{% extends "base.html" %}
{% load i18n %}
{% block content %}
<form method="post" action=".">
{% csrf_token %}
<div class="mui--text-danger">
{% for error in form.non_field_errors %}
{{error}}
{% endfor %}
</div>
<div class="mui-textfield">
{{ form.username.label }}
{{ form.username }}
</div>
<div class="mui-textfield">
{{ form.password.label }}
{{ form.password }}
</div>
<input class="mui-btn mui-btn--primary" type="submit" value="{% trans 'Log in' %}" />
<input type="hidden" name="next" value="{{ request.GET.next }}" />
</form>
<br><div class="mui-divider"></div><br>
{% endblock %}
settings.py
LOGIN_REDIRECT_URL = '/<app_name>/'
Diagram
A visual representation of how LoginView
is derived can be seen here:
Conclusion
Really easy to implement right out of the box but allows some nice customization. That being said, make those customizations IF you need to, not just because you think you want to.