Uptime Kuma and Claude

A colleague of mine mentioned something called Uptime Kuma, an open source alternative to Uptime Robot. This was interesting to me because I have Uptime Robot checking a couple of my websites, and I thought, "Hey, maybe I can use Uptime Kuma as a replacement for Uptime Robot."

I dove into the docs a bit and discovered that there was a Docker image for it, which was nice.

I then used Claude to take the docker run command

docker run -d \
    --restart=always \
    -p 3001:3001 \
    -v uptime-kuma:/app/data \
    --name uptime-kuma louislam/uptime-kuma:1

and turn it into a compose file, which gave me this

version: '3'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma:/app/data

volumes:
  uptime-kuma:

Next, I took the output of that compose file and asked Claude to also generate a caddy configuration for me and it gave me this

monitor.yourdomain.com {
    # Enable compression
    encode gzip zstd

    # Add security headers
    header {
        # Enable HSTS
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        # Prevent content-type sniffing
        X-Content-Type-Options "nosniff"
        # Referrer policy
        Referrer-Policy "strict-origin-when-cross-origin"
    }

    # Reverse proxy to Uptime Kuma
    reverse_proxy localhost:3001
}

I then SSH'd into one of my servers using Shellfish1.

I updated the docker-compose.yml file and my Caddyfile to include what Claude had output.

I restarted my Docker containers, but my new container didn't start.

So I took the whole Docker Compose file from my server and I put that into Claude and said,

Hey, is there anything wrong with my Docker Compose file?

It indicated that there were some issues and provided updates for them. I made those changes and did the same thing with the Caddyfile; again, Claude offered up some changes. With the recommended changes applied to both the docker-compose.yml file and the Caddyfile, I stopped and started my Docker containers.

I suddenly had an instance of Uptime Kuma. All in all, it took about a half hour from start to finish while I was watching a hockey game ... from my iPad.

I didn't really have to do anything other than a couple of tweaks here and there on the Docker Compose file and the Caddyfile, and I suddenly had this tool that allows me to monitor the uptime of various websites that I'm interested in.

As I wrapped up it hit me ... holy crap, this is an amazing time to live2. You have an idea, Claude (or whatever AI tool you want to use) outputs a thing, and then you're up and running. This really reduces the barrier to entry for trying new things.

Is the Docker Compose file the most performant? I don't know. Is the Caddyfile the most locked-down configuration? I don't know.

But for these small projects that are just me, I don't know how much it really matters.

  1. this is an amazing app on the iPad, highly recommend ↩︎
  2. Yes, there's also some truly horrific shit going on too ↩︎

Migrating django-tailwind-cli to Django Commons

On Tuesday October 29 I worked with Oliver Andrich, Daniel Moran and Storm Heg to migrate Oliver's project django-tailwind-cli from Oliver's GitHub project to Django Commons.

This was the 5th library that has been migrated over, but the first one that I 'led'. I was a bit nervous. The Django Commons docs are great and super helpful, but the first time you do something, it can be nerve-wracking.

One thing that was super helpful was knowing that Daniel and Storm were there to help me out when any issues came up.

The first setup steps are pretty straightforward and we were able to get through them pretty quickly. Then we ran into an issue that none of us had seen previously.

django-tailwind-cli had initially set up GitHub Pages for the docs, but later migrated to Read the Docs. However, GitHub Pages was still enabled in the repo, so when we tried to migrate it over we ran into an error. Apparently you can't remove GitHub Pages using Terraform (the tool we use to manage the organization).

We spent a few minutes trying to parse the error, make some changes, and try again (and again), and we finally got the migration completed 🎉

One other thing that came up during the migration was a maintainer that was set in the front end but not in the Terraform file. Also, while I was making changes to the Terraform file locally, an update that had been made in the GitHub UI on my branch caused a conflict for me locally.

I've had to deal with this kind of thing before, but ... never with an audience! Trying to work through the issue was a bit stressful to say the least 😅

But, with the help of Daniel and Storm I was able to resolve the conflicts and get the code pushed up.

As of this writing we have 6 libraries that are part of the Django Commons organization, and I'm really excited for the next time that I get to lead a migration. Who knows, at some point I might actually be able to do one on my own ... although our hope is that this can be automated much more, so maybe that's what I can work on next.

Working on a project like this has been really great. There are such great opportunities to learn various technologies (Terraform, GitHub Actions, Git) and to work with great collaborators.

What I'm hoping to be able to work on this coming weekend is1:

  1. Get a better understanding of Terraform and how to use it with GitHub
  2. Use Terraform to do something with GitHub Actions
  3. Try and create a merge conflict and then use the git cli, or Git Tower, or VS Code to resolve the merge conflict

For number 3 in particular, I want to be more comfortable fixing those kinds of issues so that if / when they come up again I can resolve them.

  1. Now will I actually be able to 🤷🏻 ↩︎

Django Commons

First, what are "the commons"? The concept of "the commons" refers to resources that are shared and managed collectively by a community, rather than being owned privately or by the state. This idea has been applied to natural resources like air, water, and grazing land, but it has also expanded to include digital and cultural resources, such as open-source software, knowledge databases, and creative works.

As Organization Administrators of Django Commons, we're focusing on sustainability and stewardship as key aspects.

Asking for help is hard, but it can be done more easily in a safe environment. As we saw with the xz utils backdoor attack, maintainer burnout is real. And while there are several arguments about being part of a 'supply chain', if we can, as a community, offer a place where maintainers can work together on the sustainability and support of their packages, the Django community will be better off!

From the README of the membership repo in Django Commons

Django Commons is an organization dedicated to supporting the community's efforts to maintain packages. It seeks to improve the maintenance experience for all contributors; reducing the barrier to entry for new contributors and reducing overhead for existing maintainers.

OK, but what does this new organization get me as a maintainer? The (stretch) goal is that we'll be able to provide support to maintainers. Whether that's helping to identify best practices for packages (like requiring tests), or normalizing the idea that maintainers can take a step back from their project and know that there will be others to help keep the project going. Being able to accomplish these two goals would be amazing ... but we want to do more!

In the long term we're hoping that we're able to do something to help provide compensation to maintainers, but as I said, that's a long term goal.

The project was spearheaded by Tim Schilling and he was able to get lots of interest from various folks in the Django Community. But I think one of the great aspects of this community project is the transparency that we're striving for. You can see here an example of a discussion, out in the open, as we try to define what we're doing, together. Also, while Tim spearheaded this effort, we're really all working as equals towards a common goal.

What we're building here is a sustainable infrastructure and community. This community will allow packages to have a good home, to allow people to be as active as they want to be, and also allow people to take a step back when they need to.

Too often in tech, and especially in OSS, maintainers / developers will work and work and work because the work they do is generally interesting, and has interesting problems to try and solve.

But this can have a downside that we've all seen ... burnout.

By providing a platform for maintainers to 'park' their projects, along with the necessary infrastructure to keep them active, the goal is to allow maintainers the opportunity to take a break if, or when, they need to. When they're ready to return, they can do so with renewed interest, with new contributors and maintainers who have helped create a more sustainable environment for the open-source project.

The idea for this project is very similar to, but different from, Jazz Band. Again, from the README

Django Commons and Jazzband have similar goals, to support community-maintained projects. There are two main differences. The first is that Django Commons leans into the GitHub paradigm and centers the organization as a whole within GitHub. This is a risk, given there's some vendor lock-in. However, the repositories are still cloned to several people's machines and the organization controls the keys to PyPI, not GitHub. If something were to occur, it's manageable.

The second is that Django Commons is built from the beginning to have more than one administrator. Jazzband has been working for a while to add additional roadies (administrators), but there hasn't been visible progress. Given the importance of several of these projects it's a major risk to the community at large to have a single point of failure in managing the projects. By being designed from the start to spread the responsibility, it becomes easier to allow people to step back and others to step up, making Django more sustainable and the community stronger.

One of the goals for Django Commons is to be very public about what's going on. We actively encourage use of the Discussions feature in GitHub and have several active conversations happening there now1 2 3

So far we've been able to migrate ~~3~~ 4 libraries4 5 6 7 into Django Commons. Each one has been a great learning experience, not only for the library maintainers, but also for the Django Commons admins.

We're working to automate as much of the work as possible. Daniel Moran has done an amazing job of writing Terraform scripts to help in the automation process.

While there are still several manual steps, with each new library, we discover new opportunities for automation.

This is an exciting project to be a part of. If you're interested in joining us, you have a couple of options:

  1. Transfer your project into Django Commons
  2. Join as a member and help contribute to one of the projects that's already in Django Commons

I'm looking forward to seeing you be part of this amazing community!

  1. How to approach existing libraries ↩︎
  2. Creating a maintainer-contributor feedback loop ↩︎
  3. DjangoCon US 2024 Maintainership Open space ↩︎
  4. django-tasks-scheduler ↩︎
  5. django-typer ↩︎
  6. django-fsm-2 ↩︎
  7. django-debug-toolbar ↩︎

Contributing to Tryceratops

I read about a project called Tryceratops on Twitter when Jeff Triplett tweeted about it.

I checked it out and it seemed interesting. I decided to give it a test drive on my simplest Django project by running this command:

tryceratops .

and got this result:

Done processing! 🦖✨
Processed 16 files
Found 0 violations
Failed to process 1 files
Skipped 2340 files

This is nice, but which file failed to process?

This left me with two options:

  1. Complain that this awesome tool created by someone didn't do the thing I thought it needed to do

OR

  1. Submit an issue to the project and offer to help.

I went with option 2 😀

My initial commit was made in a pretty naive way. It did the job, but not in the most maintainable way. I had a really great exchange with the maintainer, Guilherme Latrova, about the change, and he helped steer me in a different direction.

The biggest thing I learned while working on this project (for Python at least) was the logging library. Specifically I learned how to add:

  • a formatter
  • a handler
  • a logger

For my change, I added a simple format with a verbose handler in a custom logger. It looked something like this:

The formatter:

"simple": {
    "format": "%(message)s",
},

The handler:

"verbose_output": {
    "class": "logging.StreamHandler",
    "level": "DEBUG",
    "formatter": "simple",
    "stream": "ext://sys.stdout",
},

The logger:

"loggers": {
    "tryceratops": {
        "level": "INFO",
        "handlers": [
            "verbose_output",
        ],
    },
},

This allows the verbose flag to output the message to standard out with an INFO level of detail.
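Pieced together, the three fragments above form a complete dictConfig. Here's a minimal, runnable sketch of how they fit together (the `version` and `disable_existing_loggers` keys are standard dictConfig boilerplate that I've assumed, not part of the original snippets):

```python
import logging
import logging.config

# The formatter, handler, and logger fragments from above, combined into
# one dictConfig dictionary. "version" and "disable_existing_loggers" are
# required dictConfig keys, assumed here for completeness.
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {
            "format": "%(message)s",
        },
    },
    "handlers": {
        "verbose_output": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
        },
    },
    "loggers": {
        "tryceratops": {
            "level": "INFO",
            "handlers": [
                "verbose_output",
            ],
        },
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("tryceratops")
logger.info("Done processing! 🦖✨")  # INFO passes the logger's level, so this goes to stdout
```

Messages below the logger's INFO level (like `logger.debug(...)`) are dropped before they ever reach the handler, which is what makes the logger level the right knob for a verbose flag.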

Because of what I learned, I've started using the logging library on some of my work projects where I had tried to roll my own logging tool. I should have known there was a logging tool in the Standard Library BEFORE I tried to roll my own 🤦🏻‍♂️

The other thing I (kind of) learned how to do was to squash my commits. I had never had a need (or desire?) to squash commits before, but the commit message is what Guilherme uses to generate the change log. So, with his guidance and help, I tried my best to squash those commits. Although in the end he had to do it (I'm still not entirely sure what I did wrong), I was exposed to the idea of squashing commits and why it might be done. A win-win!

The best part about this entire experience was getting to work with Guilherme Latrova. He was super helpful and patient and had great advice without telling me what to do. The more I work within the Python ecosystem, the more I'm blown away by just how friendly and helpful everyone is, and it's what makes me want to do these kinds of projects.

If you haven't had a chance to work on an open source project, I highly recommend it. It's a great chance to learn and to meet new people.

Contributing to django-sql-dashboard

Last Saturday (July 3rd) while on vacation, I dubbed it “Security update Saturday”. I took the opportunity to review all of the GitHub bot alerts about out of date packages, and make the updates I needed to.

This included updating django-sql-dashboard to version 1.0 … which I was really excited about doing. It included two things I was eager to see:

  1. Implemented a new column cog menu, with options for sorting, counting distinct items and counting by values. #57
  2. Admin change list view now only shows dashboards the user has permission to edit. Thanks, Atul Varma. #130

I made the updates on my site StadiaTracker.com using my normal workflow:

  1. Make the change locally on my MacBook Pro
  2. Run the tests
  3. Push to UAT
  4. Push to PROD

The next day, on July 4th, I got the following error message via my error logging:

Internal Server Error: /dashboard/games-seen-in-person/

ProgrammingError at /dashboard/games-seen-in-person/
could not find array type for data type information_schema.sql_identifier

So I copied the url /dashboard/games-seen-in-person/ to see if I could replicate the issue as an authenticated user and sure enough, I got a 500 Server error.

Troubleshooting process

The first thing I did was to fire up the local version and check the url there. Oddly enough, it worked without issue.

OK … well that’s odd. What are the differences between the local version and the uat / prod version?

The local version is running on macOS 10.15.7 while the uat / prod versions are running Ubuntu 18.04. That could be one source of the issue.

The local version is running Postgres 13.2 while the uat / prod versions are running Postgres 10.17.

OK, two differences. Since the error is could not find array type for data type information_schema.sql_identifier I’m going to start with taking a look at the differences on the Postgres versions.

First, I looked at the Change Log to see what changed between version 0.16 and version 1.0. Nothing jumped out at me, so I looked at the diff between several files between the two versions looking specifically for information_schema.sql_identifier which didn’t bring up anything.

Next I checked for either information_schema or sql_identifier and found a change in the views.py file. On line 151 (version 0.16) this change was made:

string_agg(column_name, ', ' order by ordinal_position) as columns

to this:

array_to_json(array_agg(column_name order by ordinal_position)) as columns

Next, I extracted the entire SQL statement from the views.py file to run in Postgres on the UAT server

with visible_tables as (
  select table_name
    from information_schema.tables
    where table_schema = 'public'
    order by table_name
),
reserved_keywords as (
  select word
    from pg_get_keywords()
    where catcode = 'R'
)
select
  information_schema.columns.table_name,
  array_to_json(array_agg(column_name order by ordinal_position)) as columns
from
  information_schema.columns
join
  visible_tables on
  information_schema.columns.table_name = visible_tables.table_name
where
  information_schema.columns.table_schema = 'public'
group by
  information_schema.columns.table_name
order by
  information_schema.columns.table_name
Running this generated the same error I was seeing from the logs!

Next, I picked apart the various select statements, testing each one to see what failed, and ended on this one:

select information_schema.columns.table_name,
array_to_json(array_agg(column_name order by ordinal_position)) as columns
from information_schema.columns

Which generated the same error message. Great!

In order to determine how to proceed next I googled sql_identifier to see what it was. Turns out it’s a field type in Postgres! (I’ve been working in MSSQL for more than 10 years and as far as I know, this isn’t a field type over there, so I learned something)

Further, there were changes made to that field type in Postgres 12!

OK, since there were changes made to that field type in Postgres 12, I'll probably need to cast the field to another type that won't fail.

That led me to try this:

select information_schema.columns.table_name,
array_to_json(array_agg(cast(column_name as text) order by ordinal_position)) as columns
from information_schema.columns

Which returned a value without error!

Submitting the updated code

With the solution in hand, I read the Contribution Guide and submitted my patch. And the most awesome part? Within less than an hour, Simon Willison (the project's maintainer) had replied and merged my code!

And then, the icing on the cake was getting a shout out in a post that Simon wrote up about the update that I submitted!

Holy smokes that was sooo cool.

I love solving problems, and I love writing code, so this kind of stuff just really makes my day.

Now, I’ve contributed to an open source project (that makes 3 now!) and the issue with the /dashboard/ has been fixed.
