Hynek's approach to uv and Python projects

I'm super excited to see that Hynek is doing a series on his YouTube channel about Python projects and uv in production. It's set to be a three-part series, and the first episode, My 2025 uv-based Python Project Layout for Production Apps, dropped a few days ago.

I finally had a chance to watch it, and wow ... I'm just so excited to see how it goes. Based on his workflow and workload it might be some time before the series is complete, but honestly it's going to be worth the wait.

The approach he's taking is to use uv, but only after manually walking through the whys and hows of a Python project, so you have a better sense of what exactly uv is offering you, and when or why you might want to deviate from its defaults.

I'll likely watch it at least one more time and follow along to get a better sense of what he's working toward.

Can't wait for the next one!

Community

Work has been a bit hectic recently, which has really cut into some of my open source(ish) community participation, at least the "in person" kind. For a few weeks now I haven't been able to attend a DSF Office Hour, do my writing session, or go to Jeff's Office Hours.

Today was looking like I would miss Jeff's Office Hours again, but I realized that if I could go, even for 30 minutes, I should.

I didn't realize beforehand how worth it the experience would be. I was only there for about 30 minutes, but it was such a treat to see some people I hadn't seen in a while, to talk a bit about hockey and Python, and just generally listen to my friends banter about various things.

These kinds of community events are so necessary and so rejuvenating for me. I need to remember this. Work will be hectic for the foreseeable future ... as with everything, there's too much to do, and not enough time to do it in.

I will most likely forget this again, until I remember it, but hopefully I can work hard to stay engaged in the ways that are helpful and needed for me.

PyCascades 2025 - Postlude

I'm back home from PyCascades. I'm glad to be back, but I sure did have a great time in Portland, seeing old friends and meeting new ones. I'm also really happy that my talk seemed to resonate with at least a few people. It's always nice when someone comes up to you after a talk and says that they liked it. I don't know that I did enough of that myself this weekend, because there were a lot of really great talks.

The flight back home was a bit of an adventure. My initial gate was at the very end of the C terminal at PDX. It was then moved to the B terminal, so there was a little more walking than I was expecting. Normally this wouldn't be such an issue, but my daughter Abby decided that she wanted to buy about 8 books and I was lucky enough to carry her bag.

Once we got to our new gate the plane was delayed about an hour. I still made it home at a decent enough hour, but it was a longer day than I was really expecting it to be.

Going to conferences can be hard, but they are a hard thing that is worth doing because of the new people you get to meet, and the old friends that you get to see. Ten out of ten, would recommend.

PyCascades 2025

I spoke at PyCascades today, reprising a talk I first gave at DjangoCon US 2024 in Durham last September. The title of the talk was Error Culture, and I got some really good feedback about it from several people in attendance. During the talk I saw a lot of head nods, and even got a few laughs (which feels very good as a speaker!).

The one thing that was 'missing' from this talk was the 'nerves'. Before I gave the talk at DjangoCon US I was pretty nervous, but this time I was really calm and I'm not sure why ... or if I was glad that I wasn't nervous.

Maybe I wasn't nervous because I had given the talk before ... or maybe it was because of the amount of time I had practiced. Or maybe it was something else. I'm just not sure.

I'm also not sure if I prefer the lack of nerves before a talk or not. There is something about being nervous before getting on stage that makes you feel a bit more alive, so I did miss the nerves. But that being said, I was happy that I wasn't nervous all day. My talk was at 5pm so if I had been nervous I would have had that feeling ALL DAY!

All that being said, I'm really happy to have been able to give the talk at PyCascades this year. It's been a great conference so far, and I'm really looking forward to tomorrow.

Social Events

I arrived in Portland for PyCascades 2025 earlier today. There was a pre-conference social at the Hawthorne Asylum Food Cart Pod where we could pick up our badges, register, and just get to meet some of the other attendees. There was exactly one person I knew for sure would be at the conference (Hi Velda!) and I was hoping that I'd run into her so I would have someone I knew. I'm terrible at social events where I don't know anyone.

I got to the venue a few minutes before the start time (because if you're not early you're late!) and I didn't really see anything that appeared to be related to the conference. I checked the site a few more times to make sure I was in the right place, and then resigned myself to just sit on a bench and play on my phone for a while.

And then, after about 10 minutes, I heard Velda call out my name. I saw her contagious smile and knew that my night was going to be alright.

It was a good time, and I got to meet a few people.

It's funny ... sometimes you just need one person you know at a social event to immediately make you feel a little more at home.

uv and pip

On Sunday November 3 I posted this to Mastodon:

I've somehow managed to get Python on my macbook to not install packages into the virtual environment I've activated and I'm honestly not sure how to fix this.

Has anyone else ever run into this problem? If so, any pointers on how to fix it?

I got lots of helpful replies, and with them I was able to determine what the issue was and 'fix' it.

A timeline of events

I was working on updating a library of mine, and because it had been a while since it had been worked on I had to git clone it locally. Once I did that, I set out to try uv for virtual environment management.

This worked well (and was lightning FAST) and I was hacking away at the update I wanted to do.

Then I had a call with my daughter to review her upcoming schedule for the spring semester. When I got back to working on my library I kind of dove right in and started to get an error message about the library not being installed:

zsh: command not found: the-well-maintained-test

So I tried to install it (though I was 100% sure it was already there) and got this message:

ERROR: Could not find an activated virtualenv (required).

I deleted the venv directory and started over again (using uv still) and ran into the same issue.

I restarted my Mac (at my day job I use Windows computers and this is just a natural reaction when something doesn't work the way I think it should1).

That didn't fix the issue 😢

I spent the next little while certain that in some way pipx or pyenv had jacked up my system, so I uninstalled them ... now you might ask why I thought this, and dear reader, I have no f$%&ing clue.

With those pesky helpers out of the way, pip still wasn't working the way I expected it to!

I then took to Mastodon, and in this one response I saw what I needed:

@ryancheley Are you running python -m pip install... Or just pip install...? If that's a venv created by uv, pip isn't installed I think, so 'pip install' might resolve to a pip in a different python installation

I went back to my terminal, and sure enough that was the issue. I haven't used uv enough to get a real sense of it, and when I was done talking with my daughter, my brain switched to Python programming, but it forgot that I had used uv to set everything up.
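In case you ever find yourself in the same spot, comparing where the bare pip command resolves from against the interpreter on your PATH will expose the mismatch (a quick sketch; your paths will differ):

which pip
which python
python -m pip --version

If which pip points somewhere outside your virtual environment while which python points inside it, you've found the problem.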

Lessons learned

This was a good lesson but I'm still unsure about a few things:

  1. How do I develop a CLI using uv?
  2. Why did it seem that my CLI testing worked fine right up until the call with my daughter, after which I couldn't develop CLIs with uv anymore?

I did write a TIL for this, where I discovered that

uv venv venv

is not a full replacement for

python -m venv venv

Specifically, uv does not include pip in the virtual environments it creates, which is what contributed to my issues. You can include pip by running this command though:

uv venv venv --seed
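Alternatively, you can skip pip entirely: uv pip install works against the active virtual environment even when pip itself isn't installed in it.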

Needless to say, with the help of some great people on the internet I got my issue resolved, but I did spend a good portion of Monday evening un-f$%&ing my MacBook Pro by reinstalling pyenv, and pipx2 ... and cleaning up my system Python for 3.12 and 3.13 ... turns out Homebrew REALLY doesn't want you to do anything with the system Python, even if you accidentally installed a bunch of cruft in there.

  1. Yes this is dumb, and yes I hate it ↩︎
  2. As of this writing I've uninstalled pipx because uv can replace it too. See Jeff Triplett's post uv does everything ↩︎

Trying out pyenv ... again

I think I first tried pyenv probably sometime in late 2022. I saw some recent stuff about it on Mastodon and thought I'd give it another go.

I read through the installation instructions in the README at the repo and checked to see if it was already installed (spoiler alert: it was!)

I noticed that I was not on the current version (2.3.36 at the time of this writing) and decided that I needed to update it.

With the update out of the way I tried to install a version of Python with it, starting at Python 3.10 (because why not?!)

pyenv install 3.10

But when I ran it I got an error like this:

BUILD FAILED (OS X 12.3.1 using python-build 20180424)

Which led me here. There were some comments people left about deleting directories (which always makes me a bit uneasy ... especially when they're in /Library/).

Reading further down I did come across this comment

I had to uninstall and reinstall Home Brew before it returned to work. It concerned the change from Mac Intel to Mac M1(Silicon). See the article below from Josh Alletto to find out why. https://earthly.dev/blog/homebrew-on-m1/#:~: text=On%20Intel%20Macs %2C%20Homebrew%2C%20and, %2Fusr%2Flocal%2Fbin%20.&text= Homebrew%20chose%20%2Fusr %2Flocal%2F,in%20your %20PATH%20by%20default.

The link in the comment was a bit malformed, but I was able to clean it up and get this link. This is where I re-discovered1 that the way Homebrew is installed changed with the transition to Apple Silicon.

Now, I got a new M2 MacBook Pro in March 2023 and since I don't use Homebrew a lot AND I didn't really use pyenv for anything, I hadn't noticed that stuff kind of changed.
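If you're ever unsure which location your Homebrew is using, asking it directly will tell you (on Intel Macs the answer is /usr/local; on Apple Silicon it's /opt/homebrew):

brew --prefix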

Following the steps outlined I was able to redo my Homebrew and now have pyenv working. Now, the only question is: will its use 'stick' with me this time?

  1. earlier in the day I was working through a post by Marijke about Caddy, and there was a statement in her write-up about how Homebrew on the M1 Macs stored files in a different directory. But when I ran the command to check where Homebrew was pointing I got the Intel location, not the Apple Silicon location ... this really should have been my first clue that some part of my setup was incorrect ↩︎

Logging Part 2

In my previous post I wrote about inline logging, that is, using logging in the code without a configuration file of some kind.

In this post I'm going to go over setting up a configuration file to support the various different needs you may have for logging.

Previously I mentioned this scenario:

Perhaps the DevOps team wants robust logging messages on anything ERROR and above, but the application team wants to have INFO and above in a rotating file name schema, while the QA team needs to have the DEBUG and up output to standard out.

Before we get into how we might implement something like the above, let's review the parts of a logging configuration: formatters, handlers, and loggers.

Formatters

In a logging configuration file you can have multiple formatters specified. The above example doesn't state WHAT each team needs, so let's define it here:

  • DevOps: They need to know when the error occurred, what the level was, and what module the error came from
  • Application Team: They need to know when the error occurred, the level, what module and line
  • The QA Team: They need to know when the error occurred, the level, what module and line, and they need a stack trace

For the DevOps team we can define a formatter like this1:

'%(asctime)s - %(levelname)s - %(module)s'

The Application team would have a formatter like this:

'%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'

while the QA team would have one like this:

'%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'

(The stack trace the QA team needs doesn't come from the formatter; it's requested per logging call with stack_info=True, as we saw in the previous post.)

Handlers

The Handler controls where the data from the log is going to be sent. There are several kinds of handlers, but based on our requirements above, we'll only be looking at three of them (see the documentation for more types of handlers).

From the example above we know that the DevOps team wants to save the output to a file, while the Application team wants to have the log data saved in a way that keeps the log files from getting too big. Finally, we know that the QA team wants the output to go directly to stdout.

We can handle all of these requirements via the handlers. In this case, we'd use logging.FileHandler for the DevOps team, logging.handlers.RotatingFileHandler for the Application team, and logging.StreamHandler for the QA team.

Configuration File

Above we defined the formatters and handlers. Now we start to put them together. A logging configuration is built from the three parts described above (formatters, handlers, and loggers), plus a version key and a root logger. The example I use below is YAML, but a dictionary or a conf file would also work.

Below we see five top-level keys in our YAML file:

version: 1
formatters:
handlers:
loggers:
root:
  level:
  handlers:

The version key is to allow for future versions in case any are introduced. As of this writing, there is only 1 version ... and it's version: 1

Formatters

We defined the formatters above, so let's add them here and give them names that map to the teams:

version: 1
formatters:
  devops:
    format: '%(asctime)s - %(levelname)s - %(module)s'
  application:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
  qa:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'

Right off the bat we can see that the formatters for application and qa are the same, so we can either keep them separate to allow for easier updates in the future (and to be more explicit) OR we can merge them into a single formatter to adhere to DRY principles.

I'm choosing to go with option 1 and keep them separate.
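That said, if you wanted option 2 without giving up the separate names, YAML's anchor/alias syntax offers a middle ground. This is purely a YAML feature, not anything the logging module knows about; a sketch:

formatters:
  application:
    format: &shared_format '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
  qa:
    format: *shared_format

The alias is resolved when the YAML is loaded, so both formatters end up with the same format string while you only maintain it in one place.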

Handlers

Next, we add our handlers. Again, we give them names that map to the teams. For each handler we set a level (which maps to the level from the specs above) and attach the matching formatter, since formatters are applied at the handler level.

Additionally, each handler has keys associated with the type of handler selected. For example, logging.FileHandler needs to have the filename specified, while logging.StreamHandler needs to specify where to send its output.

When using logging.handlers.RotatingFileHandler we have to specify a few more items in addition to a filename so the logger knows how and when to rotate the log files.

version: 1
formatters:
  devops:
    format: '%(asctime)s - %(levelname)s - %(module)s'
  application:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
  qa:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
handlers:
  devops:
    class: logging.FileHandler
    level: ERROR
    formatter: devops
    filename: 'devops.log'
  application:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: application
    filename: 'application.log'
    mode: 'a'
    maxBytes: 10000
    backupCount: 3
  qa:
    class: logging.StreamHandler
    level: DEBUG
    formatter: qa
    stream: ext://sys.stdout

What the setup above does for the devops handler is to send the log data to a file called devops.log, while the application handler writes to a rotating set of files based on application.log. Each application.log will hold a maximum of 10,000 bytes. Once the file is 'full' the handler rolls over: application.log is renamed to application.log.1 (any existing backups shift down to .2 and .3) and a fresh application.log is started. With backupCount: 3 the application team will have at most the following files:

  • application.log
  • application.log.1
  • application.log.2
  • application.log.3

Finally, the handler for QA will output directly to stdout.

Loggers

Now we can take all of the work we did above to create the formatters and handlers and use them in the loggers!

Below we see how the loggers are set up in the configuration file. It seems a bit redundant because I've named my formatters, handlers, and loggers all with matching terms, but 🤷‍♂️

The only new thing we see in the configuration below is propagate: no for each of the loggers. This prevents log records from also being sent 'up' the chain to ancestor loggers (ultimately the root logger), which would otherwise duplicate the output.

The documentation has a good diagram showing the workflow for how the propagate works.

Below we can see what the final, fully formed logging configuration looks like.

version: 1
formatters:
  devops:
    format: '%(asctime)s - %(levelname)s - %(module)s'
  application:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
  qa:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
handlers:
  devops:
    class: logging.FileHandler
    level: ERROR
    formatter: devops
    filename: 'devops.log'
  application:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: application
    filename: 'application.log'
    mode: 'a'
    maxBytes: 10000
    backupCount: 3
  qa:
    class: logging.StreamHandler
    level: DEBUG
    formatter: qa
    stream: ext://sys.stdout
loggers:
  devops:
    level: ERROR
    handlers: [devops]
    propagate: no
  application:
    level: INFO
    handlers: [application]
    propagate: no
  qa:
    level: DEBUG
    handlers: [qa]
    propagate: no
root:
  level: ERROR
  handlers: [devops, application, qa]

In my next post I'll write about how to use the above configuration file to allow the various teams to get the log output they need.
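As a quick preview of where that post will go, loading the file is only a couple of lines. A minimal sketch, assuming the configuration above is saved as logging_config.yaml and PyYAML is installed:

import logging.config

import yaml

# load the YAML into a dictionary, then hand it to dictConfig
with open('logging_config.yaml') as f:
    config = yaml.safe_load(f)

logging.config.dictConfig(config)

# each team grabs its logger by name
logger = logging.getLogger('qa')
logger.debug('a debug message for the QA team')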

  1. full documentation on what is available for the formatters can be found here: https://docs.python.org/3/library/logging.html#logrecord-attributes ↩︎

Logging Part 1

Logging

Last year I worked on an update to the package tryceratops with Gui Latrova to include a verbose flag for logging.

Honestly, Gui was a huge help and I wrote about my experience here but I didn't really understand why what I did worked.

Recently I decided that I wanted to better understand logging, so I dove into some posts from Gui and sat down and read the documentation on the logging module from the standard library.

My goal with this was to (1) be able to use logging in my projects, and (2) write something that may be able to help others.

Full disclosure, Gui has a really good article explaining logging and I think everyone should read it. My notes below are a synthesis of his article, my understanding of the documentation from the standard library, and the Python HowTo, written in a way to answer the Five W questions I was taught in grade school.

The Five W's

Who are the generated logs for?

Anyone trying to troubleshoot an issue, or monitor the history of actions that have been logged in an application.

What is written to the log?

The formatter determines what to display or store.

When is data written to the log?

The logging level determines when to log the issue.

Where is the log data sent to?

The handler determines where to send the log data whether that's a file, or stdout.

Why would I want to use logging?

To keep a history of actions taken while your code runs.

How is the data sent to the log?

The loggers determine how to bundle all of it together through calls to various methods.

Examples

Let's say I want a logger called my_app_errors that captures all ERROR level incidents and higher to a file, and that tells me the date and time, level, message, and logger name, along with a stack trace of the error. I could do the following:

import logging

message = 'oh no! an error occurred'

# the formatter determines what each log record looks like
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s - %(name)s')
# the logger is what the application code calls
logger = logging.getLogger('my_app_errors')
# the handler determines where the records go -- a file in this case
fh = logging.FileHandler('errors.log')
fh.setFormatter(formatter)
logger.addHandler(fh)
# stack_info=True asks the logger to include a stack trace
logger.error(message, stack_info=True)

The code above would generate something like this to a file called errors.log

2022-03-28 19:45:49,188 - ERROR - oh no! an error occurred - my_app_errors
Stack (most recent call last):
  File "/Users/ryan/Documents/github/logging/test.py", line 9, in <module>
    logger.error(message, stack_info=True)

If I want a logger that will do all of the above AND output debug information to the console I could:

import logging

message = 'oh no! an error occurred'

logger = logging.getLogger('my_app_errors')
# the logger itself has to allow DEBUG records through, otherwise
# logger.debug() calls are dropped before they reach any handler
logger.setLevel(logging.DEBUG)

ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)   # DEBUG and up go to the console
fh = logging.FileHandler('errors.log')
fh.setLevel(logging.ERROR)   # only ERROR and up go to the file

formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s - %(name)s')

fh.setFormatter(formatter)
ch.setFormatter(formatter)

logger.addHandler(fh)
logger.addHandler(ch)

logger.error(message, stack_info=True)
logger.debug(message, stack_info=True)

Again, the code above would generate something like this to a file called errors.log

2022-03-28 19:45:09,406 - ERROR - oh no! an error occurred - my_app_errors
Stack (most recent call last):
  File "/Users/ryan/Documents/github/logging/test.py", line 18, in <module>
    logger.error(message, stack_info=True)

but it would also output to stderr in the terminal something like this (along with the matching DEBUG record):

2022-03-27 13:18:45,367 - ERROR - oh no! an error occurred - my_app_errors
Stack (most recent call last):
  File "<stdin>", line 1, in <module>

The above is a bit hard to scale though. What happens when we want to have multiple formatters, for different levels, that get output to different places? We can incorporate all of that into something like what we see above, OR we can start to leverage logging configuration files.

Why would we want to have multiple formatters? Perhaps the DevOps team wants robust logging messages on anything ERROR and above, but the application team wants to have INFO and above in a rotating file name schema, while the QA team needs to have the DEBUG and up output to standard out.

You CAN do all of this inline with the code above, but would you really want to? Probably not.

Enter configuration files to allow easier management of logging (and a potential way to make everyone happy), which I'll cover in the next post.

The Well Maintained Test

At the beginning of November Adam Johnson tweeted

I’ve come up with a test that we can use to decide whether a new package we’re considering depending on is well-maintained.

and linked to an article he wrote.

He came up (with the help of Twitter) with twelve questions to ask of any library that you're looking at:

  1. Is it described as “production ready”?
  2. Is there sufficient documentation?
  3. Is there a changelog?
  4. Is someone responding to bug reports?
  5. Are there sufficient tests?
  6. Are the tests running with the latest <Language> version?
  7. Are the tests running with the latest <Integration> version?
  8. Is there a Continuous Integration (CI) configuration?
  9. Is the CI passing?
  10. Does it seem relatively well used?
  11. Has there been a commit in the last year?
  12. Has there been a release in the last year?

I thought it would be interesting to turn that checklist into a Click App using Simon Willison's Click App Cookiecutter.

I set out in earnest to do just that on November 8th.

What started out as just a simple Click app quickly turned into a pretty robust CLI using Will McGugan's Rich library.

I started by using the GitHub API to try and answer the questions, but quickly found that it couldn't answer them all. Then I came across the PyPI API, which helped to answer almost all of them programmatically.
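As an illustration of what that looks like (a sketch, not the actual implementation in the package; the development_status function name is just something I made up here), answering question 1 via PyPI's JSON API might look something like this, assuming requests is installed:

import requests

def development_status(package: str) -> str:
    # PyPI's JSON API returns package metadata, including trove classifiers
    response = requests.get(f"https://pypi.org/pypi/{package}/json")
    response.raise_for_status()
    classifiers = response.json()["info"]["classifiers"]
    # trove classifiers look like 'Development Status :: 4 - Beta'
    for classifier in classifiers:
        if classifier.startswith("Development Status"):
            return classifier
    return "No Development Status classifier found"

print(development_status("the-well-maintained-test"))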

There's still a bit of work to do to get it where I want it to be, but it's pretty sweet that I can now run a simple command and review the output to see if the package is well maintained.

You can even try it on the package I wrote!

the-well-maintained-test https://github.com/ryancheley/the-well-maintained-test

Which will return (as of this writing) the output below:

1. Is it described as 'production ready'?
        The project is set to Development Status Beta
2. Is there sufficient documentation?
        Documentation can be found at
https://github.com/ryancheley/the-well-maintained-test/blob/main/README.md
3. Is there a changelog?
        Yes
4. Is someone responding to bug reports?
        The maintainer took 0 days to respond to the bug report
        It has been 2 days since a comment was made on the bug.
5. Are there sufficient tests? [y/n]: y
        Yes
6. Are the tests running with the latest Language version?
        The project supports the following programming languages
                - Python 3.7
                - Python 3.8
                - Python 3.9
                - Python 3.10

7. Are the tests running with the latest Integration version?
        This project has no associated frameworks
8. Is there a Continuous Integration (CI) configuration?
        There are 2 workflows
         - Publish Python Package
         - Test

9. Is the CI passing?
        Yes
10.  Does it seem relatively well used?
        The project has the following statistics:
        - Watchers: 0
        - Forks: 0
        - Open Issues: 1
        - Subscribers: 1
11.  Has there been a commit in the last year?
        Yes. The last commit was on 11-20-2021 which was 2 days ago
12. Has there been a release in the last year?
        Yes. The last commit was on 11-20-2021 which was 2 days ago

There is still one question that I haven't been able to answer programmatically with an API and that is:

Are there sufficient tests?

When that question comes up, you're prompted in the terminal to answer either y/n.

But, it does leave room for a fix by someone else!

