Why We Need to Stop Fighting About AI Tools and Start Teaching Them

In mid-June, Hynek posted the following on Mastodon:

Watching the frustratingly fruitless fights over the USEFULNESS of LLM-based coding helpers, I've come down to 3 points that explain why ppl seem to live in different realities:

Most programmers:

1) Write inconsequential remixes of trivial code that has been written many times before.

2) Lack the taste for good design & suck at code review in general (yours truly included).

3) Lack the judgement to differentiate between 1) & FOSS repos of nontrivial code, leading to PR slop avalanche.

1/3

So, if you're writing novel code & not another CRUD app or API wrapper, all you can see is LLMs fall on their faces.

Same goes for bigger applications if you care about design. Deceivingly, if you lack 2), you won't notice that an architecture is crap b/c it doesn't look worse than your usual stuff.

That means that the era of six figures for CRUD apps is coming to an end, but it also means that Claude Code et al can be very useful for certain tasks. Not every task involves splitting atoms.

2/3

There's also a bit of a corollary here. Given that LLMs are stochastic parrots, the inputs determine the outputs.

And, without naming names, certain communities are more… rigorous… at software design than others.

It follows that the quality of LLM-generated code will inevitably become a decision factor for choosing frameworks and languages and I'm not sure if I'm ready for that.

3/3

I've been having a lot of success with Claude Code recently, so this toot has been on my mind. Simon Willison often writes about the things he's been able to build simply by asking ChatGPT for help while walking his dog. He asks for help with ideas in languages he may not be familiar with, but he's a good enough programmer to spot the anti-patterns in what gets generated.

For me, it comes down to the helpfulness of these agentic coding tools: they let me write boilerplate code more quickly. When something is trivially easy to implement, like another CRUD app or an API wrapper, the problem is already solved, and we don't need to keep re-solving it by hand. What makes us better programmers is figuring out how to solve problems most effectively. If that means generating a CRUD app or an API wrapper, fine; you're not solving any huge problem there. But if you're trying to solve something genuinely unique or novel, agentic coding tools aren't going to help you as much.

I don't need to know how my car's internal combustion engine works. I do need to know that when the check engine light comes on, I should take it to a mechanic, who will plug in a diagnostic tool that tells them what's wrong and what needs to be done to fix it. Coding agents feel very analogous. We don't have to keep re-deriving solutions to problems that are already well understood. We can, and should, rely on the knowledge available to us to dispatch those problems quickly, which frees us to focus on new problems no one has seen before.

This doesn't mean we can skip learning the fundamentals. Like blocking and tackling in football, if you can't handle the basic building blocks of programming, you're not going to succeed with complex projects. That foundational understanding remains essential.

The real value of large language models and coding agents lies in how they can accelerate that learning process. Being able to ask an LLM about how a specific GitHub action works, or why you'd want to use a particular pattern, creates opportunities to understand concepts more quickly. These tools won't solve novel problems for you—that's still the core work of being a software developer. But they can eliminate the repetitive research and boilerplate implementation that used to consume so much of our time, freeing us to focus on the problems that actually require human creativity and problem-solving skills.

How many software developers still write assembly? Some of us, maybe, but most of us don't have to. That knowledge has been abstracted away to the point where we rarely need it; we write in higher-level languages to reach solutions more quickly. If that's the case, why shouldn't we use LLMs to reach solutions more quickly still?

I've noticed a tendency to view LLM-assisted coding as somehow less legitimate, but this misses the opportunity to help developers integrate these tools thoughtfully into their workflow. Instead of questioning the validity of using these tools, we should be focusing on how we can help people learn to use them effectively.

In the same way we helped people learn how to use Google, we should help them use large language models. Back in the early 2000s, when Google was just becoming a thing, not everybody knew how to use it effectively, like excluding specific terms with a minus sign or searching for an exact phrase with quotation marks. The people who did know those tricks were able to find things much more quickly.

I see a parallel here. Instead of dismissing people who use these tools, we should be asking more constructive questions: How do we help them become more effective with LLMs? How do we help them use these tools to actually learn and grow as developers?

Understanding the limitations of large language models is crucial to using them well, but right now we're missing that opportunity by focusing on whether people should use them at all rather than how they can use them better.

We need to take a step back and re-evaluate how we use LLMs and how we encourage others to use them. The goal is getting to a point where we understand that LLMs are one more tool in our developer toolkit, regardless of whether we're working on open-source projects or commercial software. We don't need to avoid these tools. We just need to learn how to use them more effectively, and we need to do this quickly.

Updated TIL

While browsing Mastodon the other day, I came across a toot that linked to this Cursed Knowledge page. I thought it was a great page, and it occurred to me that the same sort of styling would work well on my TIL repo.

My TIL (Today I Learned) repository was inspired by Simon Willison's; honestly, it's basically a complete rip-off of his.

Now, I'm not a front-end guy, and while I can do some pretty basic stuff with GitHub Actions, I knew that what I wanted to do far outstripped my current capabilities. Enter Claude!

I started with this prompt:

You are an expert in GitHub Actions, and GitHub. Additionally, you are an expert in HTML, CSS, and JavaScript. The solutions provided can not leverage JavaScript frameworks.

I then made this request:

I currently have a Today I learned (TIL) repository on GitHub located here

https://github.com/ryancheley/til

I came across the idea of documenting cursed information here

https://immich.app/cursed-knowledge/

I would like to implement my TIL in a timeline the same way the cursed knowledge page was done.

How can I do that?

It output this workflow:

name: Build TIL Timeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  schedule:
    # Rebuild daily at 6 AM UTC
    - cron: '0 6 * * *'

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build-timeline:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Setup Node.js
      uses: actions/setup-node@v4
      with:
        node-version: '18'

    - name: Generate timeline data
      run: |
        node << 'EOF'
        const fs = require('fs');
        const path = require('path');

        // Function to extract metadata from markdown files
        function extractTilData(filePath, content) {
          const relativePath = path.relative('.', filePath);
          const fileName = path.basename(filePath, '.md');

          // Extract title from first heading or use filename
          const titleMatch = content.match(/^#\s+(.+)$/m);
          const title = titleMatch ? titleMatch[1] : fileName.replace(/[-_]/g, ' ');

          // Extract first paragraph as description
          const lines = content.split('\n').filter(line => line.trim());
          let description = '';
          for (const line of lines) {
            if (!line.startsWith('#') && line.trim() && !line.startsWith('```')) {
              description = line.trim();
              break;
            }
          }

          // Get file stats for date
          const stats = fs.statSync(filePath);
          const date = stats.mtime; // Use modification time

          // Extract category from directory structure
          const pathParts = relativePath.split('/');
          const category = pathParts.length > 1 ? pathParts[0] : 'General';

          return {
            title,
            description: description || 'No description available',
            date: date.toISOString().split('T')[0], // YYYY-MM-DD format
            url: `https://github.com/ryancheley/til/blob/main/${relativePath}`,
            category,
            fileName: relativePath
          };
        }

        // Recursively find all markdown files
        function findMarkdownFiles(dir) {
          const files = [];
          const entries = fs.readdirSync(dir, { withFileTypes: true });

          for (const entry of entries) {
            const fullPath = path.join(dir, entry.name);
            if (entry.isDirectory() && !entry.name.startsWith('.')) {
              files.push(...findMarkdownFiles(fullPath));
            } else if (entry.isFile() && entry.name.endsWith('.md') && entry.name !== 'README.md') {
              files.push(fullPath);
            }
          }

          return files;
        }

        // Generate timeline data
        const markdownFiles = findMarkdownFiles('.');
        const tilEntries = [];

        for (const filePath of markdownFiles) {
          try {
            const content = fs.readFileSync(filePath, 'utf8');
            const tilData = extractTilData(filePath, content);
            tilEntries.push(tilData);
          } catch (error) {
            console.error(`Error processing ${filePath}:`, error.message);
          }
        }

        // Sort by date (newest first)
        tilEntries.sort((a, b) => new Date(b.date) - new Date(a.date));

        // Create output directory
        if (!fs.existsSync('docs')) {
          fs.mkdirSync('docs');
        }

        // Write timeline data
        fs.writeFileSync('docs/timeline-data.json', JSON.stringify(tilEntries, null, 2));

        console.log(`Generated timeline with ${tilEntries.length} entries`);
        EOF

    - name: Generate timeline HTML
      run: |
        mkdir -p docs
        cat > docs/index.html << 'EOF'
        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>Ryan's Today I Learned Timeline</title>
            <style>
                * {
                    margin: 0;
                    padding: 0;
                    box-sizing: border-box;
                }

                body {
                    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', system-ui, sans-serif;
                    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
                    min-height: 100vh;
                    color: #333;
                }

                .container {
                    max-width: 1200px;
                    margin: 0 auto;
                    padding: 2rem;
                }

                .header {
                    text-align: center;
                    margin-bottom: 3rem;
                    color: white;
                }

                .header h1 {
                    font-size: 3rem;
                    margin-bottom: 1rem;
                    text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
                }

                .header p {
                    font-size: 1.2rem;
                    opacity: 0.9;
                    max-width: 600px;
                    margin: 0 auto;
                }

                .timeline {
                    position: relative;
                    margin-top: 2rem;
                }

                .timeline::before {
                    content: '';
                    position: absolute;
                    left: 2rem;
                    top: 0;
                    bottom: 0;
                    width: 2px;
                    background: linear-gradient(to bottom, #4CAF50, #2196F3, #FF9800, #E91E63);
                }

                .timeline-item {
                    position: relative;
                    margin-bottom: 2rem;
                    margin-left: 4rem;
                    background: white;
                    border-radius: 12px;
                    padding: 1.5rem;
                    box-shadow: 0 8px 25px rgba(0,0,0,0.1);
                    transition: transform 0.3s ease, box-shadow 0.3s ease;
                }

                .timeline-item:hover {
                    transform: translateY(-5px);
                    box-shadow: 0 15px 35px rgba(0,0,0,0.15);
                }

                .timeline-item::before {
                    content: '';
                    position: absolute;
                    left: -3rem;
                    top: 2rem;
                    width: 16px;
                    height: 16px;
                    background: #4CAF50;
                    border: 3px solid white;
                    border-radius: 50%;
                    box-shadow: 0 0 0 3px rgba(76, 175, 80, 0.3);
                }

                .timeline-item:nth-child(4n+2)::before { background: #2196F3; box-shadow: 0 0 0 3px rgba(33, 150, 243, 0.3); }
                .timeline-item:nth-child(4n+3)::before { background: #FF9800; box-shadow: 0 0 0 3px rgba(255, 152, 0, 0.3); }
                .timeline-item:nth-child(4n+4)::before { background: #E91E63; box-shadow: 0 0 0 3px rgba(233, 30, 99, 0.3); }

                .timeline-header {
                    display: flex;
                    justify-content: space-between;
                    align-items: flex-start;
                    margin-bottom: 1rem;
                    flex-wrap: wrap;
                    gap: 1rem;
                }

                .timeline-title {
                    font-size: 1.4rem;
                    font-weight: 600;
                    color: #2c3e50;
                    text-decoration: none;
                    flex-grow: 1;
                    transition: color 0.3s ease;
                }

                .timeline-title:hover {
                    color: #3498db;
                }

                .timeline-meta {
                    display: flex;
                    gap: 1rem;
                    align-items: center;
                    flex-shrink: 0;
                }

                .timeline-date {
                    background: linear-gradient(135deg, #667eea, #764ba2);
                    color: white;
                    padding: 0.5rem 1rem;
                    border-radius: 20px;
                    font-size: 0.9rem;
                    font-weight: 500;
                }

                .timeline-category {
                    background: #f8f9fa;
                    color: #6c757d;
                    padding: 0.4rem 0.8rem;
                    border-radius: 15px;
                    font-size: 0.8rem;
                    font-weight: 500;
                    border: 1px solid #e9ecef;
                }

                .timeline-description {
                    color: #5a6c7d;
                    line-height: 1.6;
                    font-size: 1rem;
                }

                .loading {
                    text-align: center;
                    padding: 3rem;
                    color: white;
                    font-size: 1.2rem;
                }

                .error {
                    background: #f8d7da;
                    color: #721c24;
                    padding: 1rem;
                    border-radius: 8px;
                    border: 1px solid #f5c6cb;
                    margin-bottom: 2rem;
                }

                .stats {
                    background: rgba(255,255,255,0.95);
                    border-radius: 12px;
                    padding: 1.5rem;
                    margin-bottom: 2rem;
                    backdrop-filter: blur(10px);
                    border: 1px solid rgba(255,255,255,0.2);
                }

                .stats-grid {
                    display: grid;
                    grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
                    gap: 1rem;
                    text-align: center;
                }

                .stat-item {
                    padding: 1rem;
                }

                .stat-number {
                    font-size: 2rem;
                    font-weight: bold;
                    color: #667eea;
                    display: block;
                }

                .stat-label {
                    color: #666;
                    font-size: 0.9rem;
                    margin-top: 0.5rem;
                }

                @media (max-width: 768px) {
                    .container {
                        padding: 1rem;
                    }

                    .header h1 {
                        font-size: 2rem;
                    }

                    .timeline::before {
                        left: 1rem;
                    }

                    .timeline-item {
                        margin-left: 2rem;
                        padding: 1rem;
                    }

                    .timeline-item::before {
                        left: -2rem;
                    }

                    .timeline-header {
                        flex-direction: column;
                        align-items: stretch;
                    }

                    .timeline-meta {
                        justify-content: space-between;
                    }
                }
            </style>
        </head>
        <body>
            <div class="container">
                <div class="header">
                    <h1>Today I Learned</h1>
                    <p>A timeline of discoveries, learnings, and insights from my development journey</p>
                </div>

                <div class="stats" id="stats" style="display: none;">
                    <div class="stats-grid">
                        <div class="stat-item">
                            <span class="stat-number" id="totalEntries">0</span>
                            <div class="stat-label">Total Entries</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="categoriesCount">0</span>
                            <div class="stat-label">Categories</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="latestEntry">-</span>
                            <div class="stat-label">Latest Entry</div>
                        </div>
                    </div>
                </div>

                <div class="loading" id="loading">Loading timeline...</div>
                <div class="error" id="error" style="display: none;"></div>
                <div class="timeline" id="timeline"></div>
            </div>

            <script>
                async function loadTimeline() {
                    try {
                        const response = await fetch('timeline-data.json');
                        if (!response.ok) {
                            throw new Error('Failed to load timeline data');
                        }

                        const entries = await response.json();
                        displayTimeline(entries);
                        displayStats(entries);

                    } catch (error) {
                        console.error('Error loading timeline:', error);
                        document.getElementById('loading').style.display = 'none';
                        document.getElementById('error').style.display = 'block';
                        document.getElementById('error').textContent = 'Failed to load timeline. Please try again later.';
                    }
                }

                function displayStats(entries) {
                    const categories = [...new Set(entries.map(entry => entry.category))];
                    const latest = entries.length > 0 ? new Date(entries[0].date).toLocaleDateString() : '-';

                    document.getElementById('totalEntries').textContent = entries.length;
                    document.getElementById('categoriesCount').textContent = categories.length;
                    document.getElementById('latestEntry').textContent = latest;
                    document.getElementById('stats').style.display = 'block';
                }

                function displayTimeline(entries) {
                    const timelineContainer = document.getElementById('timeline');
                    const loading = document.getElementById('loading');

                    loading.style.display = 'none';

                    if (entries.length === 0) {
                        timelineContainer.innerHTML = '<p style="text-align: center; color: white;">No entries found.</p>';
                        return;
                    }

                    const timelineHTML = entries.map(entry => {
                        const date = new Date(entry.date).toLocaleDateString('en-US', {
                            year: 'numeric',
                            month: 'short',
                            day: 'numeric'
                        });

                        return `
                            <div class="timeline-item">
                                <div class="timeline-header">
                                    <a href="${entry.url}" class="timeline-title" target="_blank" rel="noopener noreferrer">
                                        ${entry.title}
                                    </a>
                                    <div class="timeline-meta">
                                        <span class="timeline-category">${entry.category}</span>
                                        <span class="timeline-date">${date}</span>
                                    </div>
                                </div>
                                <div class="timeline-description">
                                    ${entry.description}
                                </div>
                            </div>
                        `;
                    }).join('');

                    timelineContainer.innerHTML = timelineHTML;
                }

                // Load timeline on page load
                loadTimeline();
            </script>
        </body>
        </html>
        EOF

    - name: Setup Pages
      uses: actions/configure-pages@v4

    - name: Upload artifact
      uses: actions/upload-pages-artifact@v3
      with:
        path: 'docs'

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build-timeline
    if: github.ref == 'refs/heads/main'

    steps:
    - name: Deploy to GitHub Pages
      id: deployment
      uses: actions/deploy-pages@v4

I added it to my TIL repo and tested to see how it looked. It was a great start!
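If you want to check the output without waiting on an Actions run, you can also run the generation step locally and serve the docs/ folder. Here's a minimal sketch of my own (not part of Claude's output), assuming docs/index.html and docs/timeline-data.json have already been generated:

# preview.py -- serve the generated docs/ folder locally so the timeline
# can be checked in a browser before committing anything.
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = functools.partial(SimpleHTTPRequestHandler, directory="docs")
print("Serving docs/ at http://localhost:8000")
HTTPServer(("127.0.0.1", 8000), handler).serve_forever()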

I then followed up with:

this is a great start! however, it looks like the are a few issues: 1. All of the dates on the timeline are the published date and not the date of the actual learning. There is a SQLite database that stores this kind of meta data 2. The order of the items matches the order they are displayed on the Read Me, which is by category and not by date (see point 1) 3. the colors for each TIL seem to just rotate in order and aren't associated with any specific category of TIL

Claude went to work and generated a new GitHub Action for me:

name: Build TIL Timeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  schedule:
    # Rebuild daily at 6 AM UTC
    - cron: '0 6 * * *'

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build-timeline:
    runs-on: ubuntu-latest

    steps:
    - name: Check out repo
      uses: actions/checkout@v4
      with:
        fetch-depth: 0  # Need full history for git analysis

    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: 3.12

    - uses: actions/cache@v4
      name: Configure pip caching
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-

    - name: Install Python dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt

    - name: Build database
      run: python build_database.py

    - name: Generate timeline data from SQLite
      run: |
        python << 'EOF'
        import sqlite3
        import json
        import os
        from pathlib import Path

        # Connect to the SQLite database
        db_path = Path("tils.db")
        if not db_path.exists():
            print("Database not found!")
            exit(1)

        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row  # Enable dict-like access to rows

        # Query all TIL entries, ordered by created date (actual learning date)
        cursor = conn.execute("""
            SELECT
                path,
                slug,
                topic,
                title,
                url,
                body,
                created,
                created_utc,
                updated,
                updated_utc
            FROM til
            ORDER BY created_utc DESC
        """)

        entries = []
        for row in cursor:
            # Extract first paragraph as description, excluding headers and code blocks
            lines = row['body'].split('\n')
            description = ''
            for line in lines:
                line = line.strip()
                if line and not line.startswith('#') and not line.startswith('```') and line != '---':
                    description = line
                    break

            # Clean up the description
            if len(description) > 200:
                description = description[:200] + '...'

            # Extract date from created_utc (YYYY-MM-DD format)
            created_date = row['created_utc'].split('T')[0] if row['created_utc'] else row['created'].split('T')[0]

            entry = {
                'title': row['title'],
                'description': description or 'No description available',
                'date': created_date,
                'url': row['url'],
                'category': row['topic'],
                'fileName': row['path'].replace('_', '/'),  # Convert back from slug format
                'slug': row['slug']
            }
            entries.append(entry)

        conn.close()

        # Create output directory
        os.makedirs('docs', exist_ok=True)

        # Write timeline data
        with open('docs/timeline-data.json', 'w') as f:
            json.dump(entries, f, indent=2)

        print(f"Generated timeline with {len(entries)} entries")

        # Generate category statistics for consistent coloring
        categories = {}
        for entry in entries:
            cat = entry['category']
            if cat not in categories:
                categories[cat] = len([e for e in entries if e['category'] == cat])

        # Sort categories by count (most popular first) for consistent color assignment
        sorted_categories = sorted(categories.items(), key=lambda x: x[1], reverse=True)
        category_colors = {}

        # Define a set of distinct colors for categories
        color_palette = [
            '#4CAF50',  # Green
            '#2196F3',  # Blue
            '#FF9800',  # Orange
            '#E91E63',  # Pink
            '#9C27B0',  # Purple
            '#00BCD4',  # Cyan
            '#FF5722',  # Deep Orange
            '#795548',  # Brown
            '#607D8B',  # Blue Grey
            '#FFC107',  # Amber
            '#8BC34A',  # Light Green
            '#3F51B5',  # Indigo
            '#F44336',  # Red
            '#009688',  # Teal
            '#CDDC39',  # Lime
        ]

        for i, (category, count) in enumerate(sorted_categories):
            category_colors[category] = color_palette[i % len(color_palette)]

        # Write category color mapping
        with open('docs/category-colors.json', 'w') as f:
            json.dump(category_colors, f, indent=2)

        print(f"Generated color mapping for {len(category_colors)} categories")
        EOF

    - name: Generate timeline HTML
      run: |
        cat > docs/index.html << 'EOF'
        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>Ryan's Today I Learned Timeline</title>
            <meta name="description" content="A chronological timeline of learning discoveries from software development, featuring insights on Python, Django, SQL, and more.">
            <style>
                * {
                    margin: 0;
                    padding: 0;
                    box-sizing: border-box;
                }

                body {
                    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', system-ui, sans-serif;
                    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
                    min-height: 100vh;
                    color: #333;
                }

                .container {
                    max-width: 1200px;
                    margin: 0 auto;
                    padding: 2rem;
                }

                .header {
                    text-align: center;
                    margin-bottom: 3rem;
                    color: white;
                }

                .header h1 {
                    font-size: 3rem;
                    margin-bottom: 1rem;
                    text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
                }

                .header p {
                    font-size: 1.2rem;
                    opacity: 0.9;
                    max-width: 600px;
                    margin: 0 auto;
                }

                .filters {
                    background: rgba(255,255,255,0.95);
                    border-radius: 12px;
                    padding: 1.5rem;
                    margin-bottom: 2rem;
                    backdrop-filter: blur(10px);
                    border: 1px solid rgba(255,255,255,0.2);
                }

                .filter-group {
                    display: flex;
                    flex-wrap: wrap;
                    gap: 0.5rem;
                    align-items: center;
                }

                .filter-label {
                    font-weight: 600;
                    margin-right: 1rem;
                    color: #666;
                }

                .category-filter {
                    padding: 0.4rem 0.8rem;
                    border-radius: 20px;
                    border: 2px solid transparent;
                    background: #f8f9fa;
                    color: #666;
                    cursor: pointer;
                    transition: all 0.3s ease;
                    font-size: 0.9rem;
                    user-select: none;
                }

                .category-filter:hover {
                    transform: translateY(-2px);
                    box-shadow: 0 4px 8px rgba(0,0,0,0.1);
                }

                .category-filter.active {
                    color: white;
                    border-color: currentColor;
                    font-weight: 600;
                }

                .timeline {
                    position: relative;
                    margin-top: 2rem;
                }

                .timeline::before {
                    content: '';
                    position: absolute;
                    left: 2rem;
                    top: 0;
                    bottom: 0;
                    width: 2px;
                    background: linear-gradient(to bottom, #4CAF50, #2196F3, #FF9800, #E91E63);
                }

                .timeline-item {
                    position: relative;
                    margin-bottom: 2rem;
                    margin-left: 4rem;
                    background: white;
                    border-radius: 12px;
                    padding: 1.5rem;
                    box-shadow: 0 8px 25px rgba(0,0,0,0.1);
                    transition: all 0.3s ease;
                    opacity: 1;
                }

                .timeline-item.hidden {
                    display: none;
                }

                .timeline-item:hover {
                    transform: translateY(-5px);
                    box-shadow: 0 15px 35px rgba(0,0,0,0.15);
                }

                .timeline-item::before {
                    content: '';
                    position: absolute;
                    left: -3rem;
                    top: 2rem;
                    width: 16px;
                    height: 16px;
                    background: var(--category-color, #4CAF50);
                    border: 3px solid white;
                    border-radius: 50%;
                    box-shadow: 0 0 0 3px rgba(76, 175, 80, 0.3);
                }

                .timeline-header {
                    display: flex;
                    justify-content: space-between;
                    align-items: flex-start;
                    margin-bottom: 1rem;
                    flex-wrap: wrap;
                    gap: 1rem;
                }

                .timeline-title {
                    font-size: 1.4rem;
                    font-weight: 600;
                    color: #2c3e50;
                    text-decoration: none;
                    flex-grow: 1;
                    transition: color 0.3s ease;
                }

                .timeline-title:hover {
                    color: #3498db;
                }

                .timeline-meta {
                    display: flex;
                    gap: 1rem;
                    align-items: center;
                    flex-shrink: 0;
                }

                .timeline-date {
                    background: linear-gradient(135deg, #667eea, #764ba2);
                    color: white;
                    padding: 0.5rem 1rem;
                    border-radius: 20px;
                    font-size: 0.9rem;
                    font-weight: 500;
                }

                .timeline-category {
                    background: var(--category-color, #f8f9fa);
                    color: white;
                    padding: 0.4rem 0.8rem;
                    border-radius: 15px;
                    font-size: 0.8rem;
                    font-weight: 500;
                    border: 1px solid rgba(255,255,255,0.2);
                }

                .timeline-description {
                    color: #5a6c7d;
                    line-height: 1.6;
                    font-size: 1rem;
                }

                .loading {
                    text-align: center;
                    padding: 3rem;
                    color: white;
                    font-size: 1.2rem;
                }

                .error {
                    background: #f8d7da;
                    color: #721c24;
                    padding: 1rem;
                    border-radius: 8px;
                    border: 1px solid #f5c6cb;
                    margin-bottom: 2rem;
                }

                .stats {
                    background: rgba(255,255,255,0.95);
                    border-radius: 12px;
                    padding: 1.5rem;
                    margin-bottom: 2rem;
                    backdrop-filter: blur(10px);
                    border: 1px solid rgba(255,255,255,0.2);
                }

                .stats-grid {
                    display: grid;
                    grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
                    gap: 1rem;
                    text-align: center;
                }

                .stat-item {
                    padding: 1rem;
                }

                .stat-number {
                    font-size: 2rem;
                    font-weight: bold;
                    color: #667eea;
                    display: block;
                }

                .stat-label {
                    color: #666;
                    font-size: 0.9rem;
                    margin-top: 0.5rem;
                }

                @media (max-width: 768px) {
                    .container {
                        padding: 1rem;
                    }

                    .header h1 {
                        font-size: 2rem;
                    }

                    .timeline::before {
                        left: 1rem;
                    }

                    .timeline-item {
                        margin-left: 2rem;
                        padding: 1rem;
                    }

                    .timeline-item::before {
                        left: -2rem;
                    }

                    .timeline-header {
                        flex-direction: column;
                        align-items: stretch;
                    }

                    .timeline-meta {
                        justify-content: space-between;
                    }

                    .filter-group {
                        flex-direction: column;
                        align-items: stretch;
                        gap: 1rem;
                    }

                    .category-filter {
                        text-align: center;
                    }
                }
            </style>
        </head>
        <body>
            <div class="container">
                <div class="header">
                    <h1>Today I Learned</h1>
                    <p>A chronological timeline of discoveries, learnings, and insights from my development journey</p>
                </div>

                <div class="stats" id="stats" style="display: none;">
                    <div class="stats-grid">
                        <div class="stat-item">
                            <span class="stat-number" id="totalEntries">0</span>
                            <div class="stat-label">Total Entries</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="categoriesCount">0</span>
                            <div class="stat-label">Categories</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="latestEntry">-</span>
                            <div class="stat-label">Latest Entry</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="filteredCount">0</span>
                            <div class="stat-label">Showing</div>
                        </div>
                    </div>
                </div>

                <div class="filters" id="filters" style="display: none;">
                    <div class="filter-group">
                        <span class="filter-label">Filter by category:</span>
                        <div id="categoryFilters"></div>
                    </div>
                </div>

                <div class="loading" id="loading">Loading timeline...</div>
                <div class="error" id="error" style="display: none;"></div>
                <div class="timeline" id="timeline"></div>
            </div>

            <script>
                let allEntries = [];
                let categoryColors = {};
                let activeCategory = null;

                async function loadTimeline() {
                    try {
                        // Load timeline data and category colors
                        const [entriesResponse, colorsResponse] = await Promise.all([
                            fetch('timeline-data.json'),
                            fetch('category-colors.json')
                        ]);

                        if (!entriesResponse.ok || !colorsResponse.ok) {
                            throw new Error('Failed to load timeline data');
                        }

                        allEntries = await entriesResponse.json();
                        categoryColors = await colorsResponse.json();

                        displayTimeline(allEntries);
                        displayStats(allEntries);
                        createCategoryFilters();

                    } catch (error) {
                        console.error('Error loading timeline:', error);
                        document.getElementById('loading').style.display = 'none';
                        document.getElementById('error').style.display = 'block';
                        document.getElementById('error').textContent = 'Failed to load timeline. Please try again later.';
                    }
                }

                function createCategoryFilters() {
                    const categories = [...new Set(allEntries.map(entry => entry.category))];
                    const filtersContainer = document.getElementById('categoryFilters');

                    // Add "All" filter
                    const allFilter = document.createElement('span');
                    allFilter.className = 'category-filter active';
                    allFilter.textContent = 'All';
                    allFilter.onclick = () => filterByCategory(null);
                    filtersContainer.appendChild(allFilter);

                    // Add category filters
                    categories.sort().forEach(category => {
                        const filter = document.createElement('span');
                        filter.className = 'category-filter';
                        filter.textContent = category;
                        filter.style.setProperty('--category-color', categoryColors[category] || '#666');
                        filter.onclick = () => filterByCategory(category);
                        filtersContainer.appendChild(filter);
                    });

                    document.getElementById('filters').style.display = 'block';
                }

                function filterByCategory(category) {
                    activeCategory = category;

                    // Update filter button states
                    document.querySelectorAll('.category-filter').forEach(filter => {
                        filter.classList.remove('active');
                        if ((category === null && filter.textContent === 'All') ||
                            filter.textContent === category) {
                            filter.classList.add('active');
                            if (category !== null) {
                                filter.style.background = categoryColors[category];
                            }
                        }
                    });

                    // Filter timeline items
                    const filteredEntries = category ?
                        allEntries.filter(entry => entry.category === category) :
                        allEntries;

                    displayTimeline(filteredEntries);
                    updateFilteredCount(filteredEntries.length);
                }

                function updateFilteredCount(count) {
                    document.getElementById('filteredCount').textContent = count;
                }

                function displayStats(entries) {
                    const categories = [...new Set(entries.map(entry => entry.category))];
                    const latest = entries.length > 0 ? new Date(entries[0].date).toLocaleDateString() : '-';

                    document.getElementById('totalEntries').textContent = entries.length;
                    document.getElementById('categoriesCount').textContent = categories.length;
                    document.getElementById('latestEntry').textContent = latest;
                    document.getElementById('filteredCount').textContent = entries.length;
                    document.getElementById('stats').style.display = 'block';
                }

                function displayTimeline(entries) {
                    const timelineContainer = document.getElementById('timeline');
                    const loading = document.getElementById('loading');

                    loading.style.display = 'none';

                    if (entries.length === 0) {
                        timelineContainer.innerHTML = '<p style="text-align: center; color: white;">No entries found.</p>';
                        return;
                    }

                    const timelineHTML = entries.map(entry => {
                        const date = new Date(entry.date).toLocaleDateString('en-US', {
                            year: 'numeric',
                            month: 'short',
                            day: 'numeric'
                        });

                        const categoryColor = categoryColors[entry.category] || '#666';

                        return `
                            <div class="timeline-item" style="--category-color: ${categoryColor}">
                                <div class="timeline-header">
                                    <a href="${entry.url}" class="timeline-title" target="_blank" rel="noopener noreferrer">
                                        ${entry.title}
                                    </a>
                                    <div class="timeline-meta">
                                        <span class="timeline-category" style="--category-color: ${categoryColor}; background: ${categoryColor};">
                                            ${entry.category}
                                        </span>
                                        <span class="timeline-date">${date}</span>
                                    </div>
                                </div>
                                <div class="timeline-description">
                                    ${entry.description}
                                </div>
                            </div>
                        `;
                    }).join('');

                    timelineContainer.innerHTML = timelineHTML;
                }

                // Load timeline on page load
                loadTimeline();
            </script>
        </body>
        </html>
        EOF

    - name: Setup Pages
      uses: actions/configure-pages@v4

    - name: Upload artifact
      uses: actions/upload-pages-artifact@v3
      with:
        path: 'docs'

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build-timeline
    if: github.ref == 'refs/heads/main'

    steps:
    - name: Deploy to GitHub Pages
      id: deployment
      uses: actions/deploy-pages@v4

The regenerated action was another huge step forward! However, there was one last change needed:

oh wow! much better! The last issue I'm seeing is that because I have so many categories, the Filter by category needs a way to wrap based on the page width. Right now it's on a single line, but would likely benefit from a css style that will wrap the lines

And I got this:

name: Build TIL Timeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  schedule:
    # Rebuild daily at 6 AM UTC
    - cron: '0 6 * * *'

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build-timeline:
    runs-on: ubuntu-latest

    steps:
    - name: Check out repo
      uses: actions/checkout@v4
      with:
        fetch-depth: 0  # Need full history for git analysis

    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: 3.12

    - uses: actions/cache@v4
      name: Configure pip caching
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-

    - name: Install Python dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt

    - name: Build database
      run: python build_database.py

    - name: Generate timeline data from SQLite
      run: |
        python << 'EOF'
        import sqlite3
        import json
        import os
        from pathlib import Path

        # Connect to the SQLite database
        db_path = Path("tils.db")
        if not db_path.exists():
            print("Database not found!")
            exit(1)

        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row  # Enable dict-like access to rows

        # Query all TIL entries, ordered by created date (actual learning date)
        cursor = conn.execute("""
            SELECT
                path,
                slug,
                topic,
                title,
                url,
                body,
                created,
                created_utc,
                updated,
                updated_utc
            FROM til
            ORDER BY created_utc DESC
        """)

        entries = []
        for row in cursor:
            # Extract first paragraph as description, excluding headers and code blocks
            lines = row['body'].split('\n')
            description = ''
            for line in lines:
                line = line.strip()
                if line and not line.startswith('#') and not line.startswith('```') and line != '---':
                    description = line
                    break

            # Clean up the description
            if len(description) > 200:
                description = description[:200] + '...'

            # Extract date from created_utc (YYYY-MM-DD format)
            created_date = row['created_utc'].split('T')[0] if row['created_utc'] else row['created'].split('T')[0]

            entry = {
                'title': row['title'],
                'description': description or 'No description available',
                'date': created_date,
                'url': row['url'],
                'category': row['topic'],
                'fileName': row['path'].replace('_', '/'),  # Convert back from slug format
                'slug': row['slug']
            }
            entries.append(entry)

        conn.close()

        # Create output directory
        os.makedirs('docs', exist_ok=True)

        # Write timeline data
        with open('docs/timeline-data.json', 'w') as f:
            json.dump(entries, f, indent=2)

        print(f"Generated timeline with {len(entries)} entries")

        # Generate category statistics for consistent coloring
        categories = {}
        for entry in entries:
            cat = entry['category']
            if cat not in categories:
                categories[cat] = len([e for e in entries if e['category'] == cat])

        # Sort categories by count (most popular first) for consistent color assignment
        sorted_categories = sorted(categories.items(), key=lambda x: x[1], reverse=True)
        category_colors = {}

        # Define a set of distinct colors for categories
        color_palette = [
            '#4CAF50',  # Green
            '#2196F3',  # Blue
            '#FF9800',  # Orange
            '#E91E63',  # Pink
            '#9C27B0',  # Purple
            '#00BCD4',  # Cyan
            '#FF5722',  # Deep Orange
            '#795548',  # Brown
            '#607D8B',  # Blue Grey
            '#FFC107',  # Amber
            '#8BC34A',  # Light Green
            '#3F51B5',  # Indigo
            '#F44336',  # Red
            '#009688',  # Teal
            '#CDDC39',  # Lime
        ]

        for i, (category, count) in enumerate(sorted_categories):
            category_colors[category] = color_palette[i % len(color_palette)]

        # Write category color mapping
        with open('docs/category-colors.json', 'w') as f:
            json.dump(category_colors, f, indent=2)

        print(f"Generated color mapping for {len(category_colors)} categories")
        EOF

    - name: Generate timeline HTML
      run: |
        cat > docs/index.html << 'EOF'
        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>Ryan's Today I Learned Timeline</title>
            <meta name="description" content="A chronological timeline of learning discoveries from software development, featuring insights on Python, Django, SQL, and more.">
            <style>
                * {
                    margin: 0;
                    padding: 0;
                    box-sizing: border-box;
                }

                body {
                    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', system-ui, sans-serif;
                    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
                    min-height: 100vh;
                    color: #333;
                }

                .container {
                    max-width: 1200px;
                    margin: 0 auto;
                    padding: 2rem;
                }

                .header {
                    text-align: center;
                    margin-bottom: 3rem;
                    color: white;
                }

                .header h1 {
                    font-size: 3rem;
                    margin-bottom: 1rem;
                    text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
                }

                .header p {
                    font-size: 1.2rem;
                    opacity: 0.9;
                    max-width: 600px;
                    margin: 0 auto;
                }

                .filters {
                    background: rgba(255,255,255,0.95);
                    border-radius: 12px;
                    padding: 1.5rem;
                    margin-bottom: 2rem;
                    backdrop-filter: blur(10px);
                    border: 1px solid rgba(255,255,255,0.2);
                }

                .filter-group {
                    display: flex;
                    flex-direction: column;
                    gap: 1rem;
                }

                .filter-label {
                    font-weight: 600;
                    color: #666;
                    margin-bottom: 0.5rem;
                }

                .category-filters-container {
                    display: flex;
                    flex-wrap: wrap;
                    gap: 0.5rem;
                    align-items: center;
                }

                .category-filter {
                    padding: 0.4rem 0.8rem;
                    border-radius: 20px;
                    border: 2px solid transparent;
                    background: #f8f9fa;
                    color: #666;
                    cursor: pointer;
                    transition: all 0.3s ease;
                    font-size: 0.9rem;
                    user-select: none;
                }

                .category-filter:hover {
                    transform: translateY(-2px);
                    box-shadow: 0 4px 8px rgba(0,0,0,0.1);
                }

                .category-filter.active {
                    color: white;
                    border-color: currentColor;
                    font-weight: 600;
                }

                .timeline {
                    position: relative;
                    margin-top: 2rem;
                }

                .timeline::before {
                    content: '';
                    position: absolute;
                    left: 2rem;
                    top: 0;
                    bottom: 0;
                    width: 2px;
                    background: linear-gradient(to bottom, #4CAF50, #2196F3, #FF9800, #E91E63);
                }

                .timeline-item {
                    position: relative;
                    margin-bottom: 2rem;
                    margin-left: 4rem;
                    background: white;
                    border-radius: 12px;
                    padding: 1.5rem;
                    box-shadow: 0 8px 25px rgba(0,0,0,0.1);
                    transition: all 0.3s ease;
                    opacity: 1;
                }

                .timeline-item.hidden {
                    display: none;
                }

                .timeline-item:hover {
                    transform: translateY(-5px);
                    box-shadow: 0 15px 35px rgba(0,0,0,0.15);
                }

                .timeline-item::before {
                    content: '';
                    position: absolute;
                    left: -3rem;
                    top: 2rem;
                    width: 16px;
                    height: 16px;
                    background: var(--category-color, #4CAF50);
                    border: 3px solid white;
                    border-radius: 50%;
                    box-shadow: 0 0 0 3px rgba(76, 175, 80, 0.3);
                }

                .timeline-header {
                    display: flex;
                    justify-content: space-between;
                    align-items: flex-start;
                    margin-bottom: 1rem;
                    flex-wrap: wrap;
                    gap: 1rem;
                }

                .timeline-title {
                    font-size: 1.4rem;
                    font-weight: 600;
                    color: #2c3e50;
                    text-decoration: none;
                    flex-grow: 1;
                    transition: color 0.3s ease;
                }

                .timeline-title:hover {
                    color: #3498db;
                }

                .timeline-meta {
                    display: flex;
                    gap: 1rem;
                    align-items: center;
                    flex-shrink: 0;
                }

                .timeline-date {
                    background: linear-gradient(135deg, #667eea, #764ba2);
                    color: white;
                    padding: 0.5rem 1rem;
                    border-radius: 20px;
                    font-size: 0.9rem;
                    font-weight: 500;
                }

                .timeline-category {
                    background: var(--category-color, #f8f9fa);
                    color: white;
                    padding: 0.4rem 0.8rem;
                    border-radius: 15px;
                    font-size: 0.8rem;
                    font-weight: 500;
                    border: 1px solid rgba(255,255,255,0.2);
                }

                .timeline-description {
                    color: #5a6c7d;
                    line-height: 1.6;
                    font-size: 1rem;
                }

                .loading {
                    text-align: center;
                    padding: 3rem;
                    color: white;
                    font-size: 1.2rem;
                }

                .error {
                    background: #f8d7da;
                    color: #721c24;
                    padding: 1rem;
                    border-radius: 8px;
                    border: 1px solid #f5c6cb;
                    margin-bottom: 2rem;
                }

                .stats {
                    background: rgba(255,255,255,0.95);
                    border-radius: 12px;
                    padding: 1.5rem;
                    margin-bottom: 2rem;
                    backdrop-filter: blur(10px);
                    border: 1px solid rgba(255,255,255,0.2);
                }

                .stats-grid {
                    display: grid;
                    grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
                    gap: 1rem;
                    text-align: center;
                }

                .stat-item {
                    padding: 1rem;
                }

                .stat-number {
                    font-size: 2rem;
                    font-weight: bold;
                    color: #667eea;
                    display: block;
                }

                .stat-label {
                    color: #666;
                    font-size: 0.9rem;
                    margin-top: 0.5rem;
                }

                @media (max-width: 768px) {
                    .container {
                        padding: 1rem;
                    }

                    .header h1 {
                        font-size: 2rem;
                    }

                    .timeline::before {
                        left: 1rem;
                    }

                    .timeline-item {
                        margin-left: 2rem;
                        padding: 1rem;
                    }

                    .timeline-item::before {
                        left: -2rem;
                    }

                    .timeline-header {
                        flex-direction: column;
                        align-items: stretch;
                    }

                    .timeline-meta {
                        justify-content: space-between;
                    }

                    .filter-group {
                        gap: 1rem;
                    }

                    .category-filters-container {
                        justify-content: center;
                    }
                }
            </style>
        </head>
        <body>
            <div class="container">
                <div class="header">
                    <h1>Today I Learned</h1>
                    <p>A chronological timeline of discoveries, learnings, and insights from my development journey</p>
                </div>

                <div class="stats" id="stats" style="display: none;">
                    <div class="stats-grid">
                        <div class="stat-item">
                            <span class="stat-number" id="totalEntries">0</span>
                            <div class="stat-label">Total Entries</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="categoriesCount">0</span>
                            <div class="stat-label">Categories</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="latestEntry">-</span>
                            <div class="stat-label">Latest Entry</div>
                        </div>
                        <div class="stat-item">
                            <span class="stat-number" id="filteredCount">0</span>
                            <div class="stat-label">Showing</div>
                        </div>
                    </div>
                </div>

                <div class="filters" id="filters" style="display: none;">
                    <div class="filter-group">
                        <span class="filter-label">Filter by category:</span>
                        <div class="category-filters-container" id="categoryFilters"></div>
                    </div>
                </div>

                <div class="loading" id="loading">Loading timeline...</div>
                <div class="error" id="error" style="display: none;"></div>
                <div class="timeline" id="timeline"></div>
            </div>

            <script>
                let allEntries = [];
                let categoryColors = {};
                let activeCategory = null;

                async function loadTimeline() {
                    try {
                        // Load timeline data and category colors
                        const [entriesResponse, colorsResponse] = await Promise.all([
                            fetch('timeline-data.json'),
                            fetch('category-colors.json')
                        ]);

                        if (!entriesResponse.ok || !colorsResponse.ok) {
                            throw new Error('Failed to load timeline data');
                        }

                        allEntries = await entriesResponse.json();
                        categoryColors = await colorsResponse.json();

                        displayTimeline(allEntries);
                        displayStats(allEntries);
                        createCategoryFilters();

                    } catch (error) {
                        console.error('Error loading timeline:', error);
                        document.getElementById('loading').style.display = 'none';
                        document.getElementById('error').style.display = 'block';
                        document.getElementById('error').textContent = 'Failed to load timeline. Please try again later.';
                    }
                }

                function createCategoryFilters() {
                    const categories = [...new Set(allEntries.map(entry => entry.category))];
                    const filtersContainer = document.getElementById('categoryFilters');

                    // Add "All" filter
                    const allFilter = document.createElement('span');
                    allFilter.className = 'category-filter active';
                    allFilter.textContent = 'All';
                    allFilter.onclick = () => filterByCategory(null);
                    filtersContainer.appendChild(allFilter);

                    // Add category filters
                    categories.sort().forEach(category => {
                        const filter = document.createElement('span');
                        filter.className = 'category-filter';
                        filter.textContent = category;
                        filter.style.setProperty('--category-color', categoryColors[category] || '#666');
                        filter.onclick = () => filterByCategory(category);
                        filtersContainer.appendChild(filter);
                    });

                    document.getElementById('filters').style.display = 'block';
                }

                function filterByCategory(category) {
                    activeCategory = category;

                    // Update filter button states
                    document.querySelectorAll('.category-filter').forEach(filter => {
                        filter.classList.remove('active');
                        // Clear any inline background left behind by a previously active filter
                        filter.style.background = '';
                        if ((category === null && filter.textContent === 'All') ||
                            filter.textContent === category) {
                            filter.classList.add('active');
                            if (category !== null) {
                                filter.style.background = categoryColors[category];
                            }
                        }
                    });

                    // Filter timeline items
                    const filteredEntries = category ?
                        allEntries.filter(entry => entry.category === category) :
                        allEntries;

                    displayTimeline(filteredEntries);
                    updateFilteredCount(filteredEntries.length);
                }

                function updateFilteredCount(count) {
                    document.getElementById('filteredCount').textContent = count;
                }

                function displayStats(entries) {
                    const categories = [...new Set(entries.map(entry => entry.category))];
                    const latest = entries.length > 0 ? new Date(entries[0].date).toLocaleDateString() : '-';

                    document.getElementById('totalEntries').textContent = entries.length;
                    document.getElementById('categoriesCount').textContent = categories.length;
                    document.getElementById('latestEntry').textContent = latest;
                    document.getElementById('filteredCount').textContent = entries.length;
                    document.getElementById('stats').style.display = 'block';
                }

                function displayTimeline(entries) {
                    const timelineContainer = document.getElementById('timeline');
                    const loading = document.getElementById('loading');

                    loading.style.display = 'none';

                    if (entries.length === 0) {
                        timelineContainer.innerHTML = '<p style="text-align: center; color: white;">No entries found.</p>';
                        return;
                    }

                    const timelineHTML = entries.map(entry => {
                        const date = new Date(entry.date).toLocaleDateString('en-US', {
                            year: 'numeric',
                            month: 'short',
                            day: 'numeric'
                        });

                        const categoryColor = categoryColors[entry.category] || '#666';

                        return `
                            <div class="timeline-item" style="--category-color: ${categoryColor}">
                                <div class="timeline-header">
                                    <a href="${entry.url}" class="timeline-title" target="_blank" rel="noopener noreferrer">
                                        ${entry.title}
                                    </a>
                                    <div class="timeline-meta">
                                        <span class="timeline-category" style="--category-color: ${categoryColor}; background: ${categoryColor};">
                                            ${entry.category}
                                        </span>
                                        <span class="timeline-date">${date}</span>
                                    </div>
                                </div>
                                <div class="timeline-description">
                                    ${entry.description}
                                </div>
                            </div>
                        `;
                    }).join('');

                    timelineContainer.innerHTML = timelineHTML;
                }

                // Load timeline on page load
                loadTimeline();
            </script>
        </body>
        </html>
        EOF

    - name: Setup Pages
      uses: actions/configure-pages@v4

    - name: Upload artifact
      uses: actions/upload-pages-artifact@v3
      with:
        path: 'docs'

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build-timeline
    if: github.ref == 'refs/heads/main'

    steps:
    - name: Deploy to GitHub Pages
      id: deployment
      uses: actions/deploy-pages@v4

All told, from start to finish this took about 20 minutes. There are still some potential updates, but for 20 minutes of work I was able to take a 'wild' idea that I never would have been able to pull off before and turn it into something that I'm actually excited about! And it has the added bonus of encouraging me to write more TILs, because I now have this nice-looking timeline of them.

Fun with MCPs

Special thanks to Jeff Triplett, who provided an example that really got me started on a better understanding of how this all works.

In trying to wrap my head around MCPs over the long Memorial Day weekend I had a breakthrough. I'm not really sure why this was so hard for me to grok, but now something seems to have clicked.

I am working with Pydantic AI, so I'll be using that as an example, but since MCP is a standard protocol, these concepts apply broadly across different implementations.

What is Model Context Protocol (MCP)?

Per the Anthropic announcement (from November 2024!!!!)

The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.

What this means is that there is a standard way to extend models like Claude or OpenAI's models with other information. That information can be files on the file system, data in a database, etc.

(Potential) Real World Example

I work for a healthcare organization in Southern California. One of the biggest challenges with onboarding new hires (and honestly, it can be a challenge for people who have been with the organization for a long time) is knowing who to reach out to for support on which specific application.

Typically a user will send an email to one of the support teams, and the email request can get bounced around for a while until it finally lands on the 'right' support desk. There's the potential to have the applications themselves include who to contact, but some applications are vendor supplied and there isn't always a way to do that.

Even if there were, in my experience those are often not noticed by users OR the users will think that the support email is for non-technical issues, like "Please update the phone number for this patient" and not issues like, "The web page isn't returning any results for me, but it is for my coworker."

Enter an MCP with a Local LLM

Let's say you have a service that allows you to search through a file system in a predefined set of directories. This service is run with the following command

npx -y --no-cache @modelcontextprotocol/server-filesystem /path/to/your/files

In Pydantic AI, MCPServerStdio uses this same syntax, only it breaks it into two parts:

  • command
  • args

The command is any application in your $PATH like uvx or docker or npx, or you can explicitly define where the executable is by calling out its path, like /Users/ryancheley/.local/share/mise/installs/bun/latest/bin/bunx

The args are the arguments you'd pass to that application.

Taking the command from above and breaking it down we can set up our MCP using the following

MCPServerStdio(
    "npx",
    args=[
        "-y",
        "--no-cache",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/files",
    ],
)

Application of MCP with the Example

Since I work in Healthcare, and I want to be mindful of the protection of patient data, even if that data won't be exposed to this LLM, I'll use ollama to construct my example.

I created a support.csv file that contains the following information

  • Common Name of the Application
  • URL of the Application
  • Support Email
  • Support Extension
  • Department

I used the following prompt

Review the file support.csv and help me determine who I contact about questions related to CarePath Analytics.

Here are the contents of the support.csv file

Name,URL,Support Email,Support Extension,Department
MedFlow Solutions,https://medflow.com,support@medflow.com,1234,Clinical Systems
HealthTech Portal,https://healthtech-portal.org,help@medflow.com,3456,Patient Services
CarePath Analytics,https://carepath.io,support@medflow.com,4567,Data Analytics
VitalSign Monitor,https://vitalsign.net,support@medflow.com,1234,Clinical Systems
Patient Connect Hub,https://patientconnect.com,contact@medflow.com,3456,Patient Services
EHR Bridge,https://ehrbridge.org,support@medflow.com,2341,Integration Services
Clinical Workflow Pro,https://clinicalwf.com,support@medflow.com,1234,Clinical Systems
HealthData Sync,https://healthdata-sync.net,sync@medflow.com,6789,Integration Services
TeleHealth Connect,https://telehealth-connect.com,help@medflow.com,3456,Patient Services
MedRecord Central,https://medrecord.central,records@medflow.com,5678,Medical Records

The script is below:

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "pydantic-ai",
# ]
# ///

import asyncio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

async def main():
    # Configure the Ollama model using OpenAI-compatible API
    model = OpenAIModel(
        model_name='qwen3:8b',  # or whatever model you have installed locally
        provider=OpenAIProvider(base_url='http://localhost:11434/v1')
    )

    # Set up the MCP server to access our support files
    support_files_server = MCPServerStdio(
        "npx",
        args=[
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/path/to/your/files"  # Directory containing support.csv
        ]
    )

    # Create the agent with the model and MCP server
    agent = Agent(
        model=model,
        mcp_servers=[support_files_server],
    )

    # Run the agent with the MCP server
    async with agent.run_mcp_servers():
        # Get response from Ollama about support contact
        result = await agent.run(
            "Review the file `support.csv` and help me determine who I contact about questions related to CarePath Analytics?"        )
        print(result.output)

if __name__ == "__main__":
    asyncio.run(main())

As a user, if I ask who I should contact about questions related to CarePath Analytics, the LLM will search through the support.csv file and supply the support email address.

This example is a command line script; a web interface would probably be better for most users. That would be the next thing I'd try to do here.
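
To give a rough sense of what that next step could look like, here's a minimal sketch that wraps the agent in a web endpoint. To be clear, this is my own assumption and not part of the setup above: it presumes FastAPI is installed, reuses the agent built in the script above, and the /support-question endpoint name is made up.

# Minimal sketch: assumes FastAPI is installed and that `agent` is the
# Pydantic AI agent constructed exactly as in the script above.
from fastapi import FastAPI

app = FastAPI()

@app.post("/support-question")
async def support_question(question: str):
    # Start the MCP server(s) for this request, then hand the question to the agent
    async with agent.run_mcp_servers():
        result = await agent.run(question)
    return {"answer": result.output}

Starting the MCP server per request keeps the sketch short; a real application would probably start it once at startup instead.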

Once that was done you could extend it to also include an MCP to write an email on the user's behalf. It could even ask probing questions to help make sure that the email had more context for the support team.
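
Structurally, that extension would just be another entry in the agent's mcp_servers list. Here's a sketch of the shape it would take; note that the email server below is a placeholder I made up to illustrate the wiring, not a real MCP package.

# Hypothetical: "some-email-mcp-server" is a made-up package name, used only
# to show how a second MCP server would be added alongside the first one.
email_server = MCPServerStdio(
    "npx",
    args=["-y", "some-email-mcp-server"],
)

agent = Agent(
    model=model,
    mcp_servers=[support_files_server, email_server],
)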

Some support systems have their own ticketing / issue tracking systems and it would be really valuable if this ticket could be written directly to that system. With the MCP this is possible.

We'd need to update the support.csv file with some information about direct writes via an API, and we'd need to secure the crap out of this, but it is possible.

Now, the user can be more confident that their issue will go to the team that it needs to and that their question / issue can be resolved much more quickly.

Uptime Kuma and Claude

While talking with a colleague of mine, he mentioned something called Uptime Kuma, which is an open source alternative to Uptime Robot. This was interesting to me because I have Uptime Robot checking a couple of my websites, and I thought, "Hey, maybe I can use Uptime Kuma as a replacement for Uptime Robot."

I dove into the docs a bit and discovered that there was a Docker image for it, which was nice.

I then used Claude to take the docker run command

docker run -d \
    --restart=always \
    -p 3001:3001 \
    -v uptime-kuma:/app/data \
    --name uptime-kuma louislam/uptime-kuma:1

and turn it into a compose file, which gave me this

version: '3'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma:/app/data

volumes:
  uptime-kuma:

Next, I took the output of that compose file and asked Claude to also generate a caddy configuration for me and it gave me this

monitor.yourdomain.com {
    # Enable compression
    encode gzip zstd

    # Add security headers
    header {
        # Enable HSTS
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        # Prevent content-type sniffing
        X-Content-Type-Options "nosniff"
        # Referrer policy
        Referrer-Policy "strict-origin-when-cross-origin"
    }

    # Reverse proxy to Uptime Kuma
    reverse_proxy localhost:3001
}

I then SSH'd into one of my servers using Shellfish 1.

I updated the docker-compose.yml file and my Caddyfile to include what Claude had output.

I restarted my docker containers, but my new container didn't come up.

So I took the whole Docker Compose file from my server and I put that into Claude and said,

Hey, is there anything wrong with my Docker Compose file?

It indicated that there were some issues and provided updated versions. I made those changes and did the same thing with the Caddyfile. Again, Claude offered up some changes. I applied the recommended changes to the docker-compose.yml file and the Caddyfile, then stopped and started my docker containers.

I suddenly had an instance of Uptime Kuma. All in all, it took about a half hour from start to finish while I was watching a hockey game ... from my iPad.

I didn't really have to do anything other than a couple of tweaks here and there on the Docker Compose file and the Caddyfile, and I suddenly have this tool that allows me to monitor the uptime of various websites that I'm interested in.

As I wrapped up it hit me ... holy crap, this is an amazing time to live2. You have an idea, Claude (or whatever AI tool you want to use) outputs a thing, and then you're up and running. This really reduces that barrier to entry to just try new things.

Is the Docker Compose file the most performant? I don't know. Is the Caddyfile the most secure, locked-down thing? I don't know.

But for these small projects that are just me, I don't know how much it really matters.

  1. this is an amazing app on the iPad, highly recommend ↩︎
  2. Yes, there's also some truly horrific shit going on too ↩︎

uv and pip

On Sunday November 3 I posted this to Mastodon:

I've somehow managed to get Python on my macbook to not install packages into the virtual environment I've activated and I'm honestly not sure how to fix this.

Has anyone else ever run into this problem? If so, any pointers on how to fix it?

I got lots of helpful replies and with those replies I was able to determine what the issue was and 'fix' it.

A timeline of events

I was working on updating a library of mine and because it had been a while since it had been worked on, I had to git clone it locally. When I did that I then set out to try uv for the virtual environment management.

This worked well (and was lightning FAST) and I was hacking away at the update I wanted to do.

Then I had a call with my daughter to review her upcoming schedule for the spring semester. When I got back to working on my library I kind of dove right in and started to get an error message about the library not being installed

zsh: command not found: the-well-maintained-test

So I tried to install it (though I was 100% sure it was already there) and got this message

ERROR: Could not find an activated virtualenv (required).

I deleted the venv directory and started over again (using uv still) and ran into the same issue.

I restarted my Mac (at my day job I use Windows computers and this is just a natural reaction to do when something doesn't work the way I think it should1)

That didn't fix the issue 😢

I spent the next little while certain that in some way pipx or pyenv had jacked up my system, so I uninstalled them ... now you might ask why I thought this, and dear reader, I have no f$%&ing clue.

With those pesky helpers out of the way, pip still wasn't working the way I expected it to!

I then took to Mastodon and with this one response I saw what I needed

@ryancheley Are you running python -m pip install... Or just pip install...? If that's a venv created by uv, pip isn't installed I think, so 'pip install' might resolve to a pip in a different python installation

I went back to my terminal, and sure enough that was the issue. I haven't used uv enough to get a real sense of it, and when I was done talking with my daughter, my brain switched to Python programming, but it forgot that I had used uv to set everything up.

Lessons learned

This was a good lesson but I'm still unsure about a few things:

  1. How do I develop a CLI using uv?
  2. Why did it seem that my CLI testing worked fine right up until the call with my daughter, and now it seems that I can't develop CLIs with uv?

I did write a TIL for this, in which I discovered that

uv venv venv

is not a full replacement for

python -m venv venv

Specifically, uv does not include pip in the virtual environments it creates, which is what contributed to my issues. You can include pip by running this command though

uv venv venv --seed

Needless to say, with the help of some great people on the internet I got my issue resolved, but I did spend a good portion of Monday evening un-f$%&ing my MacBook Pro by reinstalling pyenv and pipx2 ... and cleaning up my system Python for 3.12 and 3.13 ... turns out Homebrew REALLY doesn't want you to do anything with the system Python, even if you accidentally installed a bunch of cruft in there.

  1. Yes this is dumb, and yes I hate it ↩︎
  2. As of this writing I've uninstalled pipx because uv can replace it too. See Jeff Triplett's post uv does everything ↩︎

Migrating django-tailwind-cli to Django Commons

On Tuesday October 29 I worked with Oliver Andrich, Daniel Moran and Storm Heg to migrate Oliver's project django-tailwind-cli from Oliver's GitHub project to Django Commons.

This was the 5th library that has been migrated over, but the first one that I 'led'. I was a bit nervous. The Django Commons docs are great and super helpful, but the first time you do something, it can be nerve-wracking.

One thing that was super helpful was knowing that Daniel and Storm were there to help me out when any issues came up.

The first set up steps are pretty straight forward and we were able to get through them pretty quickly. Then we ran into an issue that none of us had seen previously.

django-tailwind-cli had initially used GitHub Pages for the docs, but had since migrated to Read the Docs. However, GitHub Pages was still enabled in the repo, so when we tried to migrate the project over we ran into an error. Apparently you can't remove GitHub Pages using Terraform (the process that we use to manage the organization).

We spent a few minutes parsing the error, making some changes, and trying again (and again), and we finally got the migration completed 🎉

Some other things came up during the migration, like a maintainer that was set in the front end but not in the Terraform file. Also, while I was making changes to the Terraform file locally, I ran into an issue with an update that had been made in the GitHub UI on my branch, which caused a conflict for me locally.

I've had to deal with this kind of thing before, but ... never with an audience! Trying to work through the issue was a bit stressful to say the least 😅

But, with the help of Daniel and Storm I was able to resolve the conflicts and get the code pushed up.

As of this writing we have 6 libraries that are part of the Django Commons organization, and I am really excited for the next time that I get to lead a migration. Who knows, at some point I might actually be able to do one on my own ... although our hope is that this can be automated much more ... so maybe that's what I can work on next.

Working on a project like this has been really great. There are such great opportunities to learn various technologies (Terraform, GitHub Actions, git) and to work with great collaborators.

What I'm hoping to be able to work on this coming weekend is1:

  1. Get a better understanding of Terraform and how to use it with GitHub
  2. Use Terraform to do something with GitHub Actions
  3. Try and create a merge conflict and then use the git cli, or Git Tower, or VS Code to resolve the merge conflict

For number 3 in particular, I want to have more comfort with fixing those kinds of issues so that if / when they come up again I can resolve them.

  1. Now will I actually be able to 🤷🏻 ↩︎

Django Commons

First, what are "the commons"? The concept of "the commons" refers to resources that are shared and managed collectively by a community, rather than being owned privately or by the state. This idea has been applied to natural resources like air, water, and grazing land, but it has also expanded to include digital and cultural resources, such as open-source software, knowledge databases, and creative works.

As Organization Administrators of Django Commons, we're focusing on sustainability and stewardship as key aspects.

Asking for help is hard, but it can be done more easily in a safe environment. As we saw with the xz utils backdoor attack, maintainer burnout is real. And while there are several arguments about being part of a 'supply chain', if we can, as a community, offer up a place where maintainers can work together for the sustainability and support of their packages, the Django community will be better off!

From the README of the membership repo in Django Commons

Django Commons is an organization dedicated to supporting the community's efforts to maintain packages. It seeks to improve the maintenance experience for all contributors; reducing the barrier to entry for new contributors and reducing overhead for existing maintainers.

OK, but what does this new organization get me as a maintainer? The (stretch) goal is that we'll be able to provide support to maintainers. Whether that's helping to identify best practices for packages (like requiring tests), or normalizing the idea that maintainers can take a step back from their project and know that there will be others to help keep the project going. Being able to accomplish these two goals would be amazing ... but we want to do more!

In the long term we're hoping that we're able to do something to help provide compensation to maintainers, but as I said, that's a long term goal.

The project was spearheaded by Tim Schilling and he was able to get lots of interest from various folks in the Django Community. But I think one of the great aspects of this community project is the transparency that we're striving for. You can see here an example of a discussion, out in the open, as we try to define what we're doing, together. Also, while Tim spearheaded this effort, we're really all working as equals towards a common goal.

What we're building here is a sustainable infrastructure and community. This community will allow packages to have a good home, to allow people to be as active as they want to be, and also allow people to take a step back when they need to.

Too often in tech, and especially in OSS, maintainers / developers will work and work and work because the work they do is generally interesting, and has interesting problems to try and solve.

But this can have a downside that we've all seen ... burnout.

By providing a platform for maintainers to 'park' their projects, along with the necessary infrastructure to keep them active, the goal is to allow maintainers the opportunity to take a break if, or when, they need to. When they're ready to return, they can do so with renewed interest, with new contributors and maintainers who have helped create a more sustainable environment for the open-source project.

The idea for this project is very similar to, but different from, Jazz Band. Again, from the README

Django Commons and Jazzband have similar goals, to support community-maintained projects. There are two main differences. The first is that Django Commons leans into the GitHub paradigm and centers the organization as a whole within GitHub. This is a risk, given there's some vendor lock-in. However, the repositories are still cloned to several people's machines and the organization controls the keys to PyPI, not GitHub. If something were to occur, it's manageable.

The second is that Django Commons is built from the beginning to have more than one administrator. Jazzband has been working for a while to add additional roadies (administrators), but there hasn't been visible progress. Given the importance of several of these projects it's a major risk to the community at large to have a single point of failure in managing the projects. By being designed from the start to spread the responsibility, it becomes easier to allow people to step back and others to step up, making Django more sustainable and the community stronger.

One of the goals for Django Commons is to be very public about what's going on. We actively encourage use of the Discussions feature in GitHub and have several active conversations happening there now1 2 3

So far we've been able to migrate 4 libraries4 5 6 7 into Django Commons. Each one has been a great learning experience, not only for the library maintainers, but also for the Django Commons admins.

We're working to automate as much of the work as possible. Daniel Moran has done an amazing job of writing Terraform scripts to help in the automation process.

While there are still several manual steps, with each new library, we discover new opportunities for automation.

This is an exciting project to be a part of. If you're interested in joining us you have a couple of options

  1. Transfer your project into Django Commons
  2. Join as member and help contribute to one of the projects that's already in Django Commons

I'm looking forward to seeing you be part of this amazing community!

  1. How to approach existing libraries ↩︎
  2. Creating a maintainer-contributor feedback loop ↩︎
  3. DjangoCon US 2024 Maintainership Open pace ↩︎
  4. django-tasks-scheduler ↩︎
  5. django-typer ↩︎
  6. django-fsm-2 ↩︎
  7. django-debug-toolbar ↩︎

DjangoCon US 2024 Talk

At DjangoCon US 2023 I gave a talk, and wrote about my experience preparing for that talk

Well, I spoke again at DjangoCon US this year (2024) and had a similar, but wildly different experience in preparing for my talk.

Last year I lamented that I didn't really track my time (which is weird because I track my time for ALL sorts of things!).

This year, I did track my time and have a much better sense of how much time I prepared for the talk.

Another difference between each year is that in 2023 I gave a 45 minute talk, while this year my talk was 25 minutes.

I've heard that you need about 1 hour of prep time for each 1 minute of talk that you're going to give. That means that, on average, for a 25 minute talk I'd need about 25 hours of prep time.

My time tracking shows that I was a little short of that (19 hours) but my talk ended up being about 20 minutes, so it seems that maybe I was on track for that.

This year, as last year, my general prep technique was to:

  1. Give the presentation AND record it
  2. Watch the recording and make notes about what I needed to change
  3. Make the changes

I would typically do each step on a different day, though towards the end I would do steps 2 and 3 on the same day, and during the last week I would do all of the steps on the same day.

This flow really seems to help me get the most of out practicing my talk and getting a sense of its strengths and weaknesses.

One issue that came up a week before I was to leave for DjangoCon US was that my boss said I couldn't have anything directly related to my employer in the presentation. My initial drafts didn't have specifics, but the examples I used were too close for my comfort, so I ended up having to refactor that part of my talk.

Honestly, I think it came out better because of it. During my practice runs I felt like I was kind of dancing around topics, but once I removed them I felt freer to just kind of speak my mind.

Preparing and giving talks like these are truly a ton of work. Yes, you'll (most likely) be given a free ticket to the conference you're speaking at — but unless you're a seasoned public speaker you will have to practice a lot to give a great talk.

One thing I didn't mention in my prep time is that my talk was essentially just a rendition of my series of blog posts I started writing at DjangoCon US 2023 (Error Culture)

So when you add in the time it took for me to brainstorm those articles, write, and edit them, we're probably looking at another 5 - 7 hours of prep.

This puts me closer to the 25 hours of prep time for the 25 minute talk.

I've given 2 talks so far, and after each one I've said, 'Never again!'

It's been a few weeks since I gave my talk, and I have to say, I'm kind of looking forward to trying to give a talk again next year. Now, I just need to figure out what I would talk about that anyone would want to hear. 🤔

SSH Keys

If you want to access a server in a 'passwordless' way, the best approach I know is to use SSH Keys. This is great, but what does that mean and how do you set it up?

I'm going to attempt to write out the steps for getting this done.

Let's assume we have two servers, web1 and web2. Each of these servers has one non-root user, which I'll call user1.

So we have something like this

  • user1@web1
  • user1@web2

Suppose we want to allow user1 from web2 to access web1.

At a high level, to allow SSH access to web1 for user1 on web2, we need to:

  1. Create user1 on web1
  2. Create user1 on web2
  3. Create SSH keys on web2 for user1
  4. Add the public key for user1 from web2 to the authorized_keys file for user1 on web1

OK, let's try this. I am using DigitalOcean and will be taking advantage of their CLI tool doctl

To create a droplet, there are two required arguments:

  • image
  • size

I'm also going to include a few other options

  • tag
  • region
  • ssh-keys1

Below is the command to use to create a server called web-t-001

doctl compute droplet create web-t-001 \
  --image ubuntu-24-04-x64 \
  --size s-1vcpu-1gb \
  --enable-monitoring \
  --region sfo2 \
  --tag-name test \
  --ssh-keys $(doctl compute ssh-key list --output json | jq -r 'map(.id) | join(",")')

and to create a server called web-t-002

doctl compute droplet create web-t-002 \
  --image ubuntu-24-04-x64 \
  --size s-1vcpu-1gb \
  --enable-monitoring \
  --region sfo2 \
  --tag-name test \
  --ssh-keys $(doctl compute ssh-key list --output json | jq -r 'map(.id) | join(",")')

The ssh-keys value above will grab all of the SSH keys I have stored at DigitalOcean and add them to the droplets.

The output looks something like:

--ssh-keys 1234, 4567, 6789, 1222

Now that we've created two droplets called web-t-001 and web-t-002 we can set up user1 on each of the servers.

I'll SSH as root into each of the servers and create user1 on each (I can do this because of the ssh keys that were added as part of the droplet creation)

  • adduser --disabled-password --gecos "User 1" user1 --home /home/user1

I then switch to user1

su user1

and run this command

ssh-keygen -q -t rsa -b 2048 -f /home/user1/.ssh/id_rsa -N ''

This will generate two files in ~/.ssh without any prompts:

  • id_rsa
  • id_rsa.pub

The id_rsa file identifies the computer. This is the Private Key. It should NOT be shared with anyone!

The id_rsa.pub file is the Public Key. It CAN be shared.

The contents of the id_rsa.pub file will be used for the authorized_keys file on the computer this user will SSH into.

OK, what does this actually look like?

On web-t-002, I get the content of ~/.ssh/id_rsa.pub for user1 and copy it to my clipboard.

Then, from web-t-001 as user1, I paste that into the authorized_keys file in ~/.ssh. If the file isn't already there, it needs to be created.

This tells web-t-001 that a computer with the private key that matches that public key is allowed to connect as user1.

To put this into practice, be on web-t-002 as user1 and then run the command

ssh user1@web-t-001

The first time this is done, a prompt will come up that looks like this:

The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
ED25519 key fingerprint is SHA256:....
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

This ensures that you know which computer you're connecting to and that you want to continue. It helps to prevent potential man-in-the-middle attacks. When you type yes, this creates a file called known_hosts

OK, so where are we at? Here's a breakdown of each server's SSH files and what they contain at this point:

web-t-001

  • id_rsa: private key
  • id_rsa.pub: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDFwHs8VKWWSH737fVz4cs+5Eq8OcRJRf2ti0ytaChM1ySh2+olcKokHao3fl5G+ZZv4pQeKfCh8ClFP86g7rZN1evu2EFVlmBo1Ked4IwF4UBY2+rnfZmvxeHd+smtyZgfVZI/6ySfe1D+inAqv7otsMsNRRuE4aG0DNEJ39qwFxukGNcDXk9RNVvmwbCc5zT/HN0yMJ6Y7KtfPZgjl5v854VodZkfxsLpah7Bn64zAQr/xDh2KcWbtDrsvTdjNMPY7oW20VoqDs98mA6xAw9RNMI+xotNmivdWdv3BEYj9JyH61euTBQ27HC4LsOPuCOFKBqOwGXiJhpzvJZbNCcvQEztem3kqQFAPLg+4wBInyxnY2i31QX7+2IJs0a4pYTWRSRcrvwBAvi2GlXGltrZ7V6KOLzwBrXLD7XiO3C5kO5fcpanKlm/RdVAxUTjUq159H+v9om8HAgX/pIpYBpPnRrG7setNQVzDNQsxfR/YC0h+f9LWnnaBV6+51IjbaqAPSSf6KYv0AKO5XNlJsSTXNRBZaRvrfr0qllgXU82f9y8Eb0sgjL71wD9Fv24fV0toFW8PH3yOeePC6d7kNqZkFdSBksChzqagZwPudYnVhMmhMYV7k1v831H8WHdGPVRe9Z3BDnSCzf8o8fRS3mSEAJBiT30bXlGWUNopIpsgw== user1@ahc-web-t-001
  • authorized_keys: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCbx+wTVEcdy2Uu2iB+u6+R8Q0yH9ws92GM6K/XXmAXoUuXylkdJzw9vUeuaZTmGxwGRdp+lLh+vVDmiuzrUPjbkFA7Y1SxfR5lgJu7PviDDZzsFeUo5fqSp6FOC5x75jOjqy6fc68GzOnoxk4WR6EWKWRd+xqdgCTGWiuhfUEl1lw7YN8MUhd1Hi0Ef55ZpH133jCzffWbkLFFInyIwuzG6jaPsobNPRshvg9kUoFwo5WqCx/s8Zk4iVl86yCwoV+pXjiubLylSKF7hb7uDE4Ll8gADOtuXUqmc470yvzSxxI4yaZOFz4Ajo1qZHgscSOxWgb+ZVIOKhGK5ftHPaZ4CCxXuhW5J8L3Aqs0WQeRu9Goof83V/ruZhzgg1vnhmC2511QSS2dL6U7n2JNLtNnXNjeSQ0BGVlY1FuZRczmAxN9nJETmRCdUfiTwKdPS4LdfAwrnckPHKtk1QoFKietLwfbmipU+pGvt6qKpKeRfZ/XGbG+ZiQ7oPiqcYU/eh54IAUxxo9CvVHtn742A4ABqK5+0MJP5VuY3fcDU8dIvA0r4LpxRpG/KSB4yZMUhjf+KR7QUpN3mJIDOKTDAxpGOqpNoD2gTYGpyT13AdrRROpOjOJZJqDiVi6m6r/U+sIgqymsxDqBur5+n4VxvvXbdNd+6vz7AI12WA8I8+0xZw== user1@ahc-web-t-002
  • known_hosts: NULL

web-t-002

  • id_rsa: private key
  • id_rsa.pub: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCbx+wTVEcdy2Uu2iB+u6+R8Q0yH9ws92GM6K/XXmAXoUuXylkdJzw9vUeuaZTmGxwGRdp+lLh+vVDmiuzrUPjbkFA7Y1SxfR5lgJu7PviDDZzsFeUo5fqSp6FOC5x75jOjqy6fc68GzOnoxk4WR6EWKWRd+xqdgCTGWiuhfUEl1lw7YN8MUhd1Hi0Ef55ZpH133jCzffWbkLFFInyIwuzG6jaPsobNPRshvg9kUoFwo5WqCx/s8Zk4iVl86yCwoV+pXjiubLylSKF7hb7uDE4Ll8gADOtuXUqmc470yvzSxxI4yaZOFz4Ajo1qZHgscSOxWgb+ZVIOKhGK5ftHPaZ4CCxXuhW5J8L3Aqs0WQeRu9Goof83V/ruZhzgg1vnhmC2511QSS2dL6U7n2JNLtNnXNjeSQ0BGVlY1FuZRczmAxN9nJETmRCdUfiTwKdPS4LdfAwrnckPHKtk1QoFKietLwfbmipU+pGvt6qKpKeRfZ/XGbG+ZiQ7oPiqcYU/eh54IAUxxo9CvVHtn742A4ABqK5+0MJP5VuY3fcDU8dIvA0r4LpxRpG/KSB4yZMUhjf+KR7QUpN3mJIDOKTDAxpGOqpNoD2gTYGpyT13AdrRROpOjOJZJqDiVi6m6r/U+sIgqymsxDqBur5+n4VxvvXbdNd+6vz7AI12WA8I8+0xZw== user1@ahc-web-t-002
  • authorized_keys: NULL
  • known_hosts:
    |1|V6uYGlSiYXpzFAly9RQHybzl07o=|VUkDfRcKGyUgLdJn+iw6RJE+r68= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILpbPHA1jL0MHzBI8qb2X0mHDx3UlrKCdbz1IspvaJW9
    |1|dshOpqJI2zQxEpj1pleDmtkijIY=|ZYV8bCeLNDdyE7STDPaO2TzYUEQ= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnGgQUmsCG23b6iYxRHq5MU9xd8Q/p8j3EyZn9hvs4IsBoCgeNXjyXK28x7Mt7tmfjrF/4jLcq4o2TTAwF6eVQZ4KXoBa73dYqYDmYTVKTwzZL9CsJTWHTsSnU8V/J3Tml+hIFrjZzWP34+lL9xyOVin5R0PT/OCG49ecb5tt2FxTZeyWI47B/bCDGXV9g1tjZ8+mnbLXpIdQ9+6GllRZrEGvXWm6z/U3YHO84dcG0IZJ7QsEaAiLSBC/t83So4MDQgdttm+aHZXds4jej5E3QwUex8JkVVn0X7Nr4yKMDkSk7ABD6AFhpa4ESXysqI33CUaSBROAuu4lmfOkLmyRZK2vQ6soiOW8iBgCEl/q8MSOEpZeAi3faYbUnOpLzLDBcCoAuSDoexrTixxlhJmRDeS3PlcXmzvkJl7RRKUYZZcPQOd2w9ipCIAD1PevNlnmmZcfkRe0RRvAyF1mqcO/x5Ovtq9QLbycFHYfh/3LcPuDOWBtT+mVd5FeNUMsZ6+8=
    |1|HJd+aDFM66x8jJT1zUZV59ceL10=|PfHQu/Yg35QPBKk7FvNO/46b76o= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP+XwUozGye03WJ6zC7yoJQaYF8HiUQKmZwnQO0wSxMm9x9nBdPEx1bmyZHHUbMnwQnoeAMmd6hgK6H8hbxzEas=

OK, so now we have set it up so that user1 on web-t-002 can access web-t-001 as user1. A reasonable question to ask at this point might be, can I go the other way without any extra steps?

Let's try it and see!

From web-t-001 as user1 lets run

ssh user1@web-t-002

And see what happens

We get the same warning message we got before, and if we enter yes we then see

user1@yyy.yyy.yyy.yyy: Permission denied (publickey).

What's going on?

We haven't added the public key for user1 from web-t-001 to the authorized_keys file on web-t-002.

Let's do that now.

Similar to before we'll get the content of the id_rsa.pub from web-t-001 for user1 and copy it to the authorized_keys file on web-t-002 for user1.

When we do this we are now able to connect to web-t-002 from web-t-001 as user1

OK, SSH has 4 main files involved:

  • id_rsa.pub (public key): The SSH key pair is used to authenticate the identity of a user or process that wants to access a remote system using the SSH protocol. The public key is placed on the remote server, which uses it to verify that the connecting client holds the matching private key.
  • id_rsa (private key): The private key is secret, known only to the user, and should be encrypted and stored safely
  • known_hosts: stores the public keys and fingerprints of the hosts accessed by a user
  • authorized_keys: a file containing the public keys that are allowed to connect as this user. This is what tells the server that it's OK to allow a connection from a client holding the matching private key

Using with GHA

ssh-action from AppleBoy

A general way to access your server with GHA (say for CI/CD) is to use the GitHub Action from appleboy called ssh-action

There are 3 key components needed to get this to work:

  1. host
  2. username
  3. key

Each of these can/should be put into repository secrets. Setting those up is outside the scope of this article. For details on how to set repository secrets, see this article.

Using the servers from above we could set up the following secrets

  • SSH_HOST: web-t-001 IP Address
  • SSH_KEY: the content from /home/user1/.ssh/id_rsa ( from web-t-002 )
  • SSH_USERNAME: user1

And then set up a GitHub Action like this

name: Test Workflow

on:
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-22.04
    steps:
      - name: deploy code
        uses: appleboy/ssh-action@v0.1.10
        with:
          host: ${{ secrets.SSH_HOST }}
          port: 22
          key: ${{ secrets.SSH_KEY }}
          username: ${{ secrets.SSH_USERNAME }}

          script: |
            echo "This is a test" >  ~/test.txt

With this setup, the container that runs the GHA plays (basically) the same role that web-t-002 did: it holds the private key and can access web-t-001 as user1, just like we did in the terminal.

When this action is run it will ssh into web-t-001 as user1 and create a file called test.txt in the home directory. The content of that file will be "This is a test"

  1. I'm using these keys so that I can gain access to the server as root ↩︎

Using justpath to go on a pyrrhic adventure to clean up my PATH

A while ago I heard about a project called justpath from Jeff Triplett on Mastodon. It seemed like a neat project to try to clean up my PATH and I figured, what the heck, let me give it a try.

I installed it and when I ran it for the first time, the output looked like this

 1 /Users/ryan/.cargo/bin
 2 /usr/local/opt/ruby/bin (resolves to /usr/local/Cellar/ruby/3.2.2_1/bin)
 3 /Users/ryan/google-cloud-sdk/bin
 4 /Users/ryan/.pyenv/shims
 5 /opt/homebrew/Cellar/pyenv-virtualenv/1.2.3/shims
 6 /opt/homebrew/bin
 7 /opt/homebrew/sbin
 8 /Library/Frameworks/Python.framework/Versions/3.12/bin
 9 /usr/local/bin (duplicates: 3)
10 /usr/local/sbin
11 /Library/Frameworks/Python.framework/Versions/3.11/bin
12 /Library/Frameworks/Python.framework/Versions/3.10/bin
13 /usr/local/bin (duplicates: 3)
14 /System/Cryptexes/App/usr/bin (resolves to /System/Volumes/Preboot/Cryptexes/App/usr/bin)
15 /usr/bin
16 /bin
17 /usr/sbin
18 /sbin
19 /var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin (resolves to /private/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin, directory does not exist)
20 /var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin (resolves to /private/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin, directory does not exist)
21 /var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin (resolves to /private/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin, directory does not exist)
22 /opt/X11/bin
23 /Library/Apple/usr/bin
24 /Library/TeX/texbin (resolves to /usr/local/texlive/2024basic/bin/universal-darwin)
25 /usr/local/share/dotnet
26 ~/.dotnet/tools (directory does not exist)
27 /Library/Frameworks/Mono.framework/Versions/Current/Commands (resolves to /Library/Frameworks/Mono.framework/Versions/6.12.0/Commands)
28 /Applications/Postgres.app/Contents/Versions/latest/bin (resolves to /Applications/Postgres.app/Contents/Versions/14/bin)
29 /Applications/Xamarin Workbooks.app/Contents/SharedSupport/path-bin (directory does not exist)
30 /Users/ryan/.local/bin (duplicates: 2)
31 /usr/local/bin (duplicates: 3)
32 /Users/ryan/.local/bin (duplicates: 2)

That's a lot to look at, but helpfully there are a few flags to get only the 'bad' items

justpath --invalid

justpath --duplicates

Running justpath --invalid got this

18 /var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin (resolves to /private/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin, directory does not exist)
19 /var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin (resolves to /private/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin, directory does not exist)
20 /var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin (resolves to /private/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin, directory does not exist)
26 ~/.dotnet/tools (directory does not exist)
29 /Applications/Xamarin Workbooks.app/Contents/SharedSupport/path-bin (directory does not exist)

and running justpath --duplicates got this

 8 /usr/local/bin (duplicates: 3)
12 /usr/local/bin (duplicates: 3)
30 /Users/ryan/.local/bin (duplicates: 2)
31 /usr/local/bin (duplicates: 3)
32 /Users/ryan/.local/bin (duplicates: 2)

Great! Now I know what is invalid and what is duplicated. Surely justpath will have a command to clean this up, right?

(Anakin and Padmé meme)

Turns out not so much, and for good reasons:

justpath does not alter the path itself, it provides the corrected version that a user can apply further. A child process the one that Python is running in cannot alter the parent environment PATH. As for reading the PATH - justpath relies on what is available from os.environ['PATH'].

So I posted on Mastodon to see how others may have approached this and Jeff replied back

I ran the tool and copied the output before starting from an empty PATH.

Then I ran justpath and added them back one at a time.

mise helped me cut at least half a dozen weird/duplicate path statements alone. I had quite a bit of tool overlap.

I poked around trying to find where the invalid values were coming from and found this answer

Cryptexes are used to update parts of macOS quickly, without requiring a full rebuild of the SSV (see this answer for details). I don't know whether removing these paths will break the installation of such cryptexes. But if you want to take the risk, you can remove /etc/paths.d/10-cryptex (or move it to a safe place in case you need it later on). PS: Invalid entries in PATH don't really hurt, they primarily slow down (a very little bit) the lookup of new commands run from the shell the first time.

This supported my assumption that the invalid PATH variables were likely due to macOS upgrades (I guessed that based on the existence of invalid PATH variables on other Macs in my house that didn't have a programmer / developer using them)

So, with that I found 2 files that were in /etc/paths.d and moved them to my desktop. Once that was done I restarted my terminal and got this output

 1 /Users/ryan/.cargo/bin
 2 /usr/local/opt/ruby/bin (resolves to /usr/local/Cellar/ruby/3.2.2_1/bin)
 3 /Users/ryan/google-cloud-sdk/bin
 4 /Users/ryan/.pyenv/shims
 5 /opt/homebrew/Cellar/pyenv-virtualenv/1.2.3/shims
 6 /opt/homebrew/bin
 7 /opt/homebrew/sbin
 8 /Library/Frameworks/Python.framework/Versions/3.12/bin
 9 /usr/local/bin (duplicates: 3)
10 /usr/local/sbin
11 /Library/Frameworks/Python.framework/Versions/3.11/bin
12 /Library/Frameworks/Python.framework/Versions/3.10/bin
13 /usr/local/bin (duplicates: 3)
14 /System/Cryptexes/App/usr/bin (resolves to /System/Volumes/Preboot/Cryptexes/App/usr/bin)
15 /usr/bin
16 /bin
17 /usr/sbin
18 /sbin
19 /opt/X11/bin
20 /Library/Apple/usr/bin
21 /Library/TeX/texbin (resolves to /usr/local/texlive/2024basic/bin/universal-darwin)
22 /usr/local/share/dotnet
23 /Library/Frameworks/Mono.framework/Versions/Current/Commands (resolves to /Library/Frameworks/Mono.framework/Versions/6.12.0/Commands)
24 /Applications/Postgres.app/Contents/Versions/latest/bin (resolves to /Applications/Postgres.app/Contents/Versions/14/bin)
25 /Users/ryan/.local/bin (duplicates: 2)
26 /usr/local/bin (duplicates: 3)
27 /Users/ryan/.local/bin (duplicates: 2)

This just left the duplicates that were being added. There are a few spots where applications (or people!) will add items to PATH and there are MANY opinions on which one is The Right Way TM. I use zsh and looked in each of these files (~/.profile, ~/.zshrc, ~/.zprofile) and found that only my .zshrc file contained the addition to PATH for the duplicates I was seeing

  • /usr/local/bin
  • /Users/ryan/.local/bin

With that, I simply commented them out of my .zshrc, restarted my terminal, and now I am down to a single duplicated entry

 8 /usr/local/bin (duplicates: 2)
12 /usr/local/bin (duplicates: 2)

I looked everywhere trying to find where this duplicate was coming from. I tried a few variations of find

find . -type f -name '.*' 2>/dev/null -exec grep -H '/usr/local/bin' {} \;

But the results were only the commented out lines from .zshrc.

I did find this issue on the Homebrew repo and thought I might be onto something, but it was for a different path so that doesn't seem to be the culprit.

I eventually gave up on trying to find the one true source of the duplicate entry (though I suspect that something is adding it in the same way that Homebrew can add a duplicate PATH entry) because I found this command

typeset -U path

I added it to my .zshrc, restarted my terminal and voila, a clean PATH

 1 /Users/ryan/.cargo/bin
 2 /Users/ryan/google-cloud-sdk/bin
 3 /Users/ryan/.pyenv/shims
 4 /opt/homebrew/Cellar/pyenv-virtualenv/1.2.3/shims
 5 /opt/homebrew/bin
 6 /opt/homebrew/sbin
 7 /Library/Frameworks/Python.framework/Versions/3.12/bin
 8 /usr/local/bin
 9 /usr/local/sbin
10 /Library/Frameworks/Python.framework/Versions/3.11/bin
11 /Library/Frameworks/Python.framework/Versions/3.10/bin
12 /System/Cryptexes/App/usr/bin (resolves to /System/Volumes/Preboot/Cryptexes/App/usr/bin)
13 /usr/bin
14 /bin
15 /usr/sbin
16 /sbin
17 /opt/X11/bin
18 /Library/Apple/usr/bin
19 /Library/TeX/texbin (resolves to /usr/local/texlive/2024basic/bin/universal-darwin)
20 /usr/local/share/dotnet
21 /Library/Frameworks/Mono.framework/Versions/Current/Commands (resolves to /Library/Frameworks/Mono.framework/Versions/6.12.0/Commands)
22 /Applications/Postgres.app/Contents/Versions/latest/bin (resolves to /Applications/Postgres.app/Contents/Versions/16/

I still want to try and figure out where that extra /usr/local/bin is coming from, but for now, I have been able to clean up my PATH.

So, the question is, does this really matter? In the answer I found, the last statement really caught my attention

PS: Invalid entries in PATH don't really hurt, they primarily slow down (a very little bit) the lookup of new commands run from the shell the first time.

So the answer is, probably not, BUT just knowing that I had duplicates and invalid entries in my PATH was like a splinter in my brain 1 that needed to be excised.

What are your experiences with your PATH? Have you gone to these lengths to clean it up, or figured, "Meh, it's not hurting anything, why bother?"

  1. Yes, that is a Matrix reference ↩︎
