Putting it All Together

In this final post I'll write up how everything fits together. As a recap, here are the steps I go through to create and publish a new post:

Create Post

  1. Create .md for my new post
  2. write my words
  3. edit post
  4. Change status from draft to published

Publish Post

  1. Run make html to generate the SQLite database that powers my site's search tool1
  2. Run make vercel to deploy the SQLite database to vercel
  3. Run git add <filename> to add post to be committed to GitHub
  4. Run git commit -m <message> to commit to GitHub
  5. Post to Twitter with a link to my new post

My previous posts have gone over how each step was automated, but now we'll 'throw it all together'.

I updated my Makefile with a new command:

tweet:
    ./tweet.sh

When I run make tweet it calls tweet.sh. I wrote about the tweet.sh file in Auto Generating the Commit Message so I won't go deeply into it here. What it does is automate steps 1 - 5 of the Publish Post section above.

And that's it really. I've now been able to automate the file creation and publish process.

Admittedly these are the 'easy' parts. The hard part is the actual writing, but automating the rest does remove a ton of potential friction from my workflow, and that will hopefully lead to more writing this year.

  1. make vercel actually runs make html so this isn't really a step that I need to do. ↩︎

Automating the file creation

In my last post, Auto Generating the Commit Message, I indicated that in this post I would "throw it all together and get to a spot where I can run one make command that will do all of this for me".

I decided to take a brief detour though as I realized I didn't have a good way to create a new post, i.e. the starting point wasn't automated!

In this post I'm going to go over how I create the start of a new post using a Makefile and the command make newpost.

My initial idea was to create a new bash script (similar to the tweet.sh file), but as a first iteration I went in a different direction based on this post How to Slugify Strings in Bash.

The command that the post above finally arrived at was:

newpost:
    vim +':r templates/post.md' $(BASEDIR)/content/blog/$$(date +%Y-%m-%d)-$$(echo -n $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md

which was really close to what I needed. My static site is set up a bit differently, though, and I'm not using vim (I'm using VS Code) to write my words.
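Before adapting it, it helps to see what the slugify pipeline actually does. Here it is run on a sample title (printf '%s' stands in for the post's echo -n, which behaves the same here):

```shell
# Slugify a sample title: replace every non-alphanumeric character with '-',
# squeeze repeated '-' runs, then lowercase everything.
title='Automating the file creation'
printf '%s' "${title}" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr 'A-Z' 'a-z'
# → automating-the-file-creation
```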

The first change I needed to make was to remove the use of vim from the command and instead use touch to create the file

newpost:
    touch $(BASEDIR)/content/blog/$$(date +%Y-%m-%d)-$$(echo -n $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md

The second was to change the file path for where to create the file. As I've indicated previously, the structure of my content looks like this:

content
├── musings
├── pages
├── productivity
├── professional\ development
└── technology

giving me an updated version of the command that looks like this:

touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md

When I run the command make newpost title='Automating the file creation' category='productivity' I get an empty new file created.

Now I just need to populate it with the data.

There are seven bits of metadata that need to be added, but four of them are the same for every post:

Author: ryan
Tags:
Series: Remove if Not Needed
Status: draft

That allows me to have the newpost command look like this:

newpost:
    touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Author: ryan" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Tags: " >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Series: Remove if Not Needed" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Status: draft" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md

The remaining metadata to be added are:

  • Title
  • Date
  • Slug

Of these, Date and Title are the most straightforward.

Bash has a command called date whose output can be formatted the way I want with %F. Using this I can get the date like this:

echo "Date: $$(date +%F)" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
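Run on its own, date +%F produces an ISO 8601 date (the exact value obviously depends on the day you run it):

```shell
# %F is shorthand for %Y-%m-%d, e.g. 2022-01-24
date +%F
```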

For Title I can take the input parameter title like this:

echo "Title: $${title}" > content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md

Slug is just Title but slugified. Trying to figure out how to do this is how I found the article above.

Using a slightly modified version of the code that generates the file name, we get this:

printf "Slug: " >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
echo "$${title}" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md

One thing to notice here is the printf. I needed/wanted to use echo -n, but make didn't seem to like that. This StackOverflow answer helped me get a fix (using printf), though I'm sure there's a way to get it to work with echo -n.

Essentially, since this was a first pass, and since I'm pretty sure I'm going to end up rewriting this as a shell script, I didn't want to spend too much time getting a perfect answer here.

OK, with all of that, here's the entire newpost recipe I'm using now:

newpost:
    touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Title: $${title}" > content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Date: $$(date +%F)" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Author: ryan" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Tags: " >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    printf "Slug: " >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "$${title}" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Series: Remove if Not Needed" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md
    echo "Status: draft" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z).md

This allows me to type make newpost and generate a new file for me to start my new post in!1
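Since I'm pretty sure I'll end up rewriting this as a shell script anyway, here's a rough sketch of what that might look like (speculative, not the recipe I'm actually running; printf '%s' stands in for echo -n, and a single heredoc replaces the per-line redirects):

```shell
# Sketch only: compute the slug and file path once, then write all of the
# metadata in a single heredoc instead of one redirect per line.
title='My Great New Post'
category='productivity'

slug=$(printf '%s' "${title}" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr 'A-Z' 'a-z')
file="content/${category}/${slug}.md"

mkdir -p "content/${category}"
cat > "${file}" <<EOF
Title: ${title}
Date: $(date +%F)
Author: ryan
Tags:
Slug: ${slug}
Series: Remove if Not Needed
Status: draft
EOF
```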

  1. When this post was originally published the slug command didn't account for making all of the text lower case. This was fixed in a subsequent commit ↩︎

Auto Generating the Commit Message

In my first post of this series I outlined the steps needed in order for me to post. They are:

  1. Run make html to generate the SQLite database that powers my site's search tool1
  2. Run make vercel to deploy the SQLite database to vercel
  3. Run git add <filename> to add post to be committed to GitHub
  4. Run git commit -m <message> to commit to GitHub
  5. Post to Twitter with a link to my new post

In this post I'll be focusing on how I automated step 4, Run git commit -m <message> to commit to GitHub.

Automating the "git commit ..." part of my workflow

In order for my GitHub Action to auto post to Twitter, my commit message needs to be in the form of "New Post: ...". What I'm looking for is to be able to have the commit message be something like this:

New Post: Great New Post https://ryancheley.com/yyyy/mm/dd/great-new-post/

This is basically just three parts from the markdown file, the Title, the Date, and the Slug.

In order to get those details, I need to review the structure of the markdown file. For Pelican, writing in markdown, my file is structured like this:

Title:
Date:
Tags:
Slug:
Series:
Authors:
Status:

My words start here and go on for a bit.

In the last post I wrote about how to git add the files in the content directory. Here, I want to take the file that was added to git and get the first 7 rows, i.e. the details from Title to Status.

The file that was updated that needs to be added to git can be identified by running

find content -name '*.md' -print | sed 's/^/"/g' | sed 's/$/"/g' | xargs git add

Running git status now will display which file was added with the last command and you'll see something like this:

❯ git status
On branch main
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        content/productivity/auto-generating-the-commit-message.md

What I need though is a more easily parsable output. Enter the --porcelain flag which, per the docs:

Give the output in an easy-to-parse format for scripts. This is similar to the short output, but will remain stable across Git versions and regardless of user configuration. See below for details.

which is exactly what I needed.

Running git status --porcelain you get this:

❯ git status --porcelain
?? content/productivity/more-writing-automation.md

Now, I just need to get the file path and exclude the status (the ?? above, in this case2), which I can do by piping the results to sed

❯ git status --porcelain | sed s/^...//
content/productivity/more-writing-automation.md

The sed portion says:

  • search the output string starting at the beginning of the line (^)
  • match the first three characters (...)3
  • replace them with nothing (//)
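Using the sample line above, the substitution in isolation looks like this (it simply drops the two status characters and the space):

```shell
# Strip the leading two-character status code plus the following space
echo '?? content/productivity/more-writing-automation.md' | sed 's/^...//'
# → content/productivity/more-writing-automation.md
```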

There are a few lines here whose content I need for my commit message:

  • Title
  • Slug
  • Date
  • Status4

I can use head to get the first n lines of a file. In this case, I need the first 7 lines of the file identified by git status --porcelain | sed s/^...//. To do that, I pipe the file name to head with xargs!

git status --porcelain | sed s/^...// | xargs head -7

That command will return this:

Title: Auto Generating the Commit Message
Date: 2022-01-24
Tags: Automation
Slug: auto-generating-the-commit-message
Series: Auto Deploying my Words
Authors: ryan
Status: draft

In order to get the Title, I'll pipe this output to grep to find the line with Title

git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: '

which will return this

Title: Auto Generating the Commit Message

Now I just need to remove the leading Title: and I've got the title I'm going to need for my Commit message!

git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'

which returns just

Auto Generating the Commit Message

I do this for each of the parts I need:

  • Title
  • Slug
  • Date
  • Status

Now, this is getting to have a lot of parts, so I'm going to throw it into a bash script file called tweet.sh. The contents of the file look like this:

TITLE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'`
SLUG=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Slug: ' | sed -e 's/Slug: //g'`
POST_DATE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Date: ' | sed -e 's/Date: //g' | head -c 10 | grep '-' | sed -e 's/-/\//g'`
POST_STATUS=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Status: ' | sed -e 's/Status: //g'`

You'll see above that the Date piece is a little more complicated, but it's just doing a find and replace on the dashes, updating them to slashes for the URL.
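To see just the date handling in isolation, here's the same idea with a hard-coded sample line standing in for the file contents (slightly simplified from the script, which also greps for the -):

```shell
# Pull the date out of a 'Date:' line, keep the first 10 characters
# (yyyy-mm-dd), and swap '-' for '/' to build the URL path.
echo 'Date: 2022-01-24' | sed -e 's/Date: //g' | head -c 10 | sed -e 's/-/\//g'
# → 2022/01/24
```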

Now that I've got all of the pieces I need, it's time to start putting them together

I define a new variable called URL and set it

URL="https://ryancheley.com/$POST_DATE/$SLUG/"

and the commit message

MESSAGE="New Post: $TITLE $URL"

Now, all I need to do is wrap this in an if statement so the command only runs when POST_STATUS is published

if [ "$POST_STATUS" = "published" ]
then
    MESSAGE="New Post: $TITLE $URL"

    git commit -m "$MESSAGE"

    git push github main
fi
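One shell detail worth noting: if the Status line were ever missing, $POST_STATUS would be empty, and an unquoted test like [ $POST_STATUS = "published" ] expands to a syntax error. Quoting the variable lets an empty value fall through cleanly; a quick demonstration:

```shell
# With quotes, an empty status takes the else branch instead of erroring
POST_STATUS=""
if [ "$POST_STATUS" = "published" ]; then
    echo "would publish"
else
    echo "draft or missing status"
fi
# → draft or missing status
```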

Putting this all together (including the git add from my previous post), the tweet.sh file looks like this:

# Add the post to git
find content -name '*.md' -print | sed 's/^/"/g' | sed 's/$/"/g' | xargs git add


# Get the parts needed for the commit message
TITLE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'`
SLUG=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Slug: ' | sed -e 's/Slug: //g'`
POST_DATE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Date: ' | sed -e 's/Date: //g' | head -c 10 | grep '-' | sed -e 's/-/\//g'`
POST_STATUS=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Status: ' | sed -e 's/Status: //g'`

URL="https://ryancheley.com/$POST_DATE/$SLUG/"

if [ "$POST_STATUS" = "published" ]
then
    MESSAGE="New Post: $TITLE $URL"

    git commit -m "$MESSAGE"

    git push github main
fi

When this script is run it will find an updated or added markdown file (i.e. an article) and add it to git. It will then parse the file to get data about the article. If the article is set to published it will commit the file with a message and push to GitHub. Once at GitHub, the Tweeting action I wrote about will tweet my commit message!

In the next (and last) article, I'm going to throw it all together and get to a spot where I can run one make command that will do all of this for me.

Caveats

The script above works, but if you have multiple articles in flight at the same time, it will fail pretty spectacularly. The final version of the script guards against that.
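A guard along these lines (a hypothetical sketch, not the actual final version) would catch the multi-article case by counting the pending markdown files before parsing anything:

```shell
# Count pending .md files from `git status --porcelain` style input.
# Reads the porcelain output on stdin; in the real script you'd pipe
# `git status --porcelain` into this and bail out unless the count is 1.
pending_md_count() {
    sed 's/^...//' | grep -c '\.md$'
}

# Demo with canned porcelain output (two posts pending):
printf '?? content/productivity/post-a.md\n?? content/musings/post-b.md\n' | pending_md_count
# → 2
```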

  1. make vercel actually runs make html so this isn't really a step that I need to do. ↩︎
  2. Other values could just as easily be M or A ↩︎
  3. Why the first three characters, because that's how porcelain outputs the status ↩︎
  4. I will also need the Status to do some conditional logic otherwise I may have a post that is in draft status that I want to commit and the GitHub Action will run posting a tweet with an article and URL that don't actually exist yet. ↩︎

git add filename automation

In my last post I mentioned the steps needed in order for me to post. They are:

  1. Run make html to generate the SQLite database that powers my site's search tool1
  2. Run make vercel to deploy the SQLite database to vercel
  3. Run git add <filename> to add post to be committed to GitHub
  4. Run git commit -m <message> to commit to GitHub
  5. Post to Twitter with a link to my new post

In that post I focused on number 5, posting to Twitter with a link to the post using GitHub Actions.

In this post I'll be focusing on how I automated step 3, "Run git add <filename> to add post to be committed to GitHub".

Automating the git add ... part of my workflow

I have my pelican content set up so that the category of a post is determined by the directory a markdown file is placed in. The structure of my content folder looks like this:

content
├── musings
├── pages
├── productivity
├── professional\ development
└── technology

If you just run git status on a directory it will give you the status of all of the files in that directory that have been changed, added, or removed. Something like this:

❯ git status
On branch main
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        content/productivity/more-writing-automation.md
        Makefile
        metadata.json

That means that when you run git add . all of those files will be added to git. For my purposes all that I need is the one updated file in the content directory.

The find command does a great job of taking a directory and letting you search for what you want in it. You can run something like

find content -name '*.md' -print

And it will return essentially what you're looking for. Something like this:

content/pages/404.md
content/pages/curriculum-vitae.md
content/pages/about.md
content/pages/brag.md
content/productivity/adding-the-new-file.md
content/productivity/omnifocus-3.md
content/productivity/making-the-right-choice-or-how-i-learned-to-live-with-limiting-my-own-technical-debt-and-just-be-happy.md
content/productivity/auto-tweeting-new-post.md
content/productivity/my-outlook-review-process.md
content/productivity/rules-and-actions-in-outlook.md
content/productivity/auto-generating-the-commit-message.md
content/productivity/declaring-omnifocus-bankrupty.md

However, because one of my categories has a space in its name (professional development), if you pipe the output of this to xargs git add it fails with the error

fatal: pathspec 'content/professional' did not match any files

In order to get around this, you need to surround each line of the output of find with double quotes ("). You can do this by using sed

find content -name '*.md' -print | sed 's/^/"/g' | sed 's/$/"/g'

What this says is: take the output of find and pipe it to sed, using a global find and replace to add a " to the start of the line (that's what the ^ does), then pipe that to sed again to add a " to the end of the line (that's what the $ does).

Now, when you run

find content -name '*.md' -print | sed 's/^/"/g' | sed 's/$/"/g'

The output looks like this:

"content/pages/404.md"
"content/pages/curriculum-vitae.md"
"content/pages/about.md"
"content/pages/brag.md"
"content/productivity/adding-the-new-file.md"
"content/productivity/omnifocus-3.md"
"content/productivity/making-the-right-choice-or-how-i-learned-to-live-with-limiting-my-own-technical-debt-and-just-be-happy.md"
"content/productivity/auto-tweeting-new-post.md"
"content/productivity/my-outlook-review-process.md"
"content/productivity/rules-and-actions-in-outlook.md"
"content/productivity/auto-generating-the-commit-message.md"
"content/productivity/declaring-omnifocus-bankrupty.md"

Now, you can pipe your output to xargs git add and there is no error!

The final command looks like this:

find content -name '*.md' -print | sed 's/^/"/g' | sed 's/$/"/g' | xargs git add
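As an aside, another common way to handle spaces in file names is NUL-delimited output: find content -name '*.md' -print0 | xargs -0 git add. It's not what I'm using here, but here's a quick demonstration with printf standing in for git add:

```shell
# Create a file whose path contains a space, then show that -print0/-0
# passes it through as a single argument (printf stands in for `git add`).
mkdir -p 'content/professional development'
touch 'content/professional development/a test post.md'
find content -name '*.md' -print0 | xargs -0 -n1 printf 'would add: %s\n'
```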

In the next post, I'll walk through how I generate the commit message to be used in the automatic tweet!

  1. make vercel actually runs make html so this isn't really a step that I need to do. ↩︎

Auto Tweeting New Post

Each time I write something for this site there are several steps that I go through to make sure that the post makes its way to where people can see it.

  1. Run make html to generate the SQLite database that powers my site's search tool1
  2. Run make vercel to deploy the SQLite database to vercel
  3. Run git add <filename> to add post to be committed to GitHub
  4. Run git commit -m <message> to commit to GitHub
  5. Post to Twitter with a link to my new post

If there are more than two things to do, I'm totally going to forget to do one of them.

The above steps are all automatable, but the one I wanted to tackle first was the automated tweet. Last night I figured out how to tweet with a GitHub Action.

There were a few things to do to get the auto tweet to work:

  1. Find a GitHub Action in the Marketplace that did the auto tweet (or try to write one if I couldn't find one)
  2. Set up a twitter app with Read and Write privileges
  3. Set the necessary secrets for the repo (API Key, API Key Secret, Access Token, Access Token Secret, Bearer)
  4. Test the GitHub Action

The action I chose was send-tweet-action. It's got easy-to-read documentation on what is needed. Honestly, the hardest part was getting a Twitter app set up with Read and Write privileges.

I'm still not sure how to do it, honestly. I was lucky enough that I already had an app sitting around with Read and Write from the WordPress blog I had previously, so I just regenerated the keys for that one and used them.

The last bit was just testing the action and seeing that it worked as expected. It was pretty cool running an action and then seeing a tweet in my timeline.

The TIL for this was that GitHub Actions can have conditionals. This is important because I don't want to generate a new tweet each time I commit to main. I only want that to happen when I have a new post.

To do that, you just need this in the GitHub Action:

    if: "contains(github.event.head_commit.message, '<String to Filter on>')"

In my case, the <String to Filter on> is New Post:.

The send-tweet-action has a status field which is the text tweeted. I can use the github.event.head_commit.message in the action like this:

    ${{ github.event.head_commit.message }}

Now when I have a commit message that starts with 'New Post:' against main, I'll have a tweet sent out too!
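The contains() check is a GitHub Actions expression, but the idea is just a substring match. Sketched in shell terms (my analogy, not how Actions implements it):

```shell
# Substring match on the commit message, mirroring the Action's conditional
MESSAGE='New Post: Great New Post https://ryancheley.com/2022/01/24/great-new-post/'
case "$MESSAGE" in
    *'New Post:'*) echo 'would tweet' ;;
    *)             echo 'skip' ;;
esac
# → would tweet
```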

This got me to thinking that I can/should automate all of these steps.

With that in mind, I'm going to work on getting the process down to just having to run a single command. Something like:

    make publish "New Post: Title of my Post https://www.ryancheley.com/yyyy/mm/dd/slug/"
  1. make vercel actually runs make html so this isn't really a step that I need to do. ↩︎

OmniFocus 3!

The OmniGroup posted on their blog the other day that they have a ship date for OmniFocus 3 … May 30.

To say that I’m excited is a bit of an understatement. I’ve been a loyal OmniFocus user for about 5 years now, and though I have declared OmniFocus bankruptcy once, I still believe it’s a super useful App and I could not imagine life without it.

OmniFocus 3 will bring with it tags, which is something I've really wanted and struggled without in OmniFocus 2. Sometimes things don't fall into pure GTD contexts, and tags appear to be a recognition of that.

The initial release on May 30 is iOS only. The Mac Version is set to be released in 2018 but no firm date has been announced.

Only 25 days until upgrade day 😊

My Outlook Review Process

In a previous post I spoke about my use of Rules and Custom Actions in Outlook. In this post I’ll talk about my Review process which I adapted from David Allen’s Getting Things Done methodology.

There are several decisions that need to be made when reviewing email:

  1. Is this for me?
  2. Is it something that I need to do right now?

If the email is not for me then I need to decide who it is for. Once I have made that decision I Delegate the responsibility of the email.

I will forward the email (or write up a new email with the salient details) and send it to the person that I’m delegating the responsibility to. But here’s the trick … I have a rule that if I put my email address in the BCC line it will be moved to a folder called @waiting and flagged as a follow up item.

I then review the @waiting folder once a day or every other day and will send a follow up email if needed OR delete the email once the person has responded to me.

The next decision I need to make is “Do I need to do this right now?” If I don’t, then I’ll Defer a response or action to the needed time. Unfortunately Outlook doesn’t have a good defer mechanism like my beloved OmniFocus, so I will convert the email into a meeting scheduled for the date and time when I need to work on it, removing all attendees other than me.

Finally, the email may require me to Do something right now. In that case, I focus fully on the requirements of the email and do it.

This may involve scheduling a meeting, supplying times when I’m available for a meeting, or answering a question asked of me. I will do any necessary research, OR, as I start working on an email that requires a decision, I’ll discover that I don’t have enough information to actually make that decision.

When that happens I’ll send an email (either forwarding or creating a new one) to the person I think that can supply the information and, you guessed it, I’ll BCC myself so that the email is put into my @waiting folder.

Using the above methodology I’m able to manage the massive amounts of email that I receive. Hopefully this helps someone out there who is drowning in email.

Rules and Actions in Outlook or How I stayed sane after coming back to 1000+ emails

One of the best things about the holidays is getting to take some time off and visit family and friends.

One of the worst things about the holidays is coming back to work and an inbox with more than 1200 emails. Granted, I was out of the office for 4 business days so it’s only about 300 emails per day, but still … it’s a lot1.

I have two strategies for dealing with the many emails I get:

  1. Outlook Rules (Rules)
  2. Outlook Custom Actions (Actions)

Rules

Rules allow you to apply actions to emails automatically as they are received in your inbox. This is useful if you work (or manage) one or more tech support desks and you need to segregate the email received into folders to review later.

This is exactly the kind of thing that I do. I currently have more than 50 rules set up in Outlook and they help process about 50%-60% of the emails that I receive.

The other 40% - 50% can’t be ‘Rule’ driven.

For these types of emails I use Actions.

Actions

Actions are amazing because they can have keyboard shortcuts assigned to them.2

I have 3 Actions that I use consistently to help manage my email that isn’t automatically routed with Rules:

  1. Archive (Ctrl + Shift + 9)
  2. Review (Ctrl + Shift + 5)
  3. Review with Boss (Ctrl + Shift + 1)

I archive all internal emails I receive, and probably about 90% of the external emails I receive so that I can refer back to them when needed. I have to archive them into a .pst file because my IS department only allocates a finite (some might say small finite) amount of space.

Using the Keyboard Shortcut Ctrl + Shift + 9 will automatically move an email to the archive. This is really helpful for emails that were sent to me for informational purposes OR that have a long email chain and a resolution was reached without me.

The Review Action will mark the email with a Flag and move it to a @Review folder. I do this with emails that will take more than 2 minutes to process. This works well for longer emails, or emails that need me to make a decision and I am not ready to make that decision yet.

The Review with Boss action is the one action that is used the least of these three. It is useful when I receive an email and need guidance from my boss. If I just need to send her an email about it then I’ll usually mark it as Review and follow up with it during my review process or if it will take less than 2 minutes to write up an email I’ll take care of it right then and there.

Usually emails to her take longer than 2 minutes to compose because they need to be short.3

In my next post I’ll go over my review process. This is where I go through the 25 - 100 emails that have been marked as Review and answer the emails that need attention.

  1. I used to get even more. When my company upgraded to Exchange Server 2016 we got the ability for Out of Office Notifications to appear in a message BEFORE a person sent it. This has cut down on the amount of email I receive when I’m out of the office by about 30% - 50% ↩︎
  2. If you’re not using keyboard shortcuts, you’re really missing out on some power user productivity hacks that will help you out. ↩︎
  3. This idea is cribbed from Blaise Pascal’s statement, “Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte” … in English, “I have made this longer than usual because I have not had time to make it shorter.” Hat tip to Quote Investigator ↩︎

Making the Right Choice, or How I Learned to Live with Limiting My Own Technical Debt and Just Be Happy

One of the things that comes up in my day job is trying to make sure that the reports we create are correct, not only from a data perspective, but from an architectural perspective. There are hundreds of legacy reports with legacy SQL code that has been written by tens of developers (some actual developers and some not-so-actual developers) over the last 10+ years.

Today a request came in to update a productivity report to include a new user. The request included their user ID from the application where their productivity is tracked.

This request looked exactly like another report request that I’ve seen involving productivity from the same system with the same aspects (authorizations work).

I immediately thought that the reports were the same and set out to add the user ID to a table ReferralsTeam, which includes the fields UserID and UserName.

I also thought that documenting what needed to be done for this would be a good idea.

I documented the fix and linked the Confluence article to the JIRA issue, and then I realized my mistake. This wasn’t the same report. It wasn’t even the same department!

OK, so two things:

  1. There are two reports that do EXACTLY the same thing for two different departments
  2. The report for the other department has user ids hard coded in the SQL

What to do?

The easy way is to just update the stored procedure that has the hard coded user ids to include the new one and call it a day.

The right way:

  1. Update the table ReferralsTeam to have a new column called department ... or better yet create a second table called Departments with fields DepartmentID and DepartmentName and add the DepartmentID to the ReferralsTeam table.
  2. Populate the new column with the correct data for the current team that has records in it
  3. Update the various stored procedures that use the ReferralsTeam table to include a parameter that filters on the new column, keeping the data consistent
  4. Add the User IDs from the report that has the user IDs hard coded, i.e. the new department
  5. Update the report that uses the hard coded user ids to use the dynamic stored procedure
  6. Verify the results
  7. Build a user interface to allow the data to be updated outside of SQL Server Management Studio
  8. Give access to that user interface to the Department Managers so they can manage it on their own

So, which one would you do?

In this case, I updated the hard coded stored procedure to include the new user id to get that part of the request done. This helps satisfy the requester and allows them to have the minimum amount of down time.

I then also created a new JIRA issue so that we can look at doing steps 1 - 6 above and assigned it to the report developer. Steps 7 & 8 are in a separate JIRA issue that is assigned to the Web Developers.

Doing things the right way will sometimes take longer to implement in the short run, but in the long run we’ve removed the need for Managers in these departments to ask to have the reports updated, we prevent bad/stale filters from being used, and we can eliminate a duplicative report!

One interesting note: the reason I caught the duplication was because of a project we’ve been working on to document all of the hundreds of reports we have. I searched Confluence for the report name and its recipients were unexpected to me. That led me to question all I had done and really evaluate the best course of action. While I kind of went out of order (and that is why I started documenting a process that I didn’t mean to), I was still able to catch my mistake and rectify it.

Documentation is a pain in the ass in the moment, but holy crap it can really pay off in unexpected ways in the long run.

Declaring OmniFocus Bankruptcy

I've been an on-again, off-again user of the Getting Things Done methodology for several years now. I'm also a geek, so I indulge my inner geekiness and like to have 'tools' to help me with my hobbies / neuroses. Enter OmniFocus, an amazing GTD application created by The Omni Group.

I have always enjoyed how easy it is to sync between each of my devices using the Omni Sync Server so that my iPhone knows what changes I made on my iMac. It's pretty sweet, but lately I've gotten overwhelmed with the cruft in my Omnifocus database. So much so that I've actually stopped using OmniFocus as my GTD application of choice and have 'gone rogue' and am not using anything right now. Actually, I haven't used anything for several weeks now. It's starting to get to me.

Tonight I decided, the hell with it. I'm ignoring my 'todo' list anyway, why not just declare 'OmniFocus / GTD bankruptcy' and start the whole darn thing over again.

In order to make 'all my troubles go away' I found this article on the Omni Group's support forum ... which, BTW, is a great place for all things OmniFocus!

Using the instructions, I found where the .ofocus file was located and changed its name from this:

(screenshot: OmniFocus Start)

to this:

(screenshot: OmniFocus End)

Then I just followed steps 5 - 11 and magically all of my tasks were gone. Just. Like. That.

Then I had to update OmniFocus on iOS, but that wasn't an issue. I just selected 'Keep Sync Database' to overwrite the database on iOS and I was all set.

Doing this loses ALL data, including Contexts and Perspectives, but I can recreate the ones I need easily enough. There's this guy called MacSparky who's kind of a savant about this stuff. I'm sure he's got a post or two that can help.

I don't know that I'll do better this time, or that I won't just do this again in 6 months, or 12 months, or 5 years ... but for right now, it's what I need to do so I can get back to Getting Things Done.