Talk Python Build 10 Apps Review

Michael Kennedy over at Talk Python had a sale on his courses over the holidays so I took the plunge and bought them all. I have been listening to the podcast for several months now so I knew that I wouldn’t mind listening to him talk during a course (which is important!).

The first course I watched was ‘Python Jumpstart by Building 10 Apps’. The apps were:

  • App 1: Hello (you Pythonic) world
  • App 2: Guess that number game
  • App 3: Birthday countdown app
  • App 4: Journal app and file I/O
  • App 5: Real-time weather client
  • App 6: LOLCat Factory
  • App 7: Wizard Battle App
  • App 8: File Searcher App
  • App 9: Real Estate Analysis App
  • App 10: Movie Search App

For each app you learn a specific set of skills related to either Python or writing ‘Pythonic’ code. I think the best part was that, since it was all self-paced, I was able to spend time where I wanted, exploring ideas and concepts in a way that wouldn’t have been possible in a traditional classroom.

Also, since I’m fully adulted it can be hard to find time to watch and interact with courses like this so being able to watch them when I wanted to was a bonus.

Hello (you Pythonic) world is what you would expect from any introductory course. You write the basic ‘Hello World’ script, but with a twist: the app asks for your name and then outputs ‘Hello username my name is HAL!’ … although, because I am who I am, HAL wasn’t the name used in the course, it was just the name I chose for my version of the app.

My favorite app to build and use was the Wizard Battle App (app 7). It is a text adventure influenced by Dungeons & Dragons and teaches about classes, inheritance, and polymorphism. It was pretty cool.

The version that you are taught to make only has 4 creatures and ends pretty quickly. I enhanced the game to have it randomly create up to 250 creatures (some of them poisonous) and you level up during the game so that you can feel like a real character in an RPG.
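In spirit, my enhanced version looked something like this. This is a minimal sketch rather than the course’s code; the class names, probabilities, and numbers here are all mine:

import random

class Creature:
    def __init__(self, name, level): = name
        self.level = level

    def attack_roll(self):
        # Subclasses can override this (polymorphism in action)
        return random.randint(1, 12) * self.level

class PoisonousCreature(Creature):
    def attack_roll(self):
        # Poisonous creatures get a venom bonus on top of the base roll
        return super().attack_roll() + random.randint(1, 4)

# Randomly generate up to 250 creatures, roughly 1 in 5 poisonous
creatures = [
    (PoisonousCreature if random.random() < 0.2 else Creature)(
        'creature-' + str(i), random.randint(1, 10))
    for i in range(random.randint(1, 250))
]

Levelling up is then just a matter of bumping the player’s level attribute after each victory.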

The journal application was interesting because I finally started to get my head wrapped around file I/O. I’m not sure why I’ve had such a difficult time internalizing the concept, but the course helped me better understand what was going on in terms of opening a file stream, reading the data, and doing something with it.
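The pattern that finally clicked for me looks something like this (a sketch of the idea, with file and function names of my own choosing, not the course’s):

def load_entries(filename):
    entries = []
    with open(filename, 'r') as fin:       # open a read stream
        for line in fin:                   # stream the file line by line
            entries.append(line.rstrip())  # drop the trailing newline
    return entries

def save_entries(filename, entries):
    with open(filename, 'w') as fout:      # open a write stream
        for entry in entries:
            fout.write(entry + '\n')       # one entry per line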

My overall experience with the course was super positive. I’m really glad that I watched it and have already started to try to use the things that I’ve learned to improve code I’ve previously written.

With all of the good, there is some not so good.

The course uses its own player, which is fine, but it’s missing some key features:

  1. Time remaining
  2. Speed controls (i.e. it only plays at one speed, 1x)

In addition, sometimes the player would need to be reloaded, which could be frustrating.

Overall though it was a great course and I’m glad I was able to do it.

Next course: Mastering PyCharm!

My Map Art Project

I’d discovered a Python package called osmnx which takes GIS data and lets you draw maps using Python. Pretty cool, but I wasn’t sure what I was going to do with it.

After a bit of playing around with it I finally decided that I could make some pretty cool Fractures.

I’ve got lots of Fracture images in my house and I even turned my diplomas into Fractures to hang up on the wall at my office, but I hadn’t tried to make anything like this before.

I needed to figure out what locations I was going to do. I decided that I wanted to do 9 of them so that I could create a 3 x 3 grid of these maps.

I selected 9 cities that were important to me and my family for various reasons.

Next up: writing the code. The script is 54 lines long and doesn’t really adhere to PEP 8, but that just gives me a chance to do some reformatting / refactoring later on.

In order to get the desired output I needed several libraries: osmnx (as I’d mentioned before), geopandas, matplotlib, numpy, and PIL.

If you’ve never used PIL before, it’s the ‘Python Imaging Library’, and according to its home page it

adds image processing capabilities to your Python interpreter. This library supports many file formats, and provides powerful image processing and graphics capabilities.

OK, let’s import some libraries!

import os
import osmnx as ox
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw

Next, we establish the configurations:

ox.config(log_file=True, log_console=False, use_cache=True)

The ox.config call allows you to specify several options. In this case, I’m:

  1. Specifying that the logs be saved to a file in the log directory
  2. Suppressing the output of the log to the console (it’s helpful to set this to True when you’re first running the script, to see what errors, if any, you have)
  3. Setting use_cache=True to use a local cache to save/retrieve HTTP responses instead of calling the API repeatedly for the same request URL

The caching option will help performance if you have to run the script more than once.

OSMnx has many different options for generating maps. I played around with them and found that the walking network within 750 meters of each address gave me the most interesting lines.

AddressDistance = 750
AddressDistanceType = 'network'
AddressNetworkType = 'walk'

Now come some of the most important decisions (and code!). Since I’ll be making this into a printed image I want to make sure that the image and resulting file will be of high enough quality to render good results. I also want to start with a white background (although a black background might have been kind of cool) and a high DPI. Taking these needs into consideration I set my plot variables:

PlotBackgroundColor = '#ffffff'
PlotNodeSize = 0
PlotFigureHeight = 40
PlotFigureWidth = 40
PlotFileType = 'png'
PlotDPI = 300
PlotEdgeLineWidth = 10.0

I played with the PlotEdgeLineWidth a bit until I got a result that I liked. It controls how thick the route lines are, and its effect is influenced by the PlotDPI. For the look I was going for, 10.0 worked out well. I’m not sure that a 30:1 ratio of PlotDPI to PlotEdgeLineWidth is universal, but if you like what you see here then it’s a good place to start.

One final piece was deciding on the landmarks I was going to use. I picked nine places that my family and I had been to together and used addresses that were either places we stayed at (usually hotels) or major landmarks in the areas where we stayed. Nothing special here, just a text file with one location per line, set up as

Address, City, State

For example:

1234 Main Street, Anytown, CA

So we just read that file into memory:

landmarks = open('/Users/Ryan/Dropbox/Ryan/Python/landmarks.txt', 'r')

Next we set up some scaffolding so we can loop through the data effectively:

landmarks = landmarks.readlines()
landmarks = [item.rstrip() for item in landmarks]

fill = (0,0,0,0)

city = []

The loop below is doing a couple of things:

  1. Splits each landmarks entry into its base elements by breaking it apart at the commas (I can do this because of the way the addresses were entered; changes may be needed to account for more complex addresses, i.e. those with multiple address lines such as suite numbers, or for local addresses that aren’t constructed the way US addresses are)
  2. Joins the second and third elements (city and state) with an underscore, converting Anytown, CA to Anytown_CA

for element in landmarks:
    parts = element.split(',')
    city.append(parts[1].replace(' ', '', 1)+'_'+parts[2].replace(' ', ''))

This next line isn’t strictly necessary, since it could just live in the loop, but it was helpful for me when writing the script to see what was going on. We want to know how many items are in the landmarks list:

rows = len(landmarks)

Now to loop through it. A couple of things of note:

The package includes several graph_from_... functions. They take as input some location type, like an address (graph_from_address, which is what I’m using), and have several keyword arguments.

In the code below I’m using the ith landmarks item, setting the distance, distance_type and network_type, and keeping the map simple by setting simplify=True.
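Putting that together, the call looks roughly like this. This is a sketch against the osmnx API as it existed when I wrote the script; newer releases have renamed some of these keyword arguments:

for i in range(rows):
    # Build the walking network around the ith landmark
    G = ox.graph_from_address(landmarks[i],
                              distance=AddressDistance,
                              distance_type=AddressDistanceType,
                              network_type=AddressNetworkType,
                              simplify=True)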

To add some visual interest to the map I’m using this line:

ec = ['#cc0000' if data['length'] >=100 else '#3366cc' for u, v, key, data in G.edges(keys=True, data=True)]

If the length of an edge (a segment of road or path) is 100m or more then it is displayed as #cc0000 (red); otherwise it will be #3366cc (blue).

The plot_graph function is what does the heavy lifting to generate the image. It takes as input the graph from graph_from_address and the ec color list to determine what the map will look like.
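Still inside the loop, a sketch of that call wiring in the plot variables from above (again assuming the older osmnx keyword names):

fig, ax = ox.plot_graph(G,
                        bgcolor=PlotBackgroundColor,
                        node_size=PlotNodeSize,
                        fig_height=PlotFigureHeight,
                        fig_width=PlotFigureWidth,
                        edge_color=ec,
                        edge_linewidth=PlotEdgeLineWidth,
                        save=True,
                        file_format=PlotFileType,
                        dpi=PlotDPI,
                        filename=city[i])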

Next we use the PIL library to add text to the image. It loads the image file into memory and saves the result out to a directory called /images/. My favorite part of this library is that I can choose whatever font I want (whether it’s a system font or a custom user font) and the font size. For my project I used San Francisco at 512.

Finally, there is an exception in the code that adds text. When I was playing with text placement I found that for 8 of the 9 maps, having the text in the upper left-hand corner worked really well. It was just that last one (San Luis Obispo, CA) that didn’t.

So, instead of trying to find a different landmark, I decided to take a bit of artistic license and put the San Luis Obispo text in the upper right-hand corner.
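Roughly, the text step looks like this; the font path and pixel offsets here are placeholders I’m using for illustration, not the exact values from my script:

img ='images/' + city[i] + '.png')
draw = ImageDraw.Draw(img)
# Hypothetical font path; point this at whatever .otf/.ttf you want
font = ImageFont.truetype('/Library/Fonts/SanFrancisco.otf', 512)
if city[i] == 'San_Luis_Obispo_CA':
    # The one exception: artistic license puts this label upper right
    position = (img.width * 0.55, 250)
else:
    position = (250, 250)
draw.text(position, city[i].replace('_', ', '), fill=fill, font=font)  # fill from earlier'images/' + city[i] + '.png')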

Once the script was all set, simply running it with python from the directory where the file is saved generated the files.

All I had to do was wait, and the images were saved to my /images/ directory.

Next, upload to Fracture and order the glass images!

I received the images and was super excited. However, upon opening the box and looking at them I noticed something wasn’t quite right:

Napa with the text slightly off the image

As you can see, the place name is cut off on the left. Bummer.

No reason to fret though! Fracture has a 100% satisfaction guarantee. So I emailed support and explained the situation.

Within a couple of days I had my bright and shiny Fractures to hang on my wall:

Napa with the text properly displaying

So now my office wall is no longer blank and boring, but interesting and fun to look at.

Migrating from Squarespace to WordPress

This weekend I migrated my site from Squarespace to WordPress. I had been planning to do this for a while (ever since a Hover ad read on ATP earlier this summer). This weekend was the last before my Squarespace subscription was set to expire, so I finally made the switch.

Why I did it

Squarespace offers a beautiful interface and great templates to get you started. They make everything about setting up a blog, portfolio or online store as easy as it can get. But … that’s kind of where it ends for me. While the setup is amazingly easy, the actual content posting (for me this means my writing) was more difficult than I would have liked.

In order to get something posted to my Squarespace site I would write something in any one of a number of plain text editors (BBEdit, Drafts, Editorial, Ulysses). Then I would preview the generated HTML to verify it looked the way I wanted it to. Finally, I would post my Markdown to the Squarespace Blog App on iOS and do it All. Over. Again.

To say that it was frustrating is a bit of an understatement. I looked really hard to see what APIs existed and found that there used to be one, but Squarespace removed it for some reason. So, no direct posting to my blog from my favorite text editors.

So, with Hover having a discount on domains, an AWS account where I could host WordPress, and a rich set of WordPress APIs to post directly from some of my favorite text editors, it seemed like a no-brainer to make the switch.

How I set up my WordPress Install

The AWS ecosystem has some amazing documentation on how to do just about anything you want. So, instead of laboriously taking screenshots and writing up what I did, I’ll just link to Amazon’s Launch a WordPress Website tutorial.

Exporting from Square Space to WordPress

For all the pain it was to get content into Squarespace, it was a breeze to get it out. Again, no need to take screenshots or write it up if I can just link to it instead!

What I hope to gain from it

As I wrote earlier, my main reason for leaving Squarespace was the difficulty I had getting content in. So, now that I’m on a WordPress site, what am I hoping to gain from it?

  1. Easier to post my writing
  2. See Item 1

Writing is already really hard for me. I struggle with it and making it difficult to get my stuff out into the world makes it that much harder. My hope is that not only will I write more, but that my writing will get better because I’m writing more.

Ulysses integration

With all of that, what has my experience been with writing my first post to my WordPress site?

This entire post was written and edited in Ulysses. I was able to preview my post in Ulysses. I was able to post my content to the site with Ulysses. Basically, Ulysses is a kick-ass app, and on day one of the conversion I’m about as happy with the decision as I can be, given the short amount of time since I made it.

Making Background Images

I’m a big fan of podcasts. I’ve been listening to them for 4 or 5 years now. One of my favorite podcast networks, Relay, just had its second anniversary. They offer memberships, and after listening to hours and hours of All The Great Shows I decided that I needed to become a member.

One of the awesome perks of Relay membership is a set of amazing background images.

This is fortuitous as I’ve been looking for some good backgrounds for my iMac, and so it seemed like a perfect fit.

On my iMac I have several spaces configured: one for Writing, one for Podcasting and one for everything else. I wanted to take the backgrounds from Relay and use them on the Writing space and the Podcasting space, but I also wanted to be able to distinguish between them. One thing I could do would be to open up an image editor (like Photoshop, Pixelmator or Acorn) and add text to the images one at a time (although I’m sure there is a way to script those apps), but I decided to see if I could do it using Python.

Turns out, I can.

This code will take the background images from my /Users/Ryan/Relay 5K Backgrounds/ directory and spit them out into a subdirectory called Podcasting:

from PIL import Image, ImageStat, ImageFont, ImageDraw
from os import listdir
from os.path import expanduser, isfile, join

# Declare Text Attributes
TextFontSize = 400
TextFontColor = (128,128,128)
# expanduser resolves the leading ~, which truetype won't expand on its own
font = ImageFont.truetype(expanduser("~/Library/Fonts/Inconsolata.otf"), TextFontSize)

mypath = '/Users/Ryan/Relay 5K Backgrounds/'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]

rows = len(onlyfiles)

for i in range(rows):
    img = + onlyfiles[i])   # open from the source directory
    width, height = img.size
    draw = ImageDraw.Draw(img)
    TextXPos = 0.6 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos), 'Podcasting', TextFontColor, font=font)'/Users/Ryan/Relay 5K Backgrounds/Podcasting/' + onlyfiles[i])
    print('/Users/Ryan/Relay 5K Backgrounds/Podcasting/' + onlyfiles[i] + ' successfully saved!')

This was great, but it included all of the images, and some of them are really bright. I mean, like really bright.

So I decided to use something I learned while helping my daughter with her science project last year: determine the brightness of each image and use only the dark ones.

This led me to update the code to this:

from PIL import Image, ImageStat, ImageFont, ImageDraw
from os import listdir
from os.path import expanduser, isfile, join

def brightness01(im_file):
    # Average brightness: convert to greyscale and take the mean pixel value
    im ='L')
    stat = ImageStat.Stat(im)
    return stat.mean[0]

# Declare Text Attributes
TextFontSize = 400
TextFontColor = (128,128,128)
font = ImageFont.truetype(expanduser("~/Library/Fonts/Inconsolata.otf"), TextFontSize)

mypath = '/Users/Ryan/Relay 5K Backgrounds/'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]

darkimages = []

rows = len(onlyfiles)

for i in range(rows):
    if brightness01(mypath + onlyfiles[i]) <= 65:
        darkimages.append(onlyfiles[i])   # keep only the darker images

darkimagesrows = len(darkimages)

for i in range(darkimagesrows):
    img = + darkimages[i])
    width, height = img.size
    draw = ImageDraw.Draw(img)
    TextXPos = 0.6 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos), 'Podcasting', TextFontColor, font=font)'/Users/Ryan/Relay 5K Backgrounds/Podcasting/' + darkimages[i])
    print('/Users/Ryan/Relay 5K Backgrounds/Podcasting/' + darkimages[i] + ' successfully saved!')

I also wanted to have backgrounds generated for my Writing space, so I tacked on this code:

for i in range(darkimagesrows):
    img = + darkimages[i])
    width, height = img.size
    draw = ImageDraw.Draw(img)
    TextXPos = 0.72 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos), 'Writing', TextFontColor, font=font)'/Users/Ryan/Relay 5K Backgrounds/Writing/' + darkimages[i])
    print('/Users/Ryan/Relay 5K Backgrounds/Writing/' + darkimages[i] + ' successfully saved!')

The print statements at the end of the for loops were there so I could tell that something was actually happening. The images were VERY large (close to 10MB each), so the PIL library was taking some time to process the data and I was concerned that something had frozen / stopped working.

This was a pretty straightforward project, but it was pretty fun. It allowed me to go from this:

To this:

For the text attributes I had to play around for a while until I found the color, font and font size that I liked and that looked good (to me).

The positioning of the text also took a bit of experimentation, but with a little trial and error I was all set.

Also, for the brightness threshold of 65 I just looked at the images that seemed to work and found a cutoff to use. The actual value may vary depending on the look you’re going for.

The why of a decision

As a manager, no one will ever agree with every decision you make. Not the people you manage, and not the people who manage you. But if you always know why you made a decision and you can articulate that reasoning, then you’ll be on good footing when someone asks you, “How did you know to do that?” or “How did you know to make that decision?”

One of the best lessons I learned from my boss LB is that the decision is less important than the why of the decision. Make no mistake, bad decisions are bad decisions, but they are much less likely if you know why you’re making them.

Once I was able to internalize that lesson, it freed me to actually make decisions.

When faced with a decision, I tend to ask these questions:

  1. What do I know?
  2. How do I know it (i.e. how confident am I in the information I know)?
  3. What do I gain by waiting for more information?
  4. What’s the worst that happens if I make the wrong decision?
  5. What’s the worst that happens if I make no decision now?
  6. Who can I talk to about this decision?

Having answers to these questions doesn’t guarantee that my decision will be right, but it does help me to understand why I’m making the decision that I’m making. It will also help me to explain the decision later on if needed.

One of the things I try to tell the people I work with is this:

The decision itself is less important than why you made the decision. If you don’t know why you made a decision, then you shouldn’t be making the decision yet.

Know why you made a decision and you’ll be better equipped to make the decision.

Making the Right Choice, or How I Learned to Live with Limiting My Own Technical Debt and Just Be Happy

One of the things that comes up in my day job is trying to make sure that the reports we create are correct, not only from a data perspective but from an architectural perspective. There are hundreds of legacy reports with legacy SQL code written by tens of developers (some actual developers, some not so much) over the last 10+ years.

Today a request came in to update a productivity report to include a new user. The request included their user ID from the application where their productivity is tracked.

This request looked exactly like another report and request I’d seen that involved running productivity from the same system for the same kind of work (authorizations).

I immediately thought the reports were the same and set out to add the user ID to a table, ReferralsTeam, which includes the fields UserID and UserName.

I also thought that documenting what needed to be done would be a good thing.

I documented the fix and linked the Confluence article to the JIRA issue, and then I realized my mistake. This wasn’t the same report. It wasn’t even the same department!

OK, so two things:

  1. There are two reports that do EXACTLY the same thing for two different departments
  2. The report for the other department has user IDs hard-coded in the SQL

What to do?

The easy way is to just update the stored procedure that has the hard-coded user IDs with the new one and call it a day.

The right way:

  1. Update the table ReferralsTeam to have a new column called department … or better yet, create a second table called Departments with fields DepartmentID and DepartmentName and add the DepartmentID to the ReferralsTeam table.
  2. Populate the new column with the correct data for the current team that has records in it
  3. Update the various stored procedures that use the ReferralsTeam table to include a parameter that filters on the new column, keeping the data consistent
  4. Add the user IDs from the report that has them hard-coded, i.e. the new department
  5. Update the report that uses the hard-coded user IDs to use the dynamic stored procedure
  6. Verify the results
  7. Build a user interface to allow the data to be updated outside of SQL Server Management Studio
  8. Give access to that user interface to the department managers so they can manage it on their own

So, which one would you do?

In this case, I updated the hard-coded stored procedure to include the new user ID to get that part of the request done. This satisfies the requester and keeps their downtime to a minimum.

I then created a new JIRA issue so that we can look at doing steps 1-6 above and assigned it to the report developer. Steps 7 and 8 are in a separate JIRA issue assigned to the web developers.

Doing things the right way will sometimes take longer in the short run, but in the long run we’ve removed the need for managers in these departments to ask to have the reports updated, we prevent bad/stale filters from being used, and we can eliminate a duplicative report!

One interesting note: the reason I caught the duplication was a project we’ve been working on to document all of the hundreds of reports we have. I searched Confluence for the report name, and its recipients were unexpected to me. That led me to question what I had done and really evaluate the best course of action. While I kind of went out of order (which is why I started documenting a process I didn’t mean to), I was still able to catch my mistake and rectify it.

Documentation is a pain in the ass in the moment, but holy crap it can really pay off in unexpected ways in the long run.

The Technical Debt of Others

Technical debt, as defined on Techopedia, is:

a concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution.

In the management of software development we have to make these types of easy-to-implement-and-we-need-to-ship versus need-to-do-it-right-but-it-will-take-longer decisions all of the time.

These decisions can lead to the dreaded working as designed answer to a bug report.

This is infuriating.

It’s even more infuriating when you are on the receiving end of this.

A recent feature enhancement in the EHR we use touted an

Alert to let prescribing providers know that a medication is a duplicate.

Anyone in the medical field knows what a nightmare a duplicate medication can be from a patient safety perspective, so we obviously wanted to have this feature on.

During our testing we noticed that if a medication was prescribed in a dose, say 75mg, and stopped and then started again at a new dose, say 50mg, the Duplicate Medication Alert would be presented.

We dutifully submitted a bug report to the vendor, and they responded:

The Medication is considered a true duplicate as when a medication is stopped it is stopped for that day it is still considered active till (sic) the end of the day due to the current application logic, which cannot be altered or changed. What your providers/users may do is enter a DUR Reason and Acknowledge with something along the lines of "New Prescription". These DUR reasons can be added via Tools > Preferences > Medications > DUR > Override Reason tab – type in the desired DUR Override Reason > Select Add > OK to save.

If functionality and logic outside of this is desired this will need to be submitted as an Idea as well since this is currently functioning off of development's intended design.

Then the design is broken.

From a technical perspective I know exactly what is going on. This particular vendor stores date values as varchar(8) but stores datetime values as datetime. There may be some really good reasons for making this design decision.

However, when the medication tables were designed, the developers asked the question, "Will we EVER care about the time a medication is started or stopped?"

They answered no, and decided that the start date (and by extension the end date) for medications would not respect the time that a prescription started or stopped, storing them as varchar(8) and not as DATETIME.

But now they’ve rolled out this awesome feature. A feature that would actually allow providers to recognize duplicate medications, potentially saving lives. But because they don’t store the time a medication was stopped, their logic can only look at the date. When it sees the same medication (in different doses) active on the same date, a warning appears letting the provider know that they have a duplicate medication (even though they don’t).
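To make the failure mode concrete, here is a toy illustration (mine, not the vendor’s actual code) of why date-only storage flags a false duplicate:

from datetime import datetime

stopped = datetime(2017, 8, 1, 9, 15)   # 75mg stopped at 9:15am
started = datetime(2017, 8, 1, 9, 16)   # 50mg started one minute later

# What a varchar(8) date column effectively stores
stopped_date = stopped.strftime('%Y%m%d')   # '20170801'
started_date = started.strftime('%Y%m%d')   # '20170801'

# Date-only logic: the old Rx is 'active' all day, so the alert fires
print(started_date <= stopped_date)   # True  -> duplicate alert

# Datetime logic would correctly see that there is no overlap
print(started <= stopped)             # False -> no alert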

Additionally, this warning serves no purpose other than to be one more damned click from a provider’s perspective, because the vendor is not storing (i.e. is ignoring) the time.

When clinicians complain about the impact of EHRs on their ability to deliver effective care … when they complain about EHRs not fulfilling their promise of increased patient safety, these are the types of things that they are complaining about.

I think this response from one of the clinicians sums up the issue:

I don't see the logic with the current "intended design" in considering a medication that has just been inactivated STILL ACTIVE until the end of the day. A prescriber would stop current and start new meds all in one sitting (which includes changing doses of the same med), not wait until the next day to do the second step. It decreases workflow efficiency to have to enter a reason when no real reason exists (since there IS no active entry on med list). The whole point is to alert a prescriber to an existing entry of a medication and resolve it by inactivating the duplicate, if appropriate (otherwise, enter reason for having a duplicate), before sending out a new Rx.

While it's relatively easy to follow and resolve the duplication alert if the inactivation and new prescribing is done by the same prescriber, I can see a scenario where prescriber A stops an old ibuprofen 600mg Rx[^2] (say PCP) and patient then goes to see prescriber B (say IC[^3]) who then tries to Rx ibuprofen 800mg…. and end up getting this duplication alert. The second prescriber would almost be lost as to why that message is showing up.

The application logic should augment the processes the application was designed to facilitate, but right now it is a hindrance. (emphasis added)

I know that sometimes we need to build it fast so that we can ship, but developers need to remember, forever is a long freaking time.

When you make a forever decision, be prepared for pushback from the users of your software when those decisions are markedly ridiculous. And be prepared to be scoffed at when you answer their bug report with a Working-as-Designed response.

[^2]: Rx = prescription

[^3]: IC = Immediate Care

Getting CPHIMS(R) Certified – Part III

I walked into the testing center at 8:30 (a full 30 minutes before my exam start time as the email suggested I do).

I signed in and was given a key for a locker for my belongings and offered use of the restroom.

I was then asked to read some forms and was processed. My pockets were turned out and my glasses inspected. I signed in (again) and had the signature on my ID scrutinized against how I signed on test day. It only took three tries … apparently 19-year-old me doesn’t sign his name like 39-year-old me.

Now it was test time … even if I could remember any of the questions I wouldn’t be able to write about them … but I can’t remember them so it’s not a problem.

It took me 80 minutes to get through the real test of 115 questions (15 of which are unscored ‘test’ questions that don’t actually count). The only real issues I had were:

  • construction noise outside the window to my left
  • the burping guy to my right … seriously bro, cut down on the breakfast burritos
  • one question that I read incorrectly 4 different times. On the fifth read I finally realized my mistake and was able to answer correctly (I think). As it turned out I had already guessed what I thought was the correct answer, but it was still a good feeling to arrive at the number through a calculation instead of just guessing

When the test was completed and my questions scored, the results came back. A passing score is 600 out of 800. I scored 669 … I am officially CPHIMS. The scoring breakdown even shows areas where I didn’t do so well, so I know what to focus on in the future. For reference, they are:

  • Testing and Evaluation (which is surprising for me)
  • Analysis (again, surprising)
  • Privacy and Security (kind of figured this as it’s not part of my everyday job)

Final Thoughts

When I set this goal for myself at the beginning of the year it was just something that I wanted to do. I didn’t really have a reason for it other than I thought it might be neat.

After passing the exam I am really glad that I did. I’ve heard myself say things and think about things differently, like implementation using Pilots versus Big Bang or By Location versus By Feature.

I’m also asking questions differently of my colleagues and my supervisors to help ensure that we are doing things for the right reason at the right time.

I can’t wait to see what I try to do next.

Getting CPHIMS(R) Certified – Part II

Signing up for the actual exam may have been the most difficult and confusing part. I had to be verified as someone who could take the test, and then my membership needed to be verified (or something).

I received my confirmation email that I could sign up for the exam and read through it to make sure I understood everything. Turns out, when you sign up for the CPHIMS you need to use your FULL name (and I had just used my middle and last name).

One email to the HIMSS people and we were all set (need to remember that for next time … this exam is the real deal!).

I was going to be in Oceanside for the Fourth of July Holiday and decided to sign up to take the exam in San Diego on the fifth. With a test date in hand I started on my study plan.

Every night when I got home I would spend roughly 45 minutes reading the study book and going over flash cards I had made for topics I didn’t understand. Some nights I took off, but it was a solid 35 days of studying 45 minutes a night.

Now, 2 things I did not consider:

  1. Scheduling an exam on the fifth is a little like scheduling an exam on Jan 1 … not the best idea in the world
  2. The place my family and I go to in Oceanside always has a ton of friends and family for the weekend (30+) and it would be a less than ideal place to do any last minute studying / cramming

I spent some of the preceding weekend reading and reviewing flash cards, but once the full retinue of friends and family arrived it was pretty much over. I had some chances to read on the beach, but for the most part my studying stopped.

The morning of the fifth came. I made the 40-minute drive from Oceanside to the testing center to take the CPHIMS exam for real.

Getting CPHIMS(R) Certified – Part I

One of my professional goals for 2017 was to get my CPHIMS (Certified Professional in Healthcare Information and Management Systems). The CPHIMS certification is offered through HIMSS which “Demonstrates you meet an international standard of professional knowledge and competence in healthcare information and management systems”.

There was no requirement from my job to get this certification; I just thought it would be helpful if I better understood the Information and Management Systems part of healthcare.

With not much more than an idea, I started on my journey to getting certified. I did some research to see what resources were available to me and found a practice exam, a book and a multitude of other helpful study aids. I decided to start with the practice exam and see what I’d need after that.

In early March I signed up for the practice exam. I found all sorts of reasons to put off taking it, but then I noticed that my practice exam had an expiration date in May. One Sunday, figuring “what the hell, let’s just get this over with,” I sat down at my iMac and started the exam.

I really had no idea what to expect other than 100 questions. After about 20 minutes I very nearly stopped. Not because the exam was super difficult, but because I had picked a bad time to take a practice exam. My head wasn’t really in the game, and my heart just wanted to go watch baseball.

But I powered on through. The practice exam was nice in that it would give you immediate feedback if you got the question right or wrong. It wouldn’t be like that on test day, but it was good to know where I stood as I went through this practice version.

After 50 minutes I completed the exam and saw that I had a score of 70. I figured that wouldn’t be a passing score, but then saw that the cutoff point was 68. So I passed the practice test.

OK, now it was time to get serious. Without any studying or preparation (other than the 8+ years in HIT) I was able to pass what is arguably a difficult exam.

The next thing to do was to sign up for the real thing…