Migrating from Squarespace to WordPress

This weekend I migrated my site from Squarespace to WordPress. I had been planning to do this for a while (ever since a Hover ad read on ATP earlier this summer). This weekend was the last weekend before my Squarespace subscription was set to expire, so I finally made the switch.

Why I did it

Squarespace offers a beautiful interface and great templates to get you started. They make everything about setting up a blog, portfolio, or online store as easy as it can get. But … that’s kind of where it ends for me. While the setup is amazingly easy, the actual content posting (for me this means my writing) was more difficult than I would have liked.

In order to get something posted to my Squarespace site I would write something in any one of a number of plain text editors (BBEdit, Drafts, Editorial, Ulysses). Then I would preview the generated HTML to verify it looked the way I wanted it to. Finally, I would post my Markdown to the Squarespace Blog app on iOS and do it All. Over. Again.

To say that it was frustrating is a bit of an understatement. I looked really hard to see what APIs existed and found that there used to be an API, but Squarespace removed it for some reason. So no direct posting to my blog from my favorite text editors.

So, with Hover having a discount on domains, and me getting an AWS account where I could host WordPress, and a rich set of WordPress APIs to post directly from some of my favorite text editors, it seemed like a no-brainer to make the switch.

How I set up my WordPress Install

The AWS ecosystem has some amazing documentation on how to do just about anything you want. So, instead of laboriously taking screenshots and writing up what I did, I’ll just link to Amazon’s Launch a WordPress Website tutorial.

Exporting from Square Space to WordPress

For all the pain it was to get content into Squarespace, it was a breeze to get it out. Again, no need to take screenshots or write it up if I can just link to it instead!

What I hope to gain from it

As I wrote earlier, my main reason for leaving Squarespace was the difficulty I had getting content in. So, now that I’m on a WordPress site, what am I hoping to gain from it?

  1. Easier to post my writing
  2. See Item 1

Writing is already really hard for me. I struggle with it, and making it difficult to get my stuff out into the world makes it that much harder. My hope is that not only will I write more, but that my writing will get better because I’m writing more.

Ulysses integration

With all of that, what has my experience been with writing my first post to my WordPress site?

This entire post was written and edited in Ulysses. I was able to preview my post in Ulysses. I was able to post my content to the site with Ulysses. Basically, Ulysses is a kick-ass app, and on day one of the conversion I’m about as happy with the decision as I can be, given the short amount of time since I made it.

Making Background Images

I’m a big fan of podcasts. I’ve been listening to them for 4 or 5 years now. One of my favorite podcast networks, Relay, just had its second anniversary. They offer memberships, and after listening to hours and hours of All The Great Shows I decided that I needed to become a member.

One of the awesome perks of Relay membership is a set of amazing background images.

This was fortuitous, as I’d been looking for some good backgrounds for my iMac, so it seemed like a perfect fit.

On my iMac I have several spaces configured: one for Writing, one for Podcasting, and one for everything else. I wanted to take the backgrounds from Relay and put them on the Writing space and the Podcasting space, but I also wanted to be able to distinguish between them. One thing I could do would be to open up an image editor (like Photoshop, Pixelmator, or Acorn) and add text to the images one at a time (although I’m sure there is a way to script those apps), but I decided to see if I could do it using Python.

Turns out, I can.

This code will take the background images from my /Users/Ryan/Relay 5K Backgrounds/ directory and spit them out into a subdirectory called Podcasting:

from os import listdir, makedirs
from os.path import expanduser, isfile, join

from PIL import Image, ImageDraw, ImageFont

# Declare text attributes
TextFontSize = 400
TextFontColor = (128, 128, 128)
# truetype() doesn't expand '~', so expand the home directory explicitly
font = ImageFont.truetype(expanduser("~/Library/Fonts/Inconsolata.otf"), TextFontSize)

mypath = '/Users/Ryan/Relay 5K Backgrounds/'
outpath = join(mypath, 'Podcasting')
makedirs(outpath, exist_ok=True)

# Collect the image files, skipping macOS's .DS_Store if present
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f)) and f != '.DS_Store']

for filename in onlyfiles:
    img = Image.open(join(mypath, filename))
    width, height = img.size
    draw = ImageDraw.Draw(img)
    # Position the label toward the bottom-right of the image
    TextXPos = 0.6 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos), 'Podcasting', TextFontColor, font=font)
    img.save(join(outpath, filename))
    print(join(outpath, filename) + ' successfully saved!')

This was great, but it included all of the images, and some of them are really bright. I mean, like really bright.

So I decided to use something I learned while helping my daughter with her science project last year: determine the brightness of each image and use only the dark ones.

This led me to update the code to this:

from os import listdir, makedirs
from os.path import expanduser, isfile, join

from PIL import Image, ImageDraw, ImageFont, ImageStat

def brightness01(im_file):
    # Convert to grayscale and return the mean pixel value (0 = black, 255 = white)
    im = Image.open(im_file).convert('L')
    stat = ImageStat.Stat(im)
    return stat.mean[0]

# Declare text attributes
TextFontSize = 400
TextFontColor = (128, 128, 128)
font = ImageFont.truetype(expanduser("~/Library/Fonts/Inconsolata.otf"), TextFontSize)

mypath = '/Users/Ryan/Relay 5K Backgrounds/'
outpath = join(mypath, 'Podcasting')
makedirs(outpath, exist_ok=True)

onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f)) and f != '.DS_Store']

# Keep only the dark images (mean brightness of 65 or less)
darkimages = [f for f in onlyfiles if brightness01(join(mypath, f)) <= 65]

for filename in darkimages:
    img = Image.open(join(mypath, filename))
    width, height = img.size
    draw = ImageDraw.Draw(img)
    TextXPos = 0.6 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos), 'Podcasting', TextFontColor, font=font)
    img.save(join(outpath, filename))
    print(join(outpath, filename) + ' successfully saved!')

I also wanted to have backgrounds generated for my Writing space, so I tacked on this code:

# Write the 'Writing' versions into their own subdirectory
outpath = join(mypath, 'Writing')
makedirs(outpath, exist_ok=True)

for filename in darkimages:
    img = Image.open(join(mypath, filename))
    width, height = img.size
    draw = ImageDraw.Draw(img)
    # 'Writing' is a shorter word than 'Podcasting', so nudge it further right
    TextXPos = 0.72 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos), 'Writing', TextFontColor, font=font)
    img.save(join(outpath, filename))
    print(join(outpath, filename) + ' successfully saved!')

The print statements at the end of the for loops were there so that I could tell that something was actually happening. The images were VERY large (close to 10MB each), so the PIL library was taking some time to process the data, and I was concerned that something had frozen or stopped working.
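
If I were doing this again, I’d probably reach for a progress bar instead of print statements. A minimal sketch, assuming the third-party tqdm package is installed (it isn’t used in the scripts above, and label_image is a hypothetical stand-in for the open/draw/save steps):

from tqdm import tqdm

# tqdm wraps any iterable and renders a live progress bar,
# which makes it obvious the loop is still churning through large files.
for filename in tqdm(darkimages, desc='Labeling backgrounds'):
    label_image(filename)  # hypothetical stand-in for the open/draw/save steps above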

This was a pretty straightforward project, but it was pretty fun. It allowed me to go from this:

[Image: Cortex original]

To this:

[Image: Cortex with 'Podcasting' text]

For the text attributes, I had to play around for a while until I found a color, font, and font size that I liked and that looked good (to me).

The positioning of the text also took a bit of experimentation, but with a little trial and error I was all set.

Also, for the brightness threshold of 65, I just looked at the images that seemed to work and found a cutoff to use. The actual value may vary depending on the look you’re going for.
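
If you want to pick your own cutoff, a quick sketch like this (reusing the brightness01 helper, mypath, onlyfiles, and join from the script above) will print every image’s mean brightness in sorted order so you can eyeball where the threshold should sit:

# Survey the mean brightness of every image to help choose a cutoff;
# reuses brightness01, mypath, onlyfiles, and join from the script above.
scores = {f: brightness01(join(mypath, f)) for f in onlyfiles}
for filename, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f'{score:6.1f}  {filename}')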

The why of a decision

As a manager, no one will ever agree with every decision you make: not the people you manage, and not the people who manage you. But if you always know why you made a decision and you can articulate that reasoning, then you’ll be on good footing when someone asks you, “How did you know to do that?” or “How did you know to make that decision?”

One of the best lessons I learned from my boss LB is that the decision is less important than the why of the decision. Make no mistake, bad decisions are bad decisions, but they are much less likely to be made if you know why you’re making them.

Once I was able to internalize that lesson, it freed me to actually make decisions.

When faced with a decision, I tend to ask these questions:

  1. What do I know?
  2. How do I know it (i.e. how confident am I in the information I know)?
  3. What do I gain by waiting for more information?
  4. What’s the worst that happens if I make the wrong decision?
  5. What’s the worst that happens if I make no decision now?
  6. Who can I talk to about this decision?

Having answers to these questions doesn’t guarantee that my decision will be right, but it does help me to understand why I’m making the decision that I’m making. It will also help me to explain the decision later on if needed.

One of the things I try to tell the people I work with is this:

The decision itself is less important than why you made the decision. If you don’t know why you made a decision, then you shouldn’t be making the decision yet.

Know why you’re making a decision and you’ll be better equipped to make it.

Making the Right Choice, or How I Learned to Live with Limiting My Own Technical Debt and Just Be Happy

One of the things that comes up in my day job is trying to make sure that the reports we create are correct, not only from a data perspective but from an architectural perspective. There are hundreds of legacy reports with legacy SQL code, written by tens of developers (some actual developers and some not-so-actual developers) over the last 10+ years.

Today a request came in to update a productivity report to include a new user. The request included the user’s ID from the application where their productivity is tracked.

This request looked exactly like another report and request I’d seen, one that involved running productivity from the same system for the same kind of work (authorizations).

I immediately thought the reports were the same and set out to add the user ID to a table, ReferralsTeam, which includes the fields UserID and UserName.

I also thought that documenting what needed to be done would be a good thing.

I documented the fix, linked the Confluence article to the JIRA issue, and then realized my mistake. This wasn’t the same report. It wasn’t even the same department!

OK, so two things:

  1. There are two reports that do EXACTLY the same thing for two different departments
  2. The report for the other department has user IDs hard-coded in the SQL

What to do?

The easy way is to just update the stored procedure that has the hard-coded user IDs with the new one and call it a day.

The right way:

  1. Update the table ReferralsTeam to have a new column called Department … or better yet, create a second table called Departments with fields DepartmentID and DepartmentName, and add a DepartmentID column to the ReferralsTeam table.
  2. Populate the new column with the correct data for the team that currently has records in the table
  3. Update the various stored procedures that use the ReferralsTeam table to take a parameter that filters on the new column, keeping the data consistent (see the sketch after this list)
  4. Add the user IDs from the report that has them hard-coded, i.e. the new department
  5. Update the report that uses the hard-coded user IDs to use the dynamic stored procedure
  6. Verify the results
  7. Build a user interface to allow the data to be updated outside of SQL Server Management Studio
  8. Give the department managers access to that user interface so they can manage it on their own
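
For step 3, the idea is that every report calls one parameterized stored procedure instead of carrying its own hard-coded ID list. A minimal sketch of what the report side might look like, assuming the pyodbc package and a SQL Server DSN; the DSN name (ReportsDB) and the procedure name (GetTeamProductivity) are hypothetical:

import pyodbc

# Connect to the reporting database (DSN name is hypothetical)
conn = pyodbc.connect('DSN=ReportsDB')
cursor = conn.cursor()

# Every report calls the same proc and filters by department,
# so adding a user means updating a table row, not editing SQL.
department_id = 2  # e.g., the new department
cursor.execute('EXEC dbo.GetTeamProductivity @DepartmentID = ?', department_id)
for row in cursor.fetchall():
    print(row.UserID, row.UserName)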

So, which one would you do?

In this case, I updated the hard-coded stored procedure to include the new user ID and get that part of the request done. This satisfies the requester and keeps their downtime to a minimum.

I then also created a new JIRA issue so that we can look at doing steps 1–6 above and assigned it to the report developer. Steps 7 and 8 are in a separate JIRA issue assigned to the web developers.

Doing things the right way will sometimes take longer to implement in the short run, but in the long run we’ve removed the need for Managers in these departments to ask to have the reports updated, we prevent bad/stale filters from being used, and we can eliminate a duplicative report!

One interesting note: the reason I caught the duplication was a project we’ve been working on to document all of the hundreds of reports we have. I searched Confluence for the report name, and its recipients were unexpected to me. That led me to question everything I had done and really evaluate the best course of action. While I kind of went out of order (and that’s why I started documenting a process I didn’t mean to), I was still able to catch my mistake and rectify it.

Documentation is a pain in the ass in the moment, but holy crap it can really pay off in unexpected ways in the long run.

The Technical Debt of Others

Technical debt, as defined on Techopedia, is:

a concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution.

In the management of software development, we have to make these easy-to-implement-and-we-need-to-ship versus need-to-do-it-right-but-it-will-take-longer decisions all the time.

These decisions can lead to the dreaded working-as-designed answer to a bug report.

This is infuriating.

It’s even more infuriating when you are on the receiving end of this.

A recent feature enhancement in the EHR we use touted an

Alert to let prescribing providers know that a medication is a duplicate.

Anyone in the medical field knows what a nightmare it can be, from a patient-safety perspective, to prescribe a duplicate medication, so we’d obviously want to have this feature on.

During our testing we noticed that if a medication was prescribed in a dose, say 75mg, and stopped and then started again at a new dose, say 50mg, the Duplicate Medication Alert would be presented.

We dutifully submitted a bug report to the vendor, and they responded:

The Medication is considered a true duplicate as when a medication is stopped it is stopped for that day it is still considered active till (sic) the end of the day due to the current application logic, which cannot be altered or changed. What your providers/users may do is enter a DUR Reason and Acknowledge with something along the lines of "New Prescription". These DUR reasons can be added via Tools > Preferences > Medications > DUR > Override Reason tab – type in the desired DUR Override Reason > Select Add > OK to save.

If functionality and logic outside of this is desired this will need to be submitted as an Idea as well since this is currently functioning off of development's intended design.

Then the design is broken.

From a technical perspective I know exactly what is going on. This particular vendor stores date values as varchar(8) but stores datetime values as datetime. There may be some really good reasons for making this design decision.

However, when the medication tables were designed, the developers asked the question, "Will we EVER care about the time a medication is started or stopped?"

They answered no, and decided that a medication’s start date (and by extension its end date) would not respect the time the prescription started or stopped, and therefore set the columns as varchar(8) and not as DATETIME.

But now they’ve rolled out this awesome feature. A feature that would actually allow providers to recognize duplicate medications potentially saving lives. But because they don’t store the time of the stopped medication, their logic can only look at the date. When it sees the same medication (but in different doses) active on the same date a warning appears letting the provider know that they have a duplicate medication (even though they don’t).
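
A minimal sketch of the mismatch (illustrative only; the vendor’s actual schema and code aren’t public, and the 'YYYYMMDD' varchar(8) format is an assumption):

from datetime import datetime

# Hypothetical example: an ibuprofen 600mg Rx stopped at 9:15,
# and an 800mg Rx started five minutes later the same day.
old_rx_stopped = datetime(2017, 8, 14, 9, 15)
new_rx_started = datetime(2017, 8, 14, 9, 20)

# Storing dates as varchar(8) throws away the time component.
stopped_date = old_rx_stopped.strftime('%Y%m%d')   # '20170814'
started_date = new_rx_started.strftime('%Y%m%d')   # '20170814'

# Date-only logic: the old Rx looks active all day, so the new Rx
# appears to overlap it and the duplicate alert fires.
print(started_date <= stopped_date)      # True  -> false-positive alert

# Datetime logic: the new Rx starts after the old one stopped.
print(new_rx_started <= old_rx_stopped)  # False -> no overlap, no alert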

Additionally, this warning serves no purpose other than to be one more damned click from a provider’s perspective, because the vendor is not storing (i.e., is ignoring) the time.

When clinicians complain about the impact of EHRs on their ability to deliver effective care … when they complain about EHRs not fulfilling their promise of increased patient safety, these are the types of things that they are complaining about.

I think this response from one of the clinicians sums up the issue:

I don't see the logic with the current "intended design" in considering a medication that has just been inactivated STILL ACTIVE until the end of the day. A prescriber would stop current and start new meds all in one sitting (which includes changing doses of the same med), not wait until the next day to do the second step. It decreases workflow efficiency to have to enter a reason when no real reason exists (since there IS no active entry on med list). The whole point is to alert a prescriber to an existing entry of a medication and resolve it by inactivating the duplicate, if appropriate (otherwise, enter reason for having a duplicate), before sending out a new Rx.

While it's relatively easy to follow and resolve the duplication alert if the inactivation and new prescribing is done by the same prescriber, I can see a scenario where prescriber A stops an old ibuprofen 600mg Rx[^2] (say PCP) and patient then goes to see prescriber B (say IC[^3]) who then tries to Rx ibuprofen 800mg…. and end up getting this duplication alert. The second prescriber would almost be lost as to why that message is showing up.

The application logic should augment the processes the application was designed to facilitate, but right now it is a hindrance. (emphasis added)

I know that sometimes we need to build it fast so that we can ship, but developers need to remember: forever is a long freaking time.

When you make a forever decision, be prepared for pushback from the users of your software when those decisions are markedly ridiculous. And be prepared to be scoffed at when you answer their bug report with a working-as-designed response.

[^2]: Rx = prescription

[^3]: IC = Immediate Care

Getting CPHIMS(R) Certified – Part III

I walked into the testing center at 8:30 (a full 30 minutes before my exam start time as the email suggested I do).

I signed in and was given a key for a locker for my belongings and offered use of the restroom.

I was then asked to read some forms and was processed. My pockets were turned out and my glasses inspected. I signed in (again) and had the signature on my ID compared against how I signed on test day. It only took three tries … apparently 19-year-old me doesn’t sign his name like 39-year-old me.

Now it was test time … even if I could remember any of the questions I wouldn’t be able to write about them … but I can’t remember them so it’s not a problem.

It took me 80 minutes to get through the real test of 115 questions (15 of which are unscored ‘test’ questions that don’t actually count). The only real issues I had were:

  • construction noise outside the window to my left
  • the burping guy to my right … seriously bro, cut down on the breakfast burritos
  • one question that I read incorrectly 4 different times. On the fifth read I finally realized my mistake and was able to answer correctly (I think). As it turned out, I had guessed what I thought was the correct answer, but it was still a good feeling to get the number through a calculation instead of just guessing

When the test was completed and my questions were scored, the results came back. A passing score is 600 out of 800; I scored 669 … I am officially CPHIMS certified. The scoring breakdown even shows the areas where I didn’t do so well, so I know what to focus on in the future. For reference, they are:

  • Testing and Evaluation (which is surprising for me)
  • Analysis (again, surprising)
  • Privacy and Security (kind of figured this as it’s not part of my everyday job)

Final Thoughts

When I set this goal for myself at the beginning of the year it was just something that I wanted to do. I didn’t really have a reason for it other than I thought it might be neat.

After passing the exam I am really glad that I did. I’ve heard myself say things and think about things differently, like implementation using Pilots versus Big Bang or By Location versus By Feature.

I’m also asking questions differently of my colleagues and my supervisors to help ensure that we are doing things for the right reason at the right time.

I can’t wait to see what I try to do next.

Getting CPHIMS(R) Certified – Part II

Signing up for the actual exam may have been the most difficult and confusing part. I had to be verified as someone that could take the test, and then my membership needed to be verified (or something).

I received my confirmation email that I could sign up for the exam and read through it to make sure I understood everything. Turns out, when you sign up for the CPHIMS you need to use your FULL name (and I had just used my middle and last name).

One email to the HIMSS people and we’re all set (need to remember that for next time … this exam is the real deal!)

I was going to be in Oceanside for the Fourth of July Holiday and decided to sign up to take the exam in San Diego on the fifth. With a test date in hand I started on my study plan.

Every night when I got home I would spend roughly 45 minutes reading the study book and going over flash cards I had made for the topics I didn’t understand. Some nights I took off, but it was a solid 35 days of studying for 45 minutes a night.

Now, 2 things I did not consider:

  1. Scheduling an exam on the fifth is a little like scheduling an exam on Jan 1 … not the best idea in the world
  2. The place my family and I go to in Oceanside always has a ton of friends and family there for the weekend (30+), and it would be a less-than-ideal place to do any last-minute studying / cramming

I spent some of the preceding weekend reading and reviewing flash cards, but once the full retinue of friends and family arrived it was pretty much over. I had some chances to read on the beach, but for the most part my studying stopped.

The morning of the fifth came, and I made the 40-minute drive from Oceanside to the testing center to take the CPHIMS exam for real.

Getting CPHIMS(R) Certified – Part I

One of my professional goals for 2017 was to get my CPHIMS (Certified Professional in Healthcare Information and Management Systems) certification. The CPHIMS certification is offered through HIMSS and “demonstrates you meet an international standard of professional knowledge and competence in healthcare information and management systems”.

There was no requirement from my job to get this certification; I just thought it would be helpful if I better understood the Information and Management Systems part of Healthcare.

With not much more than an idea, I started on my journey to certification. I did some research to see what resources were available and found a practice exam, a book, and a multitude of other helpful study aids. I decided to start with the practice exam and see what I’d need after that.

In early March I signed up for the practice exam. I found all sorts of reasons to put off taking it, but then I noticed that it had an expiration date in May. One Sunday, figuring “what the hell, let’s just get this over with,” I sat down at my iMac and started the exam.

I really had no idea what to expect other than 100 questions. After about 20 minutes I very nearly stopped. Not because the exam was super difficult, but because I had picked a bad time to take a practice exam. My head wasn’t really in the game, and my heart just wanted to go watch baseball.

But I powered on through. The practice exam was nice in that it gave immediate feedback on whether you got each question right or wrong. It wouldn’t be like that on test day, but it was good to know where I stood as I went through the practice version.

After 50 minutes I completed the exam and saw that I had a score of 70. I figured that wouldn’t be a passing score, but then saw that the cutoff point was 68. So I passed the practice test.

OK, now it was time to get serious. Without any studying or preparation (other than the 8+ years in HIT) I was able to pass what is arguably a difficult exam.

The next thing to do was to sign up for the real thing …

Updating my LinkedIn Profile

I've been trying to update my LinkedIn Profile for a couple of weeks now (maybe a couple of months) and I keep hitting a roadblock. Not really sure why …

Since being 'promoted' from Director of NextGen Support Services to Director of Business Informatics, I've wanted to update the Profile but haven't really had the 'time' to do it.

So a couple of weeks ago I decided to start in earnest on the update. I've done more research than I can stand, but I don't feel like I'm any closer to an update that I like.

I think part of the problem is that I don't really know who the summary is for. Is it for me, or for the people reading my summary (for whatever reason people read LinkedIn summaries)?

If it's for me, then I guess I'd write about the things that I really like to do, like data analysis and bits of programming to get to solutions to hard problems. If it's for other people, then I guess I need to be genuine about who I am while also 'selling' myself to prospective readers.

Maybe the best thing is to write it for me and then hope for the best. I kind of like that. Besides, if someone else reads it and doesn't like it, that's a good indication of how well I would get along with that person in a professional setting anyway, and it might be best to avoid them.

And if they do like it then all the better that they will also like me … the real me.

HIMSS Recap

I’ve gone through all of my notes, reviewed all of the presentations and am feeling really good about my experience at HIMSS.

Takeaways:

  1. We need to get ADT enabled for the local hospitals
  2. We need to have a governance system set up for a variety of things, including data, reporting, and IT based projects

Below are the educational sessions (in no particular order) I attended and my impressions. Mostly a collection of interesting facts (I’ve left the Calls to Action for my to do list).

Choosing the Right IT Projects to Deliver Strategic Value, presented by Tom Selva and Seth Katz, really hit home the idea that there is a relationship between culture and governance. The culture of the organization has to be ready to accept the accountability that comes with governance. They also indicated that process is the most important part of governance: without process you CANNOT have governance.

In addition to great advice, they had great implementation strategies, including the idea of requiring all IT projects to have an elevator pitch and a more formal 10-minute presentation on why the project should be done and how it aligns with the strategy of the organization.

Semantic data analysis for interoperability, presented by Richard E. Biehl, Ph.D., showed me an aspect of data that I hadn’t ever had to think about: what to do when multiple systems are brought together and define the same word or concept in different ways. Specifically, “the semantic challenge is the idea of a shared meaning of the data that is shared”. The example relating the concept of a migraine from ICD to SNOMED, and how the two can result in mutually exclusive definitions of the same ‘idea’, was something I hadn’t ever really considered before.

Next Generation IT Governance: Fully-Integrated and Operationally-Led, presented by Ryan Bosch, MD, MBAEHS and Fran Turisco, MBA, hit home the idea of Begin with the End in mind. If you know where you’re going, it’s much easier to know how to get there. This is something I’ve always instinctively felt; distilling it into this short, easy-to-remember statement was really powerful for me.

Link to HIMSS Presentation

Developing a “Need-Based” Population Management System, presented by Rick Lang and Tim Hediger, hammered home the idea that “Collaboration and Partnering are KEY to success”. Again, something that I know, but it’s always nice to hear it out loud.

Link to HIMSS Presentation

Machine Intelligence for Reducing Clinical Variation, presented by Todd Stewart, MD and F.X. Campion, MD, FACP, was one of the more technical sessions I attended. They spoke about how artificial intelligence and machine learning don’t replace normal analysis, but instead allow us to focus on which hypotheses we should test in the first place. They also introduced the idea (to me, anyway) that data has shape, and that shape can be analyzed to lead to insight. They also spoke about ‘Topological Data Analysis’, which is something I want to learn more about.

Link to HIMSS Presentation

Driving Patient Engagement through mobile care management, presented by Susan Beaton, spoke about using health coaches to help patients learn to implement parts of their care plan. They also spoke about how “Mobile engagement can lead to increased feeling of control for members.” These are aspects I’d like to see my organization implement in the coming months and years.

Link to HIMSS Presentation

Expanding Real-time notifications for care transitions, presented by Elaine Fontaine, spoke about using demographic data to determine the best discharge plan for the patient. In another presentation I saw (Connecticut Hospitals Drive Policy with Geospatial Analysis, presented by Pat Charmel), the presenter indicated that as much as 60% of healthcare costs are determined by demographics. If we keep this in mind we can help control healthcare costs much more effectively, but it led me to ask:

  • how much do we know?
  • how much can we know?
  • what aspects of privacy do we need to think about before embarking on such a path?

Link to HIMSS Presentation

Your Turn: Data Quality and Integrity was more of an interactive session. When the audience was asked “What would a National Patient Identifier be useful for?”, most attendees felt that it would help with information sharing.

Predictive Analytics: A Foundation for Care Management, presented by Jessica Taylor, RN and Amber Sloat, RN, showed me that while California has been thinking about and preparing for value-based care for some time, the rest of the country is just coming around to the idea. The hospital these nurses work for is doing some very innovative things, but they’re things we’ve been doing for years. The one thing they did seem to have that we don’t is an active HIE that helps keep track of patients in near real time, which I would love to have! One of the benefits of a smaller state, perhaps (they were from Maine)?

Link to HIMSS Presentation

A model of data maturity to support predictive analytics, presented by Daniel O’Malley, MS, was full of charts and diagrams on what the University of Virginia is doing, but it was short on how they got there. I would have liked to see more information on the roadblocks they encountered during each stage of the maturity model. That being said, because the presentation has the charts and diagrams, I feel like I’ll be able to get something out of the talk that will help back at work.

Link to HIMSS Presentation

Emerging Impacts of Artificial Intelligence on Healthcare IT, presented by James Golden, Ph.D. and Christopher Ross, MBA, included a statistic that 30% of all data in the world is healthcare data! That was simply amazing to me. They also had data showing that medical knowledge doubles every THREE years. This means that between the time you start medical school and the time you are a full-fledged doctor, the amount of medical knowledge could have increased 4- to 8-fold! How can anyone keep up with that kind of knowledge growth? The simple answer is that they can’t, and that’s why AI and ML are so important for medicine. But equally important is how the AI/ML models are trained.

Link to HIMSS Presentation