
Presenting Data – Referee Crew Calls in the NFL

One of the great things about computers is their ability to take tabular data and turn it into pictures that are easier to interpret. I’m always amazed that, when given the opportunity to show data as a picture, more people don’t jump at the chance.

For example, this piece on ESPN about the differences between officiating crews has some great data on how different crews call games.

A couple of things I find a bit disconcerting:

  1. One of the rows was missing data, so that row looked ‘odd’ in the context of the story and made it look like the writer missed something big … they didn’t (it’s since been fixed)
  2. This tabular format is just begging to be displayed as a picture.

Perhaps the issue here is that the author didn’t know how best to visualize the data to support his story, so I’m going to help him out.

If we start from the underlying premise that not all officiating crews call games in the same way, we want to see in what ways they differ.

The data below is a reproduction of the table from the article:

REFEREE DEF. OFFSIDE ENCROACH FALSE START NEUTRAL ZONE TOTAL
Triplette, Jeff 39 2 34 6 81
Anderson, Walt 12 2 39 10 63
Blakeman, Clete 13 2 41 7 63
Hussey, John 10 3 42 3 58
Cheffers, Cartlon 22 0 31 3 56
Corrente, Tony 14 1 31 8 54
Steratore, Gene 19 1 29 5 54
Torbert, Ronald 9 4 31 7 51
Allen, Brad 15 1 28 6 50
McAulay, Terry 10 4 23 12 49
Vinovich, Bill 8 7 29 5 49
Morelli, Peter 12 3 24 9 48
Boger, Jerome 11 3 27 6 47
Wrolstad, Craig 9 1 31 5 46
Hochuli, Ed 5 2 33 4 44
Coleman, Walt 9 2 25 4 40
Parry, John 7 5 20 6 38

The author points out:

Jeff Triplette’s crew has called a combined 81 such penalties — 18 more than the next-highest crew and more than twice the amount of two others

The author goes on to talk about his interview with Mike Pereira (who happens to be promoting his new book).

While the table above is helpful, it’s not an image you can glance at and immediately say, “Man, what the heck is going on?” It lacks that visceral aspect that tells you something is wrong here … even if you can’t really be sure what it is.

Let’s sum up the defensive penalties (Defensive Offsides, Encroachment, and Neutral Zone Infractions) and see what the table looks like:

REFEREE DEF Total OFF Total TOTAL
Triplette, Jeff 47 34 81
Anderson, Walt 24 39 63
Blakeman, Clete 22 41 63
Hussey, John 16 42 58
Cheffers, Cartlon 25 31 56
Corrente, Tony 23 31 54
Steratore, Gene 25 29 54
Torbert, Ronald 20 31 51
Allen, Brad 22 28 50
McAulay, Terry 26 23 49
Vinovich, Bill 20 29 49
Morelli, Peter 24 24 48
Boger, Jerome 20 27 47
Wrolstad, Craig 15 31 46
Hochuli, Ed 11 33 44
Coleman, Walt 15 25 40
Parry, John 18 20 38
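(If you want to reproduce that summation in pandas, here is a minimal sketch. It assumes the first table has been loaded into a DataFrame named penalties with the column names shown above; that loading step isn’t shown in the post, and the actual code lives in the GitHub repo.)

import pandas as pd

# hypothetical DataFrame holding the first table (only two crews shown for brevity)
penalties = pd.DataFrame({
    'REFEREE': ['Triplette, Jeff', 'Anderson, Walt'],
    'DEF. OFFSIDE': [39, 12],
    'ENCROACH': [2, 2],
    'FALSE START': [34, 39],
    'NEUTRAL ZONE': [6, 10],
})

# defensive calls are offside + encroachment + neutral zone; false starts are the offensive calls
penalties['DEF Total'] = penalties[['DEF. OFFSIDE', 'ENCROACH', 'NEUTRAL ZONE']].sum(axis=1)
penalties['OFF Total'] = penalties['FALSE START']
penalties['TOTAL'] = penalties['DEF Total'] + penalties['OFF Total']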

Looking at the summed table we can now see what might actually be going on, but it’s still a bit hard for the more visual among us. If we take this data and generate a scatter plot, we might have a picture that shows us the issue. Something like this:

The horizontal dashed blue line represents the average defensive calls per crew, while the vertical dashed blue line represents the average offensive calls per crew. The gray box covers plus/minus 2 standard deviations from the mean for both offensive and defensive penalty calls.
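Since the plotting code itself is in the repo, here is a rough matplotlib sketch of how a chart like that could be produced from the penalties DataFrame sketched above (the styling of the original figure may differ):

import matplotlib.pyplot as plt

off = penalties['OFF Total']
defense = penalties['DEF Total']

fig, ax = plt.subplots()
ax.scatter(off, defense)

# dashed blue lines at the average offensive (vertical) and defensive (horizontal) calls per crew
ax.axvline(off.mean(), color='blue', linestyle='--')
ax.axhline(defense.mean(), color='blue', linestyle='--')

# gray box covering plus/minus 2 standard deviations from the mean on both axes
ax.add_patch(plt.Rectangle(
    (off.mean() - 2 * off.std(), defense.mean() - 2 * defense.std()),
    4 * off.std(), 4 * defense.std(),
    color='gray', alpha=0.3))

ax.set_xlabel('Offensive penalty calls per crew')
ax.set_ylabel('Defensive penalty calls per crew')
plt.show()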

Notice anything? Yeah, me too. Jeff Triplette’s crew is so far out of range for defensive penalties it’s like they’re watching a different game, or reading from a different playbook.

What I’d really like to do is this same analysis on a game-by-game basis. I don’t think it would change what we see about the way Jeff Triplette and his crew call games, but it may point out some other inconsistencies that are worth exploring.

Code for this project can be found in my GitHub Repo.


Web Scraping – Passer Data (Part III)

In Part III I’m reviewing the code to populate a DataFrame with Passer data from the current NFL season.

To start I use the games DataFrame I created in Part II to create 4 new DataFrames:

  • reg_season_games – All of the Regular Season Games
  • pre_season_games – All of the Pre Season Games
  • gameshome – The Home Games
  • gamesaway – The Away Games

A cool aspect of DataFrames is that you can treat them kind of like temporary tables (at least, that’s how I think about them, since I’m mostly a SQL programmer) and create other temporary tables based on criteria. In the code below I use the nfl_start_date that I defined in Part II to split the games DataFrame into pre-season and regular-season DataFrames. I then split the regular-season DataFrame into home and away DataFrames so I don’t double count the statistics for the passers.

#Start Section 3

reg_season_games = games.loc[games['match_date'] >= nfl_start_date]
pre_season_games = games.loc[games['match_date'] < nfl_start_date]

gameshome = reg_season_games.loc[reg_season_games['ha_ind'] == 'vs']
gamesaway = reg_season_games.loc[reg_season_games['ha_ind'] == '@']

Next, I set up some variables to be used later:

BASE_URL = 'http://www.espn.com/nfl/boxscore/_/gameId/{0}'

#Create the lists to hold the values for the games for the passers
player_pass_name = []
player_pass_catch = []
player_pass_attempt = []
player_pass_yds = []
player_pass_avg = []
player_pass_td = []
player_pass_int = []
player_pass_sacks = []
player_pass_sacks_yds_lost = []
player_pass_rtg = []
player_pass_week_id = []
player_pass_result = []
player_pass_team = []
player_pass_ha_ind = []
player_match_id = []
player_id = [] #declare the player_id as a list so it doesn't get set to a str by the loop below

headers_pass = ['match_id', 'id', 'Name', 'CATCHES','ATTEMPTS', 'YDS', 'AVG', 'TD', 'INT', 'SACKS', 'YRDLSTSACKS', 'RTG']

Now it’s time to start populating some of the list variables I created above. I’m taking the week_id, result, team_x, and ha_ind columns from the gamesaway DataFrame (I’m sure there is a better way to do this, and I will need to revisit it in the future):

player_pass_week_id.append(gamesaway.week_id)
player_pass_result.append(gamesaway.result)
player_pass_team.append(gamesaway.team_x)
player_pass_ha_ind.append(gamesaway.ha_ind)

Now for the looping (everybody’s favorite part!). Using BeautifulSoup I get the div of class col column-one gamepackage-away-wrap. Once I have that, I get the table rows and then loop through the cells in each row to pull what I need from the table holding the passer data. Some interesting things are happening below:

  • The Catches/Attempts and Sacks/Yards-Lost values are each displayed as a single column (even though each column holds 2 statistics). To fix this I use the index() method to grab everything to the left and to the right of a separator character (‘/’ and ‘-’ respectively) and append the resulting values, two per column, to four separate lists (a quick example of this appears just before the loop below).

The last line of code gets the ESPN player_id, just in case I need/want to use it later.
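As a quick illustration of that index() slicing on made-up values (the real values come out of the c-att and sacks cells in the loop below):

c_att = '22/30'                             # catches/attempts as ESPN displays it
catches = int(c_att[0:c_att.index('/')])    # everything left of the '/'  -> 22
attempts = int(c_att[c_att.index('/')+1:])  # everything right of the '/' -> 30

sacks_cell = '4-14'                                   # sacks-yards lost
sacks = int(sacks_cell[0:sacks_cell.index('-')])      # -> 4
yds_lost = int(sacks_cell[sacks_cell.index('-')+1:])  # -> 14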

for index, row in gamesaway.iterrows():
    print(index)
    try:
        request = requests.get(BASE_URL.format(index))
        table_pass = BeautifulSoup(request.text, 'lxml').find_all('div', class_='col column-one gamepackage-away-wrap')

        pass_ = table_pass[0]
        player_pass_all = pass_.find_all('tr')
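        # each tr below is one passer row; the checks against 'TEAM' skip the team-totals row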


        for tr in player_pass_all:
            for td in tr.find_all('td', class_='sacks'):
                for t in tr.find_all('td', class_='name'):
                    if t.text != 'TEAM':
                        player_pass_sacks.append(int(td.text[0:td.text.index('-')]))
                        player_pass_sacks_yds_lost.append(int(td.text[td.text.index('-')+1:]))
            for td in tr.find_all('td', class_='c-att'):
                for t in tr.find_all('td', class_='name'):
                    if t.text != 'TEAM':
                        player_pass_catch.append(int(td.text[0:td.text.index('/')]))
                        player_pass_attempt.append(int(td.text[td.text.index('/')+1:]))
            for td in tr.find_all('td', class_='name'):
                for t in tr.find_all('td', class_='name'):
                    for s in t.find_all('span', class_=''):
                        if t.text != 'TEAM':
                            player_pass_name.append(s.text)
            for td in tr.find_all('td', class_='yds'):
                for t in tr.find_all('td', class_='name'):
                    if t.text != 'TEAM':
                        player_pass_yds.append(int(td.text))
            for td in tr.find_all('td', class_='avg'):
                for t in tr.find_all('td', class_='name'):
                    if t.text != 'TEAM':
                        player_pass_avg.append(float(td.text))
            for td in tr.find_all('td', class_='td'):
                for t in tr.find_all('td', class_='name'):
                    if t.text != 'TEAM':
                        player_pass_td.append(int(td.text))
            for td in tr.find_all('td', class_='int'):
                for t in tr.find_all('td', class_='name'):
                    if t.text != 'TEAM':
                        player_pass_int.append(int(td.text))
            for td in tr.find_all('td', class_='rtg'):
                for t in tr.find_all('td', class_='name'):
                    if t.text != 'TEAM':
                        player_pass_rtg.append(float(td.text))
                        player_match_id.append(index)
            #The code below cycles through the passers and gets their ESPN Player ID
            for a in tr.find_all('a', href=True):
                href_id = a['href'].replace("http://www.espn.com/nfl/player/_/id/", "")
                player_id.append(href_id[0:href_id.index('/')])

    except Exception as e:
        pass

With all of the data from above we now populate our DataFrame using specific headers (that’s why we set the headers_pass variable above):

player_passer_data = pd.DataFrame(np.column_stack((
player_match_id,
player_id,
player_pass_name,
player_pass_catch,
player_pass_attempt,
player_pass_yds,
player_pass_avg,
player_pass_td,
player_pass_int,
player_pass_sacks,
player_pass_sacks_yds_lost,
player_pass_rtg
)), columns=headers_pass)

An issue I ran into as I was playing with the generated DataFrame: even though I had converted the numbers in the for loop above to type int, any time I did something like a sum() on the DataFrame the numbers were concatenated as though they were strings (because they were! np.column_stack casts everything to a common type, and with the name column in the mix that common type is a string).

After much Googling I came across a useful answer on StackExchange (where else would I find it, right?).

What it does is convert the data type of the columns from string to numeric:

player_passer_data[['TD', 'CATCHES', 'ATTEMPTS', 'YDS', 'INT', 'SACKS', 'YRDLSTSACKS','AVG','RTG']] = player_passer_data[['TD', 'CATCHES', 'ATTEMPTS', 'YDS', 'INT', 'SACKS', 'YRDLSTSACKS','AVG','RTG']].apply(pd.to_numeric)

OK, so I’ve got a DataFrame with passer data, I’ve got a DataFrame with away game data, now I need to join them. As expected, pandas has a way to join DataFrame data … with the join method obviously!

I create a new DataFrame called game_passer_data, which joins player_passer_data with gamesaway on their common key match_id. I then have to use set_index to make sure that the index stays set to match_id … if I don’t, the index is reset to an auto-incremented integer.

game_passer_data = player_passer_data.join(gamesaway, on='match_id').set_index('match_id')

This is great, but now game_passer_data has all of these extra columns. Below is the result of running game_passer_data.head() from the terminal:

id          Name  CATCHES  ATTEMPTS  YDS  AVG  TD  INT  SACKS  \
match_id
400874518  2577417  Dak Prescott       22        30  292  9.7   0    0      4
400874674  2577417  Dak Prescott       23        32  245  7.7   2    0      2
400874733  2577417  Dak Prescott       18        27  247  9.1   3    1      2
400874599  2577417  Dak Prescott       21        27  247  9.1   3    0      0
400874599    12482  Mark Sanchez        1         1    8  8.0   0    0      0

           YRDLSTSACKS                        ...                          \
match_id                                      ...
400874518           14                        ...
400874674           11                        ...
400874733           14                        ...
400874599            0                        ...
400874599            0                        ...

           ha_ind  match_date                  opp result          team_x  \
match_id
400874518       @  2016-09-18  washington-redskins      W  Dallas Cowboys
400874674       @  2016-10-02  san-francisco-49ers      W  Dallas Cowboys
400874733       @  2016-10-16    green-bay-packers      W  Dallas Cowboys
400874599       @  2016-11-06     cleveland-browns      W  Dallas Cowboys
400874599       @  2016-11-06     cleveland-browns      W  Dallas Cowboys

          week_id prefix_1             prefix_2               team_y  \
match_id
400874518       2      wsh  washington-redskins  Washington Redskins
400874674       4       sf  san-francisco-49ers  San Francisco 49ers
400874733       6       gb    green-bay-packers    Green Bay Packers
400874599       9      cle     cleveland-browns     Cleveland Browns
400874599       9      cle     cleveland-browns     Cleveland Browns

                                                         url
match_id
400874518  http://www.espn.com/nfl/team/_/name/wsh/washin...
400874674  http://www.espn.com/nfl/team/_/name/sf/san-fra...
400874733  http://www.espn.com/nfl/team/_/name/gb/green-b...
400874599  http://www.espn.com/nfl/team/_/name/cle/clevel...
400874599  http://www.espn.com/nfl/team/_/name/cle/clevel...

That is nice, but not exactly what I want. In order to remove the extra columns I use the drop method which takes 2 arguments:

  • what object to drop
  • an axis, which determines whether rows or columns get dropped (0 = rows, 1 = columns)

Below, the object I define is a list of columns (I figured that part out on my own, since the documentation didn’t explicitly state I could use a list, but I figured, what’s the worst that could happen?):

game_passer_data = game_passer_data.drop(['opp', 'prefix_1', 'prefix_2', 'url'], 1)

Which gives me this:

id          Name  CATCHES  ATTEMPTS  YDS  AVG  TD  INT  SACKS  \
match_id
400874518  2577417  Dak Prescott       22        30  292  9.7   0    0      4
400874674  2577417  Dak Prescott       23        32  245  7.7   2    0      2
400874733  2577417  Dak Prescott       18        27  247  9.1   3    1      2
400874599  2577417  Dak Prescott       21        27  247  9.1   3    0      0
400874599    12482  Mark Sanchez        1         1    8  8.0   0    0      0

           YRDLSTSACKS    RTG ha_ind  match_date result          team_x  \
match_id
400874518           14  103.8      @  2016-09-18      W  Dallas Cowboys
400874674           11  114.7      @  2016-10-02      W  Dallas Cowboys
400874733           14  117.4      @  2016-10-16      W  Dallas Cowboys
400874599            0  141.8      @  2016-11-06      W  Dallas Cowboys
400874599            0  100.0      @  2016-11-06      W  Dallas Cowboys

          week_id               team_y
match_id
400874518       2  Washington Redskins
400874674       4  San Francisco 49ers
400874733       6    Green Bay Packers
400874599       9     Cleveland Browns
400874599       9     Cleveland Browns

I finally have a DataFrame with the data I care about, BUT all of the column names are wonky!

This is easy enough to fix (and probably should have been fixed earlier by creating the intermediate objects with only the necessary columns, but I can fix that later) by simply renaming the columns as below:

game_passer_data.columns = ['id', 'Name', 'Catches', 'Attempts', 'YDS', 'Avg', 'TD', 'INT', 'Sacks', 'Yards_Lost_Sacks', 'Rating', 'HA_Ind', 'game_date', 'Result', 'Team', 'Week', 'Opponent']

I now get the data I want, with column names to match!

id          Name  Catches  Attempts  YDS  Avg  TD  INT  Sacks  \
match_id
400874518  2577417  Dak Prescott       22        30  292  9.7   0    0      4
400874674  2577417  Dak Prescott       23        32  245  7.7   2    0      2
400874733  2577417  Dak Prescott       18        27  247  9.1   3    1      2
400874599  2577417  Dak Prescott       21        27  247  9.1   3    0      0
400874599    12482  Mark Sanchez        1         1    8  8.0   0    0      0

           Yards_Lost_Sacks  Rating HA_Ind   game_date Result            Team  \
match_id
400874518                14   103.8      @  2016-09-18      W  Dallas Cowboys
400874674                11   114.7      @  2016-10-02      W  Dallas Cowboys
400874733                14   117.4      @  2016-10-16      W  Dallas Cowboys
400874599                 0   141.8      @  2016-11-06      W  Dallas Cowboys
400874599                 0   100.0      @  2016-11-06      W  Dallas Cowboys

          Week             Opponent
match_id
400874518    2  Washington Redskins
400874674    4  San Francisco 49ers
400874733    6    Green Bay Packers
400874599    9     Cleveland Browns
400874599    9     Cleveland Browns

I’ve posted the code for all three parts to my GitHub Repo.

Work that I still need to do:

  1. Add code to get the home game data
  2. Add code to get data for the other position players
  3. Add code to get data for the defense

When I started this project on Wednesday I had only a bit of exposure to very basic aspects of Python, plus my background as a developer. I’m still a long way from considering myself proficient in Python, but I know more now than I did 3 days ago, and for that I’m pretty excited! It’s also given me an excuse to write some stuff, which is a nice side effect.


Web Scraping – Passer Data (Part II)

In a previous post I went through my newfound love of Fantasy Football and the rationale behind the ‘why’ of this particular project. That included getting the team names and their URLs from the ESPN website.

As before, let’s set up some basic infrastructure to be used later:

from time import strptime

year = 2016 # allows us to change the year that we are interested in.
nfl_start_date = date(2016, 9, 8)
BASE_URL = 'http://espn.go.com/nfl/team/schedule/_/name/{0}/year/{1}/{2}' #URL that we'll use to cycle through to get the gameid's (called match_id)

match_id = []
week_id = []
week_date = []
match_result = []
ha_ind = []
team_list = []

Next, we iterate through the teams DataFrame that I created yesterday:

for index, row in teams.iterrows():
    _team, url = row['team'], row['url']
    r=requests.get(BASE_URL.format(row['prefix_1'], year, row['prefix_2']))
    table = BeautifulSoup(r.text, 'lxml').table
    for row in table.find_all('tr')[2:]: # Remove header
        columns = row.find_all('td')
        try:
            for result in columns[3].find('li'):
                match_result.append(result.text)
                week_id.append(columns[0].text) #get the week_id for the games dictionary so I know what week everything happened
                _date = date(
                    year,
                    int(strptime(columns[1].text.split(' ')[1], '%b').tm_mon),
                    int(columns[1].text.split(' ')[2])
                )
                week_date.append(_date)
                team_list.append(_team)
                for ha in columns[2].find_all('li', class_="game-status"):
                    ha_ind.append(ha.text)
            for link in columns[3].find_all('a'): # I realized here that I didn't need to do the fancy thing from the site I was mimicking http://danielfrg.com/blog/2013/04/01/nba-scraping-data/
                match_id.append(link.get('href')[-9:])

        except Exception as e:
            pass

Again, we set up some variables to be used in the for loop. But what I really want to talk about is the try portion of my code and the part where week_date is being calculated.

Although I’ve been developing and managing developers for a while, I’ve not had the need to use a construct like try. (I know, right, weird!)

The basic premise of try is that Python will attempt to execute some code; if that code fails, execution jumps to the exception portion. In Python (and maybe other languages, I’m not sure) the except block MUST contain something. In this case I use Python’s pass statement, which basically says, ‘hey, just forget about doing anything’. I’m not raising any errors here because I don’t care if the result is ‘bad’; I just want to ignore it, since there isn’t any data I can use.
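A stripped-down version of that pattern (with a made-up value, just to show the shape):

try:
    week = int('BYE')   # a non-numeric value raises ValueError here
except Exception as e:
    pass                # nothing usable, so skip it and keep going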

The other interesting (or gigantic pain in the a$$) thing is that ESPN displays dates on the schedule page as Day of Week, Month Day, i.e. Sun Sep 11. There is no year. I think this is because, for the most part, the NFL regular season falls within a single calendar year. However, this year the last games of the season, in week 17, are in January. Since I’m only getting games that have already been played, I’m safe for a couple more weeks, but this will need to be addressed; otherwise the dates of the last games of the 2016 season will show as January 2016 instead of January 2017.

Anyway, I digress. In order to change the displayed date into a date I can actually use, I had to get the necessary function. To get it I had to add the following line to my code from yesterday:

from time import strptime

This allows me to make some changes to the date (see where _date is being calculated in the for result in columns[3].find('li'): portion of the try:).
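One possible way to handle that January problem when it comes up (this isn’t in the current code, just a sketch that reuses the year and nfl_start_date variables from earlier in this post):

from time import strptime
from datetime import date

display_date = 'Sun Jan 1'   # what ESPN would show for a week 17 game
parts = display_date.split(' ')
month = int(strptime(parts[1], '%b').tm_mon)
day = int(parts[2])

# if the month comes before the season's start month, the game is in the next calendar year
game_year = year + 1 if month < nfl_start_date.month else year
_date = date(game_year, month, day)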

One of the things that confused the heck out of me initially was the way the date is stored in the list week_date. It is in the form datetime.date(2016, 9, 1), but I was expecting it to be stored as 2016-09-01. I did a couple of things to try and fix this, especially because once the list was added to the gamesdic dictionary and then used in the games DataFrame, week_date was stored as 1472688000000, which is the number of milliseconds from Jan 1, 1970 to the date of the game; it took an embarrassing amount of Googling to discover this.

With this new discovery I forged on. The last two things I needed to do were to create a dictionary to hold my data with all of my columns:

gamesdic = {'match_id': match_id, 'week_id': week_id, 'result': match_result, 'ha_ind': ha_ind, 'team': team_list, 'match_date': week_date}

With the dictionary in hand I was able to create a DataFrame:

games = pd.DataFrame(gamesdic).set_index('match_id')

The line above is frighteningly simple. It’s basically saying, hey, take all of the data from the gamesdic dictionary and make the match_id the index.
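If the match_date column ends up displaying as those big millisecond numbers, one way to get readable dates back (a sketch, not part of the original code) is to convert the column explicitly:

# normalize match_date to pandas datetimes so it displays like 2016-09-01
games['match_date'] = pd.to_datetime(games['match_date'])
# if the values really are epoch milliseconds, pass unit='ms' instead:
# games['match_date'] = pd.to_datetime(games['match_date'], unit='ms')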

To get the first part, see my post Web Scraping – Passer Data (Part I).


Web Scraping – Passer Data (Part I)

For the first time in many years I’ve joined a Fantasy Football league with some of my family. One of the reasons I haven’t engaged with Fantasy Football before is that, frankly, I’m not very good. In fact, I’m pretty bad. I have a passing interest in football, but my interests lie more with baseball, especially in light of the NFL’s policy of punishing players for some infractions of league rules but not for infractions of societal norms (see Tom Brady and Ray Lewis, respectively).

That being said, I am in a Fantasy Football league this year, and as of this writing am a respectable 5-5 and only 2 games back from making the playoffs with 3 games left.

This means that what I started on yesterday I really should have started on much sooner, but I didn’t.

I had been counting on ESPN’s ‘projected points’ to help guide me to victory … it’s working about as well as flipping a coin (see my record above).

I had a couple of days off from work this week and some time to tinker with Python, so I thought, what the hell, let’s see what I can do.

Just to see what other people had done I did a quick Google Search and found someone that had done what I was trying to do with data from the NBA in 2013.

Using their post as a model I set to work.

The basic strategy I am mimicking is to:

  1. Scrape the list of teams (and their URLs) from ESPN
  2. Walk each team’s schedule page to collect the game IDs
  3. Pull the passer statistics from each game’s box score into a DataFrame for analysis

I start off by importing some standard libraries: pandas, requests, and BeautifulSoup (the other libraries are for later).

import pandas as pd
import requests
from bs4 import BeautifulSoup
import csv
import numpy as np
from datetime import datetime, date

Next, I need to set up some variables. BeautifulSoup is a Python library for pulling data out of HTML and XML files. It’s pretty sweet. The code below declares a URL to scrape and then uses the requests library to get the actual HTML of the page and put it into a variable called r.

url = 'http://espn.go.com/nfl/teams'
r = requests.get(url)

r has an attribute called text which I’ll use with BeautifulSoup to create the soup. The 'lxml' declares the parser type to be used. When I left it off I got a warning about not explicitly specifying a parser, so I decided to explicitly state which parser I was going to use to avoid the warning.

soup = BeautifulSoup(r.text, 'lxml')

Next I use the find_all function from BeautifulSoup. The cool thing about find_all is that you can pass just a tag element, i.e. li or p, or you can add an additional class_ argument (notice the underscore at the end … I missed it more than once and got an error, because class is a keyword used by Python). Below I’m getting all of the ul elements with the class 'medium-logos'.

tables = soup.find_all('ul', class_='medium-logos')

Now I set up some list variables to hold the items I’ll need later to create my dictionary:

teams = []
prefix_1 = []
prefix_2 = []
teams_urls = []

Now, we do some actual programming:

Using a nested for loop, I find all of the li elements in the variable called lis, which is based on the variable tables (recall this is all of the HTML from the page I scraped that matches <ul class='medium-logos'></ul> tags and everything between them).

The nested for loop creates 2 new variables which are used to populate the 4 lists from above. Creating the info variable gets the a tag from each li tag. The url variable takes the href attribute from the info variable. In order to add an item to a list (remember, all of the lists above are empty at this point) we have to invoke the append method on each of the lists with the data we care about (as we loop through).

The split function can be used on a string (which url is). It allows you to take a string apart based on a passed-in value and convert the output into a list. This is super useful with URLs, since there are many cases where we’re trying to get at a piece of the path. Using split('/') breaks the URL into its constituent parts. The negative indexes let you count from the right instead of the left.

To really break this down a bit, if we looked at just one of the URLs we’d get this:

http://www.espn.com/nfl/team/_/name/ten/tennessee-titans

The split('/') command will turn the URL into this:

['http:', '', 'www.espn.com', 'nfl', 'team', '_', 'name', 'ten', 'tennessee-titans']

Using negative indexes allows us to get the rightmost 2 values that we need.

for table in tables:
    lis = table.find_all('li')
    for li in lis:
        info = li.h5.a
        teams.append(info.text)
        url = info['href']
        teams_urls.append(url)
        prefix_1.append(url.split('/')[-2])
        prefix_2.append(url.split('/')[-1])

Now we put it all together into a dictionary:

dic = {'url': teams_urls, 'prefix_2': prefix_2, 'prefix_1': prefix_1, 'team': teams}
teams = pd.DataFrame(dic)

This is the end of part 1. Parts 2 and 3 will be coming later this week.

I’ve also posted all of the code to my GitHub Repo.