This is part 1 in my series on writing modern idiomatic pandas.


Effective Pandas

Introduction

This series is about how to make effective use of pandas, a data analysis library for the Python programming language. It’s targeted at an intermediate level: people who have some experience with pandas, but are looking to improve.

Prior Art

There are many great resources for learning pandas; this is not one of them. For beginners, I typically recommend Greg Reda’s 3-part introduction, especially if they’re familiar with SQL. Of course, there’s the pandas documentation itself. I gave a talk at PyData Seattle targeted as an introduction if you prefer video form. Wes McKinney’s Python for Data Analysis is still the goto book (and is also a really good introduction to NumPy as well). Jake VanderPlas’s Python Data Science Handbook, in early release, is great too. Kevin Markham has a video series for beginners learning pandas.

With all those resources (and many more that I’ve slighted through omission), why write another? Surely the law of diminishing returns is kicking in by now. Still, I thought there was room for a guide that is up to date (as of March 2016) and emphasizes idiomatic pandas code (code that is pandorable). This series probably won’t be appropriate for people completely new to python or NumPy and pandas. By luck, this first post happened to cover topics that are relatively introductory, so read some of the linked material and come back, or let me know if you have questions.

Get the Data

We’ll be working with flight delay data from the BTS (R users can install Hadley’s NYCFlights13 dataset for similar data).

import os
import zipfile

import requests
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

if int(os.environ.get("MODERN_PANDAS_EPUB", 0)):
    import prep

headers = {
    'Referer': 'https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time',
    'Origin': 'https://www.transtats.bts.gov',
    'Content-Type': 'application/x-www-form-urlencoded',
}

params = (
    ('Table_ID', '236'),
    ('Has_Group', '3'),
    ('Is_Zipped', '0'),
)

with open('modern-1-url.txt', encoding='utf-8') as f:
    data = f.read().strip()

os.makedirs('data', exist_ok=True)
dest = "data/flights.csv.zip"

if not os.path.exists(dest):
    r = requests.post('https://www.transtats.bts.gov/DownLoad_Table.asp',
                      headers=headers, params=params, data=data, stream=True)

    with open("data/flights.csv.zip", 'wb') as f:
        for chunk in r.iter_content(chunk_size=102400): 
            if chunk:
                f.write(chunk)

That download returned a ZIP file. There’s an open Pull Request for automatically decompressing ZIP archives with a single CSV, but for now we have to extract it ourselves and then read it in.

zf = zipfile.ZipFile("data/flights.csv.zip")
fp = zf.extract(zf.filelist[0].filename, path='data/')
df = pd.read_csv(fp, parse_dates=["FL_DATE"]).rename(columns=str.lower)
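
In more recent pandas releases the manual extraction step is unnecessary: read_csv infers the compression from the .zip extension and decompresses single-CSV archives itself. A sketch, assuming a newer pandas:

# One-step read on a recent pandas; compression is inferred from the
# ".zip" extension (this works when the archive contains a single CSV).
df = (pd.read_csv("data/flights.csv.zip", parse_dates=["FL_DATE"])
        .rename(columns=str.lower))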

df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 450017 entries, 0 to 450016
Data columns (total 33 columns):
fl_date                  450017 non-null datetime64[ns]
unique_carrier           450017 non-null object
airline_id               450017 non-null int64
tail_num                 449378 non-null object
fl_num                   450017 non-null int64
origin_airport_id        450017 non-null int64
origin_airport_seq_id    450017 non-null int64
origin_city_market_id    450017 non-null int64
origin                   450017 non-null object
origin_city_name         450017 non-null object
dest_airport_id          450017 non-null int64
dest_airport_seq_id      450017 non-null int64
dest_city_market_id      450017 non-null int64
dest                     450017 non-null object
dest_city_name           450017 non-null object
crs_dep_time             450017 non-null int64
dep_time                 441476 non-null float64
dep_delay                441476 non-null float64
taxi_out                 441244 non-null float64
wheels_off               441244 non-null float64
wheels_on                440746 non-null float64
taxi_in                  440746 non-null float64
crs_arr_time             450017 non-null int64
arr_time                 440746 non-null float64
arr_delay                439645 non-null float64
cancelled                450017 non-null float64
cancellation_code        8886 non-null object
carrier_delay            97699 non-null float64
weather_delay            97699 non-null float64
nas_delay                97699 non-null float64
security_delay           97699 non-null float64
late_aircraft_delay      97699 non-null float64
unnamed: 32              0 non-null float64
dtypes: datetime64[ns](1), float64(15), int64(10), object(7)
memory usage: 113.3+ MB

Indexing

Or, explicit is better than implicit. By my count, 7 of the top-15 voted pandas questions on Stackoverflow are about indexing. This seems as good a place as any to start.

By indexing, we mean the selection of subsets of a DataFrame or Series. DataFrames (and to a lesser extent, Series) provide a difficult set of challenges:

  • Like lists, you can index by location.
  • Like dictionaries, you can index by label.
  • Like NumPy arrays, you can index by boolean masks.
  • Any of these indexers could be scalar indexes, or they could be arrays, or they could be slices.
  • Any of these should work on the index (row labels) or columns of a DataFrame.
  • And any of these should work on hierarchical indexes.

The complexity of pandas’ indexing is a microcosm for the complexity of the pandas API in general. There’s a reason for the complexity (well, most of it), but that’s not much consolation while you’re learning. Still, all of these ways of indexing really are useful enough to justify their inclusion in the library.
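
To make those concrete, here’s a minimal sketch of each style on a toy Series (the same ideas carry over to DataFrames):

s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])

s.iloc[0]   # like a list: select by location -> 10
s.loc['b']  # like a dict: select by label -> 20
s[s > 15]   # like a NumPy array: select by boolean mask -> b and c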

Slicing

Brief history digression: For years the preferred method for row and/or column selection was .ix.

df.ix[10:15, ['fl_date', 'tail_num']]
/Users/taugspurger/Envs/blog/lib/python3.6/site-packages/ipykernel_launcher.py:1: DeprecationWarning: 
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing

See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate_ix
  """Entry point for launching an IPython kernel.

       fl_date tail_num
10  2017-01-01   N756AA
11  2017-01-01   N807AA
12  2017-01-01   N755AA
13  2017-01-01   N951AA
14  2017-01-01   N523AA
15  2017-01-01   N155AA

As you can see, this method is now deprecated. Why’s that? This simple little operation hides some complexity. What if, rather than our default range(n) index, we had an integer index like

# silence the deprecation warning from now on
import warnings
warnings.simplefilter("ignore", DeprecationWarning)
first = df.groupby('airline_id')[['fl_date', 'unique_carrier']].first()
first.head()

               fl_date unique_carrier
airline_id
19393       2017-01-01             WN
19690       2017-01-01             HA
19790       2017-01-01             DL
19805       2017-01-01             AA
19930       2017-01-01             AS

Can you predict ahead of time what our slice from above will give when passed to .ix?

first.ix[10:15, ['fl_date', 'tail_num']]

Empty DataFrame
Columns: [fl_date, tail_num]
Index: []

Surprise, an empty DataFrame! Which in data analysis is rarely a good thing. What happened?

We had an integer index, so the call to .ix used its label-based mode. It was looking for integer labels between 10:15 (inclusive). It didn’t find any. Since we sliced a range it returned an empty DataFrame, rather than raising a KeyError.
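
Spelled out with today’s explicit indexers, the two behaviors .ix was choosing between look like this (a sketch against the first DataFrame above):

first.loc[10:15]    # label-based: no integer labels fall within [10, 15] -> empty
first.iloc[10:15]   # position-based: whatever rows sit at positions 10-14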

By way of contrast, suppose we had a string index, rather than integers.

first = df.groupby('unique_carrier').first()
first.ix[10:15, ['fl_date', 'tail_num']]

                   fl_date tail_num
unique_carrier
VX              2017-01-01   N846VA
WN              2017-01-01   N955WN

And it works again! Now that we had a string index, .ix used its positional mode. It looked for rows 10-15 (exclusive on the right).

But you can’t reliably predict what the outcome of the slice will be ahead of time. It’s on the reader of the code (probably your future self) to know the dtypes so you can reckon whether .ix will use label indexing (returning the empty DataFrame) or positional indexing (like the last example). In general, methods whose behavior depends on the data, like .ix dispatching to label-based indexing on integer Indexes but location-based indexing on non-integer, are hard to use correctly. We’ve been trying to stamp them out in pandas.

Since pandas 0.12, these tasks have been cleanly separated into two methods:

  1. .loc for label-based indexing
  2. .iloc for positional indexing

first.loc[['AA', 'AS', 'DL'], ['fl_date', 'tail_num']]

                   fl_date tail_num
unique_carrier
AA              2017-01-01   N153AA
AS              2017-01-01   N557AS
DL              2017-01-01   N942DL

first.iloc[[0, 1, 3], [0, 1]]

                   fl_date  airline_id
unique_carrier
AA              2017-01-01       19805
AS              2017-01-01       19930
DL              2017-01-01       19790

.ix is deprecated, but will hang around for a little while. But if you’ve been using .ix out of habit, or if you didn’t know any better, maybe give .loc and .iloc a shot. I’d recommend carefully updating your code to decide if you’ve been using positional or label indexing, and choose the appropriate indexer. For the intrepid reader, Joris Van den Bossche (a core pandas dev) compiled a great overview of the pandas __getitem__ API. A later post in this series will go into more detail on using Indexes effectively; they are useful objects in their own right, but for now we’ll move on to a closely related topic.
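
One wrinkle to keep in mind while migrating: label slices with .loc are inclusive on both endpoints, while .iloc follows Python’s usual half-open convention. A quick sketch:

s = pd.Series([0, 1, 2, 3], index=['a', 'b', 'c', 'd'])

s.loc['a':'c']   # label slice: includes both endpoints -> a, b, c
s.iloc[0:3]      # positional slice: half-open -> positions 0, 1, 2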

SettingWithCopy

Pandas used to get a lot of questions about assignments seemingly not working. We’ll take this StackOverflow question as a representative question.

f = pd.DataFrame({'a':[1,2,3,4,5], 'b':[10,20,30,40,50]})
f

   a   b
0  1  10
1  2  20
2  3  30
3  4  40
4  5  50

The user wanted to take the rows of b where a was 3 or less, and set them equal to b / 10. We’ll use boolean indexing to select those rows, f['a'] <= 3:

# ignore the context manager for now
with pd.option_context('mode.chained_assignment', None):
    f[f['a'] <= 3]['b'] = f[f['a'] <= 3 ]['b'] / 10
f

   a   b
0  1  10
1  2  20
2  3  30
3  4  40
4  5  50

And nothing happened. Well, something did happen, but nobody witnessed it. If an object without any references is modified, does it make a sound?

The warning I silenced above with the context manager links to an explanation that’s quite helpful. I’ll summarize the high points here.

The “failure” to update f comes down to what’s called chained indexing, a practice to be avoided. The “chained” comes from indexing multiple times, one after another, rather than one single indexing operation. Above we had two operations on the left-hand side, one __getitem__ and one __setitem__ (in python, the square brackets are syntactic sugar for __getitem__ or __setitem__ if it’s for assignment). So f[f['a'] <= 3]['b'] becomes

  1. getitem: f[f['a'] <= 3]
  2. setitem: _['b'] = ... # using _ to represent the result of 1.

In general, pandas can’t guarantee whether that first __getitem__ returns a view or a copy of the underlying data. The changes will be made to the thing I called _ above, the result of the __getitem__ in 1. But we don’t know that _ shares the same memory as our original f. And so we can’t be sure that whatever changes are being made to _ will be reflected in f.
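
Spelling out the chain makes the problem visible (a sketch of roughly what Python executes; tmp here is the _ from the steps above):

tmp = f[f['a'] <= 3]      # 1. __getitem__: may return a view or a copy of f's data
tmp['b'] = tmp['b'] / 10  # 2. __setitem__ on tmp; f may or may not see the change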

Done properly, you would write

f.loc[f['a'] <= 3, 'b'] = f.loc[f['a'] <= 3, 'b'] / 10
f

   a     b
0  1   1.0
1  2   2.0
2  3   3.0
3  4  40.0
4  5  50.0

Now this is all in a single call to __setitem__ and pandas can ensure that the assignment happens properly.

The rough rule is: any time you see back-to-back square brackets, ][, you’re asking for trouble. Replace that with a single .loc[..., ...] and you’ll be set.

The other bit of advice: the SettingWithCopy warning is raised at the point of assignment, but the potentially problematic copy may have been created earlier in your code, so the line the warning points to isn’t necessarily the line that needs fixing.
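
A sketch of that situation: the copy is created on the first line, but the warning fires on the second.

sub = f[f['a'] <= 3]  # may silently copy f's data
sub['b'] = 0          # the SettingWithCopy warning points here, not to the line above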

Multidimensional Indexing

MultiIndexes might just be my favorite feature of pandas. They let you represent higher-dimensional datasets in a familiar two-dimensional table, which my brain can sometimes handle. Each additional level of the MultiIndex represents another dimension. The cost of this is somewhat harder label indexing.

My very first bug report to pandas, back in November 2012, was about indexing into a MultiIndex. I bring it up now because I genuinely couldn’t tell whether the result I got was a bug or not. Also, from that bug report

Sorry if this isn’t actually a bug. Still very new to python. Thanks!

Adorable.

That operation was made much easier by this addition in 2014, which lets you slice arbitrary levels of a MultiIndex. Let’s make a MultiIndexed DataFrame to work with.

hdf = df.set_index(['unique_carrier', 'origin', 'dest', 'tail_num',
                    'fl_date']).sort_index()
hdf[hdf.columns[:4]].head()

                                                 airline_id  fl_num  origin_airport_id  origin_airport_seq_id
unique_carrier origin dest tail_num fl_date
AA             ABQ    DFW  N3ABAA   2017-01-15        19805    2611              10140                1014003
                                    2017-01-29        19805    1282              10140                1014003
                           N3AEAA   2017-01-11        19805    2511              10140                1014003
                           N3AJAA   2017-01-24        19805    2511              10140                1014003
                           N3AVAA   2017-01-11        19805    1282              10140                1014003

And just to clear up some terminology, the levels of a MultiIndex are the former column names (unique_carrier, origin…). The labels are the actual values in a level, ('AA', 'ABQ', …). Levels can be referred to by name or position, with 0 being the outermost level.
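
Both are easy to inspect on the index itself; a quick check against the hdf we just built:

hdf.index.names      # ['unique_carrier', 'origin', 'dest', 'tail_num', 'fl_date']
hdf.index.levels[0]  # the labels available in the outermost level: 'AA', 'AS', ...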

Slicing the outermost index level is pretty easy, we just use our regular .loc[row_indexer, column_indexer]. We’ll select the columns dep_time and dep_delay where the carrier was American Airlines, Delta, or US Airways.

hdf.loc[['AA', 'DL', 'US'], ['dep_time', 'dep_delay']]

                                                dep_time  dep_delay
unique_carrier origin dest tail_num fl_date
AA             ABQ    DFW  N3ABAA   2017-01-15     500.0        0.0
                                    2017-01-29     757.0       -3.0
                           N3AEAA   2017-01-11    1451.0       -9.0
                           N3AJAA   2017-01-24    1502.0        2.0
                           N3AVAA   2017-01-11     752.0       -8.0
                           N3AWAA   2017-01-27    1550.0       50.0
                           N3AXAA   2017-01-16    1524.0       24.0
                                    2017-01-17     757.0       -3.0
...                                                  ...        ...
DL             XNA    ATL  N977DL   2017-01-21     603.0       -2.0
                           N979AT   2017-01-15    1238.0       -1.0
                                    2017-01-22    1155.0       -4.0
                           N983AT   2017-01-11    1148.0      -11.0
                           N988DL   2017-01-26     556.0       -4.0
                           N989DL   2017-01-25     555.0       -5.0
                           N990DL   2017-01-15     604.0       -1.0
                           N995AT   2017-01-16    1152.0       -7.0

[142945 rows × 2 columns]

So far, so good. What if you wanted to select the rows whose origin was Chicago O’Hare (ORD) or Des Moines International Airport (DSM)? Well, .loc wants [row_indexer, column_indexer], so let’s wrap the two elements of our row indexer (the list of carriers and the list of origins) in a tuple to make it a single unit:

hdf.loc[(['AA', 'DL', 'US'], ['ORD', 'DSM']), ['dep_time', 'dep_delay']]

                                                dep_time  dep_delay
unique_carrier origin dest tail_num fl_date
AA             DSM    DFW  N424AA   2017-01-23    1324.0       -3.0
                           N426AA   2017-01-25     541.0       -9.0
                           N437AA   2017-01-13     542.0       -8.0
                                    2017-01-23     544.0       -6.0
                           N438AA   2017-01-11     542.0       -8.0
                           N439AA   2017-01-24     544.0       -6.0
                                    2017-01-31     544.0       -6.0
                           N4UBAA   2017-01-18    1323.0       -4.0
...                                                  ...        ...
DL             ORD    SLC  N360NB   2017-01-25    1354.0       16.0
                           N365NB   2017-01-18    1350.0       12.0
                           N368NB   2017-01-27    1351.0       13.0
                           N370NB   2017-01-20    1355.0       17.0
                           N374NW   2017-01-03    1846.0       -1.0
                           N987AT   2017-01-08    1914.0       29.0

[5582 rows × 2 columns]

Now try to do any flight from ORD or DSM, not just from those carriers. This used to be a pain. You might have to turn to the .xs method, or pass in df.index.get_level_values(0) and zip that up with the indexers your wanted, or maybe reset the index and do a boolean mask, and set the index again… ugh.
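
For reference, the older workarounds looked something like this (a sketch; .xs handles a single value in a single level, while the boolean mask generalizes to several):

hdf.xs('ORD', level='origin')  # one value from one level
mask = hdf.index.get_level_values('origin').isin(['ORD', 'DSM'])
hdf[mask]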

But now, you can use an IndexSlice.

hdf.loc[pd.IndexSlice[:, ['ORD', 'DSM']], ['dep_time', 'dep_delay']]

                                                dep_time  dep_delay
unique_carrier origin dest tail_num fl_date
AA             DSM    DFW  N424AA   2017-01-23    1324.0       -3.0
                           N426AA   2017-01-25     541.0       -9.0
                           N437AA   2017-01-13     542.0       -8.0
                                    2017-01-23     544.0       -6.0
                           N438AA   2017-01-11     542.0       -8.0
                           N439AA   2017-01-24     544.0       -6.0
...                                                  ...        ...
WN             DSM    STL  N952WN   2017-01-29     854.0       -6.0
                           N954WN   2017-01-11    1736.0       -9.0
                           N956WN   2017-01-06    1736.0       -9.0
                           NaN      2017-01-16       NaN        NaN
                                    2017-01-17       NaN        NaN

[19466 rows × 2 columns]

The : says include every label in this level. The IndexSlice object is just sugar for the actual python slice objects needed to slice each level.

pd.IndexSlice[:, ['ORD', 'DSM']]
(slice(None, None, None), ['ORD', 'DSM'])
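
One gotcha: label slicing on a MultiIndex generally requires the index to be lexsorted, which is why hdf was built with .sort_index(); on an unsorted index pandas raises an error about lexsort depth. With that in place, you can mix slices and lists across levels, something like:

idx = pd.IndexSlice
hdf.loc[idx['AA':'DL', ['ORD', 'DSM']], ['dep_time', 'dep_delay']]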

We’ll talk more about working with Indexes (including MultiIndexes) in a later post. I have an unproven thesis that they’re underused because IndexSlice is underused, causing people to think they’re more unwieldy than they actually are. But let’s close out part one.

WrapUp

This first post covered Indexing, a topic that’s central to pandas. The power provided by the DataFrame comes with some unavoidable complexity. Best practices (using .loc and .iloc) will spare you many a headache. We then toured a pair of commonly misunderstood sub-topics: SettingWithCopy and hierarchical indexing.