This is the first post in a series where I’ll show how I use pandas on real-world datasets.

For this post, we’ll look at data I collected with Cyclemeter on my daily bike ride to and from school last year. I had to manually start and stop the tracking at the beginning and end of each ride. There may have been times when I forgot to do that, so we’ll see if we can find those.

Let’s begin in the usual fashion: a bunch of imports and loading our data.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from IPython import display

Each day has data recorded in two formats, CSVs and KMLs. For now I’ve just uploaded the CSVs to the data/ directory. We’ll start with those, and come back to the KMLs later.

!ls data | head -n 5
Cyclemeter-Cycle-20130801-0707.csv
Cyclemeter-Cycle-20130801-0707.kml
Cyclemeter-Cycle-20130801-1720.csv
Cyclemeter-Cycle-20130801-1720.kml
Cyclemeter-Cycle-20130805-0819.csv

Take a look at the first one to see how the file’s laid out.

df = pd.read_csv('data/Cyclemeter-Cycle-20130801-0707.csv')
df.head()
                  Time Ride Time  Ride Time (secs) Stopped Time  \
0  2013-08-01 07:07:10   0:00:01               1.1      0:00:00
1  2013-08-01 07:07:17   0:00:08               8.2      0:00:00
2  2013-08-01 07:07:22   0:00:13              13.2      0:00:00
3  2013-08-01 07:07:27   0:00:18              18.2      0:00:00
4  2013-08-01 07:07:40   0:00:31              31.2      0:00:00

   Stopped Time (secs)   Latitude  Longitude  Elevation (feet)  \
0                    0  41.703753 -91.609892               963
1                    0  41.703825 -91.609835               852
2                    0  41.703858 -91.609814               789
3                    0  41.703943 -91.610090               787
4                    0  41.704381 -91.610258               788

   Distance (miles)  Speed (mph)     Pace  Pace (secs)  \
0              0.00         2.88  0:20:51         1251
1              0.01         2.88  0:20:51         1251
2              0.01         2.88  0:20:51         1251
3              0.02         6.60  0:09:06          546
4              0.06         9.50  0:06:19          379

   Average Speed (mph) Average Pace  Average Pace (secs)  \
0                 0.00      0:00:00                    0
1                 2.56      0:23:27                 1407
2                 2.27      0:26:27                 1587
3                 4.70      0:12:47                  767
4                 6.37      0:09:26                  566

   Ascent (feet)  Descent (feet)  Calories
0              0               0         0
1              0             129         0
2              0             173         0
3              0             173         1
4              0             173         2
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 252 entries, 0 to 251
Data columns (total 18 columns):
Time                   252 non-null object
Ride Time              252 non-null object
Ride Time (secs)       252 non-null float64
Stopped Time           252 non-null object
Stopped Time (secs)    252 non-null float64
Latitude               252 non-null float64
Longitude              252 non-null float64
Elevation (feet)       252 non-null int64
Distance (miles)       252 non-null float64
Speed (mph)            252 non-null float64
Pace                   252 non-null object
Pace (secs)            252 non-null int64
Average Speed (mph)    252 non-null float64
Average Pace           252 non-null object
Average Pace (secs)    252 non-null int64
Ascent (feet)          252 non-null int64
Descent (feet)         252 non-null int64
Calories               252 non-null int64
dtypes: float64(7), int64(6), object(5)

Pandas has automatically parsed the headers, but it could use a bit of help with some of the dtypes. We can see that the Time column holds datetimes, but it’s been parsed as the object dtype. This is pandas’ fallback dtype that can store anything, but operations on it won’t be optimized the way they would be on a float, bool, or datetime64 column. read_csv takes a parse_dates parameter, which we’ll give a list of column names.

date_cols = ["Time", "Ride Time", "Stopped Time", "Pace", "Average Pace"]

df = pd.read_csv("data/Cyclemeter-Cycle-20130801-0707.csv",
                 parse_dates=date_cols)
display.display_html(df.head())
df.info()
                 Time           Ride Time  Ride Time (secs) Stopped Time  \
0 2013-08-01 07:07:10 2014-08-26 00:00:01               1.1   2014-08-26
1 2013-08-01 07:07:17 2014-08-26 00:00:08               8.2   2014-08-26
2 2013-08-01 07:07:22 2014-08-26 00:00:13              13.2   2014-08-26
3 2013-08-01 07:07:27 2014-08-26 00:00:18              18.2   2014-08-26
4 2013-08-01 07:07:40 2014-08-26 00:00:31              31.2   2014-08-26

   Stopped Time (secs)   Latitude  Longitude  Elevation (feet)  \
0                    0  41.703753 -91.609892               963
1                    0  41.703825 -91.609835               852
2                    0  41.703858 -91.609814               789
3                    0  41.703943 -91.610090               787
4                    0  41.704381 -91.610258               788

   Distance (miles)  Speed (mph)                Pace  Pace (secs)  \
0              0.00         2.88 2014-08-26 00:20:51         1251
1              0.01         2.88 2014-08-26 00:20:51         1251
2              0.01         2.88 2014-08-26 00:20:51         1251
3              0.02         6.60 2014-08-26 00:09:06          546
4              0.06         9.50 2014-08-26 00:06:19          379

   Average Speed (mph)        Average Pace  Average Pace (secs)  \
0                 0.00 2014-08-26 00:00:00                    0
1                 2.56 2014-08-26 00:23:27                 1407
2                 2.27 2014-08-26 00:26:27                 1587
3                 4.70 2014-08-26 00:12:47                  767
4                 6.37 2014-08-26 00:09:26                  566

   Ascent (feet)  Descent (feet)  Calories
0              0               0         0
1              0             129         0
2              0             173         0
3              0             173         1
4              0             173         2
<class 'pandas.core.frame.DataFrame'>
Int64Index: 252 entries, 0 to 251
Data columns (total 18 columns):
Time                   252 non-null datetime64[ns]
Ride Time              252 non-null datetime64[ns]
Ride Time (secs)       252 non-null float64
Stopped Time           252 non-null datetime64[ns]
Stopped Time (secs)    252 non-null float64
Latitude               252 non-null float64
Longitude              252 non-null float64
Elevation (feet)       252 non-null int64
Distance (miles)       252 non-null float64
Speed (mph)            252 non-null float64
Pace                   252 non-null datetime64[ns]
Pace (secs)            252 non-null int64
Average Speed (mph)    252 non-null float64
Average Pace           252 non-null datetime64[ns]
Average Pace (secs)    252 non-null int64
Ascent (feet)          252 non-null int64
Descent (feet)         252 non-null int64
Calories               252 non-null int64
dtypes: datetime64[ns](5), float64(7), int64(6)

One minor issue is that some of these columns are parsed as full datetimes when they’re really just times; pandas stores everything as datetime64, so the times get attached to a placeholder date. We’ll take care of that later. For now we’ll keep them as datetimes, and remember that they’re really just times.
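As a preview, here’s a sketch of one way we could recover the true durations later: subtract off the placeholder date, which leaves a timedelta64. We don’t need this yet, it’s just to show the fix is straightforward.

# Sketch: strip the placeholder date so 'Ride Time' becomes a true
# duration (timedelta64) rather than a time on an arbitrary day.
ride_time = df['Ride Time'] - df['Ride Time'].dt.normalize()
ride_time.head()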

Now let’s do the same thing, but for all the files.

Use a generator expression to filter down to just the CSVs that match the expected naming style. I try to use lazy generators instead of lists wherever possible; in this case the list is so small that it really doesn’t matter, but it’s a good habit. Sorting the filenames first keeps the rides in chronological order, since each name embeds the ride’s date and time.

import os
csvs = (f for f in sorted(os.listdir('data'))
        if f.startswith('Cyclemeter') and f.endswith('.csv'))

One thing to anticipate: we’ll want to concatenate the CSVs into a single DataFrame, but we’ll also want to retain some idea of which specific ride an observation came from. So let’s create a ride_id variable, which will just be an integer ranging from $0, \ldots, N - 1$, where $N$ is the number of rides.

Make a simple helper function to do this, and apply it to each csv.

def read_ride(path_, i):
    """
    Read in the CSV at `path_`, and assign `i` to the `ride_id` column.
    """
    date_cols = ["Time", "Ride Time", "Stopped Time", "Pace", "Average Pace"]

    df = pd.read_csv(path_, parse_dates=date_cols)
    df['ride_id'] = i
    return df

dfs = (read_ride(os.path.join('data', csv), i)
       for (i, csv) in enumerate(csvs))

Now concatenate them together. The original indices are meaningless, so we’ll ignore them in the concat.

df = pd.concat(dfs, ignore_index=True)
df.head()
                 Time           Ride Time  Ride Time (secs) Stopped Time  \
0 2013-08-01 07:07:10 2014-08-26 00:00:01               1.1   2014-08-26
1 2013-08-01 07:07:17 2014-08-26 00:00:08               8.2   2014-08-26
2 2013-08-01 07:07:22 2014-08-26 00:00:13              13.2   2014-08-26
3 2013-08-01 07:07:27 2014-08-26 00:00:18              18.2   2014-08-26
4 2013-08-01 07:07:40 2014-08-26 00:00:31              31.2   2014-08-26

   Stopped Time (secs)   Latitude  Longitude  Elevation (feet)  \
0                    0  41.703753 -91.609892               963
1                    0  41.703825 -91.609835               852
2                    0  41.703858 -91.609814               789
3                    0  41.703943 -91.610090               787
4                    0  41.704381 -91.610258               788

   Distance (miles)  Speed (mph)                Pace  Pace (secs)  \
0              0.00         2.88 2014-08-26 00:20:51         1251
1              0.01         2.88 2014-08-26 00:20:51         1251
2              0.01         2.88 2014-08-26 00:20:51         1251
3              0.02         6.60 2014-08-26 00:09:06          546
4              0.06         9.50 2014-08-26 00:06:19          379

   Average Speed (mph)        Average Pace  Average Pace (secs)  \
0                 0.00 2014-08-26 00:00:00                    0
1                 2.56 2014-08-26 00:23:27                 1407
2                 2.27 2014-08-26 00:26:27                 1587
3                 4.70 2014-08-26 00:12:47                  767
4                 6.37 2014-08-26 00:09:26                  566

   Ascent (feet)  Descent (feet)  Calories  ride_id
0              0               0         0        0
1              0             129         0        0
2              0             173         0        0
3              0             173         1        0
4              0             173         2        0

Great! The data itself is clean enough that we didn’t have to do too much munging.
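As a quick sanity check, we can count how many distinct rides ended up in the merged DataFrame; it should match the number of CSV files we read in.

# Should equal the number of CSV files in data/.
df['ride_id'].nunique()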

Let’s persist the merged DataFrame. Writing it out to a csv would be fine, but I like to use pandas’ HDF5 integration (via pytables) for personal projects.

df.to_hdf('data/cycle_store.h5', key='merged',
          format='table')

I used the table format in case we want to do some querying on the HDFStore itself, but we’ll save that for next time.
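As a rough sketch of the kind of query the table format enables: if we had also declared ride_id as a data column when writing (the call above doesn’t), we could later pull back a single ride without loading the whole file.

# Querying on a column requires declaring it as a data column at write
# time, e.g.:
#   df.to_hdf('data/cycle_store.h5', key='merged', format='table',
#             data_columns=['ride_id'])
first_ride = pd.read_hdf('data/cycle_store.h5', 'merged',
                         where='ride_id == 0')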

That’s it for this post. Next time, we’ll do some exploratory analysis of the data.