Spreadsheets#

Introduction#

This chapter will show you how to work with spreadsheets, for example Microsoft Excel files, in Python. We already saw how to import csv (and tsv) files in Reading and Writing Files; here we introduce tools for working with data in Excel spreadsheets and Google Sheets.

If you or your collaborators are using spreadsheets to organise data that will be ingested by an analytical tool like Python, we recommend reading the paper “Data Organization in Spreadsheets” by Karl Broman and Kara Woo [Broman and Woo, 2018]. The best practices presented in this paper will save you a lot of headaches down the line when you import data from a spreadsheet into Python to analyse and visualise it. (For spreadsheets that are meant to be read by humans, we recommend the good practice tables package.)

Prerequisites#

You will need to install the pandas package for this chapter. You will also need to install the openpyxl package by running pip install openpyxl in the terminal.

Reading Excel (and Similar) Files#

pandas can read in xls, xlsx, xlsm, xlsb, odf, ods, and odt files from your local filesystem or from a URL. It also supports an option to read a single sheet or a list of sheets.

To show how this works, we’ll work with an example spreadsheet called “students.xlsx”. (With thanks to Hadley Wickham’s R4DS for the example.) The figure below shows what the spreadsheet looks like.

A look at the students spreadsheet in Excel. The spreadsheet contains information on 6 students, their ID, full name, favourite food, meal plan, and age.

The first argument to pd.read_excel() is the path to the file to read. If you have downloaded the file onto your computer and put it in a subfolder called “data”, then you would use the path “data/students.xlsx”; here, though, we load it directly from the URL.

import pandas as pd
import numpy as np

students = pd.read_excel(
    "https://github.com/aeturrell/python4DS/raw/main/data/students.xlsx"
)
students
Student ID Full Name favourite.food mealPlan AGE
0 1 Sunil Huffmann Strawberry yoghurt Lunch only 4
1 2 Barclay Lynn French fries Lunch only 5
2 3 Jayendra Lyne NaN Breakfast and lunch 7
3 4 Leon Rossini Anchovies Lunch only NaN
4 5 Chidiegwu Dunkel Pizza Breakfast and lunch five
5 6 Güvenç Attila Ice cream Lunch only 6

We have six students in the data and five variables on each student. However there are a few things we might want to address in this dataset:

  • The column names are all over the place. You can provide column names that follow a consistent format (we recommend snake_case) by passing them to the names argument.

pd.read_excel(
    "https://github.com/aeturrell/python4DS/raw/main/data/students.xlsx",
    names=["student_id", "full_name", "favourite_food", "meal_plan", "age"],
)
student_id full_name favourite_food meal_plan age
0 1 Sunil Huffmann Strawberry yoghurt Lunch only 4
1 2 Barclay Lynn French fries Lunch only 5
2 3 Jayendra Lyne NaN Breakfast and lunch 7
3 4 Leon Rossini Anchovies Lunch only NaN
4 5 Chidiegwu Dunkel Pizza Breakfast and lunch five
5 6 Güvenç Attila Ice cream Lunch only 6
  • age is read in as a column of objects, but it really should be numeric. Just as with read_csv(), you can supply a dtype argument to read_excel() and specify the data types for the columns you read in. Your options include "boolean", "int", "float", "datetime", "string", and more. But we can see right away that this isn’t going to work for the “age” column because it mixes numbers and text, so we first need to map its text to numbers (we come back to passing dtype directly once the data are clean).

students = pd.read_excel(
    "data/students.xlsx",
    names=["student_id", "full_name", "favourite_food", "meal_plan", "age"],
)
students["age"] = students["age"].replace("five", 5)
students
student_id full_name favourite_food meal_plan age
0 1 Sunil Huffmann Strawberry yoghurt Lunch only 4.0
1 2 Barclay Lynn French fries Lunch only 5.0
2 3 Jayendra Lyne NaN Breakfast and lunch 7.0
3 4 Leon Rossini Anchovies Lunch only NaN
4 5 Chidiegwu Dunkel Pizza Breakfast and lunch 5.0
5 6 Güvenç Attila Ice cream Lunch only 6.0

Okay, now we can apply the data types.

students = students.astype(
    {
        "student_id": "Int64",
        "full_name": "string",
        "favourite_food": "string",
        "meal_plan": "category",
        "age": "Int64",
    }
)
students.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6 entries, 0 to 5
Data columns (total 5 columns):
 #   Column          Non-Null Count  Dtype   
---  ------          --------------  -----   
 0   student_id      6 non-null      Int64   
 1   full_name       6 non-null      string  
 2   favourite_food  5 non-null      string  
 3   meal_plan       6 non-null      category
 4   age             5 non-null      Int64   
dtypes: Int64(2), category(1), string(2)
memory usage: 462.0 bytes
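
As an aside, for columns that are already clean you could pass dtype directly to pd.read_excel() rather than converting afterwards. A minimal sketch, using the original column names from the spreadsheet (we leave out “AGE” because it mixes numbers and text):

# Specify data types at read time; only works for columns that are already clean
pd.read_excel(
    "data/students.xlsx",
    dtype={"Student ID": "Int64", "Full Name": "string", "favourite.food": "string"},
).info()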

It took multiple steps and some trial and error to load the data in exactly the format we wanted, and this is not unexpected. Data science is an iterative process; there is no way to know exactly what the data will look like until you load them and take a look. The general pattern is: load the data, take a peek, adjust your code, load the data again, and repeat until you’re happy with the result.

Reading Individual Sheets#

An important feature that distinguishes spreadsheets from flat files is the notion of multiple sheets. The figure below shows an Excel spreadsheet with multiple sheets. The data come from the palmerpenguins dataset [Horst, Hill, and Gorman, 2020]. Each sheet contains information on penguins from a different island where data were collected.

A look at the penguins spreadsheet in Excel. The spreadsheet has three sheets: Torgersen Island, Biscoe Island, and Dream Island.

You can read a single sheet using the following command (so as not to show the whole file, we’ll use .head() to just show the first 5 rows):

pd.read_excel(
    "https://github.com/aeturrell/python4DS/raw/main/data/penguins.xlsx",
    sheet_name="Torgersen Island",
).head()
species island bill_length_mm bill_depth_mm flipper_length_mm body_mass_g sex year
0 Adelie Torgersen 39.1 18.7 181.0 3750.0 male 2007
1 Adelie Torgersen 39.5 17.4 186.0 3800.0 female 2007
2 Adelie Torgersen 40.3 18.0 195.0 3250.0 female 2007
3 Adelie Torgersen NaN NaN NaN NaN NaN 2007
4 Adelie Torgersen 36.7 19.3 193.0 3450.0 female 2007

Now, this relies on us knowing the names of the sheets in advance. There will be situations where you wish to read in data without peeking into the Excel spreadsheet first. To read all sheets in, use sheet_name=None. The object that’s created is a dictionary whose keys are the sheet names and whose values are the corresponding data frames. Let’s look at the second key-value pair (note that we have to convert the keys() and values() objects to lists before retrieving the second element of each with a subscript, i.e. list(dictionary.keys())[<element number>]).

To give a sense of how this works, let’s first print all of the retrieved keys:

penguins_dict = pd.read_excel(
    "https://github.com/aeturrell/python4DS/raw/main/data/penguins.xlsx",
    sheet_name=None,
)
print([x for x in penguins_dict.keys()])
['Torgersen Island', 'Biscoe Island', 'Dream Island']

Now let’s show the name and the first few rows of the second entry’s data frame:

print(list(penguins_dict.keys())[1])
list(penguins_dict.values())[1].head()
Biscoe Island
species island bill_length_mm bill_depth_mm flipper_length_mm body_mass_g sex year
0 Adelie Biscoe 37.8 18.3 174.0 3400.0 female 2007
1 Adelie Biscoe 37.7 18.7 180.0 3600.0 male 2007
2 Adelie Biscoe 35.9 19.2 189.0 3800.0 female 2007
3 Adelie Biscoe 38.2 18.1 185.0 3950.0 male 2007
4 Adelie Biscoe 38.8 17.2 180.0 3800.0 male 2007

What we really want is for these three consistently structured datasets to be combined into a single data frame. For this, we can use the pd.concat() function, which concatenates any given iterable of data frames.

penguins = pd.concat(penguins_dict.values(), axis=0)
penguins
species island bill_length_mm bill_depth_mm flipper_length_mm body_mass_g sex year
0 Adelie Torgersen 39.1 18.7 181.0 3750.0 male 2007
1 Adelie Torgersen 39.5 17.4 186.0 3800.0 female 2007
2 Adelie Torgersen 40.3 18.0 195.0 3250.0 female 2007
3 Adelie Torgersen NaN NaN NaN NaN NaN 2007
4 Adelie Torgersen 36.7 19.3 193.0 3450.0 female 2007
... ... ... ... ... ... ... ... ...
119 Chinstrap Dream 55.8 19.8 207.0 4000.0 male 2009
120 Chinstrap Dream 43.5 18.1 202.0 3400.0 female 2009
121 Chinstrap Dream 49.6 18.2 193.0 3775.0 male 2009
122 Chinstrap Dream 50.8 19.0 210.0 4100.0 male 2009
123 Chinstrap Dream 50.2 18.7 198.0 3775.0 female 2009

344 rows × 8 columns
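
In this case the island column already tells us which sheet each row came from, but that won’t always be true. One option, sketched below, is to pass the dictionary itself to pd.concat() so that the sheet names become an index level, and then turn that level into an ordinary column (the column name "sheet" is just our choice here):

# Keep track of which sheet each row came from
penguins_by_sheet = (
    pd.concat(penguins_dict, names=["sheet", "row"])
    .reset_index(level="sheet")
    .reset_index(drop=True)
)
penguins_by_sheet.head()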

Reading Part of a Sheet#

Since many use Excel spreadsheets for presentation as well as for data storage, it’s quite common to find cell entries in a spreadsheet that are not part of the data you want to read in.

The figure below shows such a spreadsheet: in the middle of the sheet is what looks like a data frame but there is extraneous text in cells above and below the data.

A look at the deaths spreadsheet in Excel. The spreadsheet has four rows on top that contain non-data information; the text 'For the same of consistency in the data layout, which is really a beautiful thing, I will keep making notes up here.' is spread across cells in these top four rows. Then, there is a data frame that includes information on deaths of 10 famous people, including their names, professions, ages, whether they have kids or not, date of birth and death. At the bottom, there are four more rows of non-data information; the text 'This has been really fun, but we're signing off now!' is spread across cells in these bottom four rows.

This spreadsheet can be downloaded from the book’s data folder, or you can load it directly from a URL. If you want to load it from your own computer’s disk, you’ll need to save it in a sub-folder called “data” first.

The rows at the top and bottom of the sheet are not part of the data frame. We can skip the rows at the top with skiprows: setting skiprows=4 skips the first four rows of notes, so the fifth row, which contains the column names, is used as the header.

pd.read_excel("data/deaths.xlsx", skiprows=4)
Name Profession Age Has kids Date of birth Date of death
0 David Bowie musician 69 True 1947-01-08 2016-01-10 00:00:00
1 Carrie Fisher actor 60 True 1956-10-21 2016-12-27 00:00:00
2 Chuck Berry musician 90 True 1926-10-18 2017-03-18 00:00:00
3 Bill Paxton actor 61 True 1955-05-17 2017-02-25 00:00:00
4 Prince musician 57 True 1958-06-07 2016-04-21 00:00:00
5 Alan Rickman actor 69 False 1946-02-21 2016-01-14 00:00:00
6 Florence Henderson actor 82 True 1934-02-14 2016-11-24 00:00:00
7 Harper Lee author 89 False 1926-04-28 2016-02-19 00:00:00
8 Zsa Zsa Gábor actor 99 True 1917-02-06 2016-12-18 00:00:00
9 George Michael musician 53 False 1963-06-25 2016-12-25 00:00:00
10 Some NaN NaN NaN NaT NaN
11 NaN also like to write stuff NaN NaN NaT NaN
12 NaN NaN at the bottom, NaT NaN
13 NaN NaN NaN NaN NaT too!

We could also set nrows to omit the extraneous rows at the bottom (another option would be to skip a set number of rows at the end using skipfooter; see the sketch after the output below).

pd.read_excel("data/deaths.xlsx", skiprows=4, nrows=10)
Name Profession Age Has kids Date of birth Date of death
0 David Bowie musician 69 True 1947-01-08 2016-01-10
1 Carrie Fisher actor 60 True 1956-10-21 2016-12-27
2 Chuck Berry musician 90 True 1926-10-18 2017-03-18
3 Bill Paxton actor 61 True 1955-05-17 2017-02-25
4 Prince musician 57 True 1958-06-07 2016-04-21
5 Alan Rickman actor 69 False 1946-02-21 2016-01-14
6 Florence Henderson actor 82 True 1934-02-14 2016-11-24
7 Harper Lee author 89 False 1926-04-28 2016-02-19
8 Zsa Zsa Gábor actor 99 True 1917-02-06 2016-12-18
9 George Michael musician 53 False 1963-06-25 2016-12-25
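
As a sketch of the skipfooter alternative mentioned above, the following drops the four note rows at the end of the sheet instead of specifying the number of data rows to keep:

# skipfooter removes rows from the end of the sheet, so we don't need to
# know how many data rows there are in advance
pd.read_excel("data/deaths.xlsx", skiprows=4, skipfooter=4)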

Data Types#

In CSV files, all values are strings. This is not particularly true to the data, but it is simple: everything is a string.

The underlying data in Excel spreadsheets is more complex. A cell can be one of five things:

  • A logical, like TRUE / FALSE

  • A number, like “10” or “10.5”

  • A date, which can also include time like “11/1/21” or “11/1/21 3:00 PM”

  • A string, like “ten”

  • A currency, which allows numeric values in a limited range and four decimal digits of fixed precision

When working with spreadsheet data, it’s important to keep in mind that how the underlying data are stored can be very different from what you see in the cell. For example, Excel has no notion of an integer: all numbers are stored as floating points (real numbers), but you can choose to display the data with a customisable number of decimal places. Similarly, dates are actually stored as numbers, specifically the number of days since January 1, 1900, with times stored as fractions of a day. You can customise how the date is displayed by applying formatting in Excel. Confusingly, it’s also possible to have something that looks like a number but is actually a string (e.g. type '10 into a cell in Excel).

These differences between how the underlying data are stored vs. how they’re displayed can cause surprises when the data are loaded into analytical tools such as pandas. By default, pandas will guess the data type in a given column. A recommended workflow is to let pandas guess the column types initially, inspect them, and then change any data types that you want to.
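
As a minimal sketch of that workflow, re-using the deaths spreadsheet from earlier (assuming it is saved in the “data” sub-folder):

# Load the data and inspect what pandas guessed for each column
deaths = pd.read_excel("data/deaths.xlsx", skiprows=4, nrows=10)
deaths.info()
# Then fix anything you disagree with, e.g. make sure the date columns are
# true datetimes and that "Has kids" uses the nullable boolean type
deaths["Date of death"] = pd.to_datetime(deaths["Date of death"])
deaths["Has kids"] = deaths["Has kids"].astype("boolean")
deaths.info()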

Writing to Excel#

Let’s create a small data frame that we can then write out. Note that item is a category and quantity is an integer.

bake_sale = pd.DataFrame(
    {"item": pd.Categorical(["brownie", "cupcake", "cookie"]), "quantity": [10, 5, 8]}
)
bake_sale
item quantity
0 brownie 10
1 cupcake 5
2 cookie 8

You can write data back to disk as an Excel file using the .to_excel() method of a data frame. The index=False keyword argument writes just the two columns, without the index that pandas added automatically when the data frame was created.

bake_sale.to_excel("data/bake_sale.xlsx", index=False)

The figure below shows what the data looks like in Excel.

Bake sale data frame created earlier in Excel.

Just as when reading from a CSV, information on data types is lost when we read the data back in: you can see this if you re-load the file and check info(). Although the quantity column keeps int64, because pandas recognises that it contains integers, we lose the categorical data type for “item”. This data type loss makes Excel files unreliable for caching interim results (see the sketch after the output below).

pd.read_excel("data/bake_sale.xlsx").info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 2 columns):
 #   Column    Non-Null Count  Dtype 
---  ------    --------------  ----- 
 0   item      3 non-null      object
 1   quantity  3 non-null      int64 
dtypes: int64(1), object(1)
memory usage: 176.0+ bytes
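
If you do use Excel files as an interim format, one workaround, sketched below, is to re-apply the lost data types as soon as you read the file back in:

# Restore the category type that was lost in the Excel round trip
reloaded = pd.read_excel("data/bake_sale.xlsx")
reloaded["item"] = reloaded["item"].astype("category")
reloaded.info()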

Formatted Output#

If you need more formatting options and more control over how you write spreadsheets, check out the documentation for openpyxl, which can do pretty much everything you can imagine. Generally, releasing data in spreadsheets is not the best option; but if you do want to release data in spreadsheets according to best practice, then check out gptables.
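
To give a flavour of what is possible, here is a small sketch that writes the bake sale data with a bold header row using openpyxl directly (the output file name is just an example):

from openpyxl import Workbook
from openpyxl.styles import Font

wb = Workbook()
ws = wb.active
ws.title = "bake sale"
ws.append(["item", "quantity"])  # header row
for item, quantity in zip(bake_sale["item"], bake_sale["quantity"]):
    ws.append([str(item), int(quantity)])  # plain Python types for openpyxl
for cell in ws[1]:
    cell.font = Font(bold=True)  # make the header bold
wb.save("data/bake_sale_formatted.xlsx")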