Getting Data#

Introduction#

In this chapter, you’ll learn about some different ways to extract or obtain data: from the web, from documents, and elsewhere. This chapter uses packages such as pandas-datareader and BeautifulSoup that you may need to install first.

Imports#

First we need to import the packages we’ll be using:

import textwrap

import matplotlib.pyplot as plt
import pandas as pd
import requests
from bs4 import BeautifulSoup

Extracting data from files on the internet#

As you will have seen in some of the examples in this book, it’s easy to read data from the internet once you have the URL and file type. Here, for instance, is an example that reads in the ‘storms’ dataset (we’ll only grab the first 10 rows):

pd.read_csv(
    "https://vincentarelbundock.github.io/Rdatasets/csv/dplyr/storms.csv", nrows=10
)
|   | rownames | name | year | month | day | hour | lat | long | status | category | wind | pressure | tropicalstorm_force_diameter | hurricane_force_diameter |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | Amy | 1975 | 6 | 27 | 0 | 27.5 | -79.0 | tropical depression | NaN | 25 | 1013 | NaN | NaN |
| 1 | 2 | Amy | 1975 | 6 | 27 | 6 | 28.5 | -79.0 | tropical depression | NaN | 25 | 1013 | NaN | NaN |
| 2 | 3 | Amy | 1975 | 6 | 27 | 12 | 29.5 | -79.0 | tropical depression | NaN | 25 | 1013 | NaN | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 7 | 8 | Amy | 1975 | 6 | 28 | 18 | 34.0 | -77.0 | tropical depression | NaN | 30 | 1006 | NaN | NaN |
| 8 | 9 | Amy | 1975 | 6 | 29 | 0 | 34.4 | -75.8 | tropical storm | NaN | 35 | 1004 | NaN | NaN |
| 9 | 10 | Amy | 1975 | 6 | 29 | 6 | 34.0 | -74.8 | tropical storm | NaN | 40 | 1002 | NaN | NaN |

10 rows × 14 columns

Obtaining data using APIs#

Using an API (application programming interface) is another way to draw down information from the interweb. APIs are just a way for one tool, say Python, to speak to another tool, say a server, and usefully exchange information. The classic use case is to post a request for data that fits a certain query via an API and to get a download of that data back in return. (You should always prefer an API over webscraping a site; you can read more about why APIs are so great here.)

Because they are designed to work with any tool, you don’t actually need a programming language to interact with an API; it’s just a lot easier if you do.

Note

An API key is needed in order to access some APIs. Sometimes all you need to do is register with the site; in other cases, you may have to pay for access.

To see this, let’s directly use an API to get some time series data. We will make the call out to the internet using the requests package.

An API has an ‘endpoint’, the base URL, followed by a URL that encodes the question you’re asking. Let’s see an example with the ONS API, for which the endpoint is “https://api.beta.ons.gov.uk/v1”. After the endpoint comes the information that filters down to the specific time series we’re interested in. This is a long string! If you don’t know the ‘data’ part of an ONS API call, you can use the API itself to search for a time series by its ID. For example, to get the full URL of the services vacancies series used below, we first ran “https://api.beta.ons.gov.uk/v1/search?content_type=timeseries&cdids=JP9Z” in a browser and extracted the “uri” part of the response. That uri then forms the end of the API call, completing the URL. You can check all of these URLs yourself by pasting them into your internet browser’s address bar.
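If you’d rather do that search step in Python than in a browser, here’s a minimal, hedged sketch using requests. The exact keys in the JSON response (such as “items” and “uri”) are assumptions here; print the response and inspect it before relying on them.

# Search the ONS API for time series with the ID JP9Z
search_url = "https://api.beta.ons.gov.uk/v1/search?content_type=timeseries&cdids=JP9Z"
search_results = requests.get(search_url).json()
print(search_results)  # inspect the structure to find the "uri" field
# Assuming the results sit under an "items" key, something like this would recover the uri:
# uri = search_results["items"][0]["uri"]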

The data returned by APIs are typically in JSON format, which looks a lot like a nested Python dictionary, and its entries can be accessed in the same way; this is what is happening when we grab the series’ title in the example below. JSON isn’t a convenient format for analysis, though, so we’ll use pandas to put the data into shape.

url = "https://api.beta.ons.gov.uk/v1/data?uri=/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/timeseries/jp9z/lms/previous/v108"

# Get the data from the ONS API:
json_data = requests.get(url).json()

# Prep the data for a quick plot
title = json_data["description"]["title"]
df = (
    pd.DataFrame(pd.json_normalize(json_data["months"]))
    .assign(
        date=lambda x: pd.to_datetime(x["date"], format="%Y %b"),
        value=lambda x: pd.to_numeric(x["value"]),
    )
    .set_index("date")
)

df["value"].plot(title=title, ylim=(0, df["value"].max() * 1.2), lw=3.0);
[Figure: line chart of the downloaded ONS time series, with the title taken from the API’s metadata.]
pd.DataFrame(pd.json_normalize(json_data["months"]))
|   | date | value | label | year | month | quarter | sourceDataset | updateDate |
|---|---|---|---|---|---|---|---|---|
| 0 | 2001 MAY | 568 | 2001 APR-JUN | 2001 | May | | LMS | 2022-04-11T23:00:00.000Z |
| 1 | 2001 JUN | 563 | 2001 MAY-JUL | 2001 | June | | LMS | 2024-04-15T23:00:00.000Z |
| 2 | 2001 JUL | 554 | 2001 JUN-AUG | 2001 | July | | LMS | 2023-04-17T23:00:00.000Z |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 278 | 2024 JUL | 742 | 2024 JUN-AUG | 2024 | July | | LMS | 2024-11-12T00:00:00.000Z |
| 279 | 2024 AUG | 732 | 2024 JUL-SEP | 2024 | August | | LMS | 2024-11-12T00:00:00.000Z |
| 280 | 2024 SEP | 727 | 2024 AUG-OCT | 2024 | September | | LMS | 2024-11-12T00:00:00.000Z |

281 rows × 8 columns

We’ve talked about reading APIs. You can also create your own to serve up data, models, whatever you like! This is an advanced topic and we won’t cover it; but if you do need to, the simplest way is to use FastAPI. You can find some short video tutorials for FastAPI here.
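Just to give a flavour (this is a hedged, minimal sketch rather than a how-to; the /example endpoint and the numbers it returns are invented for illustration), a FastAPI app that serves up some data can be as short as:

# A minimal FastAPI sketch: save as main.py and run with `uvicorn main:app --reload`
from fastapi import FastAPI

app = FastAPI()


@app.get("/example")
def read_example():
    # In practice, you might pull these values from a database or a dataframe
    return {"dates": ["2023-01", "2023-02"], "values": [3.9, 4.0]}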

Pandas Datareader: an easier way to interact with (some) APIs#

Although it didn’t take much code to get the ONS data, it would be even better if it were just a single line, wouldn’t it? Fortunately, there are some packages out there that make this easy, but it does depend on the API (and APIs come and go over time).

By far the most comprehensive library for accessing a wide range of data APIs is pandas-datareader, which provides convenient access to:

  • FRED

  • Quandl

  • World Bank

  • OECD

  • Eurostat

and more.

Let’s see an example using FRED (the Federal Reserve Bank of St. Louis’ economic data library). This time, let’s look at the UK unemployment rate:

import pandas_datareader.data as web

df_u = web.DataReader("LRHUTTTTGBM156S", "fred")

df_u.plot(title="UK unemployment (percent)", legend=False, ylim=(2, 6), lw=3.0);
[Figure: line chart of the UK unemployment rate (percent) from FRED.]
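If you want a specific sample period rather than whatever pandas-datareader returns by default, DataReader also accepts start and end arguments. Here’s a hedged sketch with an arbitrarily chosen date range:

# Grab the same FRED unemployment series for a fixed window (dates chosen arbitrarily)
df_u_window = web.DataReader(
    "LRHUTTTTGBM156S", "fred", start="2010-01-01", end="2020-12-31"
)
df_u_window.head()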

And, because it’s also a really useful one, let’s also see how to use pandas-datareader to access World Bank data.

# World Bank CO2 equivalent emissions (metric tons per capita)
# https://data.worldbank.org/indicator/EN.GHG.ALL.PC.CE.AR5
# World Bank pop
# https://data.worldbank.org/indicator/SP.POP.TOTL
# country and region codes at http://api.worldbank.org/v2/country
import textwrap

from pandas_datareader import wb

df = wb.download(  # download the data from the world bank
    indicator="EN.GHG.ALL.PC.CE.AR5",  # indicator code
    country=["US", "CHN", "IND", "Z4", "Z7"],  # country codes
    start=2019,  # start year
    end=2019,  # end year
)
df = df.reset_index()  # remove country as index
df["country"] = df["country"].apply(lambda x: textwrap.fill(x, 10))  # wrap long names
df = df.sort_values("EN.GHG.ALL.PC.CE.AR5")  # re-order
df.head()
|   | country | year | EN.GHG.ALL.PC.CE.AR5 |
|---|---|---|---|
| 3 | India | 2019 | 2.684385 |
| 1 | East Asia\n& Pacific | 2019 | 8.768054 |
| 2 | Europe &\nCentral\nAsia | 2019 | 9.451536 |
| 0 | China | 2019 | 10.359449 |
| 4 | United\nStates | 2019 | 18.718205 |
import seaborn as sns

fig, ax = plt.subplots()
sns.barplot(x="country", y="EN.GHG.ALL.PC.CE.AR5", data=df.reset_index(), ax=ax)
ax.set_title(r"CO$_2$ equivalent (metric tons per capita)", loc="right")
plt.suptitle("The USA leads the world on per-capita emissions", y=1.01)
for key, spine in ax.spines.items():
    spine.set_visible(False)
ax.set_ylabel("")
ax.set_xlabel("")
ax.yaxis.tick_right()
plt.show()
[Figure: bar chart titled “The USA leads the world on per-capita emissions”, showing CO2 equivalent emissions (metric tons per capita) for the selected countries and regions.]

The OECD API#

Sometimes it’s convenient to use an API directly. As an example, the OECD API offers a LOT of flexibility that direct access can take advantage of. The OECD API makes data available in both JSON and XML formats, and we’ll use pandasdmx (aka the Statistical Data and Metadata eXchange (SDMX) package for the Python data ecosystem) to pull down the XML-format data and turn it into a regular pandas dataframe.

Now, key to using the OECD API is knowledge of its many codes: for countries, times, resources, and series. You can find some broad guidance on what codes the API uses here, but finding exactly what you need can be a bit tricky. Two tips are:

  1. If you know what you’re looking for is in a particular named dataset, eg “QNA” (Quarterly National Accounts), put https://stats.oecd.org/restsdmx/sdmx.ashx/GetDataStructure/QNA/all?format=SDMX-ML into your browser and look through the XML file; you can pick out the sub-codes and the countries that are available (see the sketch after this list for how to fetch that file from within Python).

  2. Browse around on https://stats.oecd.org/, use Customise, and then check all the “Use Codes” boxes to see the code names for whatever you’re browsing.
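If you’d rather stay in Python for the first of those tips, you can pull the data-structure file down with requests. This minimal sketch just prints the start of the (large) XML document rather than parsing it properly:

# Fetch the SDMX data-structure definition for the QNA dataset and peek at it
structure_url = (
    "https://stats.oecd.org/restsdmx/sdmx.ashx/GetDataStructure/QNA/all?format=SDMX-ML"
)
xml_text = requests.get(structure_url).text
print(xml_text[:500])  # the full document lists the sub-codes and available countries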

Let’s see an example of this in action. We’d like to see productivity (GDP per employed worker) data for a range of countries since 2010. We are going to be in the productivity resource (code “PDB_LV”) and we want the USD current prices (code “CPC”) measure of GDP per employed worker (code “T_GDPEMP”) from 2010 onwards (code “startTime=2010”). We’ll grab this for some developed countries where productivity measurements might be slightly more comparable. The comments below explain what’s happening in each step.

import pandasdmx as pdmx
# Tell pdmx we want OECD data
oecd = pdmx.Request("OECD")
# Set out everything about the request in the format specified by the OECD API
data = oecd.data(
    resource_id="PDB_LV",
    key="GBR+FRA+CAN+ITA+DEU+JPN+USA.T_GDPEMP.CPC/all?startTime=2010",
).to_pandas()

df = pd.DataFrame(data).reset_index()
df.head()
|   | LOCATION |  SUBJECT | MEASURE | TIME_PERIOD |        value |
|--:|---------:|---------:|--------:|------------:|-------------:|
| 0 |      CAN | T_GDPEMP |     CPC |        2010 | 78848.604088 |
| 1 |      CAN | T_GDPEMP |     CPC |        2011 | 81422.364748 |
| 2 |      CAN | T_GDPEMP |     CPC |        2012 | 82663.028058 |
| 3 |      CAN | T_GDPEMP |     CPC |        2013 | 86368.582158 |
| 4 |      CAN | T_GDPEMP |     CPC |        2014 | 89617.632446 |

Great, that worked! We have the data in a nice tidy format.

Other Useful APIs#

  • There is a regularly updated list of APIs over at this public APIs repo on GitHub. It doesn’t have an economics section (yet), but it has a LOT of other APIs.

  • Berkeley Library maintains a list of economics APIs that is well worth looking through.

  • NASDAQ Data Link, which has a great deal of financial data.

  • DBnomics: publicly-available economic data provided by national and international statistical institutions, but also by researchers and private companies.

Webscraping#

Webscraping is a way of grabbing information from the internet that was intended to be displayed in a browser. But it should only be used as a last resort, and only then when permitted by the terms and conditions of a website.

If you’re getting data from the internet, it’s much better to use an API whenever you can: grabbing information in a structured way is exactly why APIs exist. APIs should also be more stable than websites, which may change frequently. Typically, if an organisation is happy for you to grab their data, they will have made an API expressly for that purpose. It’s pretty rare for a major website to permit webscraping while not offering an API; if a site doesn’t have an API, chances are that scraping is against its terms and conditions. Those terms and conditions may be enforceable by law (different rules apply in different countries here, and you really need legal advice if it’s not unambiguous whether you can scrape or not).

There are other reasons why webscraping is second-best; for example, if you need a back-run of the data, it might be offered through an API but not shown on the webpage. (Or it might not be available at all, in which case it’s best to get in touch with the organisation, or check out the Wayback Machine in case it took snapshots.)

So this book is pretty down on webscraping as there’s almost always a better solution. However, there are times when it is useful.

If you do find yourself in a scraping situation, be really sure to check that it’s legally allowed and also that you are not violating the website’s robots.txt rules: this is a special file, present on almost every website, that sets out what’s fair game to crawl (conditional on legality) and what robots should not go poking around in.
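Python’s standard library can help with the robots.txt part of that check. Here’s a minimal sketch using urllib.robotparser (the site is just the one we scrape below, used for illustration):

# Check whether a generic crawler ("*") is allowed to fetch a given page
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://aeturrell.com/robots.txt")
rp.read()  # download and parse the robots.txt file
print(rp.can_fetch("*", "http://aeturrell.com/research"))  # True if allowed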

In Python, you are spoiled for choice when it comes to webscraping. There are five very strong libraries that cover a real range of user styles and needs: requests, lxml, beautifulsoup, selenium, and scrapy.

For quick and simple webscraping, my usual combo would be requests, which does little more than go and grab the HTML of a webpage, and beautifulsoup, which then helps you to navigate the structure of the page and pull out what you’re actually interested in. For dynamic webpages that use JavaScript rather than just HTML, you’ll need selenium. To scale up and hit thousands of webpages in an efficient way, you might try scrapy, which can work with the other tools and handle multiple sessions, and all other kinds of bells and whistles… it’s actually a “web scraping framework”.

It’s always helpful to see coding in practice, so that’s what we’ll do now, but note that we’ll be skipping over a lot of important detail such as user agents, being ‘polite’ with your scraping requests, and being efficient with caching and crawling.
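As a taste of what being ‘polite’ involves, here’s a hedged sketch of two basics: identifying your scraper with a user agent header and pausing between requests. The header text, URLs, and delay are arbitrary placeholders.

import time

# Identify yourself to the server and avoid hammering it with rapid-fire requests
headers = {"User-Agent": "my-research-scraper (contact: me@example.com)"}
for url in ["https://example.com/page1", "https://example.com/page2"]:
    response = requests.get(url, headers=headers)
    time.sleep(2)  # wait a couple of seconds between requests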

In lieu of a better example, let’s scrape http://aeturrell.com/

url = "http://aeturrell.com/research"
page = requests.get(url)
page.text[:300]
'<!DOCTYPE html>\n<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"><head>\n\n<meta charset="utf-8">\n<meta name="generator" content="quarto-1.6.39">\n\n<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">\n\n<meta name="author" content="Arthur Turrell">\n'

Okay, what just happened? We asked requests to grab the HTML of the webpage and then printed the first 300 characters of the text that it found.

Let’s now parse this into something humans can read (or can read more easily) using beautifulsoup:

soup = BeautifulSoup(page.text, "html.parser")
print(soup.prettify()[60000:60500])
TJDdmFjYW5jaWVzJTJDQ09WSUQtMTk=" data-index="1" data-listing-date-modified-sort="NaN" data-listing-date-sort="1651359600000" data-listing-file-modified-sort="1687564711698" data-listing-reading-time-sort="1" data-listing-word-count-sort="182">
         <div class="project-content listing-pub-info">
          <p>
           Draca, Mirko, Emma Duchini, Roland Rathelot, Arthur Turrell, and Giulia Vattuone. Revolution in Progress? The Rise of Remote Work in the UK.
           <i>
            Univers

Now we can see more of the page’s structure, including HTML tags such as ‘div’ and ‘p’. Next comes the data extraction part: say we want to pull out every paragraph of text; we can use beautifulsoup to skim down the HTML structure and pull out only those parts with the paragraph tag (‘p’).

# Get all paragraphs
all_paras = soup.find_all("p")
# Just show one of the paras
all_paras[1]
<p>Blundell, Jack, Emma Duchini, Stefania Simion, and Arthur Turrell. "Pay transparency and gender equality." <i>American Economic Journal: Economic Policy</i> (2024). doi: <a href="https://www.aeaweb.org/articles?id=10.1257/pol.20220766&amp;from=f"><code>10.1257/pol.20220766</code></a></p>

To make this more readable, you can use the .text method:

all_paras[1].text
'Blundell, Jack, Emma Duchini, Stefania Simion, and Arthur Turrell. "Pay transparency and gender equality." American Economic Journal: Economic Policy (2024). doi: 10.1257/pol.20220766'

Now let’s say we didn’t care about most of the page, we only wanted to get hold of the names of projects. For this we need to identify the tag type of the element we’re interested in, in this case ‘div’, and its class, in this case “project-content listing-pub-info”. We do it like this (and extract clean text in the process):

projects = soup.find_all("div", class_="project-content listing-pub-info")
projects = [x.text.strip() for x in projects]
projects[:4]
['Blundell, Jack, Emma Duchini, Stefania Simion, and Arthur Turrell. "Pay transparency and gender equality." American Economic Journal: Economic Policy (2024). doi: 10.1257/pol.20220766',
 'Botta, Federico, Robin Lovelace, Laura Gilbert, and Arthur Turrell. "Packaging code and data for reproducible research: A case study of journey time statistics." Environment and Planning B: Urban Analytics and City Science (2024): 23998083241267331. doi: 10.1177/23998083241267331',
 'Kalamara, Eleni, Arthur Turrell, Chris Redl, George Kapetanios, and Sujit Kapadia. "Making text count: economic forecasting using newspaper text." Journal of Applied Econometrics 37, no. 5 (2022): 896-919. doi: 10.1002/jae.2907',
 'Turrell, A., Speigner, B., Copple, D., Djumalieva, J. and Thurgood, J., 2021. Is the UK’s productivity puzzle mostly driven by occupational mismatch? An analysis using big data on job vacancies. Labour Economics, 71, p.102013. doi: 10.1016/j.labeco.2021.102013']

Hooray! We managed to get the information we wanted: all we needed to know was the right tags. A good tip for finding the tags of the info you want is to look at the page in your browser (eg Google Chrome), right-click on the bit you’re interested in, and then hit ‘Inspect’. This will show you the HTML element of the bit of the page you clicked on.

That’s almost it for this very, very brief introduction to webscraping. We’ll just see one more thing: how to iterate over multiple pages.

Imagine we had a root webpage such as “www.codingforeconomists.com” which had subpages such as “www.codingforeconomists.com/page=1”, “www.codingforeconomists.com/page=2”, and so on. One need only create the URL strings to pass into a function that scrapes each one and returns the relevant data. For example, for the first 50 pages, and with a (hypothetical) function called scraper(), one might run:

start, stop = 1, 51  # page numbers run from 1 to 50 inclusive
root_url = "www.codingforeconomists.com/page="
info_on_pages = [scraper(root_url + str(i)) for i in range(start, stop)]

That’s all we’ll cover here, but remember we’ve barely scraped the surface of this big, complex topic. If you want to read about an application, it’s hard not to recommend the paper on webscraping that has undoubtedly changed the world the most, and very likely has affected your own life in numerous ways: “The PageRank Citation Ranking: Bringing Order to the Web” by Page, Brin, Motwani and Winograd. For a more in-depth example of webscraping, check out realpython’s tutorial.

Webscraping Tables#

Often you don’t actually want to scrape an entire webpage; all you want is the data from a single table within the page. Fortunately, there is an easy way to scrape individual tables using the pandas package.

We will read data from the first table on ‘https://simple.wikipedia.org/wiki/FIFA_World_Cup’ using pandas. The function we’ll use is read_html(), which returns a list of dataframes of all the tables it finds when you pass it a URL. If you want to filter the list of tables, use the match= keyword argument with text that only appears in the table(s) you’re interested in.

The example below shows how this works. Looking at the website, we can see that the table we’re interested in (of past World Cup results) contains the text ‘Sweden’ (in its ‘Third place’ and ‘Fourth place’ columns), which the other tables on the page do not, so we pass match="Sweden" to pick it out. Therefore we run:

df_list = pd.read_html(
    "https://simple.wikipedia.org/wiki/FIFA_World_Cup", match="Sweden"
)
# Retrieve first and only entry from list of dataframes
df = df_list[0]
df.head()
|   | Years | Hosts | Winners | Score | Runner's-up | Third place | Score.1 | Fourth place |
|---|---|---|---|---|---|---|---|---|
| 0 | 1930 Details | Uruguay | Uruguay | 4 - 2 | Argentina | United States | [note 1] | Yugoslavia |
| 1 | 1934 Details | Italy | Italy | 2 - 1 | Czechoslovakia | Germany | 3 - 2 | Austria |
| 2 | 1938 Details | France | Italy | 4 - 2 | Hungary | Brazil | 4 - 2 | Sweden |
| 3 | 1950 Details | Brazil | Uruguay | 2 - 1 | Brazil | Sweden | [note 2] | Spain |
| 4 | 1954 Details | Switzerland | West Germany | 3 - 2 | Hungary | Austria | 3 - 1 | Uruguay |

This gives us the table neatly loaded into a pandas dataframe ready for further use.

Tables in PDFs#

The single best solution for grabbing tables out of PDFs is probably camelot. Note that it does need you to have Ghostscript installed on your computer; you can find more information about the dependencies here. It only works with text-based PDFs and not scanned documents: basically, if you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based. In that case, camelot is able to sift through the contents, grab any tables, and then pass them back as CSVs or even pandas dataframes.

At the time of writing, camelot had some versioning issues related to a dependency on an outdated version of sqlalchemy. You may need to install it in a separate virtual environment to use it.

Here’s a small example that assumes you have a PDF with a table in it stored in a local directory:

import camelot
import os

# Grab the pdf and extract any tables it contains
tables = camelot.read_pdf(os.path.join("data", "pdf_with_table.pdf"))

To extract any one of the retrieved tables into a pandas dataframe, use, for example, tables[0].df for the first one.

Note that camelot is not perfect, so it produces a report on how it did when it tried to extract each table, which includes an accuracy score. This is found using, for example, tables[0].parsing_report.
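Putting those pieces together, a typical (hedged) camelot workflow might look like the following, assuming the tables object from the example above:

# Inspect how well the first table was extracted, then use or export the results
print(tables[0].parsing_report)  # includes an accuracy score
df_table = tables[0].df  # the extracted table as a pandas dataframe
tables.export("extracted_tables.csv", f="csv")  # write all found tables out as CSVs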

Other useful PDF packages#

pdfcomments is a library that allows you to strip out comments and sticky notes from PDFs. It’s not strictly about data extraction, but PyPDF2 allows you to both split a PDF into separate pages and merge multiple PDFs together; see here for the steps. (You can drag and drop pages using Preview on macOS, but this library may be the easiest way to do the same thing on Windows.)
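As a flavour of PyPDF2, here’s a minimal, hedged sketch of merging two PDFs; the filenames are placeholders and the exact class names can differ between PyPDF2 versions.

# Merge two PDFs into one using PyPDF2
from PyPDF2 import PdfMerger

merger = PdfMerger()
for fname in ["part1.pdf", "part2.pdf"]:
    merger.append(fname)  # add each file's pages in order
merger.write("combined.pdf")
merger.close()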

Text extraction#

Not everything is a PDF file! If you want to get the text out of .doc, .docx, .epub, .gif, .json, .jpg, .mp3, .odt, .pptx, .rtf, .xlsx, .xls and, actually, .pdf too, then textract is for you. Mostly, it’s a wrapper around a ton of other libraries. The upside is that getting the text out should be as easy as calling

from pathlib import Path

import textract

text = textract.process(Path("path/to/file.extension"))

The downside is that it requires that some other (non-Python) libraries be installed and it doesn’t (yet) work on Windows.

Review#

If you know how to get data from:

  • ✅ the internet using a URL;

  • ✅ the internet using an API;

  • ✅ the internet using webscraping; and

  • ✅ PDFs, Microsoft Word documents, and more, using tools like camelot and textract

then you have a good basic set of tools for getting the data that you need into a form you can use.