26. Webscraping and APIs#

26.1. Introduction#

This chapter will show you how to work with online data, whether obtained from webpages via webscraping or more directly over the internet via an API. An important principle is always to use an API if one is available, as APIs are designed to pass information directly into your Python session and will save you a lot of effort.

26.1.2. Prerequisites#

You will need to install the pandas package for this chapter. We’ll use lets-plot too, which you should already have installed. You will also need to install the beautifulsoup, pandas-datareader, and pandasdmx packages in your terminal using pip install beautifulsoup4, pip install pandas-datareader, and pip install pandasdmx respectively. We’ll also use two built-in packages, textwrap and requests.

To kick off, let’s import some of the packages we need (it’s always good practice to import the packages you need at the top of a script or notebook).

import requests
import textwrap
import pandas as pd
from bs4 import BeautifulSoup
from lets_plot import *

LetsPlot.setup_html()

26.2. Extracting Data from Files on the Internet using pandas#

It’s easy to read data from the internet once you have the URL and file type. Here, for instance, is an example that reads in the ‘storms’ dataset, which is stored as a CSV file at a URL (we’ll only grab the first 10 rows):

pd.read_csv(
    "https://vincentarelbundock.github.io/Rdatasets/csv/dplyr/storms.csv", nrows=10
)
rownames name year month day hour lat long status category wind pressure tropicalstorm_force_diameter hurricane_force_diameter
0 1 Amy 1975 6 27 0 27.5 -79.0 tropical depression NaN 25 1013 NaN NaN
1 2 Amy 1975 6 27 6 28.5 -79.0 tropical depression NaN 25 1013 NaN NaN
2 3 Amy 1975 6 27 12 29.5 -79.0 tropical depression NaN 25 1013 NaN NaN
3 4 Amy 1975 6 27 18 30.5 -79.0 tropical depression NaN 25 1013 NaN NaN
4 5 Amy 1975 6 28 0 31.5 -78.8 tropical depression NaN 25 1012 NaN NaN
5 6 Amy 1975 6 28 6 32.4 -78.7 tropical depression NaN 25 1012 NaN NaN
6 7 Amy 1975 6 28 12 33.3 -78.0 tropical depression NaN 25 1011 NaN NaN
7 8 Amy 1975 6 28 18 34.0 -77.0 tropical depression NaN 30 1006 NaN NaN
8 9 Amy 1975 6 29 0 34.4 -75.8 tropical storm NaN 35 1004 NaN NaN
9 10 Amy 1975 6 29 6 34.0 -74.8 tropical storm NaN 40 1002 NaN NaN
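
pandas has other keyword arguments that come in handy when reading remote files; for example, here’s a minimal sketch that grabs only a few of the columns we saw above (usecols, like nrows, is a standard read_csv option):

pd.read_csv(
    "https://vincentarelbundock.github.io/Rdatasets/csv/dplyr/storms.csv",
    nrows=10,
    usecols=["name", "year", "status", "wind", "pressure"],
)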

26.3. Obtaining data using APIs#

Using an API (application programming interface) is another way to draw down information from the internet. They’re just a way for one tool, say Python, to speak to another tool, say a server, and usefully exchange information. The classic use case is to post a request for data that fits a certain query via an API and to get a download of that data back in return. (You should always preferentially use an API over webscraping a site.)

Because they are designed to work with any tool, you don’t actually need a programming language to interact with an API, it’s just a lot easier if you do.

Note

An API key is needed in order to access some APIs. Sometimes all you need to do is register with the site; in other cases you may have to pay for access.

To see this, let’s directly use an API to get some time series data. We will make the call out to the internet using the requests package.

An API has an ‘endpoint’, the base url, and then a URL that encodes the question. Let’s see an example with the ONS API for which the endpoint is “https://api.ons.gov.uk/”. The rest of the API has the form ‘key/value’, for example we’ll ask for timeseries data ‘timeseries’ followed by ‘JP9Z’ for the vacancies in the UK services sector. We then ask for ‘dataset’ followed by ‘UNEM’ to specify which overarching dataset the series we want is in. The last part asks for the data with ‘data’. Often you won’t need to know all of these details, but it’s useful to see a detailed example.
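
To make that structure concrete, here’s a minimal sketch of how the URL is assembled from the endpoint and the key/value pairs described above (the variable names are just for illustration):

endpoint = "https://api.ons.gov.uk/"
series_code = "JP9Z"  # vacancies in the UK services sector
dataset_code = "UNEM"  # the overarching dataset
url = endpoint + "timeseries/" + series_code + "/dataset/" + dataset_code + "/data"
print(url)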

The data that are returned by APIs are typically in JSON format, which looks a lot like a nested Python dictionary, and its entries can be accessed in the same way; this is what is happening when getting the series’ title in the example below. JSON is not a convenient format for analysis, though, so we’ll use pandas to put the data into shape.

url = "https://api.ons.gov.uk/timeseries/JP9Z/dataset/UNEM/data"

# Get the data from the ONS API:
json_data = requests.get(url).json()

# Prep the data for a quick plot
title = json_data["description"]["title"]
df = (
    pd.DataFrame(pd.json_normalize(json_data["months"]))
    .assign(
        date=lambda x: pd.to_datetime(x["date"]),
        value=lambda x: pd.to_numeric(x["value"]),
    )
    .set_index("date")
)

df["value"].plot(title=title, ylim=(0, df["value"].max() * 1.2), lw=3.0);
[Figure: line plot of the UK services sector vacancies time series, with the title taken from the API]

We’ve talked about reading APIs. You can also create your own to serve up data, models, whatever you like! This is an advanced topic and we won’t cover it; but if you do need to, the simplest way is to use FastAPI. You can find some short video tutorials for FastAPI here.
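
Just to give a flavour of what serving data looks like, here’s a minimal sketch of a FastAPI app (the endpoint name and the data returned are made up for illustration):

from fastapi import FastAPI

app = FastAPI()


@app.get("/data")
def read_data():
    # FastAPI converts Python dictionaries to JSON automatically
    return {"series": "an example series", "values": [1, 2, 3]}

You would then serve this locally with a command along the lines of uvicorn your_module:app, where your_module is the name of the file containing the code.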

26.3.1. Pandas Datareader: an easier way to interact with (some) APIs#

Although it didn’t take much code to get the ONS data, it would be even better if it was just a single line, wouldn’t it? Fortunately there are some packages out there that make this easy, but it does depend on the API (and APIs come and go over time).

By far the most comprehensive library for accessing a wide range of APIs is pandas-datareader, which provides convenient access to:

  • FRED

  • Quandl

  • World Bank

  • OECD

  • Eurostat

and more.

Let’s see an example using FRED (the Federal Reserve Bank of St. Louis’ economic data library). This time, let’s look at the UK unemployment rate:

import pandas_datareader.data as web

df_u = web.DataReader("LRHUTTTTGBM156S", "fred")

df_u.plot(title="UK unemployment (percent)", legend=False, ylim=(2, 6), lw=3.0);
[Figure: line plot of the UK unemployment rate (percent) from FRED]
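
If you want a specific time window rather than the default range, DataReader also accepts start and end arguments; a minimal sketch (the dates are chosen just for illustration):

df_u = web.DataReader(
    "LRHUTTTTGBM156S", "fred", start="2010-01-01", end="2020-01-01"
)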

And, because it’s also a really useful one, let’s also see how to use pandas-datareader to access World Bank data.

# World Bank CO2 emissions (metric tons per capita)
# https://data.worldbank.org/indicator/EN.ATM.CO2E.PC
# World Bank pop
# https://data.worldbank.org/indicator/SP.POP.TOTL
# country and region codes at http://api.worldbank.org/v2/country
from pandas_datareader import wb

df = wb.download(
    indicator="EN.ATM.CO2E.PC",
    country=["US", "CHN", "IND", "Z4", "Z7"],
    start=2017,
    end=2017,
)
# remove country from the index for ease of plotting with lets-plot
df = df.reset_index()
# wrap long country names
df["country"] = df["country"].apply(lambda x: textwrap.fill(x, 10))
# order based on size
df = df.sort_values("EN.ATM.CO2E.PC")
df.head()
UserWarning: Non-standard ISO country codes: Z4, Z7
country year EN.ATM.CO2E.PC
3 India 2017 1.704927
1 East Asia\n& Pacific 2017 5.960076
2 Europe &\nCentral\nAsia 2017 6.746232
0 China 2017 7.226160
4 United\nStates 2017 14.823245
(
    ggplot(df, aes(x="country", y="EN.ATM.CO2E.PC"))
    + geom_bar(aes(fill="country"), color="black", alpha=0.8, stat="identity")
    + scale_fill_discrete()
    + theme_minimal()
    + theme(legend_position="none")
    + ggsize(600, 400)
    + labs(
        subtitle="Carbon dioxide (metric tons per capita)",
        title="The USA leads the world on per-capita emissions",
        y="",
    )
)
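
If you don’t know the code of the indicator you need, pandas-datareader can also search the World Bank’s catalogue of indicators; a minimal sketch (the search string is just an example):

# Search World Bank indicator names using a regular expression
wb.search("gdp.*capita.*const").head()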

26.3.2. The OECD API#

Sometimes it’s convenient to use APIs directly and, as an example, the OECD API comes with a LOT of options that direct access can take advantage of. The OECD API makes data available in both JSON and XML formats, and we’ll use pandasdmx (aka the Statistical Data and Metadata eXchange (SDMX) package for the Python data ecosystem) to pull down the XML format data and turn it into a regular pandas data frame.

Now, key to using the OECD API is knowledge of its many codes: for countries, times, resources, and series. You can find some broad guidance on what codes the API uses here but to find exactly what you need can be a bit tricky. Two tips are:

  1. If you know what you’re looking for is in a particular named dataset, eg “QNA” (Quarterly National Accounts), put https://stats.oecd.org/restsdmx/sdmx.ashx/GetDataStructure/QNA/all?format=SDMX-ML into your browser and look through the XML file; you can pick out the sub-codes and the countries that are available.

  2. Browse around on https://stats.oecd.org/ and use Customise, then check all the “Use Codes” boxes to see the code names for whatever you’re browsing.

Let’s see an example of this in action. We’d like to see the productivity (GDP per person employed) data for a range of countries since 2010. We are going to be in the productivity resource (code “PDB_LV”) and we want the USD current prices (code “CPC”) measure of GDP per employed worker (code “T_GDPEMP”) from 2010 onwards (code “startTime=2010”). We’ll grab this for some developed countries where productivity measurements might be slightly more comparable. The comments below explain what’s happening in each step.

import pandasdmx as pdmx
# Tell pdmx we want OECD data
oecd = pdmx.Request("OECD")
# Set out everything about the request in the format specified by the OECD API
data = oecd.data(
    resource_id="PDB_LV",
    key="GBR+FRA+CAN+ITA+DEU+JPN+USA.T_GDPEMP.CPC/all?startTime=2010",
).to_pandas()

df = pd.DataFrame(data).reset_index()
df.head()

LOCATION SUBJECT MEASURE TIME_PERIOD value
0 CAN T_GDPEMP CPC 2010 78848.604088
1 CAN T_GDPEMP CPC 2011 81422.364748
2 CAN T_GDPEMP CPC 2012 82663.028058
3 CAN T_GDPEMP CPC 2013 86368.582158
4 CAN T_GDPEMP CPC 2014 89617.632446

Great, that worked! We have the data in a nice tidy format.
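
Because the data frame is tidy, it only takes a line or two to reshape it and take a look; here’s a minimal sketch using pandas’ built-in plotting (as we did with the ONS data earlier):

# Reshape to one column per country and plot GDP per person employed over time
df_wide = df.pivot(index="TIME_PERIOD", columns="LOCATION", values="value")
df_wide.plot(title="GDP per person employed (USD, current prices)", lw=2.0);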

26.3.3. Other Useful APIs#

  • There is a regularly updated list of APIs over at this public APIs repo on GitHub. It doesn’t have an economics section (yet), but it has a LOT of other APIs.

  • Berkeley Library maintains a list of economics APIs that is well worth looking through.

  • NASDAQ Data Link, which has a great deal of financial data.

  • DBnomics: publicly-available economic data provided by national and international statistical institutions, but also by researchers and private companies.

26.4. Webscraping#

Webscraping is a way of grabbing information from the internet that was intended to be displayed in a browser. But it should only be used as a last resort, and only then when permitted by the terms and conditions of a website.

If you’re getting data from the internet, it’s much better to use an API whenever you can: grabbing information in a structured way is exactly why APIs exist. APIs should also be more stable than websites, which may change frequently. Typically, if an organisation is happy for you to grab their data, they will have made an API expressly for that purpose. It’s pretty rare for a major website to permit webscraping but not have an API; if a site doesn’t have an API, chances are that scraping is against its terms and conditions. Those terms and conditions may be enforceable by law (different rules apply in different countries, and you really need legal advice if it’s not unambiguous whether you can scrape or not).

There are other reasons why webscraping is less good; for example, if you need a back-run of historical data then it might be offered through an API but not shown on the webpage. (Or it might not be available at all, in which case it’s best to get in touch with the organisation or check out the Wayback Machine in case they took snapshots.)

So this book is pretty down on webscraping as there’s almost always a better solution. However, there are times when it is useful.

If you do find yourself in a scraping situation, be really sure to check that’s legally allowed and also that you are not violating the website’s robots.txt rules: this is a special file on almost every website that sets out what’s fair play to crawl (conditional on legality) and what robots should not go poking around in.
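
As a minimal sketch, Python’s built-in urllib.robotparser module will read a robots.txt file and check individual pages against it (whether the site below actually serves a robots.txt is not guaranteed):

from urllib import robotparser

# Download and parse the site's robots.txt
rp = robotparser.RobotFileParser()
rp.set_url("http://aeturrell.com/robots.txt")
rp.read()
# True if a generic crawler ("*") is allowed to fetch this page
rp.can_fetch("*", "http://aeturrell.com/research")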

In Python, you are spoiled for choice when it comes to webscraping. There are five very strong libraries that cover a real range of user styles and needs: requests, lxml, beautifulsoup, selenium, and scrapy.

For quick and simple webscraping, my usual combo would be requests, which does little more than go and grab the HTML of a webpage, and beautifulsoup, which then helps you to navigate the structure of the page and pull out what you’re actually interested in. For dynamic webpages that use javascript rather than just HTML, you’ll need selenium. To scale up and hit thousands of webpages in an efficient way, you might try scrapy, which can work with the other tools and handle multiple sessions, and all other kinds of bells and whistles… it’s actually a “web scraping framework”.

It’s always helpful to see coding in practice, so that’s what we’ll do now, but note that we’ll be skipping over a lot of important detail such as user agents, being ‘polite’ with your scraping requests, and being efficient with caching and crawling.

In lieu of a better example, let’s scrape the research page of http://aeturrell.com/

url = "http://aeturrell.com/research"
page = requests.get(url)
page.text[:300]
'<!DOCTYPE html>\n<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"><head>\n\n<meta charset="utf-8">\n<meta name="generator" content="quarto-1.3.361">\n\n<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">\n\n<meta name="author" content="Arthur Turrell">'

Okay, what just happened? We asked requests to grab the HTML of the webpage and then printed the first 300 characters of the text that it found.

Let’s now parse this into something humans can read (or can read more easily) using beautifulsoup:

soup = BeautifulSoup(page.text, "html.parser")
print(soup.prettify()[60000:60500])
ing-date-modified-sort="NaN" data-listing-date-sort="1640995200000" data-listing-file-modified-sort="1687518448229" data-listing-reading-time-sort="1">
         <div class="project-content listing-pub-info">
          <p>
           Turrell, Arthur, Bradley Speigner, Jyldyz Djumalieva, David Copple, and James Thurgood. "6. Transforming Naturally Occurring Text Data into Economic Statistics." In
           <i>
            Big Data for Twenty-First-Century Economic Statistics
           </i>
     

Now we can see more of the structure of the page, including HTML tags such as ‘div’, ‘p’, and ‘i’. Next comes the data extraction part: say we want to pull out every paragraph of text. We can use beautifulsoup to skim down the HTML structure and pull out only those parts with the paragraph tag (‘p’).

# Get all paragraphs
all_paras = soup.find_all("p")
# Just show one of the paras
all_paras[1]
<p>Kalamara, Eleni, Arthur Turrell, Chris Redl, George Kapetanios, and Sujit Kapadia. "Making text count: economic forecasting using newspaper text." <i>Journal of Applied Econometrics</i> 37, no. 5 (2022): 896-919. doi: <a href="https://doi.org/10.1002/jae.2907"><code>10.1002/jae.2907</code></a></p>

Although this paragraph isn’t too bad, you can make it more readable by stripping out the HTML tags altogether with the .text attribute:

all_paras[1].text
'Kalamara, Eleni, Arthur Turrell, Chris Redl, George Kapetanios, and Sujit Kapadia. "Making text count: economic forecasting using newspaper text." Journal of Applied Econometrics 37, no. 5 (2022): 896-919. doi: 10.1002/jae.2907'

Now let’s say we didn’t care about most of the page and only wanted to get hold of the publications listed on it. For this we need to identify the tag type of the element we’re interested in, in this case ‘div’, and its class, in this case “project-content listing-pub-info”. We do it like this (and show nice text in the process):

projects = soup.find_all("div", class_="project-content listing-pub-info")
projects = [x.text.strip() for x in projects]
projects
['Kalamara, Eleni, Arthur Turrell, Chris Redl, George Kapetanios, and Sujit Kapadia. "Making text count: economic forecasting using newspaper text." Journal of Applied Econometrics 37, no. 5 (2022): 896-919. doi: 10.1002/jae.2907',
 'Turrell, A., Speigner, B., Copple, D., Djumalieva, J. and Thurgood, J., 2021. Is the UK’s productivity puzzle mostly driven by occupational mismatch? An analysis using big data on job vacancies. Labour Economics, 71, p.102013. doi: 10.1016/j.labeco.2021.102013',
 'Haldane, Andrew G., and Arthur E. Turrell. "Drawing on different disciplines: macroeconomic agent-based models." Journal of Evolutionary Economics 29 (2019): 39-66. doi: 10.1007/s00191-018-0557-5',
 'Haldane, Andrew G., and Arthur E. Turrell. "An interdisciplinary model for macroeconomics." Oxford Review of Economic Policy 34, no. 1-2 (2018): 219-251. doi: 10.1093/oxrep/grx051',
 'Braun-Munzinger, Karen, Z. Liu, and A. E. Turrell. "An agent-based model of corporate bond trading." Quantitative Finance 18, no. 4 (2018): 591-608. doi: 10.1080/14697688.2017.1380310',
 'Turrell, A. E., M. Sherlock, and S. J. Rose. "Efficient evaluation of collisional energy transfer terms for plasma particle simulations." Journal of Plasma Physics 82, no. 1 (2016): 905820107. doi: 10.1017/S0022377816000131',
 'Turrell, A. E., M. Sherlock, and S. J. Rose. "Ultrafast collisional ion heating by electrostatic shocks." Nature Communications 6, no. 1 (2015): 8905. doi: 10.1038/ncomms9905',
 'Turrell, Arthur E., Mark Sherlock, and Steven J. Rose. "Self-consistent inclusion of classical large-angle Coulomb collisions in plasma Monte Carlo simulations." Journal of Computational Physics 299 (2015): 144-155. doi: 10.1016/j.jcp.2015.06.034',
 'Turrell, Arthur E., Mark Sherlock, and Steven J. Rose. "A Monte Carlo algorithm for degenerate plasmas." Journal of Computational Physics 249 (2013): 13-21. doi: 10.1016/j.jcp.2013.03.052',
 'Van Dijcke, David, Marcus Buckmann, Arthur Turrell, and Tomas Key. "Vacancy Posting, Firm Balance Sheets, and Pandemic Policy Interventions." Bank of England Staff Working Paper Series 1033 (2022).',
 'Botta, Federico, Robin Lovelace, Laura Gilbert, and Arthur Turrell. "Packaging code for reproducible research in the public sector." arXiv e-prints (2022), arXiv:2305.16205. doi: 10.48550/arXiv.2305.16205',
 'Draca, Mirko, Emma Duchini, Roland Rathelot, Arthur Turrell, and Giulia Vattuone. Revolution in Progress? The Rise of Remote Work in the UK. University of Warwick, Department of Economics, 2022.',
 'Duchini, Emma, Stefania Simion, Arthur Turrell, and Jack Blundell. "Pay Transparency and Gender Equality." arXiv e-prints (2020), arxiv:2006.16099. doi: 10.48550/arXiv.2006.16099',
 'Turrell, Arthur, Bradley Speigner, Jyldyz Djumalieva, David Copple, and James Thurgood. "6. Transforming Naturally Occurring Text Data into Economic Statistics." In Big Data for Twenty-First-Century Economic Statistics, pp. 173-208. University of Chicago Press, 2022. doi: 10.7208/chicago/9780226801391-008',
 'Turrell, Arthur. "Agent-based models: understanding the economy from the bottom up" In Quarterly Bulletin, Q4. Bank of England, 2016.',
 'Hill, Edward, Marco Bardoscia, and Arthur Turrell. "Solving heterogeneous general equilibrium economic models with deep reinforcement learning." arXiv arXiv:2103.16977 (2021).',
 'Turrell, Arthur, James Thurgood, David Copple, Jyldyz Djumalieva, and Bradley Speigner. "Using online job vacancies to understand the UK labour market from the bottom-up." Bank of England Staff Working Papers 742 (2018).']

Hooray! We managed to get the information we wanted: all we needed to know was the right tags. A good tip for finding the tags of the info you want is to open the page in your browser (eg Google Chrome), right-click on the bit you’re interested in, and then hit ‘Inspect’. This will show you the HTML element of the bit of the page you clicked on.

That’s almost it for this very, very brief introduction to webscraping. We’ll just see one more thing: how to iterate over multiple pages.

Imagine we had a root webpage such as “www.codingforeconomists.com” which had subpages such as “www.codingforeconomists.com/page=1”, “www.codingforeconomists.com/page=2”, and so on. One need only create the URL strings to pass into a function that scrapes each one and returns the relevant data. For example, for the first 50 pages, and with a function called scraper(), one might run:

start, stop = 1, 51
root_url = "www.codingforeconomists.com/page="
info_on_pages = [scraper(root_url + str(i)) for i in range(start, stop)]
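
What might scraper() look like? Here’s a minimal sketch built from the requests and beautifulsoup steps we’ve already seen (the function, and what it chooses to extract, are hypothetical):

def scraper(url):
    """A hypothetical scraping function: grab a page and return its paragraph text."""
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
    return [para.text for para in soup.find_all("p")]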

That’s all we’ll cover here, but remember we’ve barely scraped the surface of this big, complex topic. If you want to read about an application, it’s hard not to recommend the paper on webscraping that has undoubtedly changed the world the most, and very likely has affected your own life in numerous ways: “The PageRank Citation Ranking: Bringing Order to the Web” by Page, Brin, Motwani and Winograd. For a more in-depth example of webscraping, check out realpython’s tutorial.

26.4.1. Webscraping Tables#

There are times when you don’t actually want to scrape an entire webpage because all you want is the data from a table within it. Fortunately, there is an easy way to scrape individual tables using the pandas package.

We will read data from the first table on ‘https://simple.wikipedia.org/wiki/FIFA_World_Cup’ using pandas. The function we’ll use is read_html(), which returns a list of data frames of all the tables it finds when you pass it a URL. If you want to filter the list of tables, use the match= keyword argument with text that only appears in the table(s) you’re interested in.

The example below shows how this works: looking at the website, we can see that the table we’re interested in (of past World Cup results) has ‘Sweden’ in its third and fourth place columns, text that doesn’t appear in the other tables on the page, so we can use it with match= to pick out just this table. Therefore we run:

df_list = pd.read_html(
    "https://simple.wikipedia.org/wiki/FIFA_World_Cup", match="Sweden"
)
# Retrieve first and only entry from list of data frames
df = df_list[0]
df.head()
Years Hosts Winners Score Runner's-up Third place Score.1 Fourth place
0 1930 Details Uruguay Uruguay 4 - 2 Argentina United States [note 1] Yugoslavia
1 1934 Details Italy Italy 2 - 1 Czechoslovakia Germany 3 - 2 Austria
2 1938 Details France Italy 4 - 2 Hungary Brazil 4 - 2 Sweden
3 1950 Details Brazil Uruguay 2 - 1 Brazil Sweden [note 2] Spain
4 1954 Details Switzerland West Germany 3 - 2 Hungary Austria 3 - 1 Uruguay

This gives us the table neatly loaded into a pandas data frame ready for further use.