Specification Curve

Specification Curve is a Python package that performs specification curve analysis; it helps address the “garden of forking paths” problem (many defensible analytical choices) by running many regressions and summarising the effects in an easy-to-digest chart.



When specifying a causal model, modellers have a number of options. These can be informed by field intelligence, priors, and even misguided attempts to find a significant result. Even with the best of intentions, research teams can reach entirely different conclusions using the same, or similar, data because of different choices made in preparing data or in modelling it.

Typically this happens when there isn’t a clear way to do “feature engineering” on the data or the data are imperfect in some way. For example, you have a high frequency time series which needs to be aggregated to a lower frequency: you could take the maximum, the minimum, or the average over each high frequency time period. A different choice may be appropriate in different settings.
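As a toy illustration of how these choices multiply (the series here is synthetic and purely for illustration), consider aggregating an hourly series to daily frequency with pandas: the maximum, minimum, and mean each give a different, equally defensible daily series.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly series: 10 days of hourly observations
rng = np.random.default_rng(0)
hourly = pd.Series(
    rng.normal(size=240),
    index=pd.date_range("2024-01-01", periods=240, freq="h"),
)

# Three equally defensible ways to move to daily frequency:
daily_max = hourly.resample("D").max()
daily_min = hourly.resample("D").min()
daily_mean = hourly.resample("D").mean()

# Each choice yields a different daily series, and hence potentially
# different downstream regression results.
```

Any of the three could be the "right" aggregation depending on the setting, which is exactly the kind of forking path a specification curve makes visible.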

There’s formal evidence that researchers really do make different decisions; one study (Silberzahn et al. 2018) gave the same research question—whether soccer referees are more likely to give red cards to players with dark skin tones than to players with light skin tones—to 29 different teams. From the abstract of that paper:

Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability.

So not only do different teams make different decisions, but there seems to be no clearly identifiable reason for them!

There have since been a number of papers that find similar conclusions (Huntington-Klein et al. 2021).

Specification curves were invented (Simonsohn, Simmons, and Nelson 2020) as a way to better grasp the garden of forking paths that analysts face, and to help them show how sensitive their results are to alternative specifications.

More than one piece of software (Masur and Scharkow 2019) has appeared to help researchers and analysts use this technique: Specification Curve is one such package!


You can try out specification curve right now in Google Colab. To install the package in Colab, run !pip install specification_curve in a new code cell.

Here’s an example of using Specification Curve.

# Import packages, including specification curve
import numpy as np
import pandas as pd
import specification_curve as specy

# Generate some fake data
# ------------------------

# Set seed for random numbers
seed_for_prng = 78557
# prng = pseudo-random number generator
prng = np.random.default_rng(seed_for_prng)
n_samples = 400
# Number of dimensions
n_dim = 4
c_rnd_vars = prng.random(size=(n_dim, n_samples))
y_1 = (
    0.4 * c_rnd_vars[0, :]  # THIS IS THE TRUE VALUE OF THE COEFFICIENT
    - 0.2 * c_rnd_vars[1, :]
    + 0.3 * prng.standard_normal(n_samples)
)
# Next line causes y_2 estimates to be much more noisy
y_2 = y_1 - 0.5 * np.abs(prng.standard_normal(n_samples))
# Put data into dataframe
df = pd.DataFrame([y_1, y_2], ["y1", "y2"]).T
df["x_1"] = c_rnd_vars[0, :]
df["c_1"] = c_rnd_vars[1, :]
df["c_2"] = c_rnd_vars[2, :]
df["c_3"] = c_rnd_vars[3, :]

# Specification Curve Analysis
# -----------------------------
sc = specy.SpecificationCurve(
    df, y_endog=["y1", "y2"], x_exog="x_1", controls=["c_1", "c_2", "c_3"]
)
sc.fit()
sc.plot()

Fit complete

Grey squares (black lines when there are many specifications) show whether a variable is included in a specification or not. Blue markers and error bars indicate coefficients that are positive and statistically significant (at the 0.05 level); red markers and error bars indicate coefficients that are negative and significant.


Installation

You can install Specification Curve via pip:

$ pip install specification-curve

To install the development version from git, use:

$ pip install git+https://github.com/aeturrell/specification_curve.git


Requirements

You can find a full list of requirements in the pyproject.toml file. The main requirements are:

python >=3.9,<3.12
pandas ^2.0.3
statsmodels ^0.14.0
matplotlib ^3.6.1
types-setuptools >=67.6,<70.0
numpy ^1.26.4
typeguard ^4.2.1


License

Distributed under the terms of the MIT license.

Citing Specification Curve

You can find full citation information at the following link: https://zenodo.org/badge/latestdoi/282989537.

Using Specification Curve in your paper? Let us know by raising an issue beginning with “citation”.

Similar Packages

In RStats, there is specr (which inspired many design choices in this package) and spec_chart. Some of the example data in this package is the same as in specr, but please note that results have not been cross-checked across packages.


References

Huntington-Klein, Nick, Andreu Arenas, Emily Beam, Marco Bertoni, Jeffrey R Bloem, Pralhad Burli, Naibin Chen, et al. 2021. “The Influence of Hidden Researcher Decisions in Applied Microeconomics.” Economic Inquiry 59 (3): 944–60.
Masur, Philipp K., and Michael Scharkow. 2019. “Specr: Statistical Functions for Conducting Specification Curve Analyses (Version 0.2.1).” https://github.com/masurp/specr.
Silberzahn, Raphael, Eric L Uhlmann, Daniel P Martin, Pasquale Anselmi, Frederik Aust, Eli Awtrey, Štěpán Bahnı́k, et al. 2018. “Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results.” Advances in Methods and Practices in Psychological Science 1 (3): 337–56.
Simonsohn, Uri, Joseph P Simmons, and Leif D Nelson. 2020. “Specification Curve Analysis.” Nature Human Behaviour 4 (11): 1208–14.