Bayesian Inference#

In this chapter, we’ll look at how to perform analysis and regressions using Bayesian techniques.

Let’s import a few of the packages we’ll need first. The key package that we’ll be using in this chapter that you might not have seen before is PyMC, a Bayesian inference package.

In the next chapter, we’ll look at an easier way to do Bayesian inference in Python, but you’ll find life easier overall if you start here. We’ll also use ArviZ for visualisation of the results from Bayesian inference, a package that builds on Matplotlib, but this will get automatically installed when you install PyMC. You should follow the install instructions for PyMC and Bambi carefully and, if you’re confident with using different Python environments, it’s a good idea to spin up a new ‘bayes’ environment to try them out in. In case you need a refresher, the chapter on Preliminaries covers basic information on how to install new packages.

The expert Bayesian may wonder, “why these packages?” There are many others available, even just in Python. One reason is that they’re simpler to use for beginners while having enormous power for advanced users too, so they’re very much in keeping with Python’s low floor, high ceiling style. The other is that they’re considerably more mature than some of the more recent alternatives based on popular tensor and machine learning packages (though PyMC is also built on an underlying tensor package). Finally, they also have flexible computing back-ends (including Jax and Numba, and including using GPUs), which means that super users can make their models go extremely fast should they need to—you can find more information on the speed-ups that are possible here.

This chapter has benefitted from the many Bayesian blogs and forum conversations across the web, in particular the ones here, here, here, here, and here.

Here are our initial imports and settings:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xarray as xr
import warnings
import pymc as pm
import arviz as az

# Plot settings
plt.style.use(
    "https://github.com/aeturrell/coding-for-economists/raw/main/plot_style.txt"
)
az.style.use("arviz-darkgrid")

# Pandas: Set max rows displayed for readability
pd.set_option("display.max_rows", 23)

# Set seed for random numbers
seed_for_prng = 78557
prng = np.random.default_rng(seed_for_prng)  # prng = pseudo-random number generator
# Turn off warnings
warnings.filterwarnings('ignore')
# check pymc version
print(f"Running on PyMC v{pm.__version__}")
Running on PyMC v5.10.3

Introduction#

The biggest difference between the Bayesian and frequentist approaches (that we’ve seen in the other chapters) is probably that, in Bayesian models, the parameters are not assumed to be fixed but instead are treated as random variables whose uncertainty is described using probability distributions. The data are considered fixed. You might see the ‘inverse probability’ formulation of a Bayesian model written as \(p(\theta | y)\) where the \(y\) are the data, and the \(\theta\) are the model parameters. An interesting aspect of Bayesianism is that there is just one estimator: Bayes’ theorem.

This is a contrast with the frequentist view, which holds that the data are random but the model parameters are fixed, and models are often expressed as functions, for example \(f(y | \theta)\). Frequentist inference typically involves deriving estimators for the model parameters, and these are usually created to minimise the bias, minimise the variance, or maximise the efficiency. Not so with Bayesian inference.

As a reminder, Bayes’ theorem says that \({\displaystyle p(A\mid B)={\frac {p(B\mid A)p(A)}{p(B)}}}\), where \(A\) and \(B\) are distinct events, \(p(A)\) is the probability of event \(A\) happening, and \(p(A\mid B)\) is the probability of \(A\) happening given \(B\) has happened or is true. \(p\) could stand in for a single number, a probability density function, or a probability mass function. When dealing with data and model parameters, and ignoring a rescaling factor, this can be written as:

\[ p({\boldsymbol{\theta }}|{\boldsymbol{y}})\propto p({\boldsymbol{y}}|{\boldsymbol{\theta }})p({\boldsymbol{\theta }}). \]

In these equations:

  1. \(p({\boldsymbol{\theta}})\) is the prior put on model parameters—what we think the distribution of model parameters will look like.

  2. \(p({\boldsymbol{y}}|{\boldsymbol{\theta }})\) is the likelihood of this data given a particular set of model parameters.

  3. \(p({\boldsymbol{\theta }}|{\boldsymbol{y}})\) is the posterior probability of model parameters given the observed data.
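To make this concrete before we hand the work over to PyMC, here’s a minimal sketch of Bayes’ theorem applied on a grid for a made-up coin-flip problem. The data, the grid, and the flat prior are all chosen purely for illustration (scipy is a dependency of PyMC, so it should already be installed):

from scipy import stats

# Made-up data: 7 heads out of 10 coin flips
heads, flips = 7, 10

# Grid of candidate values for theta, the probability of heads
theta_grid = np.linspace(0, 1, 201)

# Prior p(theta): flat over [0, 1], purely for illustration
prior = np.ones_like(theta_grid)

# Likelihood p(y | theta): probability of the observed data at each theta
likelihood = stats.binom.pmf(heads, flips, theta_grid)

# Posterior p(theta | y) proportional to p(y | theta) p(theta),
# normalised so that it sums to one on the grid
posterior = likelihood * prior
posterior = posterior / posterior.sum()

print(f"Posterior mean of theta: {np.sum(theta_grid * posterior):.3f}")

The rest of this chapter is about doing the same kind of calculation when the model has more parameters and a simple grid like this becomes infeasible.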

Bayesian modelling proceeds as highlighted in the review article, Bayesian statistics and modelling [van de Schoot, Depaoli, King, Kramer, Märtens, Tadesse, Vannucci, Gelman, Veen, Willemsen, and others, 2021]:

The Bayesian research cycle.

One key strength of the Bayesian approach is that it preserves uncertainty—by construction, it includes the degree of belief you have in a parameter. This makes it especially useful in cases where uncertainty is important because the uncertainties are propagated all the way through to the final results. One disadvantage of the Bayesian approach is that it’s not always as fast as frequentist estimation and doesn’t always scale well to very large datasets.

Actually estimating \(p({\boldsymbol{y}}|{\boldsymbol{\theta }})p({\boldsymbol{\theta }})\) is done computationally, at least for all but the most trivial examples.

The methods used to do this are all some variation on the Monte Carlo method, a tool that goes much wider than Bayesian inference. Monte Carlo techniques use random numbers to choose samples that allow you to estimate properties of interest. It can be used to perform numerical integration and is used for discrete particle simulations in fields like physics and chemistry. Rubinstein and Kroese [2016] provides a good introduction to the Monte Carlo methods more broadly, including simulation; Bielajew [2001] is a very good free textbook on Monte Carlo methods for particle transport physics.

An illustration of Monte Carlo integration

Yoderj, Mysid; Wikipedia

Say we wanted to compute the integral of a 2D shape, a circle, to get an area. In the figure above, random pairs of numbers (say, \(x, y\)) are chosen uniformly within a square of side 2 that encloses a circle of radius 1. By taking the ratio of the number of points that land within the circle (40) to all samples (50), we can get an approximation of the circle’s area by computing (area of square) x ratio of samples \(= 4 \times 40/50 = 3.2 \approx \pi\). This type of algorithm is a form of Monte Carlo rejection sampling.
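Here’s a small sketch of that calculation in code, re-using the prng we set up earlier (the number of samples is our choice, and the answer will wobble around \(\pi\) from run to run):

# Monte Carlo rejection sampling estimate of pi
n_samples = 100_000

# Draw points uniformly in the square [-1, 1] x [-1, 1], which has area 4
x = prng.uniform(-1, 1, n_samples)
y = prng.uniform(-1, 1, n_samples)

# Fraction of points that land inside the circle of radius 1
inside_ratio = ((x**2 + y**2) <= 1).mean()

print(f"Monte Carlo estimate of pi: {4 * inside_ratio:.4f}")  # area of square x ratio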

Markov Chain Monte Carlo takes this a step further. A Markov Chain is a stochastic process where the chance of being in a particular state (for example, taking a particular sample) depends only on the previous state. These have equilibrium distributions; essentially a long-run average distribution over states. Markov Chain Monte Carlo finds the distribution you’re after (the posterior) by recording states from the chain; the more steps, the more closely the distribution of samples will match the distribution you’re trying to find out about. There are nice walkthroughs of MCMC available here, here, and here.

There are many algorithms for undertaking MCMC, with perhaps the most famous being the Metropolis-Hastings algorithm. For this, we have a probability density function to sample from, a function \(Q\) that quantifies where to make a transition, and \(\theta_0\), the first guess of the parameters. Starting from \(\theta = \theta_0\), the algorithm boils down to:

  1. starting from this point \(\theta\), determine its match with the target distribution by estimating \(p = p({\boldsymbol{y}}|{\boldsymbol{\theta}})p({\boldsymbol{\theta}})\)

  2. propose a new sample, \(\theta'\), using some transition function \(Q\), and check its match to the target (let this match be \(p'\))

  3. compute the ratio of densities, \(\frac{p'}{p}\), as a metric of favourability of the new point

  4. generate a random number \(r \thicksim \mathcal{U}[0, 1)\) and compare it to the ratio; accept the new point only if \(r < \frac{p'}{p}\)

  5. repeat

The clever trick here is that, over time and with more samples, the points you draw will be from the target (posterior) distribution because, at each loop, we are forcing the samples to stick to the posterior distribution. The set of samples is often called a trace. You can find a nice walkthrough of the Metropolis-Hastings algorithm, with code, here.
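To make the steps above concrete, here is a toy Metropolis-Hastings sampler written with NumPy. It samples from a standard normal ‘posterior’ using a Gaussian random walk as the transition function \(Q\); both of those choices are ours, purely for illustration, and this is not how PyMC implements things internally.

def log_target(theta):
    # Log of the (unnormalised) target density; here a standard normal
    return -0.5 * theta**2

n_steps = 5_000
step_size = 1.0
theta = 0.0  # the first guess of the parameter
mh_trace = np.empty(n_steps)

for i in range(n_steps):
    # Propose a new sample using the transition function Q (a random walk)
    theta_prop = theta + step_size * prng.standard_normal()
    # Ratio of densities p'/p, computed on the log scale for numerical stability
    log_ratio = log_target(theta_prop) - log_target(theta)
    # Accept the new point only if r < p'/p, with r ~ U[0, 1)
    if prng.uniform() < np.exp(log_ratio):
        theta = theta_prop
    mh_trace[i] = theta

print(f"Sample mean: {mh_trace.mean():.2f}, sample std dev: {mh_trace.std():.2f}")

If all has gone well, the sample mean and standard deviation should come out close to 0 and 1 respectively, the parameters of the target distribution.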

The Metropolis-Hastings algorithm is a good place to start to understand what’s going on in computational Bayesian inference, but there are more modern and powerful algorithms available today, most notably Hamiltonian Monte Carlo (HMC), which takes its inspiration from physics (where the Hamiltonian is an operator that returns the total energy of the system). Instead of sampling completely randomly, which can be inefficient, HMC sees the sampler get ‘rolled’ (launched in a smooth trajectory) along the estimates of the posterior. There’s a very good explanation of this approach here, and a ‘conceptual introduction to HMC’ may be found in Betancourt [2018].

The most commonly used HMC sampler is the No U-Turn Sampler (NUTS), and this is what we’ll be using for most of our examples.

For the rest of this chapter, you won’t need to worry about most of this though! The joy of PyMC and Bambi is that they take care of some of the hard work for us. Expert users will want to dig further into the details, but these packages make running your first Bayesian inference easier with sensible default settings.

Bayesian Modelling: A Simple Example#

We’re going to set up a very simple example using synthetic data and a known data generating process:

\[ \mu = \alpha + \beta x \]

where

\[ Y \mid \alpha, \beta, \sigma \stackrel{\text{ind}}{\thicksim} \mathcal{N}(\mu, \sigma^2) \]

For our prior distributions on the parameters, we will assume:

  • \(\alpha \thicksim \mathcal{N}(0, 10)\)

  • \(\beta \thicksim \mathcal{N}(0, 10)\)

  • \(\sigma \thicksim \mathcal{N_{+}}(1)\) (a half-normal distribution)

where, to be clear, these three parameters are scalars.

These priors are known as weakly informative priors because they are quite diffuse—the 10th and 90th percentiles of a normal distribution with \(\sigma=10\) span from, roughly, -13 to 13. With a weakly informative prior, a reasonably large amount of data will ensure that the prior will not be important and the posterior will home in on a sensible range. We’re also implicitly assuming that the data are normalised so that 1 is a meaningful change and zero a sensible reference point. You may want to, for example, scale by the standard deviation of the data or take logs of (positive) predictors, so that coefficients can be interpreted as percent changes (more on rescaling later).
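As a quick check of that claim about the percentiles (a one-liner using scipy, which comes along with PyMC as a dependency):

from scipy import stats

print(stats.norm.ppf([0.1, 0.9], loc=0, scale=10))  # roughly [-12.8, 12.8]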

We know the ‘true’ values of the parameters, because we’re going to run a synthetic example, so we’ll set those first.

# True parameter values
alpha_true, beta_true, sigma_true = 1, 2.5, 1.5

# Size of dataset
size = 100

num_samples = 1000
num_chains = 2

# Predictor variable: random sample
X = prng.standard_normal(size)

# Simulate outcome variable
Y = alpha_true + beta_true * X + prng.standard_normal(size) * sigma_true

Using PyMC to do Bayesian inference#

Next, we’ll begin the Bayesian modelling part of the code. We’ll be using the powerful PyMC package [Salvatier, Wiecki, and Fonnesbeck, 2016]; check out the documentation for tutorials and examples. PyMC is likely the most popular package for probabilistic programming in Python, and its computations are built on what was originally a deep learning library. The aim of the package is to enable you to write down models using an intuitive syntax for the data generating process.

Let’s see how to specify a model in PyMC. We’ll start with pm.Model() as linear_model:, which sets the ‘context’; effectively it says, ‘what follows will be a model that we’ll save as linear_model’. Any variables created within a given model’s context will be automatically assigned to that model. Next, we set our priors, giving the parameters meaningful names. Then, we build our model by writing down our expected outcome and the expected likelihood of observations according to that model. Up until now, we’re just rewriting in code all the bits we put down mathematically in the previous subsection. Finally, we sample the posterior to produce the ‘trace’: this is the part that estimates the posterior distribution by taking samples.

with pm.Model() as linear_model:
    # Priors for unknown model parameters
    alpha = pm.Normal("alpha", mu=0, sigma=10)
    beta = pm.Normal("beta", mu=0, sigma=10)
    sigma = pm.HalfNormal("sigma", sigma=1)

    # Expected value of outcome
    mu = alpha + beta * X

    # Likelihood (sampling distribution) of observations
    # this says, conditional on confounders, we expect normal variation
    Y_obs = pm.Normal("Y_obs", mu=mu, sigma=sigma, observed=Y)

    # Sample the posterior
    trace = pm.sample(num_samples, return_inferencedata=True, chains=num_chains)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 4 jobs)
NUTS: [alpha, beta, sigma]
100.00% [4000/4000 00:00<00:00 Sampling 2 chains, 0 divergences]
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 4 seconds.
We recommend running at least 4 chains for robust computation of convergence diagnostics

So what just happened? The ‘NUTS sampler’ was auto-assigned. NUTS stands for the ‘No-U-turn sampler’. This is just a fancy way of picking the points at which to sample \(p({\boldsymbol{y}}|{\boldsymbol{\theta }})p({\boldsymbol{\theta }})\) in order to build up a picture of \(p({\boldsymbol{\theta }}|{\boldsymbol{y}})\). It’s a good default option for sampling.
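If you’re curious why NUTS is the default, one purely illustrative (and not recommended for real work) experiment is to re-sample the same model with the older Metropolis step method and compare the effective sample sizes in the two summaries:

# For comparison only: swap the default NUTS sampler for Metropolis
with linear_model:
    trace_metropolis = pm.sample(
        num_samples, step=pm.Metropolis(), chains=num_chains, return_inferencedata=True
    )

# ess_bulk and ess_tail will typically be far lower than with NUTS
az.summary(trace_metropolis, round_to=2)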

Let’s take a quick look at the model we just estimated using the model_to_graphviz() method—though note that this is a very simple model, and this method comes into its own in more complex cases.

pm.model_to_graphviz(linear_model)

We’ve made some assumptions in assembling this model, and it’s a good idea to make these explicit. The first is that the outcome data are independent once controlling for \(X\), the second that the outcomes are a linear function of the inputs, and the third that, at a given \(X\), the outcome variable \(Y\) will vary normally around the average value with a consistent standard deviation \(\sigma\). These are the assumptions for Bayesian normal regression.

Understanding the results from running a Bayesian model#

As discussed in the Introduction, the set of samples forms a trace, and this trace was returned by the pm.sample() call above. We can now look at the trace created by the 2 separate chains we ran. Here it is as a standalone object to explore:

trace
arviz.InferenceData
    • <xarray.Dataset>
      Dimensions:  (chain: 2, draw: 1000)
      Coordinates:
        * chain    (chain) int64 0 1
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
      Data variables:
          alpha    (chain, draw) float64 1.186 1.09 1.108 ... 0.9491 1.061 0.8267
          beta     (chain, draw) float64 2.551 2.219 2.541 2.639 ... 2.348 2.336 2.694
          sigma    (chain, draw) float64 1.545 1.538 1.298 1.316 ... 1.43 1.526 1.681
      Attributes:
          created_at:                 2024-01-05T15:38:22.026528
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3
          sampling_time:              4.266082048416138
          tuning_steps:               1000

    • <xarray.Dataset>
      Dimensions:                (chain: 2, draw: 1000)
      Coordinates:
        * chain                  (chain) int64 0 1
        * draw                   (draw) int64 0 1 2 3 4 5 ... 994 995 996 997 998 999
      Data variables: (12/17)
          energy_error           (chain, draw) float64 0.09413 -0.1333 ... 0.8728
          index_in_trajectory    (chain, draw) int64 1 1 -2 1 -1 -2 ... 1 -1 -1 2 -2 1
          energy                 (chain, draw) float64 192.2 190.7 ... 189.7 194.5
          acceptance_rate        (chain, draw) float64 0.7791 0.8531 ... 0.9098 0.4785
          step_size_bar          (chain, draw) float64 1.07 1.07 1.07 ... 1.079 1.079
          process_time_diff      (chain, draw) float64 0.000187 0.00019 ... 0.000173
          ...                     ...
          perf_counter_start     (chain, draw) float64 2.309e+05 ... 2.309e+05
          n_steps                (chain, draw) float64 3.0 3.0 3.0 1.0 ... 3.0 3.0 3.0
          perf_counter_diff      (chain, draw) float64 0.0001875 ... 0.0001736
          step_size              (chain, draw) float64 1.142 1.142 ... 1.059 1.059
          reached_max_treedepth  (chain, draw) bool False False False ... False False
          max_energy_error       (chain, draw) float64 0.4537 0.2801 ... 0.1599 1.357
      Attributes:
          created_at:                 2024-01-05T15:38:22.033224
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3
          sampling_time:              4.266082048416138
          tuning_steps:               1000

    • <xarray.Dataset>
      Dimensions:      (Y_obs_dim_0: 100)
      Coordinates:
        * Y_obs_dim_0  (Y_obs_dim_0) int64 0 1 2 3 4 5 6 7 ... 92 93 94 95 96 97 98 99
      Data variables:
          Y_obs        (Y_obs_dim_0) float64 3.386 -0.667 3.258 ... 1.937 3.367 1.848
      Attributes:
          created_at:                 2024-01-05T15:38:22.036109
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3

But it’s much easier to inspect some of what it holds using the pre-configured summaries, which are in table and chart form. First, a plot of the traces.

az.plot_trace(trace);

This might look like fairly meaningless noise at first, so let’s walk through what we’re seeing here. On the left-hand side, we can see the results of the two chains we ran for each of the three parameters. The x-axis is the parameter value, the y-axis is the probability density (no values shown). We’re actually looking at the posterior distributions here! We’ll come on to whether they make sense or not later, but for now we just know we’ve got some estimates of the posterior.

On the right-hand side, we have some traces, with the sample number on the x-axis and the parameter value on the y-axis. Just looking at these gives our first sense of whether the sampling has gone well or not. Trace plots can give a sense of whether the model converged (you should see a trace that is a random scatter around a mean value), whether the chains are mixing (they should overlap randomly on the plot), and whether there are specific parameters that the model is struggling to estimate. If the sampling is not going well, these properties will not hold: the chains will not converge to occupy a small area, nor will they mix together.

From the above chart, it looks like we successfully created some well-behaved samples and have what appears to be a good estimate of the posterior.

Now let’s look at our actual estimates of these parameters by running the summary() function:

az.summary(trace, round_to=2)
        mean    sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
alpha   1.03  0.15    0.71     1.30        0.0      0.0   2893.49   1470.10    1.0
beta    2.40  0.15    2.11     2.69        0.0      0.0   2880.99   1579.78    1.0
sigma   1.50  0.11    1.30     1.71        0.0      0.0   2003.68   1143.36    1.0

In Bayesian inference, the coefficient or parameter estimates are extremely easy to interpret. For \(\alpha\), we have a mean of 1.03 with a standard deviation of \(\approx 0.15\), and a 94% highest density interval (HDI) running from approximately 0.71 to 1.30. The HDI is the narrowest interval that contains the stated share of the posterior probability, and you can choose a different percentage using the hdi_prob= keyword, as shown below.
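For example, to report 90% intervals rather than the default 94% ones:

az.summary(trace, hdi_prob=0.90, round_to=2)

We can produce a visualisation of the parameter estimates: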

az.plot_posterior(trace, var_names=['alpha'], figsize=(8, 5));

This is telling us exactly what it looks like: the estimate of \(\alpha\) is, with a high degree of certainty, around 1, which matches the true value we set.

There were a bunch of other numbers in the trace summary; what did they mean? Let’s run through them:

  • The Gelman-Rubin \(\hat{R}\) statistic, r_hat: this is a useful metric to determine whether the traces achieved stationarity, but it does require multiple chains (ideally at least four) to make sense as a diagnostic. For a given parameter, the statistic is the ratio of the variance in the samples between the chains to the variance of samples within the chains, i.e. \(\hat{R} = \frac{\text{variance between chains}}{\text{variance within chains}}\). The value of \(\hat{R}\) approaches unity if the chains are properly sampling the target distribution and a rough guide is that it should be less than 1.01.

  • Effective sample sizes, ess_bulk and ess_tail: ideally, samples would be independent, but MCMC samples are correlated. These statistics give a sense of how many effectively independent samples we are drawing given that correlation. The statistics are referred to either as effective sample size (ESS) or number of effective samples, \(n_\text{eff}\). The bulk version, ess_bulk, is the main statistic, while ess_tail is the effective sample size for the more extreme values of the posterior (by default, the lower and upper 5th percentiles). The rule of thumb is that both should be \(>400\).

  • The Monte Carlo standard errors (MCSE), mcse_mean and mcse_sd: these are measurements of the standard error of the mean and the standard error of the standard deviation of the sample chains. They provide an estimate of how accurately the MCMC samples pin down the mean and the standard deviation. However, some believe these should not be reported.

The most useful numbers (apart from the coefficient estimates), then, are ess_bulk, ess_tail, and \(\hat{R}\). Remember that these statistics mostly tell us how good the sampling is, rather than how appropriate the model is.
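If you ever want these diagnostics on their own rather than as part of the summary table, ArviZ also exposes them directly; for example, using the trace from above:

az.rhat(trace)                # Gelman-Rubin statistic for each parameter
az.ess(trace, method="bulk")  # bulk effective sample size
az.ess(trace, method="tail")  # tail effective sample size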

Checking the Posterior Distribution#

It would be good to compare the posteriors to the priors, to see how far we moved away from those weakly informative normal (and, in one case, half-normal) priors we used and how well-behaved the posteriors look. Remember, it’s not a good sign if our posterior is very much influenced by our prior (unless we have a reason to have a strong prior).

To do this we need to assemble all of the information on our model, priors, and posteriors in one object, which takes a couple of extra steps. All of the information about our model so far is already stashed in the object called trace; we’ll now add a lot more to it.

The first task is to add samples of the posterior predictive distribution.

with linear_model:
    pm.sample_posterior_predictive(trace, extend_inferencedata=True, random_seed=prng)
Sampling: [Y_obs]
100.00% [2000/2000 00:00<00:00]

Now if we look at trace again, it has been extended with ‘posterior predictive’ values.

trace
arviz.InferenceData
    • <xarray.Dataset>
      Dimensions:  (chain: 2, draw: 1000)
      Coordinates:
        * chain    (chain) int64 0 1
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
      Data variables:
          alpha    (chain, draw) float64 1.186 1.09 1.108 ... 0.9491 1.061 0.8267
          beta     (chain, draw) float64 2.551 2.219 2.541 2.639 ... 2.348 2.336 2.694
          sigma    (chain, draw) float64 1.545 1.538 1.298 1.316 ... 1.43 1.526 1.681
      Attributes:
          created_at:                 2024-01-05T15:38:22.026528
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3
          sampling_time:              4.266082048416138
          tuning_steps:               1000

    • <xarray.Dataset>
      Dimensions:      (chain: 2, draw: 1000, Y_obs_dim_2: 100)
      Coordinates:
        * chain        (chain) int64 0 1
        * draw         (draw) int64 0 1 2 3 4 5 6 7 ... 993 994 995 996 997 998 999
        * Y_obs_dim_2  (Y_obs_dim_2) int64 0 1 2 3 4 5 6 7 ... 92 93 94 95 96 97 98 99
      Data variables:
          Y_obs        (chain, draw, Y_obs_dim_2) float64 4.573 -0.9243 ... 2.184
      Attributes:
          created_at:                 2024-01-05T15:38:24.852213
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3

    • <xarray.Dataset>
      Dimensions:                (chain: 2, draw: 1000)
      Coordinates:
        * chain                  (chain) int64 0 1
        * draw                   (draw) int64 0 1 2 3 4 5 ... 994 995 996 997 998 999
      Data variables: (12/17)
          energy_error           (chain, draw) float64 0.09413 -0.1333 ... 0.8728
          index_in_trajectory    (chain, draw) int64 1 1 -2 1 -1 -2 ... 1 -1 -1 2 -2 1
          energy                 (chain, draw) float64 192.2 190.7 ... 189.7 194.5
          acceptance_rate        (chain, draw) float64 0.7791 0.8531 ... 0.9098 0.4785
          step_size_bar          (chain, draw) float64 1.07 1.07 1.07 ... 1.079 1.079
          process_time_diff      (chain, draw) float64 0.000187 0.00019 ... 0.000173
          ...                     ...
          perf_counter_start     (chain, draw) float64 2.309e+05 ... 2.309e+05
          n_steps                (chain, draw) float64 3.0 3.0 3.0 1.0 ... 3.0 3.0 3.0
          perf_counter_diff      (chain, draw) float64 0.0001875 ... 0.0001736
          step_size              (chain, draw) float64 1.142 1.142 ... 1.059 1.059
          reached_max_treedepth  (chain, draw) bool False False False ... False False
          max_energy_error       (chain, draw) float64 0.4537 0.2801 ... 0.1599 1.357
      Attributes:
          created_at:                 2024-01-05T15:38:22.033224
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3
          sampling_time:              4.266082048416138
          tuning_steps:               1000

    • <xarray.Dataset>
      Dimensions:      (Y_obs_dim_0: 100)
      Coordinates:
        * Y_obs_dim_0  (Y_obs_dim_0) int64 0 1 2 3 4 5 6 7 ... 92 93 94 95 96 97 98 99
      Data variables:
          Y_obs        (Y_obs_dim_0) float64 3.386 -0.667 3.258 ... 1.937 3.367 1.848
      Attributes:
          created_at:                 2024-01-05T15:38:22.036109
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3

Again adding more information, we’ll now sample the prior predictive distribution (the predictions implied by the priors alone) and extend the trace data to include it.

idata = pm.sample_prior_predictive(model=linear_model)
Sampling: [Y_obs, alpha, beta, sigma]
trace.extend(idata)
trace
arviz.InferenceData
    • <xarray.Dataset>
      Dimensions:  (chain: 2, draw: 1000)
      Coordinates:
        * chain    (chain) int64 0 1
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
      Data variables:
          alpha    (chain, draw) float64 1.186 1.09 1.108 ... 0.9491 1.061 0.8267
          beta     (chain, draw) float64 2.551 2.219 2.541 2.639 ... 2.348 2.336 2.694
          sigma    (chain, draw) float64 1.545 1.538 1.298 1.316 ... 1.43 1.526 1.681
      Attributes:
          created_at:                 2024-01-05T15:38:22.026528
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3
          sampling_time:              4.266082048416138
          tuning_steps:               1000

    • <xarray.Dataset>
      Dimensions:      (chain: 2, draw: 1000, Y_obs_dim_2: 100)
      Coordinates:
        * chain        (chain) int64 0 1
        * draw         (draw) int64 0 1 2 3 4 5 6 7 ... 993 994 995 996 997 998 999
        * Y_obs_dim_2  (Y_obs_dim_2) int64 0 1 2 3 4 5 6 7 ... 92 93 94 95 96 97 98 99
      Data variables:
          Y_obs        (chain, draw, Y_obs_dim_2) float64 4.573 -0.9243 ... 2.184
      Attributes:
          created_at:                 2024-01-05T15:38:24.852213
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3

    • <xarray.Dataset>
      Dimensions:                (chain: 2, draw: 1000)
      Coordinates:
        * chain                  (chain) int64 0 1
        * draw                   (draw) int64 0 1 2 3 4 5 ... 994 995 996 997 998 999
      Data variables: (12/17)
          energy_error           (chain, draw) float64 0.09413 -0.1333 ... 0.8728
          index_in_trajectory    (chain, draw) int64 1 1 -2 1 -1 -2 ... 1 -1 -1 2 -2 1
          energy                 (chain, draw) float64 192.2 190.7 ... 189.7 194.5
          acceptance_rate        (chain, draw) float64 0.7791 0.8531 ... 0.9098 0.4785
          step_size_bar          (chain, draw) float64 1.07 1.07 1.07 ... 1.079 1.079
          process_time_diff      (chain, draw) float64 0.000187 0.00019 ... 0.000173
          ...                     ...
          perf_counter_start     (chain, draw) float64 2.309e+05 ... 2.309e+05
          n_steps                (chain, draw) float64 3.0 3.0 3.0 1.0 ... 3.0 3.0 3.0
          perf_counter_diff      (chain, draw) float64 0.0001875 ... 0.0001736
          step_size              (chain, draw) float64 1.142 1.142 ... 1.059 1.059
          reached_max_treedepth  (chain, draw) bool False False False ... False False
          max_energy_error       (chain, draw) float64 0.4537 0.2801 ... 0.1599 1.357
      Attributes:
          created_at:                 2024-01-05T15:38:22.033224
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3
          sampling_time:              4.266082048416138
          tuning_steps:               1000

    • <xarray.Dataset>
      Dimensions:  (chain: 1, draw: 500)
      Coordinates:
        * chain    (chain) int64 0
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
      Data variables:
          alpha    (chain, draw) float64 -8.633 0.6285 0.9858 ... 0.7458 17.74 -7.64
          sigma    (chain, draw) float64 2.044 0.2639 1.188 ... 0.2742 0.01635 1.556
          beta     (chain, draw) float64 22.25 -9.76 14.79 ... -2.453 -21.49 1.471
      Attributes:
          created_at:                 2024-01-05T15:38:24.914739
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3

    • <xarray.Dataset>
      Dimensions:      (chain: 1, draw: 500, Y_obs_dim_0: 100)
      Coordinates:
        * chain        (chain) int64 0
        * draw         (draw) int64 0 1 2 3 4 5 6 7 ... 493 494 495 496 497 498 499
        * Y_obs_dim_0  (Y_obs_dim_0) int64 0 1 2 3 4 5 6 7 ... 92 93 94 95 96 97 98 99
      Data variables:
          Y_obs        (chain, draw, Y_obs_dim_0) float64 11.6 -12.59 ... -6.303
      Attributes:
          created_at:                 2024-01-05T15:38:24.916177
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3

    • <xarray.Dataset>
      Dimensions:      (Y_obs_dim_0: 100)
      Coordinates:
        * Y_obs_dim_0  (Y_obs_dim_0) int64 0 1 2 3 4 5 6 7 ... 92 93 94 95 96 97 98 99
      Data variables:
          Y_obs        (Y_obs_dim_0) float64 3.386 -0.667 3.258 ... 1.937 3.367 1.848
      Attributes:
          created_at:                 2024-01-05T15:38:22.036109
          arviz_version:              0.17.0
          inference_library:          pymc
          inference_library_version:  5.10.3

Okay, now we can compare our priors and posteriors. Let’s do it for just one variable, \(\sigma\), for which our prior was a half normal distribution.

az.plot_dist_comparison(trace, var_names=["sigma"], figsize=(8, 5));

We can see here that the weakly informative prior, which has a relatively wide initial span, has been shrunk to a posterior that is in a tight bunch around the coefficient we set in the true data generating process.

Another typical check is whether the model can reproduce the patterns observed in the real data. We can use the plot for posterior/prior predictive checks to do this:

az.plot_ppc(trace, group="posterior", figsize=(8, 5));

While there are lots of diagnostic charts available in arviz, in this case it’s useful to roll our own to check visually how our model predicts the data, taking into account the heterogeneity in our data in a way that the posterior predictive check chart we just saw does not.

First, we’ll extract the samples that we already made of the posterior. Then we’ll use xarray (a package for working with labelled multi-dimensional data that is a dependency of PyMC) to multiply these samples through by the input data, \(X\). This will give us an output variable mu_pp that has shape (number of chains, number of samples, number of observations). You can see this below (and we’ll have to take an average over some of these dimensions to get a vector that has shape ‘number of observations’ only).

Don’t worry too much about the details of what’s going on; mostly it’s just constructing the equation we’re trying to estimate with the results of our Bayesian inference but with the added complication of multiple chains and multiple samples.

post = trace.posterior
mu_pp = post["alpha"] + post["beta"] * xr.DataArray(X, dims=["obs_id"])
mu_pp
<xarray.DataArray (chain: 2, draw: 1000, obs_id: 100)>
array([[[3.55188362, 1.03333122, 3.84872668, ..., 0.70244921,
         2.20109028, 1.96491666],
        [3.14771649, 0.95711675, 3.4059062 , ..., 0.66932045,
         1.97281636, 1.76739603],
        [3.46486513, 0.95640019, 3.76051926, ..., 0.62684345,
         2.11948207, 1.88425439],
        ...,
        [3.27251039, 0.94473845, 3.54686758, ..., 0.63892076,
         2.02403971, 1.80575625],
        [3.22343747, 0.82840243, 3.50572246, ..., 0.51374786,
         1.93889108, 1.71430012],
        [3.36897308, 0.85850681, 3.66486309, ..., 0.52868714,
         2.02251663, 1.78710128]],

       [[3.24236989, 0.84011602, 3.5255057 , ..., 0.52451305,
         1.95395177, 1.72868388],
        [3.12756083, 0.5430565 , 3.43217715, ..., 0.20350987,
         1.74139501, 1.49903685],
        [3.18480348, 0.93707966, 3.44972599, ..., 0.64177852,
         1.97926558, 1.76848852],
        ...,
        [3.12694743, 0.80882522, 3.40016727, ..., 0.50427529,
         1.88365225, 1.66627369],
        [3.22809547, 0.92166597, 3.49993718, ..., 0.61865221,
         1.99107153, 1.77478943],
        [3.32537335, 0.6657019 , 3.63884906, ..., 0.31627997,
         1.89889261, 1.64948576]]])
Coordinates:
  * chain    (chain) int64 0 1
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
Dimensions without coordinates: obs_id

Now we can plot the mean outcomes and the data, along with 94% highest density intervals for the mean outcomes and for the individual predictions.

fig, ax = plt.subplots()
ax.plot(X, Y, "o", ms=4, alpha=0.4, label="Data")
ax.plot(X, mu_pp.mean(axis=(0, 1)), label="Mean outcomes", alpha=1, lw=0.2, color="k")
az.plot_hdi(
    X,
    mu_pp,
    ax=ax,
    fill_kwargs={"alpha": 0.5, "color": "orchid", "label": "Mean Outcome 94% HPD"},
)
az.plot_hdi(
    X,
    trace["posterior_predictive"]["Y_obs"],
    ax=ax,
    fill_kwargs={"alpha": 0.2, "color": "coral", "label": "Outcome 94% HPD"},
)
ax.set_xlabel("Predictor (X)")
ax.set_ylabel("Outcome (Y)")
ax.set_title("Posterior predictive checks")
ax.legend(ncol=2, fontsize=10);

Fixed Effects and Categorical Variables As Regressors#

How do we go about fitting a model with a dummy or fixed effect using a categorical variable in Bayesian land? (At least when it’s an exogenous variable—we’ll come back to the case when the outcome variable is discrete later).

Let’s take the previous example and modify it to include a group-specific intercept for two groups—so a dummy variable that is 0 for group 0, and 1 for group 1. Our model will now be:

\[ \mu = \alpha + \beta x + \gamma d \]

where

\[ Y \mid \alpha, \beta, \sigma, \gamma \stackrel{\text{ind}}{\thicksim} \mathcal{N}(\mu, \sigma^2) \]

For our priors, we will assume:

  • \(\alpha \thicksim \mathcal{N}(0, 10)\)

  • \(\beta \thicksim \mathcal{N}(0, 10)\)

  • \(\sigma \thicksim \mathcal{N_{+}}(1)\) (the half-normal distribution)

  • \(\gamma \thicksim \mathcal{N}(0.5, 10)\)

where

\[\begin{split} \begin{align} f(y;\mu, \sigma) = \left\{\begin{array}{cll} \sqrt{\frac{2}{\pi\sigma^2}}\,\mathrm{e}^{-(y-\mu)^2/2\sigma^2} & & y \ge \mu \\[1em] 0 & & \text{otherwise}. \end{array}\right. \end{align} \end{split}\]

is the half-normal distribution.

Note that we are giving \(\gamma\) a prior mean that is not 0, because we expect the group 1 observations to have some influence on the outcome.

Let’s create the synthetic data for this model.

# True parameter values
α_true, β_true, σ_true, γ_true = 1, 2.5, 1.5, 6

# Size of dataset
size = 200

num_samples = 1000
num_chains = 4

# Predictor variables
X_cat = prng.standard_normal(size)
D = prng.binomial(1, 0.4, size)  # This chooses 1 or 0 with 0.4 prob for 1

# Simulate outcome variable
Y_cat = α_true + β_true * X_cat + γ_true*D + prng.standard_normal(size) * σ_true

And now a model much like we did before:

with pm.Model() as linear_model_with_dummy:
    # Priors for unknown model parameters
    α = pm.Normal("α", mu=0, sigma=10)
    β = pm.Normal("β", mu=0, sigma=10)
    γ = pm.Normal("γ", mu=0.5, sigma=10)  # prior mean of 0.5, as stated above
    σ = pm.HalfNormal("σ", sigma=1)

    # Expected value of outcome
    μ = α + β * X_cat + γ * D

    # Likelihood (sampling distribution) of observations
    Y_cat_obs = pm.Normal("Y_obs", mu=μ, sigma=σ, observed=Y_cat)

    # Sample the posterior
    trace = pm.sample(num_samples, return_inferencedata=True, chains=num_chains)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [α, β, γ, σ]
100.00% [8000/8000 00:00<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 8 seconds.

Now we check the trace summary statistics—we won’t look at the trace visually this time, because we know what the answer should be!

az.summary(trace, round_to=2)
     mean    sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
α    0.78  0.13    0.54     1.03        0.0      0.0   2638.60   2944.27    1.0
β    2.45  0.10    2.26     2.64        0.0      0.0   3589.22   2433.68    1.0
γ    5.99  0.20    5.60     6.36        0.0      0.0   2727.02   2809.06    1.0
σ    1.40  0.07    1.27     1.53        0.0      0.0   3748.42   3011.60    1.0

This is close to the parameters we used to generate the data.
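In this synthetic example we drew the dummy \(D\) directly, but in real data the group membership will often arrive as strings or categories. Here is a hedged sketch of turning such a column into a 0/1 regressor with pandas (the column names are made up for illustration):

# Hypothetical example: build a 0/1 dummy from a categorical column
df_groups = pd.DataFrame({"x": X_cat, "group": np.where(D == 1, "treated", "control")})
df_groups["treated_dummy"] = (df_groups["group"] == "treated").astype(int)
# df_groups["treated_dummy"] could then play the role of D in the model above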

Priors#

This section has benefitted from Probabilistic Programming and Bayesian Methods for Hackers and the prior choice recommendations page of the Stan wiki.

Choosing Priors#

While Bambi (which we’ll come to shortly) will choose priors for you, you can override them—and there are many situations where you may wish to do this. And for PyMC, you always have to specify your priors. But how do you choose?

There are two important paradigms to be aware of when choosing priors: objective priors, where the data influence the posterior the most, and subjective priors, where the economist expresses their views in the form of the prior.

The quintessential example of an objective prior is the flat prior, literally a uniform distribution over the entire possible range of the variable that is to be estimated. By choosing this, you are saying that you put equal weight on all possible values. You would very rarely use this in practice, and priors with soft boundaries are generally preferred over those with hard boundaries.

A subjective or informative prior, however, would have more probability density or mass on certain values, biasing the final estimate toward that part of the range. You might use this if there were lots of previous evidence for something, or if there were extra information outside of the model that you wanted to incorporate. An informative prior reflects a higher degree of certainty about where the final values might end up, and will take more evidence from the data to overcome.

In between these two extremes are weakly informative priors, which are the default in Bambi and a good place to start in PyMC models, especially if your data are appropriately normalised.

We’ve already seen some handy priors such as the normal distribution, half-normal distribution, and Student-t distribution (plus its half-Student-t equivalent). You can find a full list of priors supported by PyMC here, in the section on distributions. Some priors to check out are:

  • Cauchy and half-Cauchy

  • Beta

  • Gamma

  • Binomial (discrete)

  • Bernoulli (discrete)

  • Poisson (discrete)

If you need a generic prior as a place to start for anything, some people recommend \(\mathcal{N}(0, 1)\) or \(\text{Student-t}(3,0,1)\), assuming your data is appropriately normalised and could be negative or positive.
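One practical way to choose between candidate priors is simply to draw samples from them and see what values they imply, which you can do without building a full model; here is a small sketch using PyMC’s draw function:

# Draw samples from two candidate priors to see the range of values they imply
wide_prior_draws = pm.draw(pm.Normal.dist(mu=0, sigma=10), draws=1000, random_seed=seed_for_prng)
narrow_prior_draws = pm.draw(pm.Normal.dist(mu=0, sigma=1), draws=1000, random_seed=seed_for_prng)

print("N(0, 10) 10th-90th percentile range:", np.percentile(wide_prior_draws, [10, 90]).round(1))
print("N(0, 1) 10th-90th percentile range: ", np.percentile(narrow_prior_draws, [10, 90]).round(1))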

Scaling Data#

Aim to keep all the parameters in your model “scale-free”. This means adjusting them so that a unit increase is meaningful. This will help in numerous ways, including giving a chance for priors like \(\mathcal{N}(0, 1)\) or \(\mathcal{N}(0, 10)\) to work.

Some suggested ways of scaling your data could be:

  • Scale by the standard deviation of the data via \(x \longrightarrow x/\sigma_{x}\), for example, if you were looking at test scores you might divide by the standard deviation of all the test scores.

  • In a regression, take logs of (positive-constrained) predictors and outcomes, then the coefficients can be interpreted as elasticities, \(x \longrightarrow \ln x\)

  • If a variable has a typical value, say 4, then scale by that under a logarithm, \(x \longrightarrow \ln(x/4)\)
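As a concrete (and entirely made-up) illustration of the first two transformations in the list above:

# Hypothetical data to illustrate rescaling
df_scale = pd.DataFrame(
    {"test_score": [55.0, 63.0, 71.0, 48.0], "income": [20_000.0, 35_000.0, 90_000.0, 27_000.0]}
)

# Scale by the standard deviation of the data, x -> x / sigma_x
df_scale["test_score_scaled"] = df_scale["test_score"] / df_scale["test_score"].std()

# Take logs of a positive predictor, x -> ln(x)
df_scale["log_income"] = np.log(df_scale["income"])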

Other Bayesian Packages#

Although PyMC and Bambi (which will be featured in Bayesian Inference Made Easier) are the most accessible and, likely, most widely-used Bayesian inference packages in Python, they’re certainly not the only ones. Here are a selection of others you might wish to check out:

  • PyStan, the Python wrapper for the popular C++ Bayesian library Stan

  • cmdstanpy, another Python wrapper for Stan

  • Pyro, “deep probabilistic modelling, unifying the best of modern deep learning and Bayesian modelling”, built on Facebook’s PyTorch deep learning package

  • Bean Machine, “A universal probabilistic programming language to enable fast and accurate Bayesian analysis” from Facebook, also built on PyTorch.

  • Tensorflow Probability, “makes it easy to combine probabilistic models and deep learning on modern hardware (TPU, GPU)”, built on Google’s Tensorflow deep learning package

  • pomegranate, “implements fast and flexible probabilistic models ranging from individual probability distributions to compositional models such as Bayesian networks and hidden Markov models”.