Dustin Lennon
Applied Scientist

[GH07] Figure 3.1: Mothers' education and children's test scores

Tags: jupyter notebook, statsmodels, pystan, simple linear regression, bayesian analysis, non-informative priors

Simple Linear Regression Using a Bayesian Model

We replicate the results of a classical simple linear regression using a Bayesian model with non-informative priors. This is, effectively, "Hello, World" for the PyStan package.
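With no explicit priors, the posterior is proportional to the likelihood, so its mode coincides with the maximum likelihood estimate; for a normal linear model, the MLE of $(\alpha, \beta)$ is exactly the OLS fit:

$$
p(\alpha, \beta, \sigma \mid y) \;\propto\; \prod_{i=1}^{N} \mathcal{N}\!\left(y_i \mid \alpha + \beta x_i,\; \sigma^2\right)
$$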

[1]:
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
from matplotlib.figure import Figure
[2]:
import os
import numpy as np
import pandas as pd
import statsmodels.api as sm
import pystan
import arviz as az
[3]:
# pystan model cache: a local helper that pickles compiled Stan models
# so they aren't recompiled on every run
import model_cache
Dataset

We replicate [GH07], Figure 3.1, which uses the mothers' education and children's test scores dataset (kidiq.dta).

[4]:
# Load data
df = pd.read_stata("./data/kidiq.dta")
df.head()
[4]:
   kid_score  mom_hs      mom_iq  mom_work  mom_age
0         65     1.0  121.117529         4       27
1         98     1.0   89.361882         4       25
2         85     1.0  115.443165         4       27
3         83     1.0   99.449639         3       25
4        115     1.0   92.745710         4       27
[5]:
# data
x   = df['mom_hs']
y   = df['kid_score']

# jitter (for scatterplot)
np.random.seed(1111)
xj  = x + np.random.uniform(-0.03,0.03,size=len(x))
[6]:
# Scatterplot
fig = plt.figure()
ax = fig.gca()
pc = ax.scatter(xj, y, alpha=0.25, lw=0)
Baseline Model: Classical OLS / Simple Linear Regression
[7]:
# OLS fit
X = np.column_stack(np.broadcast_arrays(1, x))   # design matrix [1, x]
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:              kid_score   R-squared:                       0.056
Model:                            OLS   Adj. R-squared:                  0.054
Method:                 Least Squares   F-statistic:                     25.69
Date:                Mon, 07 Oct 2019   Prob (F-statistic):           5.96e-07
Time:                        17:43:26   Log-Likelihood:                -1911.8
No. Observations:                 434   AIC:                             3828.
Df Residuals:                     432   BIC:                             3836.
Df Model:                           1                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const         77.5484      2.059     37.670      0.000      73.502      81.595
x1            11.7713      2.322      5.069      0.000       7.207      16.336
==============================================================================
Omnibus:                       11.077   Durbin-Watson:                   1.464
Prob(Omnibus):                  0.004   Jarque-Bera (JB):               11.316
Skew:                          -0.373   Prob(JB):                      0.00349
Kurtosis:                       2.738   Cond. No.                         4.11
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[8]:
# Add a trend line based on OLS fit
yhat = results.predict([[1, 0], [1, 1]])   # fitted values at x = 0 and x = 1
trend = Line2D([0,1], yhat, color='#ff7f0e')
ax.add_artist(trend)
ax.figure
[8]: (figure: jittered scatterplot with the fitted OLS trend line)
Non-informative Bayesian Model
[9]:
model_code = """
    data {
        int<lower=0> N;
        vector[N] x;
        vector[N] y;
    }
    parameters {
        real alpha;
        real beta;
        real<lower=0> sigma;
    }
    model {
        // no explicit priors: Stan assigns implicit (improper) uniform
        // priors over each parameter's declared support
        y ~ normal(alpha + beta * x, sigma);
    }
"""

mod = model_cache.retrieve(model_code)
Using cached StanModel
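A minimal sketch of what model_cache.retrieve might do, following the pickle-based caching recipe from the PyStan docs (the cache directory and file naming here are assumptions):

import hashlib
import os
import pickle
import pystan

def retrieve(model_code, cache_dir=".stan_cache"):
    # Key the cache on a hash of the model code, so edits trigger a rebuild
    key = hashlib.md5(model_code.encode("utf-8")).hexdigest()
    path = os.path.join(cache_dir, "model_{}.pkl".format(key))
    if os.path.exists(path):
        print("Using cached StanModel")
        with open(path, "rb") as fh:
            return pickle.load(fh)
    # Compile (slow) and cache the result for subsequent runs
    model = pystan.StanModel(model_code=model_code)
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "wb") as fh:
        pickle.dump(model, fh)
    return model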
[10]:
model_data = {
    'N' : len(x),
    'x' : x,
    'y' : y
}

fit = mod.sampling(data=model_data, iter=1000, chains=4, seed=4792)
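Before summarizing the posterior, it's worth confirming that the four chains mixed. A quick sketch using arviz (assuming az.from_pystan accepts this fit object, as it does for PyStan 2):

# Convert the PyStan fit to InferenceData and inspect the traces
idata = az.from_pystan(posterior=fit)
az.plot_trace(idata, var_names=["alpha", "beta", "sigma"])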
[11]:
print(fit)
Inference for Stan model: anon_model_6fddc89ca61cc8fa76fc51c1448e28ab.
4 chains, each with iter=1000; warmup=500; thin=1; 
post-warmup draws per chain=500, total post-warmup draws=2000.

        mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
alpha  77.47    0.07   2.01  73.38  76.16  77.55  78.77  81.19    895    1.0
beta   11.83    0.08   2.27   7.41  10.35  11.76  13.31  16.23    901    1.0
sigma  19.93    0.02   0.69  18.66  19.47  19.92  20.37  21.41   1099    1.0
lp__   -1511    0.04   1.21  -1514  -1511  -1511  -1510  -1510    801    1.0

Samples were drawn using NUTS at Mon Oct  7 17:43:28 2019.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
Posterior Samples
[12]:
samp = fit.extract(permuted=True)

def posterior_plot(samp, k):
    # KDE of the posterior draws for parameter k
    fig, ax = plt.subplots()
    az.plot_kde(samp[k], ax=ax)
    ax.set_title(k)
    return fig
[13]:
posterior_plot(samp, 'alpha')
[14]:
posterior_plot(samp, 'beta')
[15]:
posterior_plot(samp, 'sigma')
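With flat priors, the posterior means should land close to the classical point estimates, and they do: 77.47 vs. 77.55 for the intercept and 11.83 vs. 11.77 for the slope. A quick side-by-side check, reusing results and samp from above:

# Posterior means from Stan next to the OLS point estimates
comparison = pd.DataFrame({
    "posterior_mean": [samp["alpha"].mean(), samp["beta"].mean()],
    "ols_estimate": np.asarray(results.params),
}, index=["alpha", "beta"])
print(comparison)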
Bibliography

[GH07] Gelman, A., and Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press. Chapter 3.