POLS 2580

Quantifying Uncertainty:
Confidence Intervals &
Hypothesis Tests

Updated Dec 4, 2025

Overview

Class Plan

  • Setup
  • Announcements
  • Feedback
  • Topics:
    • Sampling distributions and standard errors
    • Confidence intervals
    • Hypothesis testing
    • Quantifying uncertainty for regression
    • Instrumental Variables

Setup: Packages for today

## Packages for today
the_packages <- c(
  ## R Markdown
  "kableExtra","DT","texreg","htmltools",
  ## Tidyverse
  "tidyverse", "lubridate", "forcats", "haven", "labelled",
  ## Extensions for ggplot
  "ggmap","ggrepel", "ggridges", "ggthemes", "ggpubr", 
  "patchwork",
  "GGally", "scales", "dagitty", "ggdag", "ggforce",
  # Data 
  "COVID19","maps","mapdata","qss","tidycensus", "dataverse", 
  # Analysis
  "DeclareDesign", "easystats", "zoo", "boot", "modelr" ,"purrr"
)

## Define a function to load (and if needed install) packages

ipak <- function(pkg){
    new.pkg <- pkg[!(pkg %in% installed.packages()[, "Package"])]
    if (length(new.pkg)) 
        install.packages(new.pkg, dependencies = TRUE)
    sapply(pkg, require, character.only = TRUE)
}

## Install (if needed) and load libraries in the_packages
ipak(the_packages)
   kableExtra            DT        texreg     htmltools     tidyverse 
         TRUE          TRUE          TRUE          TRUE          TRUE 
    lubridate       forcats         haven      labelled         ggmap 
         TRUE          TRUE          TRUE          TRUE          TRUE 
      ggrepel      ggridges      ggthemes        ggpubr     patchwork 
         TRUE          TRUE          TRUE          TRUE          TRUE 
       GGally        scales       dagitty         ggdag       ggforce 
         TRUE          TRUE          TRUE          TRUE          TRUE 
      COVID19          maps       mapdata           qss    tidycensus 
         TRUE          TRUE          TRUE          TRUE          TRUE 
    dataverse DeclareDesign     easystats           zoo          boot 
         TRUE          TRUE          TRUE          TRUE          TRUE 
       modelr         purrr 
         TRUE          TRUE 

New packages

To easily load survey data for our question, we’ll need the anesr package, which loads data from the American National Election Studies into R

# # Uncomment to install the package used to download NES survey data
# library(devtools)
# devtools::install_github("jamesmartherus/anesr")
require(anesr)

Announcements

  • No lab this week.
  • Instead, work on Assignment 3 (due November 19)
  • Upload a .qmd file to this week's lab assignment to show your progress

Assignment 3

Assignment 3

  • Download the template for the final paper here

  • Due November 19

  • This week’s lab: Upload what progress you’ve made to canvas

  • Goals for Assignment 3

    • Find your paper and replication archive
    • Start filling in Introduction, Theory & Expectations, Design, Data
    • Begin exploring the replication data and code

Introduction

  • Clearly articulates the general research question

  • Concisely summarizes the specific paper you’re replicating

  • Describes your empirical strategy and extension to this paper

  • Provides an outline of the rest of the paper and previews your results.

Theory and Expectations

  • What is the phenomenon you are studying? (Research Question/Outcome of Interest)

  • Why should we care about this phenomenon? (Context/Motivation/Importance)

Then it should turn to addressing:

  • What factors do scholars think explain this phenomenon? (Literature Review/Key predictors/Alternative explanations)

Which should lead to a discussion of

  • What are the empirical implications of these claims? (Testable hypotheses)

Design

  • Describe the empirical research design of the paper you are replicating:

  • Explain how the paper tests its theoretical expectations in terms of the empirical estimates of its models (e.g. the regression coefficients of a linear model).

  • Try writing out the core model in terms of potential outcomes or a regression model.

Data

  • Identify the source or sources of data for the paper

  • Define the unit of observation and number of observations in your data

  • Explain how your key concepts (outcome variable, key predictors, covariates/alternative explanations) are operationalized and measured

  • Describe the distributions of these data using a table and/or figure(s).

  • Interpret these distributions for your reader.

    • What does a typical observation in the data look like?
    • Are some data skewed?
    • Are there big outliers?
    • How does the paper address these concerns?
    • How does the paper leverage these data in its empirical design?

Replication and Extension

  • Choose a portion of the paper’s core results to replicate

  • Extension options

    • Additional exploratory data analysis to explain the logic of the design
    • Additional analyses of main result
      • Subgroup heterogeneity
      • Alternative specifications/models/inference

Sampling distributions and standard errors

Goals:

  • A sampling distribution is a theoretical distribution of estimates obtained in repeated sampling

    • What could have happened?
  • A standard error (SE) is the standard deviation of the sampling distribution

  • We can calculate SEs via simulation and analytically

  • We can use SEs to construct confidence intervals and conduct hypothesis tests allowing us to quantify uncertainty

Populations and samples

  • Population: All the cases from which you could have sampled

  • Parameter: A quantity or quantities of interest often generically called \(\theta\) (“theta”). What we want to learn about our population

  • Sample: A (random) draw of observations from that population

  • Sample Size: The number of observations in your draw (without replacement)

Estimators, estimates, and statistics

  • Estimator: A rule for calculating an estimate of our parameter of interest.

  • Estimate: The value produced by some estimator for some parameter from some data. Often called \(\hat{\theta}\)

  • Unbiased estimators: \(E(\hat{\theta})=\theta\) On average, the estimates produced by some estimator are centered around the truth

  • Consistent estimators: \(\lim_{n\to \infty} \hat{\theta}_n = \theta\) As the sample size increases, the estimates from an estimator converge in probability to the parameter value

  • Statistic: A summary of the data (mean, regression coefficient, \(R^2\)). An estimator without a specified target of inference

Distributions and standard errors

  • Sampling Distribution: How some estimate would vary if you took repeated samples from the population

  • Standard Error: The standard deviation of the sampling distribution

  • Resampling Distribution: How some estimate would vary if you took repeated samples from your sample WITH REPLACEMENT

    • “Sampling from our sample, as the sample was sampled from the population.”

Sampling distributions

  • Treat the 2024 NES pilot as the population

  • Take repeated samples of size N = 10, 30, 300

  • For each sample of size N, calculate the sample mean of age

  • Plot the distribution of sample means (i.e. the sampling distribution)
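
Before the full simulation-and-plotting code below, here is a minimal sketch of the same plan in base R (it assumes the nes24 df and its age variable, loaded in the first lines of the code that follows):

## Minimal sketch of the plan above (assumes the nes24 df loaded below)
ages <- na.omit(df$age)
sample_means <- replicate(1000, mean(sample(ages, size = 30)))
mean(sample_means) # close to the population mean, mean(ages)
sd(sample_means)   # the standard error
hist(sample_means) # approximately Normal (CLT)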

# Load Data
load(url("https://pols2580.paultesta.org/files/data/nes24.rda"))

# ---- Population ----

# Population average
mu_age <- mean(df$age, na.rm=T)
# Population standard deviation
sd_age <- sd(df$age, na.rm = T)

# ---- Function to Take Repeated Samples From Data ----

sample_data_fn <- function(
    dat=df, var=age, samps=1000, sample_size=10,
    resample = F){
  if(resample == F){
  df <- tibble(
  sim = 1:samps,
  distribution = "Sampling",
  size = sample_size,
  sample_from = "Population",
  pop_mean = dat %>% pull(!!enquo(var)) %>% mean(., na.rm=T),
  pop_sd = dat %>% pull(!!enquo(var)) %>% sd(., na.rm=T),
  se_asymp = pop_sd / sqrt(size),
  ll_asymp = pop_mean - 1.96*se_asymp,
  ul_asymp = pop_mean + 1.96*se_asymp,
) %>% 
  mutate(
    sample = purrr::map(sim, ~ slice_sample(dat %>% select(!!enquo(var)), n = sample_size, replace = F)),
    sample_mean = purrr::map_dbl(sample, \(x) x %>% pull(!!enquo(var)) %>% mean(.,na.rm=T)),
    ll = sample_mean - 1.96*sd(sample_mean),
    ul = sample_mean + 1.96*sd(sample_mean)
  )
  }
  if(resample == T){
    df <- tibble(
  sim = 1:samps,
  distribution = "Resampling",
  size = sample_size,
  sample_from = "Sample",
  pop_mean = dat %>% pull(!!enquo(var)) %>% mean(., na.rm=T),
  pop_sd = dat %>% pull(!!enquo(var)) %>% sd(., na.rm=T),
  se_asymp = pop_sd / sqrt(size),
  ll_asymp = pop_mean - 1.96*se_asymp,
  ul_asymp = pop_mean + 1.96*se_asymp,
) %>% 
  mutate(
    sample = purrr::map(sim, ~ slice_sample(dat %>% select(!!enquo(var)), n = sample_size, replace = T)),
    sample_mean = purrr::map_dbl(sample, \(x) x %>% pull(!!enquo(var)) %>% mean(.,na.rm=T))
  )
  }
  return(df)
}

# ---- Plot Single Distribution -----

plot_distribution <- function(the_pop,the_samp, the_var, ...){
  mu_pop <- the_pop %>% pull(!!enquo(the_var)) %>% mean(., na.rm=T)
  mu_samp <- the_samp %>% pull(!!enquo(the_var)) %>% mean(., na.rm=T)
  ll <- the_pop %>% pull(!!enquo(the_var)) %>% as.numeric() %>%  min(., na.rm=T)
  ul <- the_pop %>% pull(!!enquo(the_var)) %>% as.numeric() %>% max(., na.rm=T)
  p<- the_samp %>% 
    ggplot(aes(!!enquo(the_var)))+
    geom_density()+
    geom_rug()+
    theme_void()+
    geom_vline(xintercept = mu_samp, col = "red")+
    geom_vline(xintercept = mu_pop, col = "grey40",linetype = "dashed")+
    xlim(ll,ul)
  return(p)
}

# ---- Plot multiple distributions ----

plot_samples <- function(pop, x, variable,n_rows = 4, ...){
  sample_plots <- x$sample[1:(4*n_rows)] %>% 
  purrr::map( \(x) plot_distribution(the_pop=pop, the_samp = x, 
                                     the_var = !!enquo(variable)))
  p <- wrap_elements(wrap_plots(sample_plots[1:(4*n_rows)], ncol=4))
  return(p)
  
}

# ---- Plot Combined Figure ----

plot_figure_fn <- function(
    d=df, 
    v=age, 
    sim=1000, 
    size=10,
    rows = 4){
  # Population average
  mu <- d %>% pull(!!enquo(v)) %>% mean(., na.rm=T)
  sd <- d %>% pull(!!enquo(v)) %>% sd(., na.rm=T)
  se <- sd/sqrt(size)
  # Range
  ll <- d %>% pull(!!enquo(v)) %>% as.numeric() %>%  min(., na.rm=T)
  ul <- d %>% pull(!!enquo(v)) %>% as.numeric() %>% max(., na.rm=T)
  # Population standard deviation
  # Sample data
  samp_df <- sample_data_fn(dat=d, var = !!enquo(v), samps = sim, sample_size = size)
  # Plot Population
  p_pop <- d %>%
    ggplot(aes(!!enquo(v)))+
      geom_density(col ="grey60")+
      geom_rug(col = "grey60", )+
      geom_vline(xintercept = mu, col="grey40", linetype="dashed")+
      theme_void()+
      labs(title ="Population")+
      xlim(ll,ul)+
      theme(plot.title = element_text(hjust = 0))

  
  p_samps <- plot_samples(pop=d, x= samp_df,variable = !!enquo(v),
                          n_rows = rows)
  p_samps <- p_samps + 
    ggtitle(paste("Repeated samples of size N =",size,"from the population"))+
    theme(plot.title = element_text(hjust = 0.5), 
          plot.background = element_rect(
            fill = NA, colour = 'black', linewidth = 2)
          )
  
  
  p_dist <- samp_df %>% 
  ggplot(aes(sample_mean))+
  geom_density(col="red",aes(y= after_stat(ndensity)))+
  geom_rug(col="red")+
  geom_density(data = df, aes(!!enquo(v), y= after_stat(ndensity)),
               col="grey60")+
  geom_vline(xintercept = mu, col="grey40", linetype="dashed")+
  xlim(ll,ul)+
  theme_void()+
    labs(
      title = "Sampling Distribution"
    )+  theme(plot.title = element_text(hjust = 0))
  
  range_upper_df <- tibble(
  x = seq( ((ll+ul)/2 -5), ((ll+ul)/2 +5), length.out = 20),
  xend = seq(ll-5, ul+5, length.out = 20),
  y = rep(9, 20),
  yend = rep(1, 20)
)
p_upper <- range_upper_df %>% 
  ggplot(aes(x=x, xend = xend, y=y,yend=yend))+
  geom_segment(
    arrow = arrow(length = unit(0.05, "npc"))
  )+
  theme_void()+
  coord_fixed(ylim=c(0,10),
              xlim =c(ll-5,ul+5),clip="off")
  # Lower
  range_df <- samp_df %>% 
  summarise(
    min = min(sample_mean),
    max = max(sample_mean),
    mean = mean(sample_mean)
  )
  
  plot_df <- tibble(
  id = 1:50,
  # x = sort(rnorm(50, mu, sd)),
  x = sort(runif(50, ll, ul)),
  xend = sort(rnorm(50, mu, se)),
  y = 9,
  yend = 1
)

p_lower <- plot_df %>%
  ggplot(aes(x,y, group =id))+
  geom_segment(aes(xend=xend, yend=yend),
               col = "red",arrow = arrow(length = unit(0.05, "npc"))
               )+
  theme_void()+
  coord_fixed(ylim=c(0,10),xlim = c(ll,ul),clip="off")

  
  design <-"##AAAA##
            ##AAAA##
            ##AAAA##
            BBBBBBBB
            BBBBBBBB
            #CCCCCC#
            #CCCCCC#
            #CCCCCC#
            #CCCCCC#
            DDDDDDDD
            DDDDDDDD
            ##EEEE##
            ##EEEE##
            ##EEEE##"
  
  fig <- p_pop / p_upper / p_samps / p_lower / p_dist +
    plot_layout(design = design)
  return(fig)


  
  
  
}

# ---- Samples and Figures Varying Sample Size ----
## N = 10
set.seed(1234)
samp_n10 <- sample_data_fn(sample_size  = 10, samps = 1000)
set.seed(1234)
fig_n10 <- plot_figure_fn(v=age,size = 10)

## N = 30
set.seed(1234)
samp_n30 <- sample_data_fn(sample_size  = 30, samps = 1000)
set.seed(1234)
fig_n30 <- plot_figure_fn(size = 30,rows=4)

## N = 300
set.seed(1234)
samp_n300 <- sample_data_fn(sample_size  = 300, samps = 1000)
set.seed(1234)
fig_n300 <- plot_figure_fn(size = 300)

As the sample size increases:

  • The width of the sampling distribution decreases (LLN)

  • The shape of the sampling distribution approximates a Normal distribution (CLT)

Standard errors

  • The standard error (SE) is simply the standard deviation of the sampling distribution.

  • The SE decreases as the sample size increases (by the LLN)

  • Approximately 95% of the sample means will be within 2 SEs of the population mean (CLT)

se_df <- tibble(
  `Sample Size` = factor(paste("N =",c(10,30, 300))),
  se = c(sd(samp_n10$sample_mean),
         sd(samp_n30$sample_mean),
         sd(samp_n300$sample_mean)),
  SE = paste("SE =", round(se,2)),
  ll = mu_age,
  ul = mu_age + se,
  y = c(.3,.3,.45),
  yend = y
)

ci_df <- tibble(
  `Sample Size` = factor(paste("N =",c(10,30, 300))),
  se = c(sd(samp_n10$sample_mean),
         sd(samp_n30$sample_mean),
         sd(samp_n300$sample_mean)),
  mu = mu_age,
  ll = round(mu_age - 1.96 *se,2),
  ul = round(mu_age + 1.96 *se,2),
  ci = paste("95 % Coverage Interval [",ll,";",ul,"]",sep=""),
  y = c(.3,.3,.45),
  yend = y
)
sim_df <- samp_n10 %>% 
  bind_rows(samp_n30) %>% 
  bind_rows(samp_n300) %>% 
  mutate(
    `Sample Size` = factor(paste("N =",size))
    ) %>% 
  left_join(ci_df) %>% 
  mutate(
    Coverage = case_when(
      sample_mean > ll_asymp & sample_mean < ul_asymp  & size == 10~ "#F8766D",
      sample_mean > ll_asymp & sample_mean < ul_asymp  & size == 30~ "#00BA38",
      sample_mean > ll_asymp & sample_mean < ul_asymp  & size == 300~ "#619CFF",
      T ~ "grey"
    )
  )



fig_se <- sim_df %>% 
  ggplot(aes(sample_mean, col = `Sample Size`))+
  geom_density()+
  geom_rug()+
  geom_vline(xintercept = mu_age, linetype = "dashed")+
  theme_minimal()+
  facet_wrap(~`Sample Size`, ncol=1)+
  ylim(0,.5)+
  guides(col="none")+
  geom_segment(
    data = se_df,
    aes(x= ll, xend =ul, y = y, yend = yend)
  )+
  geom_text(
    data = se_df,
    aes(x = ul, y =y, label = SE),
    hjust = -.25
  ) +
  labs(
    y = "",
    x = "Sampling Distributions of Sample Means",
    title = "Standard Errors decrease with Sample Size"
  )

fig_coverage <- sim_df %>% 
  ggplot(aes(sample_mean,col=`Sample Size`))+
  geom_density()+
  geom_rug(col=sim_df$Coverage)+
  geom_vline(xintercept = mu_age, linetype = "dashed")+
  theme_minimal()+
  facet_wrap(~`Sample Size`, ncol=1)+
  ylim(0,.55)+
  guides(col="none")+
  geom_segment(
    data = ci_df,
    aes(x= ll, xend =ul, y = y, yend = yend)
  )+
  geom_text(
    data = ci_df,
    aes(x = mu, y =y, label = ci),
    hjust = .5,
    nudge_y =.1
  ) +
  labs(
    y = "",
    x = "Sampling Distributions of Sample Means",
    title = "Approximately 95% of sample means are within 2 SE of the population mean"
  )

How do we calculate a standard error from a single sample?

Calculating standard errors

  • Simulation:
    • Treat sample as population
    • Sample with replacement (“bootstrapping”)
    • Estimate SE from standard deviation of resampling distribution (“plug-in principle”)
  • Analytic:
    • Characterize the sampling distribution from the sample mean and variance via asymptotic theory (the LLN and CLT)
    • For a sample mean, \(\bar{x}\):

\[ SE_{\bar{x}} = \frac{\sigma_x}{\sqrt{n}} \]
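
As a minimal sketch of both routes for a single sample (assuming the df$age variable from the nes24 data loaded earlier):

## Sketch: one sample, two ways to estimate its SE
one_sample <- sample(na.omit(df$age), size = 30)

## Analytic: plug the sample SD in for sigma_x
se_analytic <- sd(one_sample) / sqrt(30)

## Simulation: resample the sample WITH replacement (bootstrap)
boot_means <- replicate(
  1000, mean(sample(one_sample, size = 30, replace = TRUE)))
se_boot <- sd(boot_means)

c(analytic = se_analytic, bootstrap = se_boot) # typically close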

plot_resampling_fn <- function(d=df, v=age, sim=1000, size=10,rows=3){
  # Population average
  mu <- d %>% pull(!!enquo(v)) %>% mean(., na.rm=T)
  # Population standard deviation and SE
  sd <- d %>% pull(!!enquo(v)) %>% sd(., na.rm=T)
  se <- sd/sqrt(size)
  # Range
  ll <- d %>% pull(!!enquo(v)) %>% as.numeric() %>%  min(., na.rm=T)
  ul <- d %>% pull(!!enquo(v)) %>% as.numeric() %>% max(., na.rm=T)
  # Resampling with replace
  # Draw 1 Sample
  sample <- sample_data_fn(dat=d, var = !!enquo(v), samps = 1, sample_size = size, resample = F)
  samp_df <- as.data.frame(sample$sample)
  # Resample from sample with replacement
  resamp_df <- sample_data_fn(dat=samp_df, var = !!enquo(v), samps = sim, sample_size = size, resample = T)
  # Plot Population
  p_pop <- d %>%
    ggplot(aes(!!enquo(v)))+
      geom_density(col ="grey60")+
      geom_rug(col = "grey60", )+
      geom_vline(xintercept = mu, col="grey40", linetype="dashed")+
      theme_void()+
      labs(title ="Population")+
      xlim(ll,ul)+
      theme(plot.title = element_text(hjust = 0))

  p_samp <- plot_distribution(the_pop = d,
                              the_samp = samp_df,
                              the_var = age)+
    labs(title ="Sample")+
      xlim(ll,ul)+
      theme(plot.title = element_text(hjust = 0))
  
  p_samps <- plot_samples(pop=d, x= resamp_df,variable = !!enquo(v), n_rows =rows)
  p_samps <- p_samps + 
    ggtitle(paste("Repeated samples with replacement\nof size N =",size,"from sample"))+
    theme(plot.title = element_text(hjust = 0.5), 
          plot.background = element_rect(
            fill = NA, colour = 'black', linewidth = 2)
          )
  
  # Resampling Distribution
  
  
  p_dist <- resamp_df %>% 
  ggplot(aes(sample_mean))+
  geom_density(col="red",aes(y= after_stat(ndensity)))+
  geom_rug(col="red")+
  geom_density(data = df, aes(!!enquo(v), y= after_stat(ndensity)),
               col="grey60")+
  geom_vline(xintercept = unique(resamp_df$pop_mean), col="red", linetype="solid")+
  geom_vline(xintercept = mu, col="grey40", linetype="dashed")+
  xlim(ll,ul)+
  theme_void()+
    labs(
      title = "Reampling Distribution"
    )+  theme(plot.title = element_text(hjust = 0))
  
   range_upper_df <- tibble(
  x = seq( ((ll+ul)/2 -5), ((ll+ul)/2 +5), length.out = 20),
  xend = seq(ll-5, ul+5, length.out = 20),
  y = rep(9, 20),
  yend = rep(1, 20)
)
p_upper <- range_upper_df %>% 
  ggplot(aes(x=x, xend = xend, y=y,yend=yend))+
  geom_segment(
    arrow = arrow(length = unit(0.05, "npc"))
  )+
  theme_void()+
  coord_fixed(ylim=c(0,10),
              xlim =c(ll-5,ul+5),clip="off")
  # Lower
  range_df <- resamp_df %>% 
  summarise(
    min = min(sample_mean),
    max = max(sample_mean),
    mean = mean(sample_mean)
  )
  
  plot_df <- tibble(
  id = 1:50,
  # x = sort(rnorm(50, mu, sd)),
  x = sort(runif(50, ll, ul)),
  xend = sort(rnorm(50, unique(resamp_df$pop_mean), se)),
  y = 9,
  yend = 1
)

p_lower <- plot_df %>%
  ggplot(aes(x,y, group =id))+
  geom_segment(aes(xend=xend, yend=yend),
               col = "red",arrow = arrow(length = unit(0.05, "npc"))
               )+
  theme_void()+
  coord_fixed(ylim=c(0,10),xlim = c(ll,ul),clip="off")

  
  design <-"##AAAA##
            ##AAAA##
            ##AAAA##
            ##BBBB##
            ##BBBB##
            ##BBBB##            
            CCCCCCCC
            CCCCCCCC
            #DDDDDD#
            #DDDDDD#
            #DDDDDD#
            #DDDDDD#
            EEEEEEEE
            EEEEEEEE
            ##FFFF##
            ##FFFF##
            ##FFFF##"
  
  fig <- p_pop / p_samp /p_upper / p_samps / p_lower / p_dist +
    plot_layout(design = design)
  return(fig)


  
  
  
}
set.seed(123)
resamp_n10 <- sample_data_fn(
  dat = sample_data_fn(samps = 1, sample_size = 10, resample = T)$sample %>%  as.data.frame(),
  sample_size = 10, 
  resample = T)
set.seed(123)
fig_n10_bs <- plot_resampling_fn(size=10)

set.seed(12345)
resamp_n30 <- sample_data_fn(
  dat = sample_data_fn(samps = 1, sample_size = 30, resample = T)$sample %>%  as.data.frame(),
  samps = 1000, sample_size = 30, resample = T)

set.seed(12345)
fig_n30_bs <- plot_resampling_fn(size=30)

set.seed(1234)
resamp_n300 <- sample_data_fn(
  dat = sample_data_fn(samps = 1, sample_size = 300, resample = T)$sample %>%  as.data.frame(),
  samps = 1000, sample_size = 300, resample = T)
set.seed(1234)
fig_n300_bs <- plot_resampling_fn(size=300)

Sample Size   Bootstrap SE   Analytic SE
N = 10        5.74           5.61
N = 30        2.75           3.24
N = 300       1.07           1.02

Confidence intervals

Confidence intervals

Confidence intervals:

  • provide a way of quantifying uncertainty about estimates

  • describe a range of plausible values for an estimate

  • are a function of the standard error of the estimate and a critical value determined by \(\alpha\), which encodes the degree of confidence we want

Calculating a confidence interval

  • Choose a level of confidence, \((1-\alpha)\times 100\%\)

    • \(\alpha = 0.05\) corresponds to a 95% confidence level.
  • Derive the sampling distribution of the estimator

    • Simulation: bootstrap re-sampling
    • Analytically: computing its mean and variance.
  • Compute the standard error

  • Compute the critical value \(z_{\alpha/2}\)

    • e.g., \(z_{0.025} = \Phi^{-1}(0.975) \approx 1.96\) for a 95% CI
  • Compute the lower and upper confidence limits

    • lower limit = \(\hat{\theta} - z_{\alpha/2}\times SE\)
    • upper limit = \(\hat{\theta} + z_{\alpha/2}\times SE\)
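
Here is a minimal sketch of those steps for the mean of age (assuming the nes24 df from earlier; the estimator is the sample mean and the SE is analytic):

## Sketch: a 95% CI for a sample mean, step by step
x <- sample(na.omit(df$age), size = 300) # one sample
theta_hat <- mean(x)                     # estimate
se <- sd(x) / sqrt(length(x))            # analytic SE
z <- qnorm(1 - 0.05 / 2)                 # critical value, ~1.96 for alpha = 0.05
c(lower = theta_hat - z * se, upper = theta_hat + z * se)
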
resamp_df <- 
  resamp_n10 %>% 
  bind_rows(resamp_n30) %>% 
  bind_rows(resamp_n300) %>% 
  mutate(
    `Sample Size` = factor(paste("N =",size))
    )

resamp_ci_df <- tibble(
  `Sample Size` = factor(paste("N =",c(10,30,300))),
  mu = unique(resamp_df$pop_mean),
  ll = unique(resamp_df$ll_asymp),
  ul = unique(resamp_df$ul_asymp),
  y = c(.3, .3,.5)
)

fig_ci1 <- resamp_df %>% 
  ggplot(aes(sample_mean,
             col = `Sample Size`))+
  geom_density()+
  geom_rug()+
  geom_vline(xintercept = mu_age, linetype = "dashed")+
  geom_vline(data = resamp_ci_df,
             aes(xintercept = mu,
                 col = `Sample Size`))+
  geom_segment(data = resamp_ci_df,
               aes(x = ll, xend =ul, y = y, yend =y,
                   col = `Sample Size`))+
  facet_wrap(~`Sample Size`, ncol=1)+
  theme_minimal()+
  labs(
    y = "",
    x = "Resampling Distribution",
    title = "95% Confidence Intervals"
  )
  

samp_ci_df <- samp_n10 %>% 
  bind_rows(samp_n30) %>% 
  bind_rows(samp_n300) %>% 
  mutate(
    `Sample Size` = factor(paste("N =",size))
    ) %>% 
  mutate(
    Coverage = case_when(
      pop_mean > ll & pop_mean < ul ~ "red",
      T ~ "black"
    )
  )

fig_ci2 <- samp_ci_df %>% 
  filter(sim %in% 1:100) %>% 
  filter(size == 10) %>% 
  ggplot(aes(y = sample_mean, x= sim))+
  geom_pointrange(aes(ymin = ll, ymax =ul, col=Coverage))+
  geom_hline(yintercept = mu_age, linetype = "dashed")+
  coord_flip()+
  theme_minimal()+
  guides(col = "none")+
  facet_wrap(~`Sample Size`)

fig_ci3 <- samp_ci_df %>% 
  filter(sim %in% 1:100) %>% 
  ggplot(aes(y = sample_mean, x= sim))+
  geom_pointrange(aes(ymin = ll, ymax =ul, col=Coverage))+
  geom_hline(yintercept = mu_age, linetype = "dashed")+
  coord_flip()+
  theme_minimal()+
  guides(col = "none")+
  facet_wrap(~`Sample Size`)

  • Figure 1 shows confidence intervals for 3 samples of different sizes (N = 10, 30, 300). The intervals for N = 10 and N = 300 contain the truth (include the population mean). By chance, the interval for N = 30 misses the truth.

  • Figure 2 shows that our confidence is about a property of the interval. Over repeated sampling, 95% of the intervals would contain the truth; 5% would not.

    • In any one sample, the population parameter either is or is not within the interval.
  • Figure 3 shows that while the width of the interval declines with the sample size, the coverage properties remain the same.

Interpreting confidence intervals

  • Confidence intervals give a range of values that are likely to include the true value of the parameter \(\theta\) with probability \((1-\alpha) \times 100\%\)

    • \(\alpha = 0.05\) corresponds to a “95-percent confidence interval”
  • Our “confidence” is about the interval

  • In repeated sampling, we expect that \((1-\alpha) \times 100\%\) of the intervals we construct would contain the truth.

  • For any one interval, the truth, \(\theta\), either falls within the lower and upper bounds of the interval or it does not.

Hypothesis testing

What is a hypothesis test

  • A formal way of assessing statistical evidence. Combines

    • Deductive reasoning: deriving the distribution of a test statistic, if the null hypothesis were true

    • Inductive reasoning: asking, given the test statistic we observed, how likely we would be to observe it if the null were true

What is a test statistic?

  • A way of summarizing data
    • difference of means
    • coefficients from a linear model
    • coefficients from a linear model divided by their standard errors
    • \(R^2\)
    • Sums of ranks

Note

Different test statistics may be more or less appropriate depending on your data and questions.

What is a null hypothesis?

  • A statement about the world

    • Only interesting if we reject it

    • Would yield a distribution of test statistics under the null

    • Typically something like “X has no effect on Y” (Null = no effect)

    • We never accept the null; we can only reject it (or fail to)

What is a p-value?

A p-value is a conditional probability: the probability of observing a test statistic as far from our hypothesized value as the one we observed, or farther, if that hypothesis were true.

How do we do hypothesis testing?

  1. Posit a hypothesis (e.g. \(\beta = 0\))

  2. Calculate the test statistic (e.g. \((\hat{\beta}-\beta)/se_\beta\))

  3. Derive the distribution of the test statistic under the null via simulation or asymptotic theory

  4. Compare the test statistic to the distribution under the null

  5. Calculate p-value (Two Sided vs One sided tests)

  6. Reject or fail to reject/retain our hypothesis based on some threshold of statistical significance (e.g. p < 0.05)
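
As a sketch of the simulation route (step 3), here is a permutation test of a difference in means on made-up data (all names hypothetical):

## Sketch: permutation test of a difference in means
set.seed(123)
y <- rnorm(100)
d <- rbinom(100, 1, 0.5)
obs_stat <- mean(y[d == 1]) - mean(y[d == 0])

## Distribution of the statistic under the null of no effect:
## shuffle the treatment labels and recompute
null_stats <- replicate(5000, {
  d_perm <- sample(d)
  mean(y[d_perm == 1]) - mean(y[d_perm == 0])
})

## Two-sided p-value: how often is the null statistic as extreme
## as the one we observed?
mean(abs(null_stats) >= abs(obs_stat))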

Outcomes of hypothesis tests

  • A hypothesis test has two possible conclusions: we can reject or fail to reject the null hypothesis.

  • We never “accept” a hypothesis, since there are, in theory, an infinite number of other hypotheses we could have tested.

Our decision can produce four outcomes and two types of error:

                   Reject \(H_0\)    Fail to Reject \(H_0\)
\(H_0\) is true    False Positive    Correct!
\(H_0\) is false   Correct!          False Negative

  • Type 1 Errors: false positives; the rate is set by our significance threshold \(\alpha\) (e.g. 0.05)
  • Type 2 Errors: false negatives; the rate equals 1 minus the power of the test
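
A quick sketch of what the Type 1 error rate means in practice: when the null is true, a test at \(\alpha = 0.05\) produces false positives about 5% of the time (simulated data, hypothetical names):

## Sketch: false positive rate under a true null
set.seed(123)
p_values <- replicate(2000, {
  y <- rnorm(50)
  x <- rnorm(50) # x is truly unrelated to y
  summary(lm(y ~ x))$coefficients["x", 4] # p-value on x
})
mean(p_values < 0.05) # approximately 0.05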

Quantifying uncertainty in regression

Quantifying uncertainty in regression

How do income and education shape political participation?

Let’s fit the following model

\[ y = \beta_0 + \beta_1\text{income} + \beta_2 \text{education} + \epsilon \]

m1 <- lm_robust(dv_participation ~   education + income, df)

And unpack the output

tidy(m1) %>% 
  mutate_if(is.numeric, \(x) round(x, 3)) -> m1_sum
m1_sum
         term estimate std.error statistic p.value conf.low conf.high   df
1 (Intercept)    0.312     0.080     3.910   0.000    0.155     0.468 1684
2   education    0.167     0.024     6.891   0.000    0.119     0.214 1684
3      income    0.007     0.010     0.671   0.502   -0.014     0.028 1684
           outcome
1 dv_participation
2 dv_participation
3 dv_participation
htmlreg(m1,include.ci=F) 
Statistical models
  Model 1
(Intercept) 0.31***
  (0.08)
education 0.17***
  (0.02)
income 0.01
  (0.01)
R2 0.04
Adj. R2 0.04
Num. obs. 1687
RMSE 1.29
***p < 0.001; **p < 0.01; *p < 0.05
htmlreg(m1,include.ci=T) 
Statistical models
  Model 1
(Intercept) 0.31*
  [ 0.16; 0.47]
education 0.17*
  [ 0.12; 0.21]
income 0.01
  [-0.01; 0.03]
R2 0.04
Adj. R2 0.04
Num. obs. 1687
RMSE 1.29
* 0 outside the confidence interval.
m1_coefplot <- m1_sum %>% 
  ggplot(aes(term, estimate))+
  geom_pointrange(aes(ymin = conf.low, ymax =conf.high))+
  geom_hline(yintercept = 0, linetype = "dashed")+
  coord_flip()+
  labs(
    y = "Estimate",
    x = "",
    title = "Coefficient plot"
  )+
  theme_minimal()

Estimates

The estimate column contains the regression coefficients, \(\hat{\beta}\)

Recall, lm_robust() calculates these:

\[ \hat{\beta} = (X'X)^{-1}X'y \]

Tip

\(\beta\)s describe substantive relationships between predictors (income, education) and the outcome (political participation)

coef(m1)
(Intercept)      income   education 
0.311609712 0.007034253 0.166755964 
X <- model.matrix(m1,data=df)
y <- model.frame(m1)$dv_participation
betas <- solve(t(X)%*%X)%*%t(X)%*%y
betas
                   [,1]
(Intercept) 0.311609712
income      0.007034253
education   0.166755964

A unit increase in education is associated with about 0.17 more acts of political participation, while a unit increase in income is associated with about 0.007 more acts of participation.

Note that both income and education are measured on ordinal scales

get_value_labels(df$educ)
    No HS credential High school graduate         Some college 
                   1                    2                    3 
       2-year degree        4-year degree            Post-grad 
                   4                    5                    6 

This means it might be unreasonable to assume cardinality (that going from a 1 to a 2 is the same as going from a 3 to a 4)

  • Consider treating as factor / recoding variable
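
A sketch of what that might look like, treating education as a set of indicators rather than a cardinal scale (same model otherwise; m1_factor is a hypothetical name):

## Sketch: relax the cardinality assumption on education
m1_factor <- lm_robust(dv_participation ~ factor(education) + income, df)
tidy(m1_factor) # one coefficient per education level, relative to the base level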

Standard errors & confidence intervals

The default (HC2) standard errors from lm_robust() are the square roots of the diagonal of

\[ \widehat{V}(\hat{\beta}) = (X'X)^{-1}X'\text{diag}\left[\frac{e_i^2}{1-h_{ii}}\right]X(X'X)^{-1} \]

Which we could also obtain via bootstrapping.

The confidence intervals are calculated as follows:

\[ CI = \hat{\beta} \pm 1.96\times SE_{\hat{\beta}} \]
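
As a quick sketch before the bootstrap comparison below, the reported limits can be approximately recovered by hand from m1 (lm_robust actually uses a t rather than Normal critical value, so the limits differ slightly):

## Sketch: confidence limits by hand from m1
tibble(
  term = names(coef(m1)),
  ll = coef(m1) - 1.96 * m1$std.error,
  ul = coef(m1) + 1.96 * m1$std.error
)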

# 0 Set seed
set.seed(123)

# 1,000 bootstrap samples
boot <- modelr::bootstrap(df %>% select(dv_participation, income, education), 1000)
# Estimate Bootstrapped Models
m1_bs <- purrr::map(boot$strap, ~ lm_robust(dv_participation ~  income + education, data = .))

# Tidy coefficients
m1_bs_df <- map_df(m1_bs, tidy, .id = "id")
m1_asymp_df <- tidy(m1) %>% 
  mutate(
    term = factor(term)
  ) %>% 
  select(term,estimate, std.error,conf.low, conf.high) %>% 
  mutate(
    ll = conf.low,
    ul = conf.high,
    y = 1.1,
    type = "Analytic"
  )

m1_bs_ci_df <- m1_bs_df %>%
  mutate(
    term = factor(term)
  ) %>% 
  group_by(term) %>% 
  summarise(
  beta = mean(estimate,na.rm=T),
  se = sd(estimate,na.rm=T)
  ) %>% 
  mutate(
  ll = beta - 1.96*se,
  ul = beta + 1.96*se,
  y = 1.05,
  type = "Bootstrap"
) 

# Compare SEs

compare_m1_se_tab <-
  tibble(
    `Predictor` = m1_bs_ci_df$term,
    Estimate = m1_asymp_df$estimate,
    `SE` = m1_asymp_df$std.error,
     `CI` = paste("[", round(m1_asymp_df$ll,2),
                  "; ", round(m1_asymp_df$ul,2),"]",
                  sep =""),
    `SE ` = m1_bs_ci_df$se,
    `CI ` = paste("[", round(m1_bs_ci_df$ll,2),
                  "; ", round(m1_bs_ci_df$ul,2),"]",
                  sep =""),
  )


# Figure
fig_m1_bs <- m1_bs_df %>% 
  ggplot(aes(estimate))+
  geom_density(aes(y=after_stat(ndensity)))+
  geom_rug()+
  geom_vline(xintercept = 0, linetype = "dashed")+
  facet_wrap(~term,scales = "free")+
  theme_minimal()+
  ylim(0, 1.2)+
  geom_vline(
    data = m1_asymp_df,
    aes(xintercept = estimate)
  ) +
  geom_segment(
    data = m1_bs_ci_df,
    aes(x = ll, xend = ul,
        y = y, yend = y,
        col = "Bootstrap")
    
  ) +
  geom_segment(
    data = m1_asymp_df,
    aes(x = ll, xend = ul,
        y = y, yend = y,
        col = "Analytic")
    
  ) +
  labs(
    col = "Confidence Interval",
    x = "Bootstrapped Sampling Distribution\n of Coefficients"
  )
Predictor     Estimate   SE (Analytic)   CI (Analytic)    SE (Bootstrap)   CI (Bootstrap)
(Intercept)   0.3116     0.0797          [0.16; 0.47]     0.0805           [0.15; 0.47]
education     0.1668     0.0242          [0.12; 0.21]     0.0248           [0.12; 0.22]
income        0.0070     0.0105          [-0.01; 0.03]    0.0107           [-0.01; 0.03]

  • The main takeaway here is that, for linear models, bootstrapped SEs and CIs are quite similar to those obtained analytically (via math and asymptotic theory)

  • For common estimators and large samples, we’ll generally use analytic SEs (quicker)

  • For less common estimators (e.g. ratios of estimates), analytic SEs may not exist. Bootstrapping will still provide valid SEs, provided we “sample from the sample as the sample was drawn from the population”
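
For instance, a sketch of bootstrapping an SE for a ratio of two coefficients (a quantity with no simple analytic SE), reusing the m1_bs fits from above:

## Sketch: bootstrap SE for the ratio of the education
## and income coefficients
ratio_bs <- purrr::map_dbl(
  m1_bs, \(m) coef(m)["education"] / coef(m)["income"])
sd(ratio_bs) # bootstrap SE of the ratio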

Test statistics and p-values

The test statistic (“t-stat”) reported by lm() and lm_robust() is our observed coefficient \(\hat{\beta}\), minus our hypothesized value \(\beta\) (e.g. 0), divided by the estimated standard error of \(\hat{\beta}\).

\[t= \frac{\hat\beta-\beta}{\widehat{SE}_{\hat{\beta}}} \sim \text{Student's } t \text{ with } n-k \text{ degrees of freedom}\] This statistic follows a \(t\) distribution – like a Normal with “heavier tails” (i.e., more probability assigned to extreme values)
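
A sketch verifying the reported t-stats and two-sided p-values by hand from m1 (testing \(\beta = 0\)):

## Sketch: t-stats and two-sided p-values by hand
t_stats <- coef(m1) / m1$std.error # hypothesized beta = 0
p_vals <- 2 * pt(abs(t_stats), df = m1$df, lower.tail = FALSE)
cbind(t_stats, p_vals) # matches tidy(m1)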

# Calculate t-stats

t_stat_df <- tibble(
  x= seq(-3,3,length.out = 20),
  p = dt(x,df=m1$df[1] )
)


m1_tstat_educ <- t_stat_df %>% 
  ggplot(aes(x=x,y=p))+
  stat_function(
    fun= dt, 
    args = list(df = m1$df[1]),
    geom = "line",
    xlim = c(
      min(c(-3, abs(m1$statistic[2])*-1 -1)),
      max(c(3, abs(m1$statistic[2])+1))
      )
  )+
  stat_function(
    fun= dt, 
    args = list(df = m1$df[1]),
    geom = "area",
    fill = "blue",
    alpha = .5,
    xlim = c(m1$statistic[2],4)
  )+
  stat_function(
    fun= dt, 
    args = list(df = m1$df[1]),
    geom = "area",
    fill = "blue",
    alpha = .5,
    xlim = c(-4, abs(m1$statistic[2])*-1)
  )+
  geom_vline(xintercept = m1$statistic[2],
             col = "blue",
             linetype = "dashed")+
   geom_vline(xintercept = m1$statistic[2]*-1,
             col = "blue",
             linetype = "dashed")+
  theme_minimal()+
  labs(
    title = "Education",
    subtitle = paste("t-stat = ",round(m1$statistic[2],3),
    "\nPr(>|t|) = ",
    format(round(m1$p.value[2],3),nsmall = 3),
    sep = ""
    ),
    x = "Distribution of t-stat under the Null"
  )

m1_tstat_income <- t_stat_df %>% 
  ggplot(aes(x=x,y=p))+
  stat_function(
    fun= dt, 
    args = list(df = m1$df[1]),
    geom = "line",
    xlim = c(
      min(c(-3, abs(m1$statistic[3])*-1 -1)),
      max(c(3, abs(m1$statistic[3])+1))
      )
  )+
  stat_function(
    fun= dt, 
    args = list(df = m1$df[1]),
    geom = "area",
    fill = "blue",
    alpha = .5,
    xlim = c(m1$statistic[3],4)
  )+
  stat_function(
    fun= dt, 
    args = list(df = m1$df[1]),
    geom = "area",
    fill = "blue",
    alpha = .5,
    xlim = c(-4, abs(m1$statistic[3])*-1)
  )+
  geom_vline(xintercept = m1$statistic[3],
             col = "blue",
             linetype = "dashed")+
   geom_vline(xintercept = m1$statistic[3]*-1,
             col = "blue",
             linetype = "dashed")+
  theme_minimal()+
  labs(
    title = "Income",
    subtitle = paste("t-stat = ",round(m1$statistic[3],3),
    "\nPr(>|t|) = ",
    format(round(m1$p.value[3],3),nsmall = 3),
    sep = ""
    ),
    x = "Distribution of t-stat under the Null"
  )

fig_pvalue <- m1_tstat_educ + m1_tstat_income

# Compare Pvalues

compare_m1_pvalue <-
  tibble(
    `Predictor` = m1_bs_ci_df$term,
    Estimate = m1_asymp_df$estimate,
    SE = m1_sum$std.error,
    `t-stat` = m1_sum$statistic,
     `Pr(>abs(t))` = format(round(m1_sum$p.value,3), nsmall=3)
  )
Predictor     Estimate   SE      t-stat   Pr(>|t|)
(Intercept)   0.312      0.080   3.910    0.000
education     0.167      0.024   6.891    0.000
income        0.007      0.010   0.671    0.502

  • The p-value for the coefficient on education is less than 0.05, while the p-value for income is 0.50.

  • If there were no relationship between education and participation (\(H_0:\beta_2=0\)), it would be quite unlikely that we would observe a test statistic of 6.89 or larger.

  • In contrast, test statistics as large or larger than 0.671 occur quite frequently in a world where there is no relationship (\(H_0:\beta_1=0\)) between income and participation.

  • Thus we reject the null hypothesis for education, but fail to reject the null hypothesis for income in this model.

Predicted values

Let’s explore whether income and education condition each other’s relationship with participation using the following interaction model

\[ y = \beta_0 +\beta_1 \text{educ} + \beta_2 \text{inc} + \beta_3\text{educ}\times\text{inc} + \epsilon \]

To help our interpretations we’ll produce plots of predicted values of participation, at varying levels of income and education.

# Fit model
m2 <- lm_robust(dv_participation ~ education*income, df)


# Regression Table
m2_tab <- htmlreg(
  m2, 
  include.ci = F,
  digits = 3,
  stars = c(0.05, 0.10)
                    )

# Predicted values

# Data frame of values we want to make predictions at
pred_df <-expand_grid(
  income = sort(unique(df$income)),
  education = quantile(df$education, na.rm = T)[c(2,4)]
)

# Combine model predictions
pred_df <- cbind(pred_df, predict(m2, pred_df,
                                  interval = "confidence")$fit)

# Plot predicted values
fig_m2_pred <- pred_df %>% 
  mutate(
    Education = ifelse(education == 2, "High school","College")
  ) %>% 
  ggplot(aes(income, fit, group=Education))+
  geom_ribbon(aes(ymin = lwr, ymax = upr,
                  fill = Education),
              alpha=.5)+
  geom_line()+
  theme_minimal()+
  labs(y = "Predicted Participation",
       x = "Income",
       title = "")
Statistical models
  Model 1
(Intercept) 0.060
  (0.151)
education 0.242**
  (0.050)
income 0.048**
  (0.024)
education:income -0.011*
  (0.006)
R2 0.042
Adj. R2 0.040
Num. obs. 1687
RMSE 1.286
**p < 0.05; *p < 0.1

Low-income individuals with a college degree participate at significantly higher rates than individuals with similar levels of income but only a high school diploma.

Alternatively, we might say that the college educated tend to participate at similar levels, regardless of their level of income, while income has a marginally positive relationship with participation for those without college degrees.

Note

Is this a causal relationship? What assumptions would we need to make a causal claim about the effects of education on participation?

Instrumental Variables

Causal Inference in Observational Designs

  • Causal inference in observational and experimental studies is about counterfactual comparisons
  • In observational studies, to make causal claims we generally make some assumption of conditional independence:

\[ Y_i(1), Y_i(0) \perp D_i \mid X_i \]

Causal Inference in Observational Designs

The credibility of:

\[ Y_i(1), Y_i(0) \perp D_i \mid X_i \]

depends less on the amount of data and more on how the data were generated.

  • Selection on Observables is rarely a credible assumption (“Trust me bro”)

Causal Inference in Observational Designs

Observational designs that produce credible causal inferences leverage aspects of the world that create natural experiments

You should be able to describe the logic and assumptions of common designs in social science

  • Difference-in-Differences: Parallel Trends

  • Instrumental Variables: Instruments need to be Relevant and Exogenous

  • Regression Discontinuity: Continuity at the cutoff

Instrumental Variables

Instrumental variables are an economist's favorite tool for dealing with omitted variable bias

  • We have some non-random treatment whose effects we'd like to assess
  • We're worried that these effects are confounded by some unobserved, omitted variable that influences both the treatment and the outcome
  • We find an instrumental variable that satisfies the following:
    • Randomization
    • Excludability
    • First-stage relationship
    • Monotonicity
  • This allows us to estimate a Local Average Treatment Effect (LATE) using only the variation in our treatment that is exogenous (uncorrelated with omitted variables)

IV Assumption: Randomization

  • No path from \(U\) to \(Z\)

IV Assumption: Excludability

  • No path from \(Z\) to \(Y\) except through \(D\)

IV Assumption: First Stage

  • Path from \(Z\) to \(D\)

IV Assumption: Monotonicity

  • \(D_i(Z=1)\geq D_i(Z=0)\)
  • “No Defiers”

Compliance

With a binary treatment, \(D\) and binary instrument \(Z\) there are four types of compliance

Type            \(D_i(Z=1)\)   \(D_i(Z=0)\)
Always Takers   1              1
Never Takers    0              0
Compliers       1              0
Defiers         0              1

  • Assuming Monotonicity means there are “No Defiers”

Estimating the Local Average Treatment Effect

If we believe our assumptions of:

  • Randomization
  • Excludability
  • First-stage relationship
  • Monotonicity

Then we can estimate the Local Average Treatment Effect (LATE), sometimes called the Complier Average Treatment Effect (CATE)

Estimating the Local Average Treatment Effect

It can be shown that the LATE:

\[LATE = \frac{E[Y|Z=1] - E[Y|Z=0]}{E[D|Z=1]-E[D|Z=0]} = \frac{\text{Reduced Form}}{\text{First Stage}} = \frac{ATE_{Z\to Y}}{ATE_{Z\to D}}\]
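
A sketch of this ratio on simulated data (all names hypothetical; the true effect of the treatment is 2, and the naive comparison of treated and untreated units would be confounded by u):

## Sketch: the Wald/LATE estimator on simulated data
set.seed(123)
n <- 10000
z <- rbinom(n, 1, 0.5)                    # randomized instrument
u <- rnorm(n)                             # unobserved confounder
d <- rbinom(n, 1, plogis(-1 + 2 * z + u)) # treatment depends on z and u
y <- 2 * d + u + rnorm(n)                 # z affects y only through d

reduced_form <- mean(y[z == 1]) - mean(y[z == 0])
first_stage <- mean(d[z == 1]) - mean(d[z == 0])
reduced_form / first_stage # approximately 2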

Note

Experimental designs can also have non-compliance. Here the reduced form is often known as the “Intent-to-Treat” (ITT) effect

Example: Earnings and Military Service

Adapted from Edward Rubin

Example: If we want to estimate the effect of veteran status on earnings, \[\begin{align} \text{Earnings}_i = \beta_0 + \beta_1 \text{Veteran}_i + u_i \tag{1} \end{align}\]

We would love to calculate \(\color{#e64173}{\text{Earnings}_{1i}} - \color{#6A5ACD}{\text{Earnings}_{0i}}\), but we can’t.

And OLS will likely be biased for \((1)\) due to selection/omitted-variable bias.

Introductory example

Imagine that we can split veteran status into an exogenous (as-if random, unbiased) part and an endogenous (non-random, biased) part…

\[\begin{align} \text{Earnings}_i &= \beta_0 + \beta_1 \text{Veteran}_i + u_i \tag{1} \\ &= \beta_0 + \beta_1 \left(\text{Veteran}_i^{\text{Exog.}} + \text{Veteran}_i^{\text{Endog.}}\right) + u_i \\ &= \beta_0 + \beta_1 \text{Veteran}_i^{\text{Exog.}} + \underbrace{\beta_1 \text{Veteran}_i^{\text{Endog.}} + u_i}_{w_i} \\ &= \beta_0 + \beta_1 \text{Veteran}_i^{\text{Exog.}} + w_i \end{align}\]

We could use this exogenous variation in veteran status to consistently estimate \(\beta_1\).

Q: What would exogenous variation in veteran status mean?

Introductory example

Q: What would exogenous variation in veteran status mean?

  • A.1: No selection bias:

\[\begin{align} \color{#e64173}{\mathop{E}\left(\text{Earnings}_{0i}\mid\text{Veteran}_i = 1\right)} - \color{#6A5ACD}{\mathop{E}\left( \text{Earnings}_{0i} \mid \text{Veteran}_i = 0 \right)} = 0 \end{align}\]

  • A.2: Choices to enlist in the military that are essentially random, or at least uncorrelated with omitted variables and the disturbance.

Instruments

  • Q: How do we isolate this exogenous variation in our explanatory variable?

  • A: Find an instrument (an instrumental variable).

  • Q: What’s an instrument?

  • A: An instrument is a variable that is

    • correlated with the explanatory variable of interest (relevant),
    • uncorrelated with the error term (exogenous).

Instruments

So if we want an instrument \(z_i\) for endogenous veteran status in

\[\begin{align} \text{Earnings}_i = \beta_0 + \beta_1 \text{Veteran}_i + u_i \end{align}\]

  1. Relevant: \(\mathop{\text{Cov}} \left( \text{Veteran}_i,\, z_i \right) \neq 0\)
  2. Exogenous: \(\mathop{\text{Cov}} \left( z_i,\, u_i \right) = 0\)

Instruments: Relevance

Relevance: We need the instrument to cause a change in (correlate with) our endogenous explanatory variable.

We can actually test this requirement using regression and a t-test (see the sketch after the examples below).

Example: For veteran status, consider three potential instruments:

  • Social security number

    • Probably not relevant: uncorrelated with military service
  • Physical fitness

    • Potentially relevant: service may correlate with fitness
  • Vietnam War draft

    • Relevant: being drafted led to service
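
A sketch of that first-stage relevance check on simulated data (hypothetical names, with the draft instrumenting for service):

## Sketch: testing relevance with a first-stage regression
set.seed(123)
n <- 5000
dat <- tibble(draft = rbinom(n, 1, 0.5))
dat$service <- rbinom(n, 1, plogis(-1 + 1.5 * dat$draft))

first_stage <- lm_robust(service ~ draft, data = dat)
tidy(first_stage) # a large t-stat on draft indicates a relevant instrument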

Instruments: Exogeneity

Exogeneity: The instrument must be independent of omitted factors that affect our outcome variable (as good as randomly assigned).

\(z_i\) must be uncorrelated with our disturbance \(u_i\). This is not testable.

Returning to our possible instruments

  • Social security number

    • Exogenous: SSNs are essentially random
  • Physical fitness

    • Not exogenous: fitness is correlated with many things that affect earnings
  • Vietnam War draft

    • Plausibly exogenous: draft numbers were assigned by random lottery

Figure 1: Relevant and Exogenous

Figure 2: Relevant, Not Exogenous

Figure 3: Not Relevant and Not Exogenous

Figure 4: Relevant, Not Exogenous

Venn diagram explanation

In these figures (Venn diagrams)

  • Each circle illustrates a variable.
  • Overlap gives the share of correlation between two variables.
  • Dotted borders denote omitted variables.

Take-aways

  • Figure 1: Valid instrument (relevant; exogenous)
  • Figure 2: Invalid instrument (relevant; not exogenous)
  • Figure 3: Invalid instrument (not relevant; not exogenous)
  • Figure 4: Invalid instrument (relevant; not exogenous)

IV Applications

@AndrewHeiss

IV Summary

Instrumental variables require a number of assumptions to yield credible causal claims:

  • Randomization
  • Excludability
  • First-stage relationship
  • Monotonicity

Estimation and inference of IVs next week

  • See Edward Rubin’s excellent slides

  • And Matt Blackwell's notes

  • Understanding the identifying assumptions of IV can help you critique a study (even if you don't fully understand the math)

References