---
title: 'STAT 3202: Lab 06, Parametric vs Nonparametric Testing'
author: "Autumn 2018, OSU"
date: 'Due: Friday, October 26'
output:
  html_document:
    theme: spacelab
    toc: yes
  pdf_document: default
urlcolor: BrickRed
---
***
```{r setup, include = FALSE}
knitr::opts_chunk$set(fig.align = "center")
```
**Goal:** After completing this lab, you should be able to...
- *Use* simulation to verify significance levels.
- *Use* simulation to estimate power.
- *Understand* the pros and cons of parametric and nonparametric tests.
In this lab we will use, but not focus on...
- `R` Markdown. This document will serve as a template. It is pre-formatted and already contains chunks that you need to complete.
Some additional notes:
- Please see [**Carmen**](https://carmen.osu.edu/) for information about submission and grading.
- You may use [this document](lab-06-assign.Rmd) as a template. You do not need to remove directions. Chunks that require your input have a comment indicating to do so.
***
# Exercise 0A - $t$-Test in `R`
Recall the [Deflategate](https://en.wikipedia.org/wiki/Deflategate) data from previous homework, in particular the measurements for the Patriots.
```{r}
pats = c(11.50, 10.85, 11.15, 10.70, 11.10, 11.60, 11.85, 11.10, 10.95, 10.50, 10.90)
```
Suppose that we wanted to test
$$
H_0\colon \mu_P = 12.5 \quad \text{vs} \quad H_A\colon \mu_P < 12.5
$$
To do so, we use the `t.test()` function in `R`. We use three of the function's arguments:
- `x`: the data to be used for the test
- `mu`: the hypothesized value of the mean
- `alternative`: the alternative hypothesis of the test
```{r}
t.test(x = pats, mu = 12.5, alternative = "less")
```
The output gives, among other things, the value of the **test statistic** as well as the **p-value** of the test.
The `mu` and `alternative` arguments have default values of `0` and `two.sided` respectively. So, suppose that we wished to test
$$
H_0\colon \mu = 0 \quad \text{vs} \quad H_A\colon \mu \neq 0
$$
for the following data.
```{r}
some_data = c(-0.22, -0.51, 0.12, 0.14, -0.19)
```
Then, we would simply use:
```{r}
t.test(some_data)
```
***
# Exercise 0B - Sign Test in `R`
In class we only used the sign test for paired data, but it can also be used as a test about the median of a population.
Again, recall the [Deflategate](https://en.wikipedia.org/wiki/Deflategate) data from previous homework, this time the measurements for the Colts. (One datapoint was modified for illustration purposes.)
```{r}
colts = c(12.70, 12.75, 12.10, 12.55)
```
Suppose we wished to test
$$
H_0\colon m_C = 12.5 \quad \text{vs} \quad H_A\colon m_C < 12.5
$$
where $m_C$ is the median of the PSI of the Colts' footballs.
Instead of looking at the sign of the difference of paired data, we compare each datapoint to the hypothesized median, in particular noting how many observations are greater than the hypothesized value. (You could alternatively use the number less than, but that would have a reversing effect on the alternative hypothesis.)
```{r}
(num_inflated = sum(colts > 12.5))
```
To carry out the sign test in `R` we use the `binom.test()` function where:
- `x` is the test statistic, the number of observations greater than the hypothesized median
- in this case, this is measuring how many balls were properly inflated
- `n` is the number of observations
- `alternative` is the alternative hypothesis
- the "less" alternative here is equivalent to saying there is less than a 50% chance that a ball is inflated properly
- in other words, "more extreme" in this case is fewer inflated balls
```{r}
binom.test(x = num_inflated, n = length(colts), alternative = "less")
```
Here the output gives a number of things, including most importantly the **p-value**.
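Under $H_0$, each observation is above the hypothesized median with probability $0.5$, so the one-sided p-value here is simply a binomial tail probability. As a sanity check (not required for the lab), we can verify that `binom.test()` agrees with `pbinom()`:

```{r}
# one-sided p-value: P(X <= num_inflated) for X ~ Binomial(n = 4, p = 0.5)
pbinom(num_inflated, size = length(colts), prob = 0.5)
binom.test(x = num_inflated, n = length(colts), alternative = "less")$p.value
# both return 11 / 16 = 0.6875
```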
Again, most of the time we would use a two-sided test, which like `t.test()` is the default for `binom.test()`. So, suppose that we wished to test
$$
H_0\colon m = 0 \quad \text{vs} \quad H_A\colon m \neq 0
$$
for the following data.
We no longer need to supply an `alternative` to `binom.test()`, and this time we compare each observation to `0`. (Also, with a two-sided test, how the test statistic relates to the alternative is less important.)
```{r}
some_data = c(-0.22, -0.51, 0.12, -0.14, -0.19)
num_pos = sum(some_data > 0)
binom.test(x = num_pos, n = length(some_data))
```
***
# Exercise 0C - The Cauchy Distribution
Whenever you would like to break a statistical procedure, use the [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution)! It has a couple of interesting properties:
- An **undefined** mean!
- An **undefined** variance!
```{r}
x_vals = seq(from = -5, to = 5, length = 10000)
plot(x_vals, dnorm(x_vals), type = "l", lwd = 2, lty = 1,
     ylim = c(0, 0.45), xlab = "x", ylab = "density", col = "dodgerblue")
lines(x_vals, dcauchy(x_vals), lwd = 2, lty = 3, col = "darkorange")
legend("topleft", legend = c("Normal", "Cauchy"),
       col = c("dodgerblue", "darkorange"), lwd = 2, lty = c(1, 3))
grid()
```
Like the $t$ distribution, the Cauchy distribution has "fatter" tails than a normal distribution. Even "fatter" than a $t$ distribution, except when the degrees of freedom are 1, in which case the $t$ and Cauchy distributions are the same!
- Cauchy is "shorter" at the median value of 0:
```{r}
dnorm(x = 0, mean = 0, sd = 1)
dcauchy(x = 0, location = 0, scale = 1)
```
- Cauchy has "fatter" tails:
```{r}
pnorm(q = 2, mean = 0, sd = 1, lower.tail = FALSE)
```
```{r}
pcauchy(q = 2, location = 0, scale = 1, lower.tail = FALSE)
```
# Exercise 0D - Lab Setup
A few things to note for this lab.
The usual $t$-test assumes that we are sampling from a normal distribution, which is a symmetric distribution. Therefore $\mu = m$, that is, the mean is equal to the median. That means that the $t$-test about
$$
H_0\colon \mu = 0 \quad \text{vs} \quad H_A\colon \mu \neq 0
$$
is the same as
$$
H_0\colon m = 0 \quad \text{vs} \quad H_A\colon m \neq 0
$$
in this case.
The sign test makes no assumption about the population that we are sampling from and tests
$$
H_0\colon m = 0 \quad \text{vs} \quad H_A\colon m \neq 0
$$
Since both tests can be formulated as a test about the median (the sign test cannot necessarily be formulated as a test about the mean, and the Cauchy distribution has no mean), we will view both tests as tests about the median for ease of comparison.
Two functions may be of use to you throughout this lab, `calc_power()` and `gen_p_value()`, both given below.
```{r}
calc_power = function(p_values, alpha = 0.05) {
  mean(p_values < alpha)
}
```
```{r}
gen_p_value = function(dist = "norm", test = "t", m = 0, sample_size = 25) {
  # create sample data depending on specified distribution
  if (dist == "norm") {
    sim_data = rnorm(n = sample_size, mean = m)
  }
  if (dist == "cauchy") {
    sim_data = rcauchy(n = sample_size, location = m)
  }
  # perform test depending on requested test
  if (test == "t") {
    p_value = t.test(x = sim_data)$p.value
  }
  if (test == "sign") {
    num_pos = sum(sim_data > 0)
    p_value = binom.test(x = num_pos, n = sample_size)$p.value
  }
  # return p-value
  p_value
}
```
Throughout this lab we will use samples of size $n = 25$ and a significance level of $\alpha = 0.05$ for all tests. Both the $t$-test and the sign test will refer to a two-sided test of the median, $m$, in particular,
$$
H_0\colon m = 0 \quad \text{vs} \quad H_A\colon m \neq 0
$$
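To see how these two helpers fit together, here is a small sketch (using fewer replications than the exercises require) that estimates the rejection rate of the $t$-test when the null hypothesis is true:

```{r}
set.seed(1)
# 1000 p-values from t-tests on standard normal samples, so H0: m = 0 is true
p_vals = replicate(1000, gen_p_value(dist = "norm", test = "t"))
# with H0 true, the rejection rate estimates alpha, so this should be near 0.05
calc_power(p_vals, alpha = 0.05)
```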
***
# Exercise 1 - Test Validity, Sampling From Normal
Suppose data is sampled from a Normal distribution. (In particular standard normal, that is, a mean of 0 and a standard deviation of 1.) Consider testing
$$
H_0\colon m = 0 \quad \text{vs} \quad H_A\colon m \neq 0
$$
using a $t$-test and a sign test. (Note in this case that the null hypothesis is true.)
- Is the $t$-test valid at a significance level of $\alpha = 0.05$?
- Is the sign test valid at a significance level of $\alpha = 0.05$?
Use 10000 simulated datasets of size $n = 25$ to determine the validity of each test.
- Create side-by-side histograms of the simulated distributions of the p-values for both tests. Add a vertical line at $\alpha = 0.05$.
- Estimate the true $\alpha$ value using your simulated p-values for each test.
- Comment on the validity of the tests in this situation. (Sampling from normal.)
```{r}
set.seed(42)
# perform and store simulated p-values for t-test here
# perform and store simulated p-values for sign test here
# for both, use replicate() with gen_p_value()
```
```{r}
par(mfrow = c(1, 2))
# create histogram for t-test p-values here
# create histogram for sign test p-values here
```
```{r}
# power calculation for t-test p-values here
# power calculation for sign test p-values here
```
- **Replace this text with comments on the validity of the two tests.**
***
# Exercise 2 - Test Validity, Sampling From Cauchy
Suppose data is sampled from a Cauchy distribution. (In particular a Cauchy with a median (`location`) of 0 and a `scale` of 1.) Consider testing
$$
H_0\colon m = 0 \quad \text{vs} \quad H_A\colon m \neq 0
$$
using a $t$-test and a sign test. (Note in this case that the null hypothesis is true.)
- Is the $t$-test valid at a significance level of $\alpha = 0.05$?
- Is the sign test valid at a significance level of $\alpha = 0.05$?
Use 10000 simulated datasets of size $n = 25$ to determine the validity of each test.
- Create side-by-side histograms of the simulated distributions of the p-values for both tests. Add a vertical line at $\alpha = 0.05$.
- Estimate the true $\alpha$ value using your simulated p-values for each test.
- Comment on the validity of the tests in this situation. (Sampling from Cauchy.)
```{r}
set.seed(42)
# perform and store simulated p-values for t-test here
# perform and store simulated p-values for sign test here
# for both, use replicate() with gen_p_value()
```
```{r}
par(mfrow = c(1, 2))
# create histogram for t-test p-values here
# create histogram for sign test p-values here
```
```{r}
# power calculation for t-test p-values here
# power calculation for sign test p-values here
```
- **Replace this text with comments on the validity of the two tests.**
***
# Exercise 3 - Comparing Power, Sampling From Normal
Compare the power of the $t$-test and the sign test of significance level $\alpha = 0.05$ when sampling from a Normal distribution. To do so, use $2500$ simulated samples of size $n = 25$ from Normal distributions with
- $m$ = $\mu$ = `mean` = `0.0`, `0.2`, `0.4`, `0.6`, `0.8`, `1.0`
- `sd` = `1`
That is, you will calculate power six times (once for each true median $m$) for the $t$-test, then you will repeat this process for the sign test.
- Summarize your results in a single plot. Use two "curves," one for each test, that show how the power changes as $m$ changes. Use different colors and line types for each "curve." Include a legend.
- Comment on the difference in power between the two tests.
Some notes:
- Technically when $m = 0$ we are calculating $\alpha$, but including it will create a nicer looking plot.
- You should use the supplied `gen_p_value()` function. Note that this is not necessarily the *best* way to perform these simulations. Because the `gen_p_value()` function only allows us to return a p-value for a single test, we have to re-simulate the data for both tests which is not computationally efficient.
- For "fun," think about how to get the p-values for both tests using a single simulated dataset. (However, if you do so, do not include those results in this lab.)
```{r}
set.seed(42)
# perform simulations for t-test here
# store results of using replicate() with gen_p_value() six times (once for each m)
```
```{r}
set.seed(42)
# perform simulations for sign test here
# store results of using replicate() with gen_p_value() six times (once for each m)
```
```{r}
# calculate power of t-test for each m here
# store results in a vector, perhaps named power_norm_ttst
# order the results according to the m value (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
```
```{r}
# calculate power of sign test for each m here
# store results in a vector, perhaps named power_norm_sign
# order the results according to the m value (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
```
```{r}
m = c(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
# plot both power "curves" here
```
- **Replace this text with a comment on the difference in power between the two tests.**
***
# Exercise 4 - Comparing Power, Sampling From Cauchy
Compare the power of the $t$-test and the sign test of significance level $\alpha = 0.05$ when sampling from a Cauchy distribution. To do so, use $2500$ simulated samples of size $n = 25$ from Cauchy distributions with
- $m$ = `location` = `0.0`, `0.2`, `0.4`, `0.6`, `0.8`, `1.0`
- `scale` = `1`
That is, you will calculate power six times (once for each true median $m$) for the $t$-test, then you will repeat this process for the sign test.
- Summarize your results in a single plot. Use two "curves," one for each test, that show how the power changes as $m$ changes. Use different colors and line types for each "curve." Include a legend.
- Comment on the difference in power between the two tests.
Some notes:
- Technically when $m = 0$ we are calculating $\alpha$, but including it will create a nicer looking plot.
- You should use the supplied `gen_p_value()` function. Note that this is not necessarily the *best* way to perform these simulations. Because the `gen_p_value()` function only allows us to return a p-value for a single test, we have to re-simulate the data for both tests which is not computationally efficient.
- For "fun," think about how to get the p-values for both tests using a single simulated dataset. (However, if you do so, do not include those results in this lab.)
```{r}
set.seed(42)
# perform simulations for t-test here
# store results of using replicate() with gen_p_value() six times (once for each m)
```
```{r}
set.seed(42)
# perform simulations for sign test here
# store results of using replicate() with gen_p_value() six times (once for each m)
```
```{r}
# calculate power of t-test for each m here
# store results in a vector, perhaps named power_cchy_ttst
# order the results according to the m value (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
```
```{r}
# calculate power of sign test for each m here
# store results in a vector, perhaps named power_cchy_sign
# order the results according to the m value (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
```
```{r}
m = c(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
# plot both power "curves" here
```
- **Replace this text with a comment on the difference in power between the two tests.**
***