Goal: After completing this lab, you should be able to…

• Simulate statistics from known distributions to estimate sampling distributions.
• Bootstrap any statistic.
• Create confidence intervals using bootstrap resampling.

In this lab we will use, but not focus on…

• R Markdown. This document will serve as a template. It is pre-formatted and already contains chunks that you need to complete.

• You do not need to remove the directions. Chunks that require your input contain a comment saying so.
• The reading posted with the lab and the associated R Markdown file contain details on performing the required tasks through a number of examples.
• This lab will also serve as Homework 05. Your grade on this lab will also count as your grade on Homework 05.

# Exercise 1 - How Large is Large?

For this exercise we will use:

• Random samples of size $$n = 10$$, $$n = 30$$, and $$n = 100$$.
• Samples from a gamma distribution with
• $$\alpha = 0.3$$, that is, shape = 0.3
• $$\beta = 1.2$$, that is, scale = 1.2

Consider using the sample mean, $$\bar{x}$$, to estimate the mean, $$\mu = \text{E}[X] = \alpha\beta = 0.36$$.

If $$n$$ is “large,” then the central limit theorem suggests that

$\bar{X} \stackrel{\text{approx}}{\sim} N\left(\alpha\beta, \frac{\alpha\beta^2}{n}\right)$

which with some additional work we could then use to create confidence intervals. (We’d also need to estimate the variance.)

However, when is this approximation good?

Perform three simulation studies:

• Study 1: Samples of size $$n = 10$$
• Study 2: Samples of size $$n = 30$$
• Study 3: Samples of size $$n = 100$$

For each, simulate a sample of the specified size from a given gamma distribution 5000 times. For each simulation calculate and store the sample mean.

For each study create a histogram of the simulated sample means. (These will serve as an estimate of the sampling distribution of $$\bar{X}$$.) On each, overlay the density that would apply if the CLT approximation were appropriate:

$N\left(\alpha\beta, \frac{\alpha\beta^2}{n}\right)$

The chunks below outline this procedure.

Hint: Done correctly, you should find that the approximation is bad for $$n = 10$$, reasonable for $$n = 100$$, and harder to judge for $$n = 30$$.

set.seed(42)
n = 10
sample_means_n_10 = rep(0, 5000)
# perform simulations for n = 10 here
set.seed(42)
n = 30
sample_means_n_30 = rep(0, 5000)
# perform simulations for n = 30 here
set.seed(42)
n = 100
sample_means_n_100 = rep(0, 5000)
# perform simulations for n = 100 here
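As a minimal sketch of how a chunk like the ones above could be completed (shown for $$n = 10$$; the other studies change only `n` and the storage vector), a simple `for` loop suffices. Note that `rgamma()` accepts the shape and scale parameters directly; this is illustrative, not the only acceptable approach:

```r
# Illustrative sketch only: one way to fill in a simulation chunk,
# shown for n = 10. rgamma() takes shape and scale directly.
set.seed(42)
n = 10
sample_means_n_10 = rep(0, 5000)
for (i in 1:5000) {
  sample_means_n_10[i] = mean(rgamma(n, shape = 0.3, scale = 1.2))
}
```

An equivalent vectorized idiom is `replicate(5000, mean(rgamma(n, shape = 0.3, scale = 1.2)))`.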
par(mfrow = c(1, 3))

# create histogram for n = 10 here
# add curve for normal density assuming CLT applies

# create histogram for n = 30 here
# add curve for normal density assuming CLT applies

# create histogram for n = 100 here
# add curve for normal density assuming CLT applies
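As an illustration of the overlay step, one possible sketch for the $$n = 10$$ panel follows. It re-simulates the means so it can run on its own; in the lab you would reuse the stored vector, and the `breaks` and color choices here are arbitrary:

```r
# Illustrative sketch only: histogram of simulated means with the
# CLT normal density overlaid, shown for n = 10. The block re-simulates
# the means so that it is self-contained.
set.seed(42)
alpha = 0.3
beta  = 1.2
n     = 10
sample_means_n_10 = replicate(5000, mean(rgamma(n, shape = alpha, scale = beta)))
hist(sample_means_n_10, breaks = 50, probability = TRUE,
     main = "n = 10", xlab = "Sample Means", col = "darkgrey")
curve(dnorm(x, mean = alpha * beta, sd = sqrt(alpha * beta ^ 2 / n)),
      col = "dodgerblue", lwd = 2, add = TRUE)
```

Using `probability = TRUE` puts the histogram on the density scale so that it is comparable to the overlaid normal curve.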

# Exercise 2 - How Long is a Trump Tweet?

Twitter has become an increasingly important part of American political discourse. The 2016 presidential election was unique in that all major contenders were somewhat prolific tweeters. The eventual winner, Donald Trump, was undoubtedly the most prolific of them all, and he remains an active Twitter user as we approach two years into his presidency.

This use of Twitter sparked an interesting analysis by David Robinson, currently Chief Data Scientist at DataCamp and formerly a data scientist at StackOverflow. His analysis, “Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half,” became very popular leading up to the election.

Let’s take a look at this data. To do so, we’ll need a couple packages:

library(dplyr)
library(tidyr)

Then, to load the data into a data frame named trump_tweets_df, we use:

load(url("http://varianceexplained.org/files/trump_tweets_df.rda"))

We then create a new data frame named tweets based on the raw data:

tweets = trump_tweets_df %>%
  select(id, statusSource, text, created) %>%
  extract(statusSource, "source", "Twitter for (.*?)<") %>%
  filter(source %in% c("iPhone", "Android"))
tweets
## # A tibble: 1,390 x 4
##    id          source text                             created
##    <chr>       <chr>  <chr>                            <dttm>
##  1 7626698825… Andro… My economic policy speech will … 2016-08-08 15:20:44
##  2 7626415954… iPhone Join me in Fayetteville, North … 2016-08-08 13:28:20
##  3 7624396589… iPhone "#ICYMI: \"Will Media Apologize… 2016-08-08 00:05:54
##  4 7624253718… Andro… Michael Morell, the lightweight… 2016-08-07 23:09:08
##  5 7624008698… Andro… "The media is going crazy. They… 2016-08-07 21:31:46
##  6 7622845333… Andro… I see where Mayor Stephanie Raw… 2016-08-07 13:49:29
##  7 7621109187… iPhone Thank you Windham, New Hampshir… 2016-08-07 02:19:37
##  8 7621069044… iPhone ".@Larry_Kudlow - 'Donald Trump… 2016-08-07 02:03:39
##  9 7621044117… Andro… I am not just running against C… 2016-08-07 01:53:45
## 10 7620164261… iPhone "#CrookedHillary is not fit to … 2016-08-06 20:04:08
## # ... with 1,380 more rows

This dataset is a collection of 1390 tweets from Twitter user @realDonaldTrump. For this exercise we will be interested in the text variable, which contains the text of each tweet.

For example:

tweets[2, "text"]
## # A tibble: 1 x 1
##   text
##   <chr>
## 1 Join me in Fayetteville, North Carolina tomorrow evening at 6pm. Tickets…

More specifically, we’ll be interested in the lengths of these tweets:

tweet_lengths = nchar(tweets$text)
head(tweet_lengths)
## [1]  67 114  64 134 135 138
hist(tweet_lengths, col = "darkgrey",
     main = "@realdonaldtrump Tweets",
     xlab = "Number of Characters")
box()
grid()
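Looking ahead to the bootstrap goals of this lab, a percentile bootstrap confidence interval for a mean could be sketched as follows. The helper name `boot_ci_mean` and the synthetic stand-in data are illustrative assumptions, not part of the lab; in practice you would apply the same idea to `tweet_lengths`:

```r
# Illustrative sketch only: a percentile bootstrap confidence interval
# for a mean. boot_ci_mean is a made-up helper name; in the lab the
# same idea would be applied to tweet_lengths.
boot_ci_mean = function(x, B = 5000, conf = 0.95) {
  boot_means = replicate(B, mean(sample(x, size = length(x), replace = TRUE)))
  a = 1 - conf
  quantile(boot_means, probs = c(a / 2, 1 - a / 2))
}

set.seed(42)
x = rgamma(100, shape = 0.3, scale = 1.2)  # synthetic stand-in data
boot_ci_mean(x)
```

Percentile intervals are only one of several bootstrap interval constructions; the reading posted with the lab covers the approach expected here.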