Exercise 1

Let \(X_1, X_2, \ldots, X_n \stackrel{iid}{\sim} \text{Poisson}(\lambda)\). That is

\[ f(x \mid \lambda) = \frac{\lambda^xe^{-\lambda}}{x!}, \quad x = 0, 1, 2, \ldots, \quad \lambda > 0 \]

(a) Obtain a method of moments estimator for \(\lambda\), \(\tilde{\lambda}\). Calculate an estimate using this estimator when

\[ x_{1} = 1, \ x_{2} = 2, \ x_{3} = 4, \ x_{4} = 2. \]

Solution:

Recall that for a Poisson distribution we have \(\text{E}[X] = \lambda\).

Now, to obtain the method of moments estimator, we simply equate the first population moment to the first sample moment. (Then we need to “solve” this equation for \(\lambda\)…)

\[ \begin{aligned} \text{E}[X] & = \bar{X} \\[1em] \lambda & = \bar{X} \end{aligned} \]

Thus, after “solving” we obtain the method of moments estimator.

\[ \boxed{\tilde{\lambda} = \bar{X}} \]

Thus for the given data we can use this estimator to calculate the estimate.

\[ \tilde{\lambda} = \bar{x} = \frac{1}{4}(1 + 2 + 4 + 2) = \boxed{2.25} \]
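As a quick sanity check of this arithmetic, a minimal Python sketch (standard library only):

```python
# Method of moments estimate for the Poisson rate: the sample mean.
x = [1, 2, 4, 2]

lambda_mom = sum(x) / len(x)
print(lambda_mom)  # 2.25
```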

(b) Find the maximum likelihood estimator for \(\lambda\), \(\hat{\lambda}\). Calculate an estimate using this estimator when

\[ x_{1} = 1, \ x_{2} = 2, \ x_{3} = 4, \ x_{4} = 2. \]

Solution:

\[ L(\lambda) = \prod_{i = 1}^{n}f(x_i \mid \lambda) = \prod_{i = 1}^{n} \frac{\lambda^{x_i} e^{-\lambda}}{x_i!} = \frac{\lambda^{\sum_{i = 1}^{n}x_i}e^{-n\lambda}}{\prod_{i = 1}^{n}\left(x_i!\right)} \]

\[ \log L(\lambda) = \left(\sum_{i = 1}^{n}x_i\right)\log \lambda - n\lambda - \sum_{i = 1}^{n}\log \left(x_i!\right) \]

\[ \frac{d}{d\lambda} \log L(\lambda) = \frac{\sum_{i = 1}^{n}x_i}{\lambda} - n = 0 \]

\[ \hat{\lambda} = \frac{1}{n}\sum_{i = 1}^{n} x_i \]

\[ \frac{d^2}{d\lambda^2} \log L(\lambda) = - \frac{\sum_{i = 1}^{n}x_i}{\lambda^2} < 0 \]

We then have the estimator, and for the given data, the estimate.

\[ \boxed{\hat{\lambda} = \frac{1}{n}\sum_{i = 1}^{n} x_i} = \frac{1}{4}(1 + 2 + 4 + 2) = \boxed{2.25} \]
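The closed form can also be confirmed by maximizing the log-likelihood numerically. A minimal sketch, assuming SciPy is available; the bounds are an arbitrary choice for illustration:

```python
import math

from scipy.optimize import minimize_scalar

x = [1, 2, 4, 2]

def neg_log_likelihood(lam):
    # Negative Poisson log-likelihood, up to the constant sum(log(x_i!)).
    return -(sum(x) * math.log(lam) - len(x) * lam)

# Maximize the log-likelihood by minimizing its negative over lambda > 0.
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 100), method="bounded")
print(result.x)  # approximately 2.25, matching the closed-form MLE
```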

(c) Find the maximum likelihood estimator of \(P[X = 4]\), call it \(\hat{P}[X = 4]\). Calculate an estimate using this estimator when

\[ x_{1} = 1, \ x_{2} = 2, \ x_{3} = 4, \ x_{4} = 2. \]

Solution:

Here we use the invariance property of the MLE. Since \(\hat{\lambda}\) is the MLE for \(\lambda\),

\[ \boxed{\hat{P}[X = 4] = \frac{\hat{\lambda}^4 e^{-\hat{\lambda}}}{4!}} \]

is the maximum likelihood estimator for \(P[X = 4]\).

For the given data we can calculate an estimate using this estimator.

\[ \hat{P}[X = 4] = \frac{\hat{\lambda}^4 e^{-\hat{\lambda}}}{4!} = \frac{2.25^4 e^{-2.25}}{4!} = \boxed{0.1126} \]
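A quick check of this arithmetic in Python (standard library only):

```python
import math

lam_hat = 2.25

# Plug the MLE into the Poisson pmf at x = 4 (invariance property).
p_hat = lam_hat**4 * math.exp(-lam_hat) / math.factorial(4)
print(round(p_hat, 4))  # 0.1126
```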


Exercise 2

Let \(X_1, X_2, \ldots, X_n \stackrel{iid}{\sim} N(\theta,\sigma^2)\).

Find a method of moments estimator for the parameter vector \(\left(\theta, \sigma^2\right)\).

Solution:

Since we are estimating two parameters, we will need two population and sample moments.

\[ \text{E}[X] = \theta \]

\[ \text{E}\left[X^2\right] = \text{Var}\left[X\right] + \left(\text{E}[X]\right)^2 = \sigma^2 + \theta^2 \]

We equate the first population moment to the first sample moment, \(\bar{X}\), and we equate the second population moment to the second sample moment, \(\overline{X^2} = \frac{1}{n}\sum_{i = 1}^{n}X_i^2\).

\[ \begin{aligned} \text{E}\left[X\right] & = \bar{X}\\[1em] \text{E}\left[X^2\right] & = \overline{X^2}\\[1em] \end{aligned} \]

For this example, that is,

\[ \begin{aligned} \theta & = \bar{X}\\[1em] \sigma^2 + \theta^2 & = \overline{X^2}\\[1em] \end{aligned} \]

Solving this system of equations for \(\theta\) and \(\sigma^2\) we find the method of moments estimators.

\[ \boxed{ \begin{aligned} \tilde{\theta} & = \bar{X}\\[1em] \tilde{\sigma}^2 & = \overline{X^2} - (\bar{X})^2 = \frac{1}{n}\sum_{i = 1}^{n}(X_i - \bar{X})^2\\[1em] \end{aligned} } \]
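A small Python sketch of these estimators, using made-up data purely for illustration:

```python
# Method of moments estimators for N(theta, sigma^2).
def mom_normal(xs):
    n = len(xs)
    m1 = sum(x for x in xs) / n           # first sample moment
    m2 = sum(x ** 2 for x in xs) / n      # second sample moment
    return m1, m2 - m1 ** 2               # (theta_tilde, sigma2_tilde)

sample = [2.3, 1.7, 3.1, 2.9, 2.0]        # hypothetical data
theta_tilde, sigma2_tilde = mom_normal(sample)
print(theta_tilde, sigma2_tilde)
```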


Exercise 3

Let \(X_1, X_2, \ldots, X_n \stackrel{iid}{\sim} N(1,\sigma^2)\).

Find a method of moments estimator of \(\sigma^2\), call it \(\tilde{\sigma}^2\).

Solution:

The first moment is not useful here, because it is not a function of the parameter of interest, \(\sigma^2\).

\[ \text{E}[X] = 1 \]

As a result, we instead use the second moment.

\[ \text{E}\left[X^2\right] = \text{Var}\left[X\right] + \left(\text{E}[X]\right)^2 = \sigma^2 + 1^2 = \sigma^2 + 1 \]

We equate this second population moment to the second sample moment, \(\overline{X^2} = \frac{1}{n}\sum_{i = 1}^{n}X_i^2\).

\[ \begin{aligned} \text{E}\left[X^2\right] & = \overline{X^2}\\[1em] \sigma^2 + 1 & = \overline{X^2} \end{aligned} \]

Now solving for \(\sigma^2\) we obtain the method of moments estimator.

\[ \boxed{\tilde{\sigma}^2 = \left(\frac{1}{n}\sum_{i = 1}^{n}X_i^2\right) - 1} \]
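A short simulation sketch (standard library `random`) suggesting the estimator behaves sensibly: with a large sample from \(N(1, \sigma^2)\), \(\tilde{\sigma}^2\) should land close to the true \(\sigma^2\). The seed, sample size, and true variance are arbitrary choices.

```python
import random

random.seed(42)
sigma2_true = 4.0

# Draw a large sample from N(1, sigma^2); random.gauss takes the standard deviation.
xs = [random.gauss(1, sigma2_true ** 0.5) for _ in range(100_000)]

# Method of moments estimate: (second sample moment) - 1.
sigma2_mom = sum(x ** 2 for x in xs) / len(xs) - 1
print(sigma2_mom)  # should be close to 4.0
```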


Exercise 4

Let \(X_1, X_2, \ldots, X_n\) be iid from a population with pdf

\[ f(x \mid \theta) = \frac{1}{\theta}x^{(1-\theta)/\theta}, \quad 0 < x < 1, \ 0 < \theta < \infty \]

(a) Find the maximum likelihood estimator of \(\theta\), call it \(\hat{\theta}\). Calculate an estimate using this estimator when

\[ x_{1} = 0.10, \ x_{2} = 0.22, \ x_{3} = 0.54, \ x_{4} = 0.36. \]

Solution:

\[ L(\theta) = \prod_{i = 1}^{n}f(x_i \mid \theta) = \prod_{i = 1}^{n} \frac{1}{\theta}x_i^{(1-\theta)/\theta} = \theta^{-n} \left(\prod_{i = 1}^{n}x_i\right)^{\frac{1 -\theta}{\theta}} \]

\[ \log L(\theta) = -n \log \theta + \frac{1 -\theta}{\theta} \sum_{i = 1}^{n}\log x_i = -n \log \theta + \frac{1}{\theta} \sum_{i = 1}^{n}\log x_i - \sum_{i = 1}^{n}\log x_i \]

\[ \frac{d}{d\theta} \log L(\theta) = -\frac{n}{\theta} - \frac{1}{\theta^2}\sum_{i = 1}^{n}\log x_i = 0 \]

\[ \hat{\theta} = -\frac{1}{n}\sum_{i = 1}^{n}\log x_i \]

Note that \(\hat{\theta} > 0\), since each \(\log x_i < 0\) because \(0 < x_i < 1\).

\[ \frac{d^2}{d\theta^2} \log L(\theta) = \frac{n}{\theta^2} + \frac{2}{\theta^3} \sum_{i = 1}^{n}\log x_i \]

\[ \frac{d^2}{d\theta^2} \log L(\hat{\theta}) = \frac{n}{\hat{\theta}^2} + \frac{2}{\hat{\theta}^3} \left(-n\hat{\theta}\right) = \frac{n}{\hat{\theta}^2} - \frac{2n}{\hat{\theta}^2} = -\frac{n}{\hat{\theta}^2} < 0 \]

We then have the estimator, and for the given data, the estimate.

\[ \boxed{\hat{\theta} = -\frac{1}{n}\sum_{i = 1}^{n}\log x_i} = -\frac{1}{4}\log (0.10 \cdot 0.22 \cdot 0.54 \cdot 0.36) = \boxed{1.3636} \]
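Checking the arithmetic with a quick Python sketch (standard library only):

```python
import math

x = [0.10, 0.22, 0.54, 0.36]

# MLE: the negative mean of the log observations.
theta_hat = -sum(math.log(xi) for xi in x) / len(x)
print(round(theta_hat, 4))  # 1.3636
```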

(b) Obtain a method of moments estimator for \(\theta\), \(\tilde{\theta}\). Calculate an estimate using this estimator when

\[ x_{1} = 0.10, \ x_{2} = 0.22, \ x_{3} = 0.54, \ x_{4} = 0.36. \]

Solution:

\[ \text{E}[X] = \int_{0}^{1} x \cdot \frac{1}{\theta}x^{(1-\theta)/\theta} \, dx = \frac{1}{\theta}\int_{0}^{1} x^{1/\theta} \, dx = \frac{1}{\theta} \cdot \frac{\theta}{\theta + 1} = \frac{1}{\theta + 1} \]

\[ \begin{aligned} \text{E}[X] & = \bar{X} \\[1em] \frac{1}{\theta + 1} & = \bar{X} \end{aligned} \]

Solving for \(\theta\) results in the method of moments estimator.

\[ \boxed{\tilde{\theta} = \frac{1- \bar{X}}{\bar{X}}} \]

\[ \bar{x} = \frac{1}{4}(0.10 + 0.22 + 0.54 + 0.36) = 0.305 \]

Thus for the given data we can calculate the estimate.

\[ \tilde{\theta} = \frac{1 -\bar{x}}{\bar{x}} = \frac{1 - 0.305}{0.305} = \boxed{2.2787} \]
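And the corresponding check for the method of moments estimate:

```python
x = [0.10, 0.22, 0.54, 0.36]

x_bar = sum(x) / len(x)           # 0.305
theta_mom = (1 - x_bar) / x_bar   # method of moments estimate
print(round(theta_mom, 4))        # 2.2787
```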


Exercise 5

Let \(X_1, X_2, \ldots, X_n\) be iid from a population with pdf

\[ f(x \mid \theta) = \frac{\theta}{x^2}, \quad 0 < \theta \leq x \]

Obtain the maximum likelihood estimator for \(\theta\), \(\hat{\theta}\).

Solution:

First, be aware that the values of \(x\) for this pdf are restricted by the value of \(\theta\).

\[ \begin{aligned} L(\theta) &= \prod_{i = 1}^{n} \frac{\theta}{x_i^2} \quad 0 < \theta \leq x_i \text{ for all } x_i \\[1em] & = \frac{\theta^n}{\prod_{i = 1}^{n}x_i^2} \quad 0 < \theta \leq \min\{x_i\} \end{aligned} \]

\[ \log L(\theta) = n\log \theta - 2\sum_{i = 1}^{n} \log x_i \]

\[ \frac{d}{d\theta} \log L(\theta) = \frac{n}{\theta} > 0 \]

So, here we have a log-likelihood that is increasing wherever the likelihood is nonzero, that is, when \(0 < \theta \leq \min\{x_i\}\). Thus, the likelihood is maximized at the largest allowable value of \(\theta\) in this region, and the maximum likelihood estimator is given by

\[ \boxed{\hat{\theta} = \min\{X_i\}} \]
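A short simulation sketch illustrating this estimator. Since the cdf here is \(F(x) = 1 - \theta/x\) for \(x \geq \theta\), we can simulate by inverse transform, \(X = \theta/(1-U)\) with \(U \sim \text{Uniform}(0,1)\); the sample minimum should then sit just above the true \(\theta\). The specific values are arbitrary choices for illustration.

```python
import random

random.seed(1)
theta_true = 3.0

# Inverse-transform sampling: F(x) = 1 - theta/x  =>  x = theta / (1 - u).
xs = [theta_true / (1 - random.random()) for _ in range(10_000)]

theta_hat = min(xs)
print(theta_hat)  # slightly above 3.0
```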


Exercise 6

Let \(X_{1}, X_{2}, \ldots X_{n}\) be a random sample of size \(n\) from a distribution with probability density function

\[ f(x, \alpha) = \alpha^{-2}xe^{-x/\alpha}, \quad x > 0, \ \alpha > 0 \]

(a) Obtain the maximum likelihood estimator of \(\alpha\), \(\hat{\alpha}\). Calculate the estimate when

\[ x_{1} = 0.25, \ x_{2} = 0.75, \ x_{3} = 1.50, \ x_{4} = 2.5, \ x_{5} = 2.0. \]

Solution:

We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.

\[ L(\alpha) = \prod_{i = 1}^{n} f(x_i; \alpha) = \prod_{i = 1}^{n} \alpha^{-2} x_i e^{-x_i/\alpha} = \alpha^{-2n} \left(\prod_{i = 1}^{n} x_i \right) \exp\left(-\frac{\sum_{i = 1}^{n} x_i}{\alpha}\right) \]

Instead of directly maximizing the likelihood, we maximize the log-likelihood.

\[ \log L(\alpha) = -2n \log \alpha + \sum_{i = 1}^{n} \log x_i - \frac{\sum_{i = 1}^{n} x_i}{\alpha} \]

To maximize this function, we take a derivative with respect to \(\alpha\).

\[ \frac{d}{d\alpha} \log L(\alpha) = \frac{-2n}{\alpha} + \frac{\sum_{i = 1}^{n} x_i}{\alpha^2} \]

We set this derivative equal to zero, then solve for \(\alpha\).

\[ \frac{-2n}{\alpha} + \frac{\sum_{i = 1}^{n} x_i}{\alpha^2} = 0 \]

Solving gives our estimator, which we denote with a hat.

\[ \boxed{\hat{\alpha} = \frac{\sum_{i = 1}^{n} x_i}{2n} = \frac{\bar{x}}{2}} \]

Using the given data, we obtain an estimate.

\[ \hat{\alpha} = \frac{0.25 + 0.75 + 1.50 + 2.50 + 2.0}{2 \cdot 5} = \boxed{0.70} \]

(We should also verify that this point is a maximum, a check which is omitted here.)
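A quick numeric check of the estimate in Python (standard library only):

```python
x = [0.25, 0.75, 1.50, 2.5, 2.0]

# MLE: half the sample mean.
alpha_hat = sum(x) / (2 * len(x))
print(alpha_hat)  # 0.7
```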

(b) Obtain the method of moments estimator of \(\alpha\), \(\tilde{\alpha}\). Calculate the estimate when

\[ x_{1} = 0.25, \ x_{2} = 0.75, \ x_{3} = 1.50, \ x_{4} = 2.5, \ x_{5} = 2.0. \]

Hint: Recall the probability density function of an exponential random variable.

\[ f(x \mid \theta) = \frac{1}{\theta}e^{-x/\theta}, \quad x > 0, \ \theta > 0 \]

Note that, the moments of this distribution are given by

\[ \text{E}[X^k] = \int_{0}^{\infty} \frac{x^k}{\theta}e^{-x/\theta} \, dx = k! \cdot \theta^k. \]

This hint will also be useful in the next exercise.

Solution:

We first obtain the first population moment. Notice the integration is done by recognizing that the integral has the form of the second moment of an exponential distribution.

\[ \text{E}[X] = \int_{0}^{\infty} x \cdot \alpha^{-2}xe^{-x/\alpha} dx = \frac{1}{\alpha}\int_{0}^{\infty} \frac{x^2}{\alpha} e^{-x/\alpha} dx = \frac{1}{\alpha}(2\alpha^2) = 2\alpha \]
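This expectation can also be verified symbolically. A minimal sketch, assuming SymPy is available:

```python
import sympy

x, alpha = sympy.symbols("x alpha", positive=True)

# E[X] for f(x; alpha) = alpha^(-2) * x * exp(-x/alpha) on (0, infinity).
EX = sympy.integrate(x * alpha**-2 * x * sympy.exp(-x / alpha), (x, 0, sympy.oo))
print(EX)  # 2*alpha
```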

We then set the first population moment, which is a function of \(\alpha\), equal to the first sample moment.

\[ 2\alpha = \frac{\sum_{i = 1}^{n} x_i}{n} \]

Solving for \(\alpha\), we obtain the method of moments estimator.

\[ \boxed{\tilde{\alpha} = \frac{\sum_{i = 1}^{n} x_i}{2n} = \frac{\bar{x}}{2}} \]

Using the given data, we obtain an estimate.

\[ \tilde{\alpha} = \frac{0.25 + 0.75 + 1.50 + 2.50 + 2.0}{2 \cdot 5} = \boxed{0.70} \]

Note that, in this case, the MLE and MoM estimators are the same.


Exercise 7

Let \(X_{1}, X_{2}, \ldots X_{n}\) be a random sample of size \(n\) from a distribution with probability density function

\[ f(x \mid \beta) = \frac{1}{2 \beta^3} x^2 e^{-x/\beta}, \quad x > 0, \ \beta > 0 \]

(a) Obtain the maximum likelihood estimator of \(\beta\), \(\hat{\beta}\). Calculate the estimate when

\[ x_{1} = 2.00, \ x_{2} = 4.00, \ x_{3} = 7.50, \ x_{4} = 3.00. \]

Solution:

We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.

\[ L(\beta) = \prod_{i = 1}^{n} f(x_i; \beta) = \prod_{i = 1}^{n} \frac{1}{2 \beta^3} x_i^2 e^{-x_i/\beta} = 2^{-n} \beta^{-3n} \left(\prod_{i = 1}^{n} x_i^2 \right) \exp\left(-\frac{\sum_{i = 1}^{n} x_i}{\beta}\right) \]

Instead of directly maximizing the likelihood, we maximize the log-likelihood.

\[ \log L(\beta) = -n \log 2 - 3n \log \beta + \sum_{i = 1}^{n} \log x_i^2 - \frac{\sum_{i = 1}^{n} x_i}{\beta} \]

To maximize this function, we take a derivative with respect to \(\beta\).

\[ \frac{d}{d\beta} \log L(\beta) = \frac{-3n}{\beta} + \frac{\sum_{i = 1}^{n} x_i}{\beta^2} \]

We set this derivative equal to zero, then solve for \(\beta\).

\[ \frac{-3n}{\beta} + \frac{\sum_{i = 1}^{n} x_i}{\beta^2} = 0 \]

Solving gives our estimator, which we denote with a hat.

\[ \boxed{\hat{\beta} = \frac{\sum_{i = 1}^{n} x_i}{3n} = \frac{\bar{x}}{3}} \]

Using the given data, we obtain an estimate.

\[ \hat{\beta} = \frac{2.00 + 4.00 + 7.50 + 3.00}{3 \cdot 4} = \boxed{1.375} \]

(We should also verify that this point is a maximum, a check which is omitted here.)

(b) Obtain the method of moments estimator of \(\beta\), \(\tilde{\beta}\). Calculate the estimate when

\[ x_{1} = 2.00, \ x_{2} = 4.00, \ x_{3} = 7.50, \ x_{4} = 3.00. \]

Solution:

We first obtain the first population moment. Notice the integration is done by recognizing that the integral has the form of the third moment of an exponential distribution.

\[ \text{E}[X] = \int_{0}^{\infty} x \cdot \frac{1}{2 \beta^3} x^2 e^{-x/\beta} dx = \frac{1}{2\beta^2}\int_{0}^{\infty} \frac{x^3}{\beta} e^{-x/\beta} dx = \frac{1}{2\beta^2}(6\beta^3) = 3\beta \]

We then set the first population moment, which is a function of \(\beta\), equal to the first sample moment.

\[ \begin{aligned} \text{E}[X] & = \bar{X} \\[1em] 3\beta & = \frac{\sum_{i = 1}^{n} x_i}{n} \end{aligned} \]

Solving for \(\beta\), we obtain the method of moments estimator.

\[ \boxed{\tilde{\beta} = \frac{\sum_{i = 1}^{n} x_i}{3n} = \frac{\bar{x}}{3}} \]

Using the given data, we obtain an estimate.

\[ \tilde{\beta} = \frac{2.00 + 4.00 + 7.50 + 3.00}{3 \cdot 4} = \boxed{1.375} \]

Note again that, in this case, the MLE and MoM estimators are the same.
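As in Exercise 6, we can check the common estimate numerically, and the same symbolic approach confirms \(\text{E}[X] = 3\beta\). A sketch, again assuming SymPy is available:

```python
import sympy

# Numeric check of the common estimate, x_bar / 3.
data = [2.00, 4.00, 7.50, 3.00]
print(sum(data) / (3 * len(data)))  # 1.375

# Symbolic check that E[X] = 3*beta for this density.
x, beta = sympy.symbols("x beta", positive=True)
EX = sympy.integrate(x**3 * sympy.exp(-x / beta) / (2 * beta**3), (x, 0, sympy.oo))
print(EX)  # 3*beta
```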