Exercise 1

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample of size \(n\) from a distribution with probability density function

\[ f(x, \theta) = \frac{1}{\theta}e^{-x/\theta}, \quad x > 0, \ \theta > 0 \]

Note that the moments of this distribution are given by

\[ E[X^k] = \int_{0}^{\infty} \frac{x^k}{\theta}e^{-x/\theta} \, dx = k! \cdot \theta^k. \]

This will be a useful fact for Exercises 2 and 3.
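As an optional numerical check (not part of the exercise), this moment formula can be verified by quadrature; the values \(\theta = 2\) and \(k = 3\) below are arbitrary choices for illustration.

```python
# Quadrature check of E[X^k] = k! * theta^k for an exponential(theta) density.
# theta = 2 and k = 3 are illustrative values, not taken from the exercise.
import math
import numpy as np
from scipy.integrate import quad

theta, k = 2.0, 3
moment, _ = quad(lambda x: x**k / theta * np.exp(-x / theta), 0, np.inf)

print(moment)                          # approximately 48
print(math.factorial(k) * theta**k)    # k! * theta^k = 48
```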

(a) Obtain the maximum likelihood estimator of \(\theta\), \(\hat{\theta}\). (This should be a function of the observed \(x_i\) and the sample size \(n\).) Calculate the estimate when

\[ x_{1} = 0.50, \ x_{2} = 1.50, \ x_{3} = 4.00, \ x_{4} = 3.00. \]

(This should be a single number, for this dataset.)

Solution:

We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.

\[ L(\theta) = \prod_{i = 1}^{n} f(x_i; \theta) = \prod_{i = 1}^{n} \frac{1}{\theta}e^{-x_i/\theta} = \theta^{-n}\exp\left(-\frac{\sum_{i = 1}^{n} x_i}{\theta}\right) \]

Rather than maximizing the likelihood directly, we maximize the log-likelihood.

\[ \log L(\theta) = -n \log \theta - \frac{\sum_{i = 1}^{n} x_i}{\theta} \]

To maximize this function, we take a derivative with respect to \(\theta\).

\[ \frac{d}{d\theta} \log L(\theta) = \frac{-n}{\theta} + \frac{\sum_{i = 1}^{n} x_i}{\theta^2} \]

We set this derivative equal to zero, then solve for \(\theta\).

\[ \frac{-n}{\theta} + \frac{\sum_{i = 1}^{n} x_i}{\theta^2} = 0 \]

Solving gives our estimator, which we denote with a hat.

\[ \hat{\theta} = \frac{\sum_{i = 1}^{n} x_i}{n} = \bar{x} \]

Using the given data, we obtain an estimate.

\[ \hat{\theta} = \frac{0.50 + 1.50 + 4.00 + 3.00}{4} = \boxed{2.25} \]
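As a quick sanity check of this closed form (not required for the exercise), we can also maximize the log-likelihood numerically with SciPy and compare to the sample mean.

```python
# Numerical check: minimize the negative log-likelihood and compare to the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([0.50, 1.50, 4.00, 3.00])

def neg_log_lik(theta):
    # -log L(theta) = n * log(theta) + sum(x_i) / theta
    return len(x) * np.log(theta) + x.sum() / theta

result = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print(result.x)   # numerical maximizer, approximately 2.25
print(x.mean())   # closed-form MLE, exactly 2.25
```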

(b) Calculate the bias of the maximum likelihood estimator of \(\theta\), \(\hat{\theta}\). (This will be a number.)

Solution:

Note that we have an exponential distribution.

\[ E[X_i] = \theta \]

\[ \text{Var}[X_i] = \theta^2 \]

\[\begin{align*} \text{Bias}(\hat{\theta}) &= E[\hat{\theta}] - \theta \\[1.5ex] &= E\left[\frac{\sum_{i = 1}^{n} X_i}{n}\right] - \theta \\ &= \frac{1}{n} \sum_{i = 1}^{n} E[X_i] - \theta \\ &= \frac{1}{n} n\theta - \theta \\ &= \theta - \theta = \boxed{0} \end{align*}\]

(c) Find the mean squared error of the maximum likelihood estimator of \(\theta\), \(\hat{\theta}\). (This will be an expression based on the parameter \(\theta\) and the sample size \(n\). Be aware of your answer to the previous part, as well as the distribution given.)

Solution:

\[\begin{align*} \text{MSE}(\hat{\theta}) &= [\text{Bias}(\hat{\theta})]^2 + \text{Var}(\hat{\theta}) \\[1.5ex] &= 0 + \text{Var}\left(\frac{\sum_{i = 1}^{n} X_i}{n}\right) \\ &= \frac{1}{n^2} \sum_{i = 1}^{n} \text{Var}(X_i) \\ &= \frac{1}{n^2} n\theta^2 = \boxed{\frac{\theta^2}{n}} \end{align*}\]
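A short simulation sketch can confirm both the zero bias from part (b) and this MSE formula; the values \(\theta = 2\) and \(n = 4\) below are arbitrary illustrative choices.

```python
# Monte Carlo check that E[theta_hat] is about theta and MSE is about theta^2 / n.
import numpy as np

rng = np.random.default_rng(42)
theta, n, reps = 2.0, 4, 200_000

samples = rng.exponential(scale=theta, size=(reps, n))
theta_hat = samples.mean(axis=1)             # MLE for each simulated sample

print(theta_hat.mean() - theta)              # empirical bias, close to 0
print(np.mean((theta_hat - theta) ** 2))     # empirical MSE, close to 1.0
print(theta ** 2 / n)                        # theoretical MSE = theta^2 / n = 1.0
```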

(d) Provide an estimate for \(P[X > 4]\) when

\[ x_{1} = 0.50, \ x_{2} = 1.50, \ x_{3} = 4.00, \ x_{4} = 3.00. \]

Solution:

\[ P[X > 4] = \int_{4}^{\infty} \frac{1}{\theta}e^{-x/\theta} \, dx = e^{-4 / \theta} \]

\[ \hat{P}[X > 4] = e^{-4 / \hat{\theta}} = e^{-4 / 2.25} = \boxed{0.1690} \]
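The same plug-in estimate can be computed with SciPy's exponential distribution, which is parameterized by the scale \(\hat{\theta}\).

```python
# Survival function P[X > 4] for an exponential distribution with scale theta_hat = 2.25.
from scipy.stats import expon

print(expon.sf(4, scale=2.25))   # approximately 0.1690
```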

Exercise 2

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample of size \(n\) from a distribution with probability density function

\[ f(x, \alpha) = \alpha^{-2}xe^{-x/\alpha}, \quad x > 0, \ \alpha > 0 \]

(a) Obtain the maximum likelihood estimator of \(\alpha\), \(\hat{\alpha}\). Calculate the estimate when

\[ x_{1} = 0.25, \ x_{2} = 0.75, \ x_{3} = 1.50, \ x_{4} = 2.50, \ x_{5} = 2.00. \]

Solution:

We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.

\[ L(\alpha) = \prod_{i = 1}^{n} f(x_i; \alpha) = \prod_{i = 1}^{n} \alpha^{-2} x_i e^{-x_i/\alpha} = \alpha^{-2n} \left(\prod_{i = 1}^{n} x_i \right) \exp\left(-\frac{\sum_{i = 1}^{n} x_i}{\alpha}\right) \]

Rather than maximizing the likelihood directly, we maximize the log-likelihood.

\[ \log L(\alpha) = -2n \log \alpha + \sum_{i = 1}^{n} \log x_i - \frac{\sum_{i = 1}^{n} x_i}{\alpha} \]

To maximize this function, we take a derivative with respect to \(\alpha\).

\[ \frac{d}{d\alpha} \log L(\alpha) = \frac{-2n}{\alpha} + \frac{\sum_{i = 1}^{n} x_i}{\alpha^2} \]

We set this derivative equal to zero, then solve for \(\alpha\).

\[ \frac{-2n}{\alpha} + \frac{\sum_{i = 1}^{n} x_i}{\alpha^2} = 0 \]

Solving gives our estimator, which we denote with a hat.

\[ \hat{\alpha} = \frac{\sum_{i = 1}^{n} x_i}{2n} = \frac{\bar{x}}{2} \]

Using the given data, we obtain an estimate.

\[ \hat{\alpha} = \frac{0.25 + 0.75 + 1.50 + 2.50 + 2.00}{2 \cdot 5} = \boxed{0.70} \]
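Here, too, a numerical maximization of the log-likelihood (dropping the terms that do not involve \(\alpha\)) can serve as a quick check against the closed form \(\bar{x}/2\).

```python
# Numerical check of the Exercise 2 MLE against the closed form x-bar / 2.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([0.25, 0.75, 1.50, 2.50, 2.00])

def neg_log_lik(alpha):
    # terms that do not involve alpha are omitted; they do not change the maximizer
    return 2 * len(x) * np.log(alpha) + x.sum() / alpha

result = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print(result.x)       # numerical maximizer, approximately 0.70
print(x.mean() / 2)   # closed-form MLE, exactly 0.70
```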

(b) Obtain the method of moments estimator of \(\alpha\), \(\tilde{\alpha}\). Calculate the estimate when

\[ x_{1} = 0.25, \ x_{2} = 0.75, \ x_{3} = 1.50, \ x_{4} = 2.50, \ x_{5} = 2.00. \]

Solution:

We first obtain the first population moment. Notice that the integration is done by recognizing the integral as the second moment of an exponential distribution (the useful fact from Exercise 1).

\[ E[X] = \int_{0}^{\infty} x \cdot \alpha^{-2}xe^{-x/\alpha} dx = \frac{1}{\alpha}\int_{0}^{\infty} \frac{x^2}{\alpha} e^{-x/\alpha} dx = \frac{1}{\alpha}(2\alpha^2) = 2\alpha \]

We then set the first population moment, which is a function of \(\alpha\), equal to the first sample moment.

\[ 2\alpha = \frac{\sum_{i = 1}^{n} x_i}{n} \]

Solving for \(\alpha\), we obtain the method of moments estimator.

\[ \tilde{\alpha} = \frac{\sum_{i = 1}^{n} x_i}{2n} = \frac{\bar{x}}{2} \]

Using the given data, we obtain an estimate.

\[ \tilde{\alpha} = \frac{0.25 + 0.75 + 1.50 + 2.50 + 2.00}{2 \cdot 5} = \boxed{0.70} \]

Note that, in this case, the MLE and MoM estimators are the same.

Exercise 3

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample of size \(n\) from a distribution with probability density function

\[ f(x, \beta) = \frac{1}{2 \beta^3} x^2 e^{-x/\beta}, \quad x > 0, \ \beta > 0 \]

(a) Obtain the maximum likelihood estimator of \(\beta\), \(\hat{\beta}\). Calculate the estimate when

\[ x_{1} = 2.00, \ x_{2} = 4.00, \ x_{3} = 7.50, \ x_{4} = 3.00. \]

Solution:

We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.

\[ L(\beta) = \prod_{i = 1}^{n} f(x_i; \beta) = \prod_{i = 1}^{n} \frac{1}{2 \beta^3} x_i^2 e^{-x_i/\beta} = 2^{-n} \beta^{-3n} \left(\prod_{i = 1}^{n} x_i^2 \right) \exp\left(-\frac{\sum_{i = 1}^{n} x_i}{\beta}\right) \]

Rather than maximizing the likelihood directly, we maximize the log-likelihood.

\[ \log L(\beta) = -n \log 2 - 3n \log \beta + 2\sum_{i = 1}^{n} \log x_i - \frac{\sum_{i = 1}^{n} x_i}{\beta} \]

To maximize this function, we take a derivative with respect to \(\beta\).

\[ \frac{d}{d\beta} \log L(\beta) = \frac{-3n}{\beta} + \frac{\sum_{i = 1}^{n} x_i}{\beta^2} \]

We set this derivative equal to zero, then solve for \(\beta\).

\[ \frac{-3n}{\beta} + \frac{\sum_{i = 1}^{n} x_i}{\beta^2} = 0 \]

Solving gives our estimator, which we denote with a hat.

\[ \hat{\beta} = \frac{\sum_{i = 1}^{n} x_i}{3n} = \frac{\bar{x}}{3} \]

Using the given data, we obtain an estimate.

\[ \hat{\beta} = \frac{2.00 + 4.00 + 7.50 + 3.00}{3 \cdot 4} = \boxed{1.375} \]
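Since this pdf is a gamma density with shape 3 and scale \(\beta\), the estimate can also be checked by fitting a gamma distribution with SciPy while holding the shape and location fixed; the `fa` and `floc` keywords below follow SciPy's `fit` interface.

```python
# Fit a gamma distribution with shape fixed at 3 and location fixed at 0;
# the fitted scale should match the closed-form MLE x-bar / 3.
import numpy as np
from scipy.stats import gamma

x = np.array([2.00, 4.00, 7.50, 3.00])

a_hat, loc_hat, scale_hat = gamma.fit(x, fa=3, floc=0)
print(scale_hat)       # approximately 1.375
print(x.mean() / 3)    # closed-form MLE, exactly 1.375
```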

(b) Obtain the method of moments estimator of \(\beta\), \(\tilde{\beta}\). Calculate the estimate when

\[ x_{1} = 2.00, \ x_{2} = 4.00, \ x_{3} = 7.50, \ x_{4} = 3.00. \]

Solution:

We first obtain the first population moment. Notice that the integration is done by recognizing the integral as the third moment of an exponential distribution (again using the fact from Exercise 1).

\[ E[X] = \int_{0}^{\infty} x \cdot \frac{1}{2 \beta^3} x^2 e^{-x/\beta} dx = \frac{1}{2\beta^2}\int_{0}^{\infty} \frac{x^3}{\beta} e^{-x/\beta} dx = \frac{1}{2\beta^2}(6\beta^3) = 3\beta \]

We then set the first population moment, which is a function of \(\beta\), equal to the first sample moment.

\[ 3\beta = \frac{\sum_{i = 1}^{n} x_i}{n} \]

Solving for \(\beta\), we obtain the method of moments estimator.

\[ \tilde{\beta} = \frac{\sum_{i = 1}^{n} x_i}{3n} = \frac{\bar{x}}{3} \]

Using the given data, we obtain an estimate.

\[ \tilde{\beta} = \frac{2.00 + 4.00 + 7.50 + 3.00}{3 \cdot 4} = \boxed{1.375} \]

Note again, the MLE and MoM estimators are the same.

Exercise 4

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be a random sample of size \(n\) from a distribution with probability density function

\[ f(x, \lambda) = \lambda x^{\lambda - 1}, \quad 0 < x < 1, \lambda > 0 \]

(a) Obtain the maximum likelihood estimator of \(\lambda\), \(\hat{\lambda}\). Calculate the estimate when

\[ x_{1} = 0.10, \ x_{2} = 0.20, \ x_{3} = 0.30, \ x_{4} = 0.40. \]

Solution:

We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.

\[ L(\lambda) = \prod_{i = 1}^{n} f(x_i; \lambda) = \prod_{i = 1}^{n} \lambda x_i^{\lambda - 1} = \lambda^n \left(\prod_{i = 1}^{n} x_i \right)^{\lambda - 1} \]

Rather than maximizing the likelihood directly, we maximize the log-likelihood.

\[ \log L(\lambda) = n \log \lambda + (\lambda - 1) \sum_{i = 1}^{n} \log x_i \]

To maximize this function, we take a derivative with respect to \(\lambda\).

\[ \frac{d}{d\lambda} \log L(\lambda) = \frac{n}{\lambda} + \sum_{i = 1}^{n} \log x_i \]

We set this derivative equal to zero, then solve for \(\lambda\).

\[ \frac{n}{\lambda} + \sum_{i = 1}^{n} \log x_i = 0 \]

Solving gives our estimator, which we denote with a hat.

\[ \hat{\lambda} = -\frac{n}{\sum_{i = 1}^{n} \log x_i} \]

Using the given data, we obtain an estimate.

\[ \hat{\lambda} = -\frac{n}{\sum_{i = 1}^{n} \log x_i} = -\frac{4}{\log(0.1 \cdot 0.2 \cdot 0.3 \cdot 0.4)} = \boxed{0.6631} \]
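As before, a direct numerical maximization of the log-likelihood can be used to check the closed form on this dataset.

```python
# Numerical check of the Exercise 4 MLE against the closed form -n / sum(log x_i).
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([0.10, 0.20, 0.30, 0.40])

def neg_log_lik(lam):
    return -(len(x) * np.log(lam) + (lam - 1) * np.log(x).sum())

result = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print(result.x)                     # numerical maximizer, approximately 0.6631
print(-len(x) / np.log(x).sum())    # closed-form MLE
```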

Note that this is actually a reparameterization of an example seen in class where \(\lambda = \frac{1}{\theta}\). Had you realized this, you could have simply found the answer via invariance.

(b) Obtain the method of moments estimator of \(\lambda\), \(\tilde{\lambda}\). Calculate the estimate when

\[ x_{1} = 0.10, \ x_{2} = 0.20, \ x_{3} = 0.30, \ x_{4} = 0.40. \]

Solution:

We first obtain the first population moment.

\[ E[X] = \int_{0}^{1} x \cdot \lambda x^{\lambda - 1} dx = \frac{\lambda}{\lambda + 1} \]

We then set the first population moment, which is a function of \(\lambda\), equal to the first sample moment.

\[ \frac{\lambda}{\lambda + 1} = \frac{\sum_{i = 1}^{n} x_i}{n} = \bar{x} \]

Solving for \(\lambda\), we obtain the method of moments estimator.

\[ \tilde{\lambda} = \frac{\bar{x}}{1 - \bar{x}} \]

Using the given data, we obtain an estimate.

\[ \bar{x} = \frac{0.1 + 0.2 + 0.3 + 0.4}{4} = 0.25 \]

\[ \tilde{\lambda} = \frac{0.25}{1 - 0.25} = \boxed{\frac{1}{3}} \]

Note that the MLE and MoM estimators are different.
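For a final illustration, a brief sketch comparing the two estimates numerically on this dataset:

```python
# Method of moments versus maximum likelihood estimates for the Exercise 4 data.
import numpy as np

x = np.array([0.10, 0.20, 0.30, 0.40])
xbar = x.mean()

lam_mom = xbar / (1 - xbar)            # method of moments: x-bar / (1 - x-bar)
lam_mle = -len(x) / np.log(x).sum()    # maximum likelihood: -n / sum(log x_i)

print(lam_mom)   # 1/3, approximately 0.3333
print(lam_mle)   # approximately 0.6631
```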