Let \(X_1, X_2, \ldots, X_n \stackrel{iid}{\sim} \text{Poisson}(\lambda)\). That is
\[ f(x \mid \lambda) = \frac{\lambda^xe^{-\lambda}}{x!}, \quad x = 0, 1, 2, \ldots \ \ \lambda > 0 \]
(a) Obtain a method of moments estimator for \(\lambda\), \(\tilde{\lambda}\). Calculate an estimate using this estimator when
\[ x_{1} = 1, \ x_{2} = 2, \ x_{3} = 4, \ x_{4} = 2. \]
Solution:
Recall that for a Poisson distribution we have \(\text{E}[X] = \lambda\).
Now to obtain the method of moments estimator we simply equate the first population mean to the first sample mean. (And then we need to “solve” this equation for \(\lambda\)…)
\[ \begin{aligned} \text{E}[X] & = \bar{X} \\[1em] \lambda & = \bar{X} \end{aligned} \]
Thus, after “solving” we obtain the method of moments estimator.
\[ \boxed{\tilde{\lambda} = \bar{X}} \]
Thus for the given data we can use this estimator to calculate the estimate.
\[ \tilde{\lambda} = \bar{x} = \frac{1}{4}(1 + 2 + 4 + 2) = \boxed{2.25} \]
(b) Find the maximum likelihood estimator for \(\lambda\), \(\hat{\lambda}\). Calculate an estimate using this estimator when
\[ x_{1} = 1, \ x_{2} = 2, \ x_{3} = 4, \ x_{4} = 2. \]
Solution:
\[ L(\lambda) = \prod_{i = 1}^{n}f(x_i \mid \lambda) = \prod_{i = 1}^{n} \frac{\lambda^{x_i} e^{-\lambda}}{x_i!} = \frac{\lambda^{\sum_{i = 1}^{n}x_i}e^{-n\lambda}}{\prod_{i = 1}^{n}\left(x_i!\right)} \]
\[ \log L(\lambda) = \left(\sum_{i = 1}^{n}x_i\right)\log \lambda - n\lambda - \sum_{i = 1}^{n}\log \left(x_i!\right) \]
\[ \frac{d}{d\lambda} \log L(\lambda) = \frac{\sum_{i = 1}^{n}x_i}{\lambda} - n = 0 \]
\[ \hat{\lambda} = \frac{1}{n}\sum_{i = 1}^{n} x_i \]
\[ \frac{d^2}{d\lambda^2} \log L(\lambda) = - \frac{\sum_{i = 1}^{n}x_i}{\lambda^2} < 0 \]
We then have the estimator, and for the given data, the estimate.
\[ \boxed{\hat{\lambda} = \frac{1}{n}\sum_{i = 1}^{n} x_i} = \frac{1}{4}(1 + 2 + 4 + 2) = \boxed{2.25} \]
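As a quick numerical check (a minimal sketch, assuming NumPy and SciPy are available), we can maximize the Poisson log-likelihood directly and confirm the maximizer agrees with the sample mean.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

x = np.array([1, 2, 4, 2])

# negative log-likelihood for Poisson(lambda)
def nll(lam):
    return -np.sum(poisson.logpmf(x, lam))

# maximize the log-likelihood by minimizing its negative over a bounded interval
result = minimize_scalar(nll, bounds=(1e-6, 20), method="bounded")

print(result.x)  # approximately 2.25
print(x.mean())  # 2.25, the closed-form MLE (and MoM estimate)
```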
(c) Find the maximum likelihood estimator of \(P[X = 4]\), call it \(\hat{P}[X = 4]\). Calculate an estimate using this estimator when
\[ x_{1} = 1, \ x_{2} = 2, \ x_{3} = 4, \ x_{4} = 2. \]
Solution:
Here we use the invariance property of the MLE. Since \(\hat{\lambda}\) is the MLE for \(\lambda\) then
\[ \boxed{\hat{P}[X = 4] = \frac{\hat{\lambda}^4 e^{-\hat{\lambda}}}{4!}} \]
is the maximum likelihood estimator for \(P[X = 4]\).
For the given data we can calculate an estimate using this estimator.
\[ \hat{P}[X = 4] = \frac{\hat{\lambda}^4 e^{-\hat{\lambda}}}{4!} = \frac{2.25^4 e^{-2.25}}{4!} = \boxed{0.1126} \]
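The same value can be checked against the Poisson pmf (a small sketch, assuming SciPy is available).

```python
import math
from scipy.stats import poisson

lam_hat = 2.25
print(poisson.pmf(4, lam_hat))                              # approximately 0.1126
print(lam_hat**4 * math.exp(-lam_hat) / math.factorial(4))  # same value via the formula
```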
Let \(X_1, X_2, \ldots, X_n \stackrel{iid}{\sim} N(\theta,\sigma^2)\).
Find a method of moments estimator for the parameter vector \(\left(\theta, \sigma^2\right)\).
Solution:
Since we are estimating two parameters, we will need two population and sample moments.
\[ \text{E}[X] = \theta \]
\[ \text{E}\left[X^2\right] = \text{Var}\left[X\right] + \left(\text{E}[X]\right)^2 = \sigma^2 + \theta^2 \]
We equate the first population moment to the first sample moment, \(\bar{X}\), and we equate the second population moment to the second sample moment, \(\overline{X^2} = \frac{1}{n}\sum_{i = 1}^{n}X_i^2\).
\[ \begin{aligned} \text{E}\left[X\right] & = \bar{X}\\[1em] \text{E}\left[X^2\right] & = \overline{X^2}\\[1em] \end{aligned} \]
For this example, that is,
\[ \begin{aligned} \theta & = \bar{X}\\[1em] \sigma^2 + \theta^2 & = \overline{X^2}\\[1em] \end{aligned} \]
Solving this system of equations for \(\theta\) and \(\sigma^2\) we find the method of moments estimators.
\[ \boxed{ \begin{aligned} \tilde{\theta} & = \bar{X}\\[1em] \tilde{\sigma}^2 & = \overline{X^2} - (\bar{X})^2 = \frac{1}{n}\sum_{i = 1}^{n}(X_i - \bar{X})^2\\[1em] \end{aligned} } \]
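A small simulation sketch (assuming NumPy; the true parameter values and sample size below are arbitrary) illustrates that these estimators are simply the sample mean and the "divide by \(n\)" sample variance.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=2.0, size=10_000)  # theta = 5, sigma^2 = 4

theta_tilde = x.mean()
sigma2_tilde = np.mean(x**2) - x.mean()**2       # second sample moment minus squared first

print(theta_tilde, sigma2_tilde)  # close to 5 and 4
print(np.var(x, ddof=0))          # identical to sigma2_tilde
```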
Let \(X_1, X_2, \ldots, X_n \stackrel{iid}{\sim} N(1,\sigma^2)\).
Find a method of moments estimator of \(\sigma^2\), call it \(\tilde{\sigma}^2\).
Solution:
The first moment is not useful because it is not a function of the parameter of interest \(\sigma^2\).
\[ \text{E}[X] = 1 \]
As a result, we instead use the second moment.
\[ \text{E}\left[X^2\right] = \text{Var}\left[X\right] + \left(\text{E}[X]\right)^2 = \sigma^2 + 1^2 = \sigma^2 + 1 \]
We equate this second population moment to the second sample moment, \(\overline{X^2} = \frac{1}{n}\sum_{i = 1}^{n}X_i^2\).
\[ \begin{aligned} \text{E}\left[X^2\right] & = \overline{X^2}\\[1em] \sigma^2 + 1 & = \overline{X^2} \end{aligned} \]
Now solving for \(\sigma^2\) we obtain the method of moments estimator.
\[ \boxed{\tilde{\sigma}^2 = \left(\frac{1}{n}\sum_{i = 1}^{n}X_i^2\right) - 1} \]
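A quick sanity check via simulation (a sketch assuming NumPy, with an arbitrary true value of \(\sigma^2\)):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 3.0
x = rng.normal(loc=1.0, scale=np.sqrt(sigma2), size=100_000)

sigma2_tilde = np.mean(x**2) - 1  # method of moments estimator derived above
print(sigma2_tilde)               # close to 3.0
```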
Let \(X_1, X_2, \ldots, X_n\) be a random sample from a population with pdf
\[ f(x \mid \theta) = \frac{1}{\theta}x^{(1-\theta)/\theta}, \quad 0 < x < 1, \ 0 < \theta < \infty \]
(a) Find the maximum likelihood estimator of \(\theta\), call it \(\hat{\theta}\). Calculate an estimate using this estimator when
\[ x_{1} = 0.10, \ x_{2} = 0.22, \ x_{3} = 0.54, \ x_{4} = 0.36. \]
Solution:
\[ L(\theta) = \prod_{i = 1}^{n}f(x_i \mid \theta) = \prod_{i = 1}^{n} \frac{1}{\theta}x_i^{(1-\theta)/\theta} = \theta^{-n} \left(\prod_{i = 1}^{n}x_i\right)^{\frac{1 -\theta}{\theta}} \]
\[ \log L(\theta) = -n \log \theta + \frac{1 -\theta}{\theta} \sum_{i = 1}^{n}\log x_i = -n \log \theta + \frac{1}{\theta} \sum_{i = 1}^{n}\log x_i - \sum_{i = 1}^{n}\log x_i \]
\[ \frac{d}{d\theta} \log L(\theta) = -\frac{n}{\theta} - \frac{1}{\theta^2}\sum_{i = 1}^{n}\log x_i = 0 \]
\[ \hat{\theta} = -\frac{1}{n}\sum_{i = 1}^{n}\log x_i \]
Note that \(\hat{\theta} > 0\), since each \(\log x_i < 0\) because \(0 < x_i < 1\).
\[ \frac{d^2}{d\theta^2} \log L(\theta) = \frac{n}{\theta^2} + \frac{2}{\theta^3} \sum_{i = 1}^{n}\log x_i \]
\[ \frac{d^2}{d\theta^2} \log L(\hat{\theta}) = \frac{n}{\hat{\theta}^2} + \frac{2}{\hat{\theta}^3} \left(-n\hat{\theta}\right) = \frac{n}{\hat{\theta}^2} - \frac{2n}{\hat{\theta}^2} = -\frac{n}{\hat{\theta}^2} < 0 \]
We then have the estimator, and for the given data, the estimate.
\[ \boxed{\hat{\theta} = -\frac{1}{n}\sum_{i = 1}^{n}\log x_i} = -\frac{1}{4}\log (0.10 \cdot 0.22 \cdot 0.54 \cdot 0.36) = \boxed{1.3636} \]
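To double-check this estimate (a sketch assuming NumPy and SciPy), we can maximize the log-likelihood numerically, writing it directly from the density.

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([0.10, 0.22, 0.54, 0.36])

def nll(theta):
    # log f(x | theta) = -log(theta) + ((1 - theta) / theta) * log(x)
    return -np.sum(-np.log(theta) + (1 - theta) / theta * np.log(x))

result = minimize_scalar(nll, bounds=(1e-6, 20), method="bounded")
print(result.x)             # approximately 1.3636
print(-np.mean(np.log(x)))  # 1.3636, the closed-form MLE
```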
(b) Obtain a method of moments estimator for \(\theta\), \(\tilde{\theta}\). Calculate an estimate using this estimator when
\[ x_{1} = 0.10, \ x_{2} = 0.22, \ x_{3} = 0.54, \ x_{4} = 0.36. \]
Solution:
\[ \text{E}[X] = \int_{0}^{1} x \cdot \frac{1}{\theta}x^{(1-\theta)/\theta} \, dx = \frac{1}{\theta} \int_{0}^{1} x^{1/\theta} \, dx = \frac{1}{\theta} \cdot \frac{\theta}{\theta + 1} = \frac{1}{\theta + 1} \]
\[ \begin{aligned} \text{E}[X] & = \bar{X} \\[1em] \frac{1}{\theta + 1} & = \bar{X} \end{aligned} \]
Solving for \(\theta\) results in the method of moments estimator.
\[ \boxed{\tilde{\theta} = \frac{1- \bar{X}}{\bar{X}}} \]
\[ \bar{x} = \frac{1}{4}(0.10 + 0.22 + 0.54 + 0.36) = 0.305 \]
Thus for the given data we can calculate the estimate.
\[ \tilde{\theta} = \frac{1 -\bar{x}}{\bar{x}} = \frac{1 - 0.305}{0.305} = \boxed{2.2787} \]
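The corresponding calculation (assuming NumPy) also shows that, for this model, the method of moments estimate differs noticeably from the maximum likelihood estimate.

```python
import numpy as np

x = np.array([0.10, 0.22, 0.54, 0.36])

theta_mom = (1 - x.mean()) / x.mean()  # method of moments
theta_mle = -np.mean(np.log(x))        # maximum likelihood

print(theta_mom)  # approximately 2.2787
print(theta_mle)  # approximately 1.3636
```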
Let \(X_1, X_2, \ldots, X_n\) iid from a population with pdf
\[ f(x \mid \theta) = \frac{\theta}{x^2}, \quad 0 < \theta \leq x \]
Obtain the maximum likelihood estimator for \(\theta\), \(\hat{\theta}\).
Solution:
First, be aware that the values of \(x\) for this pdf are restricted by the value of \(\theta\).
\[ \begin{aligned} L(\theta) &= \prod_{i = 1}^{n} \frac{\theta}{x_i^2} \quad 0 < \theta \leq x_i \text{ for all } x_i \\[1em] & = \frac{\theta^n}{\prod_{i = 1}^{n}x_i^2} \quad 0 < \theta \leq \min\{x_i\} \end{aligned} \]
\[ \log L(\theta) = n\log \theta - 2\sum_{i = 1}^{n} \log x_i \]
\[ \frac{d}{d\theta} \log L(\theta) = \frac{n}{\theta} > 0 \]
So, here we have a log-likelihood that is increasing wherever the likelihood is nonzero, that is, when \(0 < \theta \le \min\{x_i\}\). Thus, the likelihood is maximized at the largest allowable value of \(\theta\) in this region, so the maximum likelihood estimator is given by
\[ \boxed{\hat{\theta} = \min\{X_i\}} \]
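Because the "increasing likelihood" argument can feel abstract, here is a small sketch (assuming NumPy, with made-up data) that evaluates the log-likelihood on a grid and shows it increases right up to \(\min\{x_i\}\).

```python
import numpy as np

x = np.array([1.8, 2.4, 3.1, 2.0])  # hypothetical data; min is 1.8

def log_lik(theta):
    # valid only for 0 < theta <= min(x); the likelihood is 0 otherwise
    return len(x) * np.log(theta) - 2 * np.sum(np.log(x))

grid = np.linspace(0.1, x.min(), 50)
values = np.array([log_lik(t) for t in grid])

print(np.all(np.diff(values) > 0))  # True: log-likelihood is strictly increasing
print(grid[np.argmax(values)])      # 1.8, i.e. min(x), the MLE
```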
Let \(X_{1}, X_{2}, \ldots X_{n}\) be a random sample of size \(n\) from a distribution with probability density function
\[ f(x, \alpha) = \alpha^{-2}xe^{-x/\alpha}, \quad x > 0, \ \alpha > 0 \]
(a) Obtain the maximum likelihood estimator of \(\alpha\), \(\hat{\alpha}\). Calculate the estimate when
\[ x_{1} = 0.25, \ x_{2} = 0.75, \ x_{3} = 1.50, \ x_{4} = 2.5, \ x_{5} = 2.0. \]
Solution:
We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.
\[ L(\alpha) = \prod_{i = 1}^{n} f(x_i; \alpha) = \prod_{i = 1}^{n} \alpha^{-2} x_i e^{-x_i/\alpha} = \alpha^{-2n} \left(\prod_{i = 1}^{n} x_i \right) \exp\left({\frac{-\sum_{i = 1}^{n} x_i}{\alpha}}\right) \]
Instead of directly maximizing the likelihood, we instead maximize the log-likelihood.
\[ \log L(\alpha) = -2n \log \alpha + \sum_{i = 1}^{n} \log x_i - \frac{\sum_{i = 1}^{n} x_i}{\alpha} \]
To maximize this function, we take a derivative with respect to \(\alpha\).
\[ \frac{d}{d\alpha} \log L(\alpha) = \frac{-2n}{\alpha} + \frac{\sum_{i = 1}^{n} x_i}{\alpha^2} \]
We set this derivative equal to zero, then solve for \(\alpha\).
\[ \frac{-2n}{\alpha} + \frac{\sum_{i = 1}^{n} x_i}{\alpha^2} = 0 \]
Solving gives our estimator, which we denote with a hat.
\[ \boxed{\hat{\alpha} = \frac{\sum_{i = 1}^{n} x_i}{2n} = \frac{\bar{x}}{2}} \]
Using the given data, we obtain an estimate.
\[ \hat{\alpha} = \frac{0.25 + 0.75 + 1.50 + 2.50 + 2.0}{2 \cdot 5} = \boxed{0.70} \]
(We should also verify that this point is a maximum, which is omitted here.)
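For a numerical check (a sketch assuming NumPy and SciPy), note that this density is that of a Gamma distribution with shape 2 and scale \(\alpha\), so we can maximize the corresponding log-likelihood directly.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

x = np.array([0.25, 0.75, 1.50, 2.5, 2.0])

# f(x | alpha) = alpha^(-2) * x * exp(-x / alpha) is the Gamma(shape = 2, scale = alpha) pdf
def nll(alpha):
    return -np.sum(gamma.logpdf(x, a=2, scale=alpha))

result = minimize_scalar(nll, bounds=(1e-6, 20), method="bounded")
print(result.x)      # approximately 0.70
print(x.mean() / 2)  # 0.70, the closed-form MLE
```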
(b) Obtain the method of moments estimator of \(\alpha\), \(\tilde{\alpha}\). Calculate the estimate when
\[ x_{1} = 0.25, \ x_{2} = 0.75, \ x_{3} = 1.50, \ x_{4} = 2.5, \ x_{5} = 2.0. \]
Hint: Recall the probability density function of an exponential random variable.
\[ f(x \mid \theta) = \frac{1}{\theta}e^{-x/\theta}, \quad x > 0, \ \theta > 0 \]
Note that, the moments of this distribution are given by
\[ E[X^k] = \int_{0}^{\infty} \frac{x^k}{\theta}e^{-x/\theta} \, dx = k! \cdot \theta^k. \]
This hint will also be useful in the next exercise.
Solution:
We first obtain the first population moment. Notice that the integration is handled by recognizing that the integral has the form of the second moment of an exponential distribution.
\[ \text{E}[X] = \int_{0}^{\infty} x \cdot \alpha^{-2}xe^{-x/\alpha} dx = \frac{1}{\alpha}\int_{0}^{\infty} \frac{x^2}{\alpha} e^{-x/\alpha} dx = \frac{1}{\alpha}(2\alpha^2) = 2\alpha \]
We then set the first population moment, which is a function of \(\alpha\), equal to the first sample moment.
\[ 2\alpha = \frac{\sum_{i = 1}^{n} x_i}{n} \]
Solving for \(\alpha\), we obtain the method of moments estimator.
\[ \boxed{\tilde{\alpha} = \frac{\sum_{i = 1}^{n} x_i}{2n} = \frac{\bar{x}}{2}} \]
Using the given data, we obtain an estimate.
\[ \tilde{\alpha} = \frac{0.25 + 0.75 + 1.50 + 2.50 + 2.0}{2 \cdot 5} = \boxed{0.70} \]
Note that, in this case, the MLE and MoM estimators are the same.
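A short simulation sketch (assuming NumPy; the true \(\alpha\) below is arbitrary) confirms that the population mean really is \(2\alpha\), which is the key fact behind this estimator. Since the density is that of a Gamma distribution with shape 2 and scale \(\alpha\), we can sample from it directly.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = 1.5

# f(x | alpha) = alpha^(-2) * x * exp(-x / alpha) is Gamma(shape = 2, scale = alpha)
x = rng.gamma(shape=2.0, scale=alpha, size=200_000)

print(x.mean())      # close to 2 * alpha = 3.0
print(x.mean() / 2)  # method of moments estimate, close to alpha = 1.5
```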
Let \(X_{1}, X_{2}, \ldots X_{n}\) be a random sample of size \(n\) from a distribution with probability density function
\[ f(x \mid \beta) = \frac{1}{2 \beta^3} x^2 e^{-x/\beta}, \quad x > 0, \ \beta > 0 \]
(a) Obtain the maximum likelihood estimator of \(\beta\), \(\hat{\beta}\). Calculate the estimate when
\[ x_{1} = 2.00, \ x_{2} = 4.00, \ x_{3} = 7.50, \ x_{4} = 3.00. \]
Solution:
We first obtain the likelihood by multiplying the probability density function for each \(X_i\). We then simplify this expression.
\[ L(\beta) = \prod_{i = 1}^{n} f(x_i; \beta) = \prod_{i = 1}^{n} \frac{1}{2 \beta^3} x_i^2 e^{-x_i/\beta} = 2^{-n} \beta^{-3n} \left(\prod_{i = 1}^{n} x_i^2 \right) \exp\left({\frac{-\sum_{i = 1}^{n} x_i}{\beta}}\right) \]
Instead of directly maximizing the likelihood, we instead maximize the log-likelihood.
\[ \log L(\beta) = -n \log 2 - 3n \log \beta + \sum_{i = 1}^{n} \log x_i^2 - \frac{\sum_{i = 1}^{n} x_i}{\beta} \]
To maximize this function, we take a derivative with respect to \(\beta\).
\[ \frac{d}{d\beta} \log L(\beta) = \frac{-3n}{\beta} + \frac{\sum_{i = 1}^{n} x_i}{\beta^2} \]
We set this derivative equal to zero, then solve for \(\beta\).
\[ \frac{-3n}{\beta} + \frac{\sum_{i = 1}^{n} x_i}{\beta^2} = 0 \]
Solving gives our estimator, which we denote with a hat.
\[ \boxed{\hat{\beta} = \frac{\sum_{i = 1}^{n} x_i}{3n} = \frac{\bar{x}}{3}} \]
Using the given data, we obtain an estimate.
\[ \hat{\beta} = \frac{2.00 + 4.00 + 7.50 + 3.00}{3 \cdot 4} = \boxed{1.375} \]
(We should also verify that this point is a maximum, which is omitted here.)
(b) Obtain the method of moments estimator of \(\beta\), \(\tilde{\beta}\). Calculate the estimate when
\[ x_{1} = 2.00, \ x_{2} = 4.00, \ x_{3} = 7.50, \ x_{4} = 3.00. \]
Solution:
We first obtain the first population moment. Notice that the integration is handled by recognizing that the integral has the form of the third moment of an exponential distribution.
\[ \text{E}[X] = \int_{0}^{\infty} x \cdot \frac{1}{2 \beta^3} x^2 e^{-x/\beta} dx = \frac{1}{2\beta^2}\int_{0}^{\infty} \frac{x^3}{\beta} e^{-x/\beta} dx = \frac{1}{2\beta^2}(6\beta^3) = 3\beta \]
We then set the first population moment, which is a function of \(\beta\), equal to the first sample moment.
\[ \begin{aligned} \text{E}[X] & = \bar{X} \\[1em] 3\beta & = \frac{\sum_{i = 1}^{n} x_i}{n} \end{aligned} \]
Solving for \(\beta\), we obtain the method of moments estimator.
\[ \boxed{\tilde{\beta} = \frac{\sum_{i = 1}^{n} x_i}{3n} = \frac{\bar{x}}{3}} \]
Using the given data, we obtain an estimate.
\[ \tilde{\beta} = \frac{2.00 + 4.00 + 7.50 + 3.00}{3 \cdot 4} = \boxed{1.375} \]
Note again, the MLE and MoM estimators are the same.
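As before, a numerical check is easy (a sketch assuming NumPy and SciPy), since this density is that of a Gamma distribution with shape 3 and scale \(\beta\).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

x = np.array([2.00, 4.00, 7.50, 3.00])

# f(x | beta) = x^2 * exp(-x / beta) / (2 * beta^3) is the Gamma(shape = 3, scale = beta) pdf
def nll(beta):
    return -np.sum(gamma.logpdf(x, a=3, scale=beta))

result = minimize_scalar(nll, bounds=(1e-6, 50), method="bounded")
print(result.x)      # approximately 1.375
print(x.mean() / 3)  # 1.375, the closed-form MLE and MoM estimate
```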
Let \(Y_1, Y_2, \ldots, Y_n\) be a random sample from a distribution with pdf
\[ f(y \mid \alpha) = \frac{2}{\alpha} \cdot y \cdot \exp\left\{-\frac{y^2}{\alpha}\right\}, \ y > 0, \ \alpha > 0. \]
(a) Find the maximum likelihood estimator of \(\alpha\).
Solution:
The likelihood function of the data is the joint distribution viewed as a function of the parameter, so we have:
\[ L(\alpha) = \frac{2^n}{\alpha^n}\left\{ \prod_{i=1}^n y_i \right\}\exp\left\{-\frac{1}{\alpha} \sum_{i=1}^n y_i^2\right\} \]
We want to maximize this function. First, we can take the logarithm:
\[ \log L(\alpha) = n \log 2 - n \log \alpha + \sum_{i=1}^n \log y_i -\frac{1}{\alpha} \sum_{i=1}^n y_i^2 \]
And then take the derivative:
\[ \frac{d}{d\alpha} \log L(\alpha) = - \frac{n}{\alpha} +\frac{1}{\alpha^2} \sum_{i=1}^n y_i^2 \]
Setting this equal to 0 and solving for \(\alpha\):
\[ \begin{aligned} & - \frac{n}{\alpha} +\frac{1}{\alpha^2} \sum_{i=1}^n y_i^2 = 0 \\ \iff & \frac{n}{\alpha} = \frac{1}{\alpha^2} \sum_{i=1}^n y_i^2\\ \iff &\alpha =\frac{1}{n} \sum_{i=1}^n y_i^2\\ \end{aligned} \]
So, our candidate for the MLE is
\[ \hat{\alpha} =\frac{1}{n} \sum_{i=1}^n y_i^2. \]
Taking the second derivative,
\[ \frac{d^2}{d\alpha^2} \log L(\alpha) = \frac{n}{\alpha^2} -\frac{2}{\alpha^3} \sum_{i=1}^n y_i^2 =\frac{n}{\alpha^2} -\frac{2n}{\alpha^3} \hat{\alpha} \]
so that:
\[ \frac{d^2}{d\alpha^2} \log L(\hat{\alpha}) = \frac{n}{\hat{\alpha}^2} -\frac{2n}{\hat{\alpha}^3} \hat{\alpha} = -\frac{n}{\hat{\alpha}^2} < 0 \]
Thus, the (log-)likelihood is concave down at \(\hat{\alpha}\), which confirms that the value of \(\alpha\) that maximizes the likelihood is:
\[ \boxed{\hat{\alpha}_\text{MLE} =\frac{1}{n} \sum_{i=1}^n Y_i^2} \]
(b) Let \(Z_1 = Y_1^2\). Find the distribution of \(Z_1\). Is the MLE for \(\alpha\) an unbiased estimator of \(\alpha\)?
Solution:
If \(Z_i = Y_i^2,\) then \(Y_i = \sqrt{Z_i}\), and \(\frac{dy_i}{dz_i} = \frac{1}{2}\frac{1}{\sqrt{z_i}}\), so that:
\[ f_Z(z) = \frac{2}{\alpha} \sqrt{z} \cdot\exp\left\{-\frac{z}{\alpha}\right\}\frac{1}{2}\frac{1}{\sqrt{z}} = \boxed{ \frac{1}{\alpha} \exp\left\{ - \frac{z}{\alpha}\right\}} \]
which is the pdf of an exponential distribution with parameter \(\alpha\). Thus,
\[ \text{E}\left[\frac{1}{n} \sum_{i=1}^n Y_i^2\right] = \text{E}\left[\bar{Z}\right] = \text{E}[Z_1] = \alpha, \]
so that \(\hat{\alpha}_\text{MLE}\) is unbiased for \(\alpha\).
Note: I typically do not remember the “formula” for the pdf of a transformed variable, so I typically start from:
\[ \text{for positive } z, \ \ F_Z(z) = P(Z \le z) = P(Y^2 \le z) = P(Y \le \sqrt{z}) = F_Y(\sqrt{z}) \]
and then take a derivative:
\[ f_Z(z) = \frac{d}{dz} P(Z \le z) = \frac{d}{dz} F_Y(\sqrt{z}) = f_Y(\sqrt{z}) \frac{d}{dz}\left\{ \sqrt{z} \right\} \]
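A simulation sketch (assuming NumPy; the sample size, number of replications, and true \(\alpha\) are arbitrary) illustrates the unbiasedness: averaging \(\hat{\alpha}_\text{MLE}\) over many simulated samples gives a value very close to \(\alpha\). Since \(Y^2 \sim \text{Exponential}(\alpha)\), we can simulate \(Y\) as the square root of an exponential.

```python
import numpy as np

rng = np.random.default_rng(2024)
alpha, n, reps = 2.0, 10, 50_000

z = rng.exponential(scale=alpha, size=(reps, n))  # Z = Y^2 ~ Exponential(alpha)
y = np.sqrt(z)

alpha_hat = np.mean(y**2, axis=1)  # one MLE per simulated sample of size n
print(alpha_hat.mean())            # close to alpha = 2.0
```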
Let \(X\) be a single observation from a \(\text{Binom}(n, p),\) where \(p\) is an unknown parameter. (In this case, we will consider \(n\) known.)
(a) Find the maximum likelihood estimator (MLE) of \(p\).
Solution:
We just have one observation, so the likelihood is just the pmf:
\[ L(p) = {n \choose x} p^x (1-p)^{n-x}, \ \ 0 < p < 1, \ \ x = 0, 1, \ldots, n \]
The log-likelihood is:
\[ \log L(p) = \log\left\{ {n \choose x} \right\} + x\log(p) + (n-x)\log(1-p). \]
The derivative of the log-likelihood is:
\[ \frac{d}{dp} \log L(p) = \frac{x}{p} - \frac{n-x}{1-p}. \]
Setting this to be 0, we solve:
\[ \frac{x}{p} - \frac{n-x}{1-p} = 0 \iff x-px = np - px \iff p=\frac{x}{n}. \]
Thus, \(\hat{p} = \frac{x}{n}\) is our candidate.
We take the second derivative:
\[ \frac{d^2}{dp^2} \log L(p) = -\frac{x}{p^2} - \frac{n-x}{(1-p)^2} \]
which is always less than 0; thus
\[ \boxed{\hat{p} = \frac{X}{n}} \]
is the maximum likelihood estimator for \(p\).
(b) Suppose you roll a 6-sided die 40 times and observe eight rolls of a 6. What is the maximum likelihood estimate of the probability of observing a 6?
Solution:
Here, we can let \(X\) be the number of sixes in 40 (independent) rolls of the die: \(X \sim \text{Binom}(40, p)\), where \(p\) is the probability of rolling a 6 on this die.
Then \[ \boxed{\hat{p} = \frac{8}{40} = 0.2} \]
is the maximum likelihood estimate for \(p.\)
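A grid-search sketch (assuming NumPy and SciPy) confirms that the binomial likelihood for these data is maximized very close to \(p = 0.2\).

```python
import numpy as np
from scipy.stats import binom

p_grid = np.linspace(0.001, 0.999, 999)
likelihood = binom.pmf(8, 40, p_grid)  # L(p) for x = 8 sixes in n = 40 rolls

print(p_grid[np.argmax(likelihood)])   # approximately 0.2
```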
(c) Using the same observed data, suppose you now plan to perform a second experiment with the same die, and will roll the die 5 more times. What is the maximum likelihood estimate of the probability that you will observe no 6’s in this next experiment?
Solution:
Let \(Y \sim \text{Binom}(5, p)\) represent the number of sixes you will obtain in this second experiment. Based on the pmf of the binomial, we know that:
\[ P(Y=0) = {5 \choose 0} p^0(1-p)^{5-0}=(1-p)^5 \]
Let us call this new parameter of interest \(\theta\). Then we have
\[ \theta = (1-p)^5 \]
We are asked to find the MLE \(\hat{\theta}\).
Based on the invariance property of the MLE,
\[ \hat{\theta} = (1 - \hat{p})^5 \]
With the observed data, the maximum likelihood estimate is thus
\[ \boxed{(1-0.2)^5 = 0.8^5 \approx 0.33} \]
Thus, our best guess (using the maximum likelihood framework) at the chance that we will observe no sixes in the next 5 rolls is 33%.
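The same number comes directly from the binomial pmf (assuming SciPy).

```python
from scipy.stats import binom

p_hat = 8 / 40
print(binom.pmf(0, 5, p_hat))  # 0.32768, about 0.33
print((1 - p_hat)**5)          # same value via the closed form
```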
Suppose that a random variable \(X\) follows a discrete distribution that is determined by a parameter \(\theta\), which can take only two values, \(\theta = 1\) or \(\theta = 2\). If \(\theta = 1\), then \(X\) follows a Poisson distribution with \(\lambda = 2\); if \(\theta = 2\), then \(X\) follows a Geometric distribution with \(p = \frac{1}{4}\). The parameter \(\theta\) is unknown.
Now suppose we observe \(X = 3\). Based on this data, what is the maximum likelihood estimate of \(\theta\)?
Solution:
Because there are only two possible values of \(\theta\) (1 and 2), rather than a whole range of possible values (as in examples with \(0 < \theta < \infty\)), the approach of taking a derivative with respect to \(\theta\) will not work. Instead, we return to the definition of the MLE: we want to determine which value of \(\theta\) makes our observed data, \(X = 3\), most likely.
If \(\theta = 1\), then \(X\) follows a Poisson distribution with parameter \(\lambda = 2\). Thus, if \(\theta = 1\),
\[ P(X=3) = \frac{e^{-2} \cdot 2^3}{3!} = 0.180447 \]
If \(\theta = 2\), then \(X\) follows a Geometric distribution with parameter \(p = \frac{1}{4}\). Thus, if \(\theta = 2\),
\[ P(X=3) = \frac{1}{4}\left(1-\frac{1}{4}\right)^{3-1} = 0.140625 \]
Thus, observing \(X = 3\) is more likely when \(\theta = 1\) (0.18) than when \(\theta = 2\) (0.14), so \(\boxed{1}\) is the maximum likelihood estimate of \(\theta\).
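Since there are only two candidate values of \(\theta\), the comparison is easy to code (a sketch assuming SciPy; note that `scipy.stats.geom` uses the "number of trials until the first success" parameterization, which matches the pmf used above).

```python
from scipy.stats import poisson, geom

x = 3
likelihoods = {
    1: poisson.pmf(x, 2),   # theta = 1: Poisson with lambda = 2
    2: geom.pmf(x, 1 / 4),  # theta = 2: Geometric with p = 1/4
}

print(likelihoods)                            # {1: 0.1804..., 2: 0.1406...}
print(max(likelihoods, key=likelihoods.get))  # 1, the maximum likelihood estimate
```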
Let \(Y_1, Y_2, \ldots, Y_n\) be a random sample from a population with pdf
\[ f(y \mid \theta) = \dfrac{2\theta^2}{y^3}, \ \ \theta \le y < \infty \]
Find the maximum likelihood estimator of \(\theta\).
Solution:
The likelihood is:
\[ L(\theta) = \prod_{i=1}^n \frac{2\theta^2}{y_i^3} = \frac{2^n\theta^{2n}}{\prod_{i=1}^n y_i^3}, \ 0 < \theta \le y_i < \infty, \text{ for every } i. \]
Note that
\[ 0 < \theta \le y_i < \infty \text{ for every } i \iff 0 < \theta \le \min\left\{y_i\right\}. \]
To understand the behavior of \(L(\theta)\), we can take the log and take the derivative:
\[ \log L(\theta) = n\log 2 + (2n)\log \theta - \log \left(\prod_{i=1}^n y_i^3\right) \]
\[ \frac{d}{d\theta} \log L(\theta) = \frac{2n}{\theta}>0 \text{ on } \theta \in \left(0, \min\left\{ y_i\right\}\right) \]
Thus, the MLE is the largest possible value of \(\theta\):
\[ \boxed{\hat{\theta} = \min\{Y_i\}} \]
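A final simulation sketch (assuming NumPy, with an arbitrary true \(\theta\)): the cdf here is \(F(y) = 1 - \theta^2/y^2\) for \(y \ge \theta\), so inverse-cdf sampling gives \(Y = \theta/\sqrt{1-U}\) with \(U \sim \text{Uniform}(0,1)\), and the sample minimum lands just above \(\theta\).

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 1.7, 500

u = rng.uniform(size=n)
y = theta / np.sqrt(1 - u)  # inverse-cdf draws from f(y | theta) = 2 theta^2 / y^3

print(y.min())              # the MLE; close to, and never below, theta = 1.7
```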