1 The random variable \(X\) has the continuous uniform distribution with probability density function
$$f(x) = \frac{1}{\theta}, \quad 0 \leqslant x \leqslant \theta,$$
where \(\theta\) \((\theta > 0)\) is an unknown parameter.
A random sample of \(n\) observations from \(X\) is denoted by \(X_1, X_2, \ldots, X_n\), with sample mean \(\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i\).
- (i) Show that \(2\bar{X}\) is an unbiased estimator of \(\theta\).
- (ii) Evaluate \(2\bar{X}\) for the case where, with \(n = 5\), the observed values of the random sample are \(0.4, 0.2, 1.0, 0.1, 0.6\). Hence comment on a disadvantage of \(2\bar{X}\) as an estimator of \(\theta\) (see the sketch below).
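A minimal sketch of the computations behind parts (i) and (ii), using the density given above:

$$\mathrm{E}(X) = \int_0^\theta \frac{x}{\theta}\,dx = \frac{\theta}{2}, \qquad \mathrm{E}(2\bar{X}) = \frac{2}{n}\sum_{i=1}^{n}\mathrm{E}(X_i) = \frac{2}{n}\cdot n\cdot\frac{\theta}{2} = \theta.$$

For the given sample, \(\bar{x} = \frac{1}{5}(0.4 + 0.2 + 1.0 + 0.1 + 0.6) = 0.46\), so \(2\bar{x} = 0.92\). Since \(X \leqslant \theta\) with probability 1, the observation 1.0 forces \(\theta \geqslant 1.0\), yet the estimate 0.92 falls below it: the estimator can produce a value smaller than an observed data point.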
For a general random sample of size \(n\), let \(Y\) represent the sample maximum, \(Y = \max(X_1, X_2, \ldots, X_n)\). You are given that the probability density function of \(Y\) is
$$g(y) = \frac{n y^{n-1}}{\theta^n}, \quad 0 \leqslant y \leqslant \theta.$$
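For reference, the first two moments of \(Y\) follow directly from \(g\) and are used in the parts below:

$$\mathrm{E}(Y) = \int_0^\theta \frac{n y^n}{\theta^n}\,dy = \frac{n\theta}{n+1}, \qquad \mathrm{E}\left(Y^2\right) = \int_0^\theta \frac{n y^{n+1}}{\theta^n}\,dy = \frac{n\theta^2}{n+2}.$$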
- (iii) An estimator \(kY\) is to be used to estimate \(\theta\), where \(k\) is a constant to be chosen. Show that the mean square error of \(kY\) is
$$k^2\,\mathrm{E}\left(Y^2\right) - 2k\theta\,\mathrm{E}(Y) + \theta^2,$$
and hence find the value of \(k\) for which the mean square error is minimised (see the sketch below).
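A sketch of the expansion and minimisation, using the moments of \(Y\) given above:

$$\mathrm{MSE}(kY) = \mathrm{E}\left[(kY - \theta)^2\right] = k^2\,\mathrm{E}\left(Y^2\right) - 2k\theta\,\mathrm{E}(Y) + \theta^2,$$

$$\frac{\mathrm{d}}{\mathrm{d}k}\,\mathrm{MSE}(kY) = 2k\,\mathrm{E}\left(Y^2\right) - 2\theta\,\mathrm{E}(Y) = 0 \;\Longrightarrow\; k = \frac{\theta\,\mathrm{E}(Y)}{\mathrm{E}\left(Y^2\right)} = \frac{n+2}{n+1},$$

which is a minimum since the second derivative \(2\,\mathrm{E}\left(Y^2\right)\) is positive.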
- (iv) Comment on whether \(kY\), with the value of \(k\) found in part (iii), suffers from the disadvantage identified in part (ii) (see the note below).
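One way to see the answer: with \(k = \frac{n+2}{n+1} > 1\), the estimate \(kY\) always exceeds the sample maximum \(Y\), so unlike \(2\bar{X}\) it can never fall below an observed value.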