OCR MEI S4 2007 June — Question 1 (24 marks)

Exam Board: OCR MEI
Module: S4 (Statistics 4)
Year: 2007
Session: June
Marks: 24
Paper: Download PDF ↗
Mark scheme: Download PDF ↗
Topic: Moment generating functions
Type: Show unbiased estimator
Difficulty: Challenging (+1.2). This is a structured multi-part question on estimation theory requiring standard techniques: showing unbiasedness via E(2X̄) = θ, computing MSE using the formula Var + Bias², and minimising by differentiation. While it involves Further Maths Statistics content (order statistics, MSE), each part follows predictable steps, with the pdf of Y given. The conceptual demand is moderate (understanding bias/MSE trade-offs), but no novel insight is required.
Spec: 5.03c Calculate mean/variance by integration; 5.05b Unbiased estimates of population mean and variance

1 The random variable \(X\) has the continuous uniform distribution with probability density function $$\mathrm { f } ( x ) = \frac { 1 } { \theta } , \quad 0 \leqslant x \leqslant \theta$$ where \(\theta ( \theta > 0 )\) is an unknown parameter.
A random sample of \(n\) observations from \(X\) is denoted by \(X _ { 1 } , X _ { 2 } , \ldots , X _ { n }\), with sample mean \(\bar { X } = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } X _ { i }\).
  1. Show that \(2 \bar { X }\) is an unbiased estimator of \(\theta\).
  2. Evaluate \(2 \bar { X }\) for a case where, with \(n = 5\), the observed values of the random sample are 0.4, 0.2, 1.0, 0.1, 0.6. Hence comment on a disadvantage of \(2 \bar { X }\) as an estimator of \(\theta\).

For a general random sample of size \(n\), let \(Y\) represent the sample maximum, \(Y = \max \left( X _ { 1 } , X _ { 2 } , \ldots , X _ { n } \right)\). You are given that the probability density function of \(Y\) is $$g ( y ) = \frac { n y ^ { n - 1 } } { \theta ^ { n } } , \quad 0 \leqslant y \leqslant \theta$$
  3. An estimator \(k Y\) is to be used to estimate \(\theta\), where \(k\) is a constant to be chosen. Show that the mean square error of \(k Y\) is $$k ^ { 2 } \mathrm { E } \left( Y ^ { 2 } \right) - 2 k \theta \mathrm { E } ( Y ) + \theta ^ { 2 }$$ and hence find the value of \(k\) for which the mean square error is minimised.
  4. Comment on whether \(k Y\) with the value of \(k\) found in part (iii) suffers from the disadvantage identified in part (ii).
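As a sanity check (not part of the original paper), the standard results behind parts (i) and (iii) can be verified numerically. For \(X \sim U(0, \theta)\), \(\mathrm{E}(2\bar{X}) = \theta\); using the given pdf of \(Y\), \(\mathrm{E}(Y) = n\theta/(n+1)\) and \(\mathrm{E}(Y^2) = n\theta^2/(n+2)\), so differentiating the MSE expression with respect to \(k\) gives the minimising value \(k = (n+2)/(n+1)\). A minimal simulation sketch, assuming Python with only the standard library (function and parameter names are illustrative):

```python
import random

def simulate(theta=1.0, n=5, trials=200_000, seed=1):
    """Monte Carlo comparison of the two estimators of theta for U(0, theta)."""
    rng = random.Random(seed)
    k = (n + 2) / (n + 1)   # MSE-minimising constant for kY (here 7/6 when n = 5)
    est1 = []               # values of 2 * sample mean
    est2 = []               # values of k * sample maximum
    for _ in range(trials):
        xs = [rng.uniform(0, theta) for _ in range(n)]
        est1.append(2 * sum(xs) / n)
        est2.append(k * max(xs))
    mean1 = sum(est1) / trials                              # should be close to theta
    mse1 = sum((e - theta) ** 2 for e in est1) / trials     # theory: theta^2 / 15 for n = 5
    mse2 = sum((e - theta) ** 2 for e in est2) / trials     # theory: theta^2 / 36 for n = 5
    return mean1, mse1, mse2

mean1, mse1, mse2 = simulate()
print(mean1, mse1, mse2)
```

With \(\theta = 1\) and \(n = 5\), theory gives MSE\((2\bar{X}) = \theta^2/15\) versus MSE\((kY) = \theta^2/36\), so the simulation should show \(kY\) winning clearly despite its small bias.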
