4 An amateur meteorologist records the total rainfall at her home each day using a traditional rain gauge. This means that she has to go out each day at 9 am to read the rain gauge and then to empty it. She wants to save time by using a digital rain gauge, but she also wants to ensure that the readings from the digital gauge are similar to those of her traditional gauge. Over a period of 100 days, she uses both gauges to measure the rainfall.
The meteorologist uses software to produce a 95\% confidence interval for the difference between the two readings (the traditional gauge reading minus the digital gauge reading). The output from the software is shown in Fig. 4. Although rainfall was measured over a period of 100 days, there was no rain on 40 of those days and so the sample size in the software output is 60 rather than 100.
\begin{table}[h]
\centering
\begin{tabular}{|l|r|}
\hline
\multicolumn{2}{|c|}{Z Estimate of a Mean} \\
\hline
Mean & 0.1173 \\
$\sigma$ & 0.5766 \\
SE & 0.07444 \\
N & 60 \\
Lower Limit & $-0.0286$ \\
Upper Limit & 0.2632 \\
Interval & $0.1173 \pm 0.1459$ \\
\hline
\end{tabular}
\captionsetup{labelformat=empty}
\caption{Fig. 4}
\end{table}
\begin{enumerate}
\item Explain why this confidence interval can be calculated even though nothing is known about the distribution of the population of differences.
\item State the confidence interval which the software gives in the form \(a < \mu < b\).
\item Show how the value 0.07444 (labelled SE) was calculated.
\item Comment on whether you think that the confidence interval suggests that the two different methods of measurement are broadly in agreement.
\end{enumerate}
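The quantities in Fig. 4 can be checked directly from the summary statistics. A minimal sketch, assuming the software uses the standard large-sample formulas $\mathrm{SE} = \sigma/\sqrt{N}$ and a 95\% interval of $\bar{x} \pm 1.96\,\mathrm{SE}$ (variable names here are illustrative, not taken from the software):

```python
import math

# Summary statistics reported by the software (Fig. 4)
mean = 0.1173    # mean of the 60 daily differences
sigma = 0.5766   # standard deviation of the differences
n = 60           # days on which rain fell

# Standard error of the mean: sigma / sqrt(n)
se = sigma / math.sqrt(n)

# 95% confidence interval: mean +/- 1.96 * SE
z = 1.96
half_width = z * se
lower = mean - half_width
upper = mean + half_width

print(f"SE          = {se:.5f}")      # matches the reported 0.07444
print(f"Interval    = {mean} +/- {half_width:.4f}")
print(f"({lower:.4f}, {upper:.4f})")  # matches (-0.0286, 0.2632)
```

Note that the interval contains zero, which is relevant to the final part of the question.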