__I could prove God statistically.__ ~ George Gallup

Chapter 9

Sec 9.1

Before we begin the topics in Chapter 9, it would be useful to revisit
"Why Statistics" from the very beginning of the year.

The concepts we learn in Chapter 9 are critical: they set the stage for
studying the final component of statistical analysis - statistical inference -
which asks and answers the question "How often would this method give a correct
answer if I used it very many times?" Inference is most secure when we
produce data by random sampling or randomized comparative experiments. The
reason is that when we use chance to choose respondents or assign subjects, the
laws of probability answer the question stated above. We will prepare for
the study of statistical inference by looking at the *probability
distributions* of some very common statistics: __sample proportions__
and __sample means__.

When looking at data we MUST keep straight whether a number describes a sample
or a population.

A **parameter** is a number that describes the population. A parameter
is a fixed number, but we do not know its value because we cannot examine the
entire population. A **statistic** is a number that describes a sample.
The value of a statistic is known once we have taken a sample, but it can change
from sample to sample. We use a statistic to *estimate* an unknown
parameter. We use p to represent a population proportion, while we use p
hat, the sample proportion, to estimate that parameter. Each sample will
have its own unique statistic, i.e., sample statistics will vary. BUT...this
is not fatal...what happens if we take MANY samples??

The sampling distribution of a statistic (often displayed as a histogram) is
the distribution of values taken by the statistic in ALL possible samples of the
same size from the same population. The interpretation of a sampling
distribution is the same whether we obtain it by simulation or by the
mathematics of probability.
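For a tiny population we can actually list ALL possible samples and build the sampling distribution exactly. This sketch uses an invented population of 6 people (the numbers are hypothetical, chosen only so the enumeration is small):

```python
from itertools import combinations
from collections import Counter
from statistics import mean

# Hypothetical toy population: 6 people, 3 say "yes" (1) and 3 say "no" (0),
# so the population parameter is p = 0.5.
population = [1, 1, 1, 0, 0, 0]

# The sampling distribution of p-hat is its value in ALL possible
# samples of size n = 2 from this population (6 choose 2 = 15 samples).
n = 2
p_hats = [sum(s) / n for s in combinations(population, n)]

# Tally how often each value of p-hat occurs across the 15 samples.
print(sorted(Counter(p_hats).items()))
# → [(0.0, 3), (0.5, 9), (1.0, 3)]
print(mean(p_hats))  # → 0.5, the population proportion itself
```

With a realistic population we cannot enumerate every sample, which is why we resort to simulation (take many SRSs) or to the mathematics of probability; both give the same interpretation.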

We can use the same tools of data analysis from the beginning chapters to
describe any distribution. A histogram of the sampling distribution shows
the overall shape, gives measures of center and spread, and provides information
about any outliers. The appearance of the approximate sampling
distribution is a consequence of random sampling. When randomization is
used in a design for producing data, statistics computed from the data have a
definite pattern of behavior over many repetitions, even though the result of a
single repetition is uncertain.

Of course we need to ask how trustworthy a statistic is as an estimate of a
parameter. Sampling distributions allow us to describe bias more precisely
by speaking of the bias of a statistic rather than bias in a sampling method.
Bias concerns the center of the sampling distribution. A statistic used to
estimate a parameter is unbiased if the mean of its sampling distribution is
exactly equal to the true value of the parameter being estimated. The
sample proportion (p hat) from an SRS is an unbiased estimator of the population
proportion p.
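That last claim can be checked by simulation. In this hedged sketch the finite population and its proportion p = 0.35 are invented for illustration; we draw many SRSs (without replacement) and see that the mean of the resulting p-hats sits essentially on top of p:

```python
import random
from statistics import mean

random.seed(2)  # reproducible illustration

# Hypothetical finite population: 10,000 individuals, 3,500 "successes",
# so the parameter is p = 0.35 (numbers chosen for illustration).
population = [1] * 3500 + [0] * 6500
p = mean(population)  # 0.35

# Draw many SRSs of size n and record the sample proportion each time.
n, reps = 50, 20_000
p_hats = [mean(random.sample(population, n)) for _ in range(reps)]

# Unbiased: the mean of the sampling distribution equals p itself
# (up to simulation noise).
print(round(mean(p_hats), 3))  # close to 0.35
```

Individual p-hats still vary from sample to sample; unbiasedness says only that they center on the true p, not that any one sample hits it.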

Statistics have variability, but very large samples produce less variability than
small samples. An IMPORTANT fact is that the spread of the sampling
distribution does NOT depend very much on the size of the population.

The variability of a statistic is described by the spread of its sampling
distribution. This spread is determined by the sampling design and the
size of the sample: larger samples give smaller spread. As long as
the population is much larger than the sample (at least 10 times larger), the
spread of the sampling distribution is approximately the same for any population
size.

Imagine the true value of the population parameter as the bull's eye on a
target and the sample statistic as an arrow fired at the target; this picture
lets us explain bias and variability. Both describe what happens when we take
many shots at the target. Bias means that the aim is off and we
consistently MISS the bull's eye in the same direction: the sample values
do NOT center on the population value. High variability means that
repeated shots are widely scattered on the target: repeated samples do NOT
give very similar results. Properly chosen statistics computed from random
samples of sufficient size will have low bias and low variability.