The manipulation of statistical formulas is no substitute
for knowing what one is doing.
 Hubert M. Blalock Jr.

Chapter 10

Tests of Significance: or..."Believe it or not!"

We need to be cautious about believing an unsupported claim, or even a claim that "appears" to have some basis for belief.  Testing a hypothesis is the way to rule out the possibility that the stated results are due to chance rather than to the suggested cause.

"Does lowering the price of a product really lead to increased sales?"  How can we be sure that the change in sales is due to the price reduction and not to chance?  How much change would we expect to see before we declare that chance cannot explain the change?  When is the change large enough to rule out chance?

Here is how it works...
The statement (claim, hypothesis) being tested is called the Null Hypothesis.
We don't really believe the null hypothesis and are hoping to be proven wrong.
By rejecting the null hypothesis we are forced to conclude that the alternative must be true.  Usually the null hypothesis states "NO effect" or "NO difference."  Sometimes that means the parameter equals 0, and sometimes it means that the parameter value has not changed.

The alternative hypothesis is the claim about the population that we are trying to find evidence FOR. 

When the alternative hypothesis cites a specific change (> or <) we have a ONE-SIDED test.
When the alternative hypothesis cites a change but doesn't specify a direction (≠), we have a TWO-SIDED test.

NOTE:  Hypotheses are ALWAYS talking about the POPULATION,
so population parameters are used in their statement.
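
For example, suppose we cut the price and want to show that mean weekly sales have risen above their current level, say 500 units (a made-up figure).  The hypotheses are statements about the population mean μ:

     Null hypothesis:         H0: μ = 500   (the price cut has NO effect)
     Alternative hypothesis:  Ha: μ > 500   (a ONE-SIDED test)

If we only suspected that sales would change, up or down, we would use Ha: μ ≠ 500, a TWO-SIDED test.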

A Test of Significance assesses the evidence AGAINST the null hypothesis by assigning a probability (P-value) that describes how strong the evidence is.  A very low P-value will cause us to reject the null hypothesis, since that particular outcome would be very unlikely to occur by chance alone.  P-values are found with the methods we have already learned, when we assume that a normal distribution is appropriate for our problem.  Remember normalcdf....
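
Here is a minimal sketch of that calculation in Python, assuming the scipy library is available and reusing the made-up price-cut numbers:

    from scipy.stats import norm

    # Hypothetical numbers for the price-cut example (made up for illustration).
    mu0   = 500    # population mean weekly sales under the null hypothesis
    sigma = 80     # population standard deviation, assumed known
    n     = 100    # sample size
    xbar  = 516    # observed sample mean after the price cut

    # Standardize the sample mean under the null hypothesis.
    z = (xbar - mu0) / (sigma / n ** 0.5)      # z = 2.0

    # ONE-SIDED test (Ha: mu > 500): P-value is the area to the RIGHT of z.
    p_one_sided = 1 - norm.cdf(z)              # about .023

    # TWO-SIDED test (Ha: mu != 500): double the tail area.
    p_two_sided = 2 * (1 - norm.cdf(abs(z)))   # about .046

    print(p_one_sided, p_two_sided)

A one-sided P-value of about .023 says that a sample mean this high would occur by chance alone only about 2.3% of the time if the price cut truly had no effect.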

What is a very low P-value???  When is a number "statistically significant???"  A rule of thumb is to use P = .05 as the cutoff for decisions about the hypothesis.  Sometimes other levels of certainty are requested (called alpha (α) levels), like .03 or .01.  The smaller the P-value, the stronger the evidence against the null hypothesis.  A P-value may be significant at one level but not at another.
 

If the P-value is as small as or smaller than alpha (α), we say that the data are statistically significant at level α.  (For example, a P-value of .023 is statistically significant at α level .03 but not at α level .01.)
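
In code, the decision is just a comparison; a quick sketch reusing the P-value from above:

    # Compare one P-value against several alpha levels.
    p_value = 0.023
    for alpha in (0.05, 0.03, 0.01):
        print(f"alpha = {alpha}: significant? {p_value <= alpha}")
    # Significant at .05 and .03, but not at .01.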

