A fool must now and then be right, by chance. 
William Cowper

Chapter 10
    Sec 10.4  Inference as Decision

Tests of significance assess the strength of evidence against the null hypothesis by reporting a p-value: the probability, computed assuming the null hypothesis is true, of getting an outcome at least as extreme as the one actually observed.  Very small p-values provide strong evidence against the null hypothesis.
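As a concrete illustration (not from the original notes), here is a minimal Python sketch of a one-sample z test with known sigma; the hypothesized mean, standard deviation, sample size, and observed mean are all made-up numbers:

    # Hypothetical one-sample z test (illustrative numbers only).
    # H0: mu = 100 vs Ha: mu != 100, with sigma known.
    from scipy.stats import norm

    mu0, sigma, n = 100, 15, 36       # hypothesized mean, known SD, sample size
    xbar = 105.5                      # observed sample mean

    z = (xbar - mu0) / (sigma / n ** 0.5)   # standardized test statistic
    p_value = 2 * norm.sf(abs(z))           # two-sided p-value
    print(f"z = {z:.2f}, p-value = {p_value:.4f}")   # z = 2.20, p about 0.028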

Using a significance test with an alpha level selected in advance suggests that a DECISION will be made based on the outcome: alpha is the standard against which the p-value is measured for decision purposes.

Acceptance sampling is a procedure used by manufacturers, such as Lay's Potato Chips, who inspect a sample of product and then either accept or reject an entire batch.  Acceptance sampling calls for slight adjustments to our tests of significance.

There are only two outcomes: accept or reject.
H0 is not given any special status here, but we keep the notation: H0 says the batch meets the standard.
Ha says the batch does not meet the standard.

Decision makers always hope for a correct decision based on statistical procedures, but errors are still possible. There are two kinds of errors...
TYPE I error:  rejecting a batch that was really good hurts the company and costs money.
TYPE II error:  accepting a batch that should have been rejected upsets customers.

In the language of this chapter:  our goal is generally to reject H0 in favor of Ha.  We may do the calculations correctly, minimize bias in the sample, use a large enough sample, and so on, but errors can still occur.

We could REJECT H0 when it is actually true.  (TYPE I)   OR
We could FAIL TO REJECT H0 when it is actually false.  (TYPE II)

                        H0 is true           H0 is false
  Reject H0             TYPE I ERROR*        correct decision
  Fail to reject H0     correct decision     TYPE II ERROR**

P(TYPE I error) = alpha.
P(TYPE II error) = the probability that z falls between the critical values
when H0 is actually false.
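To see that P(TYPE I error) = alpha in practice, here is a small simulation sketch in Python (not from the original notes; the setup is hypothetical): when H0 is true, a level-0.05 two-sided z test should reject roughly 5% of the time.

    # Simulation sketch (hypothetical setup): with H0 true, a level-0.05
    # two-sided z test rejects about 5% of the time.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    mu0, sigma, n, alpha = 100, 15, 36, 0.05
    z_crit = norm.ppf(1 - alpha / 2)        # critical value, about 1.96

    trials = 100_000
    samples = rng.normal(mu0, sigma, size=(trials, n))   # data drawn with H0 true
    z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
    type_i_rate = np.mean(np.abs(z) > z_crit)            # fraction of tests that reject
    print(f"Observed Type I error rate: {type_i_rate:.4f}")   # close to alpha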

Error-free conclusions would mean rejecting H0 when it is false and failing to reject it when it is true.

* is analogous to convicting an innocent defendant
** is analogous to freeing a guilty defendant
 

A Type II error occurs if we accept (or fail to reject) the null hypothesis when it is in fact false.  When do we accept (or fail to reject) the null hypothesis?  When we assume it is true and find that the statistic of interest falls outside the rejection region.

However, the probability that the statistic falls outside the rejection region is NOT the area of the unshaded region.  Think about it… if the null hypothesis is in fact false, then the picture is NOT CORRECT… it is off center, because the sampling distribution is really centered at the true mean rather than at the hypothesized mean.

               [Figure: Two-Sided Test]

To calculate the probability of a Type II Error, we must find the probability that the statistic falls outside the rejection region (the unshaded area) given that the mean is some other specified value.
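Here is a Python sketch of that calculation for a two-sided z test; H0: mu = 100, sigma = 15, n = 36, alpha = 0.05, and the "other specified value" of 105 are all hypothetical numbers chosen for illustration:

    # Sketch of the Type II error calculation for a two-sided z test.
    # All numbers are hypothetical: H0: mu = 100, sigma = 15, n = 36,
    # alpha = 0.05, and suppose the true mean is actually 105.
    from scipy.stats import norm

    mu0, sigma, n, alpha = 100, 15, 36, 0.05
    mu_true = 105                          # "some other specified value"
    se = sigma / n ** 0.5

    z_crit = norm.ppf(1 - alpha / 2)
    lower = mu0 - z_crit * se              # edges of the non-rejection
    upper = mu0 + z_crit * se              # (unshaded) region for x-bar

    # Recenter the curve at the true mean and find the area between the edges.
    beta = norm.cdf(upper, mu_true, se) - norm.cdf(lower, mu_true, se)
    print(f"P(Type II error) = beta = {beta:.4f}")   # about 0.48 here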

               [Figure: One-Sided Test]

 

The probability of a Type II error, written β (beta), tells us the probability of accepting the null hypothesis when it is actually false.

The complement of this is the probability of not accepting (in other words, rejecting) the null hypothesis when it is actually false.

To calculate the probability of rejecting the null hypothesis when it is actually false,
compute  1 – P(Type II error), or (1 – β).  This is called the power of a significance test.
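Continuing the hypothetical z-test numbers from the sketches above, the power calculation might look like this in Python; the alternative means tried below are illustrative only:

    # Power sketch, reusing the hypothetical z-test numbers from above.
    # power = 1 - beta = probability of rejecting H0 when it is false.
    from scipy.stats import norm

    mu0, sigma, n, alpha = 100, 15, 36, 0.05
    se = sigma / n ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    lower, upper = mu0 - z_crit * se, mu0 + z_crit * se

    for mu_true in (102, 105, 108):        # hypothetical true means
        beta = norm.cdf(upper, mu_true, se) - norm.cdf(lower, mu_true, se)
        print(f"true mean {mu_true}: beta = {beta:.3f}, power = {1 - beta:.3f}")

Notice that the farther the true mean lies from the hypothesized mean, the smaller β becomes and the greater the power: larger departures from H0 are easier to detect.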
