Those of you who work in data analysis and have been through some statistics courses may have come across the Neyman-Pearson lemma (NP lemma). The message is simple, the proof not so much, but what I always found difficult was getting a common-sense feeling for what it is about. Reading the book "Common Errors in Statistics" by P. I. Good and J. W. Hardin, I found an explanation and example that gave me the gut feeling about the NP lemma I had always missed.

In not 100% mathematically perfect language, what Neyman-Pearson tells us is that the most powerful test one can come up with to validate a given hypothesis at a certain significance level is the one whose rejection region is made of all possible observations with a likelihood ratio above a certain threshold… woahhh! Who said it was easy!

Keep calm and deconstruct the lemma:

  1. Hypotheses. In statistics one always works with two hypotheses that a statistical test should reject or not reject. There is the null hypothesis, which will not be rejected until the sample evidence against it is strong enough. There is also the alternative hypothesis, the one we will adopt if the null seems to be false.
  2. Power of a test (a.k.a. sensitivity) tells us the proportion of times we will correctly reject the null hypothesis when it is false. We want powerful tests, so that most of the time we reject the null hypothesis we are right!
  3. Significance level of a test (a.k.a. false positive rate) tells us the proportion of times we will wrongly reject the null hypothesis when it is true. We want a small significance level, so that most of the times we reject the null hypothesis we are not wrong!
  4. Rejection region: among all possible outcomes of the test, the rejection region contains those outcomes that will make us reject the null hypothesis in favor of the alternative one.
  5. Likelihood is the probability of having seen the observed outcome of the test given that the null hypothesis (likelihood of the null hypothesis) or the alternative one (likelihood of the alternative hypothesis) is true.
  6. Likelihood ratio is the likelihood of the alternative hypothesis divided by the likelihood of the null hypothesis (written out as a formula just after this list). If the test outcome was very much expected under the null hypothesis versus the alternative one, the likelihood ratio will be small.
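
In symbols (my notation, not from the book): for an observed outcome $x$, the likelihood ratio is

$$\Lambda(x) = \frac{P(x \mid H_1)}{P(x \mid H_0)},$$

and the Neyman-Pearson test rejects the null hypothesis exactly when $\Lambda(x)$ exceeds a threshold $k$, with $k$ chosen so that the test achieves the desired significance level.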

Enough definitions! (Although if you look at them carefully, you will realize they are very insightful!) Let's get to what Neyman and Pearson tell us: if you want the most powerful statistical test, define the rejection region by first including the test results with the highest likelihood ratio, and keep adding results until you reach the agreed proportion of times your test will reject the null hypothesis when it is true (the significance level). A code sketch of this greedy construction follows.
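
To make the recipe concrete, here is a minimal Python sketch of the greedy construction for a test with finitely many possible outcomes. The function name and data layout are mine, not from the book; probabilities are kept as whole percentages so the running sums stay exact (with floats, 10% + 20% would not compare cleanly against 30%).

```python
def np_rejection_region(p_null, p_alt, alpha):
    """Build the most powerful rejection region with significance <= alpha.

    p_null, p_alt: dicts mapping each possible outcome to its probability
    (in whole percent) under the null and alternative hypotheses.
    alpha: significance budget, also in percent.
    """
    # Rank outcomes by likelihood ratio P(x | H1) / P(x | H0), highest first.
    by_ratio = sorted(p_null, key=lambda x: p_alt[x] / p_null[x], reverse=True)

    region, significance, power = [], 0, 0
    for x in by_ratio:
        if significance + p_null[x] > alpha:
            break  # this outcome would blow the significance budget
        region.append(x)
        significance += p_null[x]  # chance of wrongly rejecting H0
        power += p_alt[x]          # chance of correctly rejecting H0
    return region, significance, power
```

(For discrete outcomes the textbook lemma may also require randomizing at the boundary to hit the significance level exactly; this sketch simply stops before exceeding it.)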

Let's see an example where hopefully everything will come together. The example is inspired by the book mentioned above, but the numbers are completely made up by me, so they should not be viewed as reflecting any reality or personal opinion.

Imagine we want to determine whether somebody is in favor of setting immigration quotas (null hypothesis) or against them (alternative hypothesis) by asking about his/her feelings toward the European Union.

Imagine we knew the actual probability distribution of the answers to our question for both types of people:

What is your feeling about the EU?

| Hypothesis | hate | dislike | indifferent | like | really like | Total |
|---|---|---|---|---|---|---|
| H0: for quotas | 20% | 30% | 20% | 20% | 10% | 100% |
| H1: against quotas | 5% | 10% | 20% | 35% | 30% | 100% |
| Likelihood ratio | 0.25 | 0.33 | 1.0 | 1.75 | 3.0 | |

Let's imagine we are willing to accept a false positive rate of 30%; that is, 30% of the time we will reject the null hypothesis and assume the interviewed person is against quotas when he/she is really for them. How would we construct the test?

According to Neyman and Pearson, we first take the result with the highest likelihood ratio. This is the answer "really like the EU", with a ratio of 3. With this result, if we conclude somebody is against quotas when he/she says he/she "really likes the EU", 10% of the time we will be classifying for-quota people as against (significance). However, we will only be correctly classifying against-quota people 30% of the time (power), as not everybody in this group has the same opinion about the EU.

This seems a poor result as far as power is concerned. However, the test does not make many mistakes misclassifying for-quota people (significance). Since we can be more flexible about significance, let's look for the next test result to add to the bag of answers that reject the null hypothesis (the rejection region).

The next answer with the highest likelihood ratio is "like the EU". If we use the answers "really like" and "like" the EU as test results that allow us to reject the null hypothesis of somebody being for quotas, we misclassify for-quota people 30% of the time (10% from "really like" plus 20% from "like") and correctly classify against-quota people 65% of the time (30% from "really like" plus 35% from "like"). In statistical jargon: our significance increased from 10% to 30% (bad!) while the power of our test increased from 30% to 65% (good!).
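
Plugging the made-up table into the sketch above reproduces these numbers (variable names are mine):

```python
answers = ["hate", "dislike", "indifferent", "like", "really like"]
p_null = dict(zip(answers, [20, 30, 20, 20, 10]))  # H0: for quotas
p_alt = dict(zip(answers, [5, 10, 20, 35, 30]))    # H1: against quotas

region, significance, power = np_rejection_region(p_null, p_alt, alpha=30)
print(region, significance, power)
# ['really like', 'like'] 30 65  -> the 30% significance / 65% power test
```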

This is a situation every statistical test faces. There is no such thing as a free lunch, even in statistics! If you want to increase the power of your test, you do it at the expense of increasing the significance level. Or in simpler terms: if you want to better classify the good guys, you will do it at the expense of having more bad guys looking good!

Basically, now we are done! We created the most powerful test we could with the given data and a significance level of 30% by using the "really like" and "like" answers to determine whether somebody is against quotas… or are we?

What would have happened if, in the second step, after the "really like" answer was chosen, we had included the answer "indifferent" instead of "like"? The significance of the test would have been the same as before, 30%: 10% of for-quota people answer "really like" and 20% of them answer "indifferent". Both tests would be equally bad at misclassifying for-quota individuals. However, the power would get worse! With the new test we would have a power of 50% instead of the 65% we had before: 30% from "really like" plus 20% from "indifferent". The new test would be less precise at identifying against-quota individuals!
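
For a quick check, a small helper (hypothetical, mine) that scores an arbitrary rejection region against the table confirms the comparison:

```python
def evaluate(region, p_null, p_alt):
    """Significance and power (in percent) of a given rejection region."""
    return sum(p_null[x] for x in region), sum(p_alt[x] for x in region)

print(evaluate(["really like", "like"], p_null, p_alt))         # (30, 65)
print(evaluate(["really like", "indifferent"], p_null, p_alt))  # (30, 50)
```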

Who helped us out here? Neyman and Pearson's remarkable likelihood-ratio idea! Taking at each step the answer with the highest likelihood ratio ensured that we added as much power as possible to the test (large numerator) while keeping the significance under control (small denominator)!
