I think we should all know more about the false discovery rate and type 2 errors.
Do you know this graph? It shows the balance between type 1 errors (incorrectly rejecting a true null hypothesis) and type 2 errors (failing to reject a false null hypothesis) in statistics. Imagine you have done an analysis to see if any of your genetic markers are connected with the gene you are looking for. When you decide your cut-off value (where your p-value threshold should be), you actually decide the balance you are willing to accept between type 1 and type 2 errors.
You see, if you decide on a cut-off value of 0.05, that might mean you get 14 significant results (the blue ones in the graph). But it will also mean there are 18 significant results you didn't find (the red ones), because you were too afraid that any of your blue results would be a false positive (a result that is significant by chance, not because of a real effect in the data).
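If you want to see this tradeoff for yourself, here is a minimal Python sketch. All the numbers in it are made up for illustration (the marker counts, sample size, effect size, and seed are my assumptions, not the numbers behind the graph): it simulates some markers with no real effect and some with a real one, then shows how lowering the cut-off trades false positives for missed discoveries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

# Hypothetical setup: 800 markers with no effect, 200 with a real effect
n_null, n_real, n_samples = 800, 200, 30
effect = 0.6  # assumed effect size for the "real" markers

# Null markers are drawn around 0, real markers around `effect`
null_data = rng.normal(0.0, 1.0, size=(n_null, n_samples))
real_data = rng.normal(effect, 1.0, size=(n_real, n_samples))

# One-sample t-test per marker against a mean of 0
p_null = stats.ttest_1samp(null_data, 0.0, axis=1).pvalue
p_real = stats.ttest_1samp(real_data, 0.0, axis=1).pvalue

for alpha in (0.05, 0.01, 0.001):
    false_pos = (p_null < alpha).sum()   # type 1 errors
    missed = (p_real >= alpha).sum()     # type 2 errors
    print(f"cut-off {alpha}: {false_pos} false positives, {missed} missed real effects")
```

Run it and you will see the pattern from the graph: a stricter cut-off gives you fewer false positives but leaves more of the real effects on the table.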
This isn't so difficult when you run a single test, but when you run very many tests, the problem grows. If none of the null hypotheses were false, a cut-off of 0.05 would still flag about 5% of the tests as significant by pure chance. And right now at work, I am facing exactly this problem after 4,500 tests: at that cut-off I would expect around 4,500 × 0.05 = 225 false positives even if nothing were really there. How do I find a good cut-off value so I actually find the significant results, but can trust that they are true positives?
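One standard answer to this question is the Benjamini–Hochberg procedure, which controls the false discovery rate (the expected fraction of false positives among the results you call significant) rather than the per-test error rate. I am not saying this is what I will end up using, but here is a sketch of how it could look in Python with statsmodels, using simulated p-values as a stand-in for my real 4,500:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)

# Hypothetical stand-in for my 4,500 real p-values:
# mostly uniform (no association) plus a small cluster of genuinely small ones
p_values = np.concatenate([
    rng.uniform(0, 1, size=4400),     # markers with no real association
    rng.uniform(0, 0.001, size=100),  # markers with a real signal
])

# Benjamini-Hochberg: keep the expected share of false positives
# among the declared discoveries at or below 5%
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

print(f"Naive cut-off 0.05:    {(p_values < 0.05).sum()} significant")
print(f"Bonferroni 0.05/4500:  {(p_values < 0.05 / len(p_values)).sum()} significant")
print(f"BH at 5% FDR:          {reject.sum()} significant")
```

The comparison is the interesting part: the naive cut-off drowns the real signals in chance findings, Bonferroni is so strict that it can throw away true positives, and Benjamini–Hochberg sits in between by letting me say "of the results I report, at most about 5% should be false".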
I will keep you updated.