False discovery rate

Up to Week 11: Model selection and validation

False discovery rate

Posted by Stephanie Manel at March 03. 2009

I suggest introducing the false discovery rate (FDR) in the lecture. It has been proposed as a more robust alternative to raw p-values for controlling false positives and is designed for use in multiple testing.
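One standard FDR procedure is Benjamini and Hochberg's step-up rule. A minimal Python sketch (the p-values and the level `alpha` below are invented purely for illustration):

```python
# Benjamini-Hochberg step-up procedure -- a minimal illustration.

def benjamini_hochberg(pvals, alpha=0.05):
    """Return booleans (in the original order): True where the hypothesis
    is rejected while controlling the false discovery rate at alpha."""
    m = len(pvals)
    # Sort p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # ... then reject the hypotheses with the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]))
```

Unlike a plain Bonferroni cut-off, the threshold grows with the rank of the sorted p-value, which is what makes the procedure less conservative under many tests.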


I can send papers about FDR if necessary.


Stéphanie


Re: False discovery rate

Posted by Helene Wagner at March 08. 2009

Hi Stephanie - yes, could you please send me the paper?


Thanks,


Helene


Re: False discovery rate

Posted by Melanie Murphy at March 12. 2009

Hi Stephanie -




Could you post the reference for this paper for all of us? I am very interested in it, and I am sure others are as well.


Re: False discovery rate

Posted by Niko Balkenhol at March 16. 2009

I'm not sure which FDR papers Stephanie has in mind, but here are 2 of the main FDR references I'm aware of:


Benjamini Y, Hochberg Y (1995) "Controlling the false discovery rate: a practical and powerful approach to multiple testing." Journal of the Royal Statistical Society, Series B 57, 289-300.


 


Benjamini Y ,Yekutieli D (2001) "The control of the false discovery rate in multiple testing under dependency". Annals of Statistics 29 (4): 1165–1188.



Re: False discovery rate

Posted by Niko Balkenhol at March 16. 2009

On second thought: If you include FDR, you should perhaps also cover (sequential) Bonferroni correction.
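Sequential Bonferroni (Holm's step-down procedure) is also easy to sketch. A minimal Python version controlling the family-wise error rate at `alpha`, with invented p-values:

```python
# Holm's sequential (step-down) Bonferroni correction -- a minimal sketch.

def holm_bonferroni(pvals, alpha=0.05):
    """Return booleans (in the original order): True where the hypothesis
    is rejected while controlling the family-wise error rate at alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for step, i in enumerate(order):
        # Compare the (step+1)-th smallest p-value to alpha / (m - step).
        if pvals[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

print(holm_bonferroni([0.01, 0.02, 0.03, 0.04]))
```

The divisor shrinks from m down to 1 as smaller p-values pass, so Holm is uniformly more powerful than the plain Bonferroni correction while giving the same error guarantee.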



I'm a little confused about the separation of AIC vs. information-theoretic approaches. After all, AIC is one (of many) measures used in information theory. I'm also wondering whether you could discuss why using AIC and related indices might not work with our data. (I.e., why EXACTLY is pair-wise data problematic for this approach? I took a 3-day workshop on information-theoretic approaches, but even the experts couldn't really tell me what conceptual/theoretical arguments go against it for our typical data in landscape genetics... curious to see what you have to say about it...)
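For what it's worth, the AIC bookkeeping itself is simple (AIC = 2k - 2 ln L, with k fitted parameters and L the maximized likelihood). A toy Python comparison of two hypothetical landscape-genetics models, with made-up names, log-likelihoods, and parameter counts:

```python
# AIC = 2k - 2 ln(L). The model names and numbers below are invented
# purely to show the ranking arithmetic, not results from real data.

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

models = {
    "IBD only":        aic(log_likelihood=-120.3, n_params=2),
    "IBD + landcover": aic(log_likelihood=-115.1, n_params=4),
}
# Models are ranked by delta-AIC relative to the best (lowest-AIC) model.
best = min(models.values())
for name, score in sorted(models.items(), key=lambda kv: kv[1]):
    print(name, "delta-AIC =", round(score - best, 2))
```

The conceptual objection for our data is upstream of this arithmetic: the likelihood assumes independent observations, and pair-wise genetic distances are not independent, so the question of whether the ranking is meaningful remains.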


 


This would be a good lecture to point out the problems associated with partial Mantel tests.


 


And, just a very minor thing: On slides 55ff, you talk about the “correct” model.  All models are wrong, but some are useful.  Maybe change it to “model most supported by the data”?


 



 
