Speaker: Sayeh Khaniha (Inria)
Date: 27/11/2019
Time: 10:30 am - 12:00 pm
Location: Doctoral Training Center (EIT Digital)
Abstract
The false discovery rate (FDR) is a statistical approach used in multiple hypothesis testing to correct for multiple comparisons. It is typically used in high-throughput experiments to correct for random events that falsely appear significant. When testing a null hypothesis to determine whether an observed score is statistically significant, a measure of confidence, the p-value, is calculated and compared to a confidence threshold α. When k hypotheses are tested simultaneously, each at confidence level α, the probability of at least one false positive (i.e., rejecting a null hypothesis that is in fact true) is 1 − (1 − α)^k for independent tests, which can lead to a high error rate in the experiment. Therefore, a multiple testing correction, such as the FDR, is needed to adjust our statistical confidence measures based on the number of tests performed.
References: False Discovery Rate (https://doi.org/10.1007/978-1-4419-9863-7_223); Bradley Efron and Trevor Hastie, Computer Age Statistical Inference.
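Since the abstract mentions both the 1 − (1 − α)^k inflation of the error rate and the idea of an FDR correction, a small numerical illustration may help. The sketch below is not from the talk: it assumes Python with NumPy, uses the classical Benjamini–Hochberg step-up procedure as one representative FDR-controlling method, and the p-values in the example are hypothetical.

import numpy as np

def family_wise_error(alpha: float, k: int) -> float:
    """Probability of at least one false positive among k independent
    tests, each run at level alpha: 1 - (1 - alpha)^k."""
    return 1.0 - (1.0 - alpha) ** k

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q.

    Returns a boolean array marking which hypotheses are rejected.
    """
    p = np.asarray(p_values, dtype=float)
    k = p.size
    order = np.argsort(p)                      # sort p-values in ascending order
    ranked = p[order]
    thresholds = q * np.arange(1, k + 1) / k   # BH thresholds: i * q / k
    below = ranked <= thresholds
    rejected = np.zeros(k, dtype=bool)
    if below.any():
        cutoff = np.nonzero(below)[0].max()    # largest i with p_(i) <= i*q/k
        rejected[order[: cutoff + 1]] = True   # reject all hypotheses up to that rank
    return rejected

if __name__ == "__main__":
    # With alpha = 0.05 and k = 20 independent tests, the chance of at
    # least one false positive is already about 64%.
    print(round(family_wise_error(0.05, 20), 3))   # 0.642

    # Hypothetical p-values from 8 simultaneous tests; at q = 0.05 the
    # procedure rejects only the two smallest ones here.
    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
    print(benjamini_hochberg(pvals, q=0.05))

Unlike a Bonferroni-style correction of the family-wise error rate, this procedure controls the expected proportion of false positives among the rejected hypotheses, which is why it is favoured in high-throughput settings.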