Session outline:

Dr. McNutt will open, presenting the problems of reproducibility and replicability from her perspective as editor of a top general-interest scientific publication, and her current plan to raise the bar. Prof. Stark will discuss replicability and reproducibility as viewed by an applied statistician, emphasizing their connection to the scientific method in an era in which science relies on big data and heroic computations. Prof. Benjamini will discuss how selective inference and overly restrictive views on variability affect replicability and generalizability, and what a researcher can do to avoid these pitfalls when reporting the results of a single study. Prof. Heller will address whether repeated studies indeed replicate a given study.


Titles and short bios:

 Tentative title: Reproducibility, Scientific Publication, and Peer Review
 Marcia McNutt is Editor-in-Chief of the journal Science. Previously, she was director of the United States Geological Survey (USGS) and science adviser to the United States Secretary of the Interior. Before that, McNutt was president and chief executive officer of the Monterey Bay Aquarium Research Institute. She has also been a professor of marine geophysics at Stanford University and the University of California, Santa Cruz.



 Title: Addressing statistical woes affecting replicability
Yoav Benjamini is the Nathan and Lily Professor of Applied Statistics at Tel Aviv University. He is the originator (with Hochberg) of the False Discovery Rate criterion, as well as the widely used Benjamini-Hochberg procedure that controls it, and has since been a major contributor to theory and practice in multiplicity research. His work on assuring statistical properties 'on the average over the selected' ranges from confidence intervals (JASA 2005) to families of families (JASA 2013). He presented a Medallion lecture (2012) on Selective Inference and Replicability, in which he advanced the thesis that many replicability problems stem from selective inference. He has worked for many years with zoologists on measuring behaviour, and in particular has published approaches to deal with the lack of replicability of measured strain differences across laboratories. He leads a large European Research Council project on statistical approaches to replicability problems in the life sciences. In 2012 he was awarded the Israel Prize for his research in Statistics.

 Tentative Title: Assessing replicability across studies: the r-value.
 Ruth Heller is a senior lecturer at the Department of Statistics, Tel Aviv University. Previously, she was the Mark O. Winkelman Distinguished Scholar in Residence Visiting Lecturer of Statistics at the University of Pennsylvania, and later held a position at the Technion. A large part of her research in recent years revolves around the question of how replicability claims can be established, with applications to bioinformatics and functional Magnetic Resonance Imaging (fMRI). Starting with original work on partial conjunction hypotheses (Biometrics 2008) and selective inference (Philosophical Transactions, 2009), in her current work Ruth develops replicability analysis procedures that quantify the evidence in favor of replicability claims. For a widely used design in "omics" research, where promising features are selected for follow-up based on a primary study, Ruth developed procedures for discovering whether the follow-up study has replicated the findings of the primary study (JASA, 2013). When several studies examine many features, an empirical Bayes approach to replicability was developed (AOS, 2014), and a frequentist approach is currently under way. Her other research interests include nonparametric tests of independence and of equality of distributions.