Complexity Explorer Santa Fe Institute


Introduction to Open Science

2.4 Unit 2 References

National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and replicability in science. National Academies Press.


The Turing Way Community, Becky Arnold, Louise Bowler, Sarah Gibson, Patricia Herterich, Rosie Higman, Kirstie Whitaker. (2019, March 25). The Turing Way: A Handbook for Reproducible Data Science (Version v0.0.4). Zenodo. http://doi.org/10.5281/zenodo.3233986
Web version: https://the-turing-way.netlify.app/welcome.html


Poldrack, R. A., Feingold, F., Frank, M. J., Gleeson, P., de Hollander, G., Huys, Q. J., & Cohen, J. D. (2019). The importance of standards for sharing of computational models and data. Computational Brain & Behavior, 2(3), 229-232.

Markowetz, F. (2015). Five selfish reasons to work reproducibly. Genome Biology, 16(1), 1-4.

Nakagawa, S., & Parker, T. H. (2015). Replicating research in ecology and evolution: feasibility, incentives, and the cost-benefit conundrum. BMC Biology, 13(1), 1-6.

Devezer, B., Nardin, L. G., Baumgaertner, B., & Buzbas, E. O. (2019). Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity. PLoS ONE, 14(5), e0216125.

Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (2020). The case for formal methodology in scientific reform. bioRxiv.
(Talk version: https://www.youtube.com/watch?v=QhPtKurU2qk&t=39s )
 

Replication projects:
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251).


- Klein, R., Ratliff, K., Vianello, M., Adams Jr, R., Bahník, S., Bernstein, M., ... & Nosek, B. (2014). Data from investigating variation in replicability: A “many labs” replication project. Journal of Open Psychology Data, 2(1).


- Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams Jr, R. B., Alper, S., & Sowden, W. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443-490.


- Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., & Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68-82.


- Klein, R. A., Cook, C. L., Ebersole, C. R., Vitiello, C., Nosek, B. A., Chartier, C. R., ... & Ratliff, K. (2019). Many Labs 4: Failure to replicate mortality salience effect with and without original author involvement.


- Camerer, C. F., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., Kirchler, M., ... (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433-1436.


- Many Primates, Altschul, D. M., Beran, M. J., Bohn, M., Call, J., DeTroy, S., & Watzek, J. (2019). Establishing an infrastructure for collaboration in primate cognition research. PLoS ONE, 14(10), e0223675.


- Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory‐building. Infancy, 22(4), 421-435.

 

HARKing, questionable research practices (QRPs), and incentives in science:

Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Du Sert, N. P., & Ioannidis, J. P. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 1-9.

Hollenbeck, J. R., & Wright, P. M. (2017). Harking, sharking, and tharking: Making the case for post hoc analysis of scientific data.

Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University, 348.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532.

Chambers, C. (2019). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press.

Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384.

Ritchie, S. (2020). Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Metropolitan Books.
Comic version: https://www.smbc-comics.com/comic/science-fictions

Leiden Manifesto for research metrics: http://www.leidenmanifesto.org/ (video version: https://vimeo.com/133683418)

San Francisco Declaration on Research Assessment (DORA): https://sfdora.org/

Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217.

Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34(2), 103-115.

Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. Journal of the American Statistical Association.