Towards open, reliable, and transparent ecology and evolutionary biology

Rose E O'Dea et al. BMC Biol. 2021 Apr 9;19(1):68. doi: 10.1186/s12915-021-01006-3.

Abstract

Unreliable research programmes waste funds, time, and even the lives of the organisms we seek to help and understand. Reducing this waste and increasing the value of scientific evidence require changing the actions of both individual researchers and the institutions they depend on for employment and promotion. While ecologists and evolutionary biologists have somewhat improved research transparency over the past decade (e.g. more data sharing), major obstacles remain. In this commentary, we lift our gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes.

Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Fig. 1
The strained researcher is tugged away from their ideals by the incentives of the institutions they rely upon for employment and promotion. Practices and behaviours on the left-hand side of the tug-of-war (shaded orange) depict problems of the status quo, where research is focussed more on publishing papers than answering questions. Preferred practices and behaviours on the right-hand side of the tug-of-war (shaded blue) depict a vision for efficient and collaborative science aimed at credibly answering questions. To shift research practices towards reliability, three types of institutional incentives could change, as shown by grey boxes underneath the tug-of-war. First, journals and funders could quickly encourage validation of original research by publishing and funding replication studies. Less likely, journals could publish fewer, more comprehensive and coherent research programmes (both long-term studies and collections of smaller studies on the same research topic), thereby relieving pressures to oversell the importance of small studies. Second, employers could hire individuals with specialised expertise (e.g. data stewards, empiricists, statisticians, and writers), whose employment does not depend on particular research outcomes. Reducing the pyramid structure of academic career paths might promote a more diverse workforce that—without the pressure to maintain professional brands—could be quicker to discard discredited beliefs. Third, funding agencies could curb the benefits of self-promotion and irreproducible results by funding diverse teams, science maintenance (e.g. validation and error detection) as much as innovation, and by selecting randomly from projects that pass particular thresholds (i.e. grant lotteries). Grant lotteries are already being trialled by multiple funding agencies (e.g. the Fetzer Franklin Fund, the Health Research Council of New Zealand, and the Swiss National Science Foundation), but their effects on the reliability of research will depend on which metrics are used to select entrants into the lottery.
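To make the grant-lottery mechanism concrete, here is a minimal Python sketch of selecting randomly among proposals that pass a quality threshold. The proposal IDs, review scores, threshold, and number of awards are invented for illustration and do not reflect the actual procedures of any funder named in the caption.

```python
# Hypothetical sketch of a threshold-then-lottery funding scheme.
# All names and numbers (proposal IDs, review scores, the 7.0
# threshold, the number of awards) are illustrative assumptions,
# not the rules of any funder mentioned in the caption.
import random

proposals = {            # proposal ID -> mean reviewer score (0-10)
    "P01": 8.2, "P02": 6.9, "P03": 7.5, "P04": 9.1,
    "P05": 5.4, "P06": 7.8, "P07": 8.8, "P08": 6.1,
}
quality_threshold = 7.0  # minimum score to enter the lottery
n_awards = 3             # grants the budget can support

# Step 1: screen proposals against the quality threshold.
eligible = [pid for pid, score in proposals.items()
            if score >= quality_threshold]

# Step 2: fund a random draw from the eligible pool.
winners = random.sample(eligible, k=min(n_awards, len(eligible)))

print(f"Eligible for lottery: {sorted(eligible)}")
print(f"Funded (random draw): {sorted(winners)}")
```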
Fig. 2
Three areas for reform to relieve research strain, outstanding questions for meta-research, and possible answers.

Error detection: researchers need to be able to distinguish between reliable and unreliable research. A better system of quality control (both prior to and post-publication) might discourage research practices that inflate the rate of false-positive findings in the research literature (e.g. selective reporting; p-hacking; HARKing). At the same time, there should be incentives for researchers to remedy mistakes in their previous work, for example, through ‘living’ papers that can be easily updated. A more drastic change would be to require self-contained studies to be replicated, and for published results from long-term field studies to be revisited in subsequent years (e.g. before funding is renewed).

Theory development: research in ecology and evolutionary biology sometimes fails to traverse the space between speculation and theory. In addition to hypothesis testing, answering big questions requires space for descriptive and exploratory research [10]. Detailed descriptions of natural history help calibrate theoretical models, and predictions of models should be tested in natural settings. To specify conditions under which findings are expected to replicate, authors can include ‘constraints on generality’ statements alongside inferences. When unexpected results are attributed to ‘context dependence’, specific contexts can be tested with new data. For cumulative research, foundational studies can be validated with close replications, and their generality assessed in different settings.

Human resources: education programmes could increase the ability of researchers to work transparently and reproducibly, but honing these skills and conducting rigorous research are too often unrewarded. Any change to evaluation metrics requires careful consideration and measurement of unintended consequences (e.g. how to ensure costs are not disproportionately borne by less well-resourced research groups and universities). Much published research represents independent projects conducted by trainees, but reliability might be increased by coordinating multiple trainees on the same projects (including replication projects) and providing secure employment to people with specialised expertise (who can be professionally indifferent to the outcome of a particular study).
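The caption's point about practices that inflate false-positive rates (selective reporting, p-hacking) can be illustrated with a small simulation. The Python sketch below is not from the paper; it assumes a researcher runs several independent null comparisons per study and reports only the smallest p-value, which pushes the false-positive rate well above the nominal 5%.

```python
# Illustrative simulation (not from the paper): how analytic
# flexibility inflates the false-positive rate. Each simulated
# study has NO true effect, but the researcher "tries" several
# analyses and reports only the smallest p-value.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_studies = 5_000    # simulated null studies
n_per_group = 30     # sample size per group
k_analyses = 5       # analyses tried per study (assumed number)
alpha = 0.05

false_positives = 0
for _ in range(n_studies):
    p_values = [
        ttest_ind(rng.normal(size=n_per_group),
                  rng.normal(size=n_per_group)).pvalue
        for _ in range(k_analyses)
    ]
    if min(p_values) < alpha:  # selective reporting of the "best" result
        false_positives += 1

print(f"Nominal false-positive rate: {alpha:.0%}")
# Expect roughly 1 - 0.95**5, i.e. about 23%, not the nominal 5%.
print(f"Rate with selective reporting: {false_positives / n_studies:.1%}")
```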

References

    1. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, du Sert NP, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1(1):1–9. doi: 10.1038/s41562-016-0021.
    2. Forstmeier W, Wagenmakers E-J, Parker TH. Detecting and avoiding likely false-positive findings - a practical guide. Biol Rev Camb Philos Soc. 2017;92(4):1941–1968. doi: 10.1111/brv.12315.
    3. Parker TH, Forstmeier W, Koricheva J, Fidler F, Hadfield JD, Chee YE, Kelly CD, Gurevitch J, Nakagawa S. Transparency in ecology and evolution: real problems, real solutions. Trends Ecol Evol. 2016;31(9):711–719. doi: 10.1016/j.tree.2016.07.002.
    4. Culina A, van den Berg I, Evans S, Sánchez-Tójar A. Low availability of code in ecology: a call for urgent action. PLoS Biol. 2020;18(7):e3000763. doi: 10.1371/journal.pbio.3000763.
    5. Fraser H, Parker T, Nakagawa S, Barnett A, Fidler F. Questionable research practices in ecology and evolution. PLoS One. 2018;13:e0200303. doi: 10.1371/journal.pone.0200303.
