PEERS - An Open Science "Platform for the Exchange of Experimental Research Standards" in Biomedicine

Annesha Sil et al. Front Behav Neurosci. 2021 Oct 21;15:755812. doi: 10.3389/fnbeh.2021.755812. eCollection 2021.

Abstract

Laboratory workflows and preclinical models have become increasingly diverse and complex. Confronted with an abundance of information of ambiguous relevance to their specific experiments, scientists risk overlooking critical factors that can influence the planning, conduct and results of studies and that should have been considered a priori. To address this problem, we developed "PEERS" (Platform for the Exchange of Experimental Research Standards), an open-access online platform built to help scientists determine which experimental factors and variables are most likely to affect the outcome of a specific test, model or assay, and therefore ought to be considered during the design, execution and reporting stages. The PEERS database is categorized into in vivo and in vitro experiments and provides lists of factors derived from the scientific literature that have been deemed critical for experimentation. The platform is based on a structured and transparent system for rating the strength of evidence related to each identified factor and its relevance for a specific method/model. The rating procedure will not be limited to the PEERS working group but will also allow for community-based grading of evidence. Here we describe a working prototype using the Open Field paradigm in rodents and present the selection of factors specific to each experimental setup along with the rating system. PEERS not only offers users the possibility to search for information that facilitates experimental rigor, but also draws on the engagement of the scientific community to actively expand the information contained within the platform.

Collectively, by helping scientists search for specific factors relevant to their experiments and share experimental knowledge in a standardized manner, PEERS will serve as a collaborative exchange and analysis tool to enhance the validity, robustness and reproducibility of preclinical research. PEERS offers a vetted, independent tool for judging the quality of information available on a certain test or model; it identifies knowledge gaps and provides guidance on the key methodological considerations that should be prioritized to ensure that preclinical research is conducted to the highest standards and best practice.

Keywords: animal models; neuroscience; platform; quality rating; reproducibility; study design; study outcome; transparency.


Conflict of interest statement

AB and CE are employees and shareholders of PAASP GmbH and PAASP US LLC. AB was an employee and shareholder of Exciva GmbH and Synventa LLC. CF-B was employed by Cohen Veterans Bioscience, which has funded the initial stages of the PEERS project development. MP was an employee of EpiEndo Pharmaceutical EHF and previously of Fraunhofer IME-TMP and GSK. TS was an employee of Janssen Pharmaceutica. AH was employed by Y47 Consultancy. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

FIGURE 1
Outline of the PEERS concept and workflow (the 3Es). To understand whether specific factors are relevant for certain methods/models (“protocols”), the PEERS workflow is based on different steps to collect information about selected factors/protocols from publications or the scientific community (“Evidence”); rate the strength of this information and provide mechanisms for editing, curating and maintaining the information/database (“Evaluation”); and present the outcome in a user-friendly and digestible form (“Extraction Output”) so that users will be provided with an answer helpful for their planned experiments.
FIGURE 2
PEERS platform structure. Users can interact with the PEERS platform (blue arrows) by searching for or adding information (Front End Modules). The PEERS database (Back End) consists of various protocols, for which generic and specific factors and related references have been identified. The Quality of Evidence for the importance of certain factors is evaluated using scorecards and a summary is presented by visualizing results in the user interface. Users can contribute by adding new protocols or factors and by scoring relevant references (green arrows).
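The back-end structure described in the caption (protocols containing factors, each backed by scored references) can be sketched as a minimal data model. All class and field names, the score scale, and the example entries below are illustrative assumptions, not the actual PEERS schema:

```python
from dataclasses import dataclass, field

@dataclass
class Reference:
    """A literature source whose evidence for a factor is rated via a scorecard."""
    citation: str
    scorecard_score: float  # hypothetical 0-10 quality-of-evidence rating

@dataclass
class Factor:
    """A generic or specific experimental factor attached to a protocol."""
    name: str
    references: list[Reference] = field(default_factory=list)

@dataclass
class Protocol:
    """An in vivo or in vitro method/model in the PEERS database."""
    name: str
    category: str  # "in vivo" or "in vitro"
    factors: list[Factor] = field(default_factory=list)

# Example entry: the Open Field paradigm with one specific factor
open_field = Protocol(
    name="Open Field",
    category="in vivo",
    factors=[Factor(
        name="illumination level of the arena",
        references=[Reference("example reference, 2014", 7.5)],
    )],
)
```

Community contributions (the green arrows in the figure) would correspond to appending new `Protocol` or `Factor` entries and attaching scored `Reference` objects to existing factors.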
FIGURE 3
The “Open Field” protocol example, demonstrating how the different back-end functionalities of PEERS will translate into the “Extracted Output,” presented to PEERS users. A search query for a specific factor/protocol will lead to the selection of all relevant references from the PEERS database dealing with the factor of interest (e.g., the “illumination level of the arena”). Based on the scorecards for these references the combined score is calculated which translates into the overall extracted output for the selected factor/protocol combination. This status is then presented to the user. Users also have access to all scorecards to understand how the overall grading of evidence was achieved.
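The aggregation step in the caption (per-reference scorecards combined into one score, which then translates into the extracted output) might look like the following sketch. The mean-based aggregation and the status thresholds are assumptions made for illustration, not the published PEERS rating rules:

```python
def combined_score(scorecard_scores):
    """Aggregate per-reference scorecard scores for one factor/protocol pair."""
    if not scorecard_scores:
        return None  # no rated evidence yet: a knowledge gap
    return sum(scorecard_scores) / len(scorecard_scores)

def extracted_output(score):
    """Translate a combined score into a user-facing evidence status
    (the 0-10 scale and thresholds here are assumed, not PEERS's actual ones)."""
    if score is None:
        return "knowledge gap"
    if score >= 7.0:
        return "strong evidence"
    if score >= 4.0:
        return "moderate evidence"
    return "weak evidence"

# e.g. three scorecards for "illumination level of the arena" in the Open Field protocol
scores = [8.0, 6.5, 7.5]
status = extracted_output(combined_score(scores))  # → "strong evidence"
```

Because users retain access to the individual scorecards, any such combined status remains auditable back to the underlying references.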

