Bayesian Lookahead Perturbation Policy for Inference of Regulatory Networks

Mohammad Alali et al. IEEE/ACM Trans Comput Biol Bioinform. 2024 Sep-Oct;21(5):1504-1517. doi: 10.1109/TCBB.2024.3402220. Epub 2024 Oct 9.

Abstract

The complexity and scale of regulatory networks (e.g., gene regulatory networks and microbial networks) introduce substantial uncertainty into their models. This uncertainty often cannot be fully reduced with the limited and costly data acquired under a system's normal operating conditions. Regulatory networks also suffer from non-identifiability, where the true underlying network model cannot be clearly distinguished from other candidate models. Perturbation, or excitation, is a well-known process in systems biology for acquiring targeted data that reveals the underlying mechanisms of regulatory networks and overcomes non-identifiability. We consider a general class of Boolean network models that captures the activation and inactivation of components and their interactions. Assuming partial prior knowledge of the interactions between network components, this paper formulates the inference process through the maximum a posteriori (MAP) criterion. We develop a Bayesian lookahead policy that systematically perturbs regulatory networks to maximize the performance of MAP inference on the perturbed data. This is achieved by formulating the perturbation process in a reinforcement learning framework and deriving a scalable deep reinforcement learning policy that computes a near-optimal Bayesian perturbation policy. The proposed method learns the perturbation policy through planning, without the need for any real data. The strong performance of the approach is demonstrated by comprehensive numerical experiments on the well-known mammalian cell cycle and gut microbial community networks.
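The paper's policy itself is learned with deep reinforcement learning; as a rough illustration of the MAP-inference-under-perturbation idea it builds on, the following is a minimal sketch, assuming a toy two-gene Boolean network with bit-flip noise and a perturbation that clamps one gene. All names and rules here (rule_and, rule_or, FLIP, etc.) are illustrative assumptions, not the authors' implementation.

# A minimal sketch (not the paper's method) of MAP model inference for a
# Boolean network under a perturbation: maintain a posterior over candidate
# update rules and pick the model that maximizes it.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Two-gene Boolean network: each candidate model assigns one rule per gene.
def rule_and(x):  # gene turns on iff both genes were on
    return int(x[0] and x[1])

def rule_or(x):   # gene turns on iff at least one gene was on
    return int(x[0] or x[1])

RULES = [rule_and, rule_or]
MODELS = list(itertools.product(RULES, repeat=2))  # 4 candidate models

FLIP = 0.05  # assumed noise: each gene's update flips with this probability

def step(model, x, perturb=None):
    """One synchronous update; `perturb=(gene, value)` clamps a gene."""
    nxt = np.array([f(x) for f in model])
    flips = rng.random(2) < FLIP
    nxt = np.where(flips, 1 - nxt, nxt)
    if perturb is not None:
        gene, val = perturb
        nxt[gene] = val
    return nxt

def log_lik(model, traj, perturb):
    """Log-likelihood of an observed trajectory under a candidate model."""
    ll = 0.0
    for x, y in zip(traj[:-1], traj[1:]):
        pred = np.array([f(x) for f in model])
        for g in range(2):
            if perturb is not None and g == perturb[0]:
                continue  # a clamped gene carries no information
            p = 1 - FLIP if y[g] == pred[g] else FLIP
            ll += np.log(p)
    return ll

# Generate data from a "true" model under a perturbation clamping gene 0 on.
true_model = MODELS[1]  # (rule_and, rule_or)
perturb = (0, 1)
x = np.array([1, 0])
traj = [x]
for _ in range(30):
    x = step(true_model, x, perturb)
    traj.append(x)

# Uniform prior over candidates, so the MAP model is the ML model here.
scores = [log_lik(m, traj, perturb) for m in MODELS]
print("MAP model index:", int(np.argmax(scores)),
      "(true:", MODELS.index(true_model), ")")

Note that candidates differing only in the rule of the clamped gene receive identical scores under this perturbation; that tie is precisely the non-identifiability that a well-chosen sequence of perturbations, as targeted by the paper's lookahead policy, aims to resolve.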
