A review of some techniques for inclusion of domain-knowledge into deep neural networks
- PMID: 35058487
- PMCID: PMC8776800
- DOI: 10.1038/s41598-021-04590-0
Abstract
We present a survey of ways in which existing scientific knowledge is included when constructing models with deep neural networks. The inclusion of domain-knowledge is of special interest not just for constructing scientific assistants, but also for many other areas that involve understanding data through human-machine collaboration. In many such instances, machine-based model construction may benefit significantly from being provided with human knowledge of the domain, encoded in a sufficiently precise form. This paper examines the inclusion of domain-knowledge by means of changes to: the input, the loss-function, and the architecture of deep networks. The categorisation is for ease of exposition: in practice, we expect a combination of such changes to be employed. In each category, we describe techniques that have been shown to yield significant changes in the performance of deep neural networks.
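As a rough illustration of the loss-function route described above, the sketch below (our own minimal example in PyTorch, not code from any of the surveyed works) adds a penalty term to a standard data loss whenever the network's predictions violate a hypothetical piece of domain-knowledge: here, that the predicted quantity is non-negative and non-decreasing in the first input feature. The constraint, the weighting parameter `lam`, and the toy network are all assumptions made purely for illustration.

```python
# Sketch: injecting domain-knowledge via the loss-function by penalising
# constraint violations. The specific constraints are hypothetical.
import torch
import torch.nn as nn

class KnowledgeAugmentedLoss(nn.Module):
    """Data-fit loss plus a weighted penalty for violating domain constraints."""
    def __init__(self, lam: float = 1.0):
        super().__init__()
        self.data_loss = nn.MSELoss()
        self.lam = lam  # trade-off between data fit and constraint satisfaction

    def forward(self, model, x, y):
        pred = model(x)
        loss = self.data_loss(pred, y)

        # Constraint 1 (assumed): predictions should be non-negative.
        loss = loss + self.lam * torch.relu(-pred).mean()

        # Constraint 2 (assumed): predictions should not decrease when the
        # first input feature is increased; checked on a perturbed batch copy.
        x_shift = x.clone()
        x_shift[:, 0] += 0.1
        loss = loss + self.lam * torch.relu(model(x) - model(x_shift)).mean()
        return loss

# Minimal usage with a toy network and random data.
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = KnowledgeAugmentedLoss(lam=0.5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 4)
y = torch.rand(64, 1)  # toy non-negative targets
opt.zero_grad()
loss = criterion(model, x, y)
loss.backward()
opt.step()
```

Analogous examples could be built for the other two categories: transforming or augmenting the input with knowledge-derived features, or constraining the architecture itself (for example, fixing weights or connectivity to reflect known relations).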
© 2022. The Author(s).
Conflict of interest statement
The authors declare no competing interests.