Review
2023 Aug 22;6:1124718. doi: 10.3389/frai.2023.1124718. eCollection 2023.

Learning and reasoning with graph data


Manfred Jaeger. Front Artif Intell. 2023;6:1124718.

Abstract

Reasoning about graphs and learning from graph data form a field of artificial intelligence that has recently received much attention in the machine learning areas of graph representation learning and graph neural networks. Graphs are also the underlying structures of interest in a wide range of more traditional fields, ranging from logic-oriented knowledge representation and reasoning to graph kernels and statistical relational learning. In this review we outline a broad map and inventory of the field of learning and reasoning with graphs that spans the spectrum from reasoning in the form of logical deduction to learning node embeddings. To obtain a unified perspective on such a diverse landscape, we introduce a simple and general semantic concept of a model that covers logic knowledge bases, graph neural networks, kernel support vector machines, and many other types of frameworks. Still at a high semantic level, we survey common strategies for model specification using probabilistic factorization and standard feature-construction techniques. Based on this semantic foundation, we introduce a taxonomy of reasoning tasks that casts problems ranging from transductive link prediction to asymptotic analysis of random graph models as queries of different complexities for a given model. Similarly, we express learning in different frameworks and settings in terms of a common statistical maximum likelihood principle. Overall, this review aims to provide a coherent conceptual framework that serves as a basis for further theoretical analyses of the respective strengths and limitations of different approaches to handling graph data, and that facilitates the combination and integration of different modeling paradigms.
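The abstract's unifying maximum likelihood principle can be illustrated with the simplest random graph model. The following is a minimal sketch (not taken from the paper, and using a hypothetical Erdős–Rényi setting the review only alludes to via "asymptotic analysis of random graph models"): the likelihood-maximizing edge probability for G(n, p) is just the observed edge density, which a grid search over the log-likelihood confirms.

```python
import math

def er_log_likelihood(p, n_edges, n_pairs):
    # Log-likelihood of an Erdos-Renyi G(n, p) model for an observed graph:
    # each of the n_pairs node pairs is an edge independently with probability p.
    return n_edges * math.log(p) + (n_pairs - n_edges) * math.log(1 - p)

def er_mle(n_nodes, edges):
    # The closed-form maximum-likelihood estimate of p is the edge density.
    n_pairs = n_nodes * (n_nodes - 1) // 2
    return len(edges) / n_pairs

# A small undirected graph on 4 nodes with 3 of its 6 possible edges.
edges = [(0, 1), (1, 2), (2, 3)]
p_hat = er_mle(4, edges)
print(p_hat)  # 0.5

# The same estimate emerges by maximizing the log-likelihood over a grid.
best = max((k / 100 for k in range(1, 100)),
           key=lambda p: er_log_likelihood(p, len(edges), 6))
print(best)  # 0.5
```

The same pattern (write down a likelihood for the observed graph, then maximize it) underlies the learning settings the review unifies, from Markov random field parameter estimation to fitting node-embedding models.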

Keywords: graph data; graph neural networks; inductive logic programming; neuro-symbolic integration; representation learning; statistical relational learning.


Conflict of interest statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1. Reasoning landscape.
Figure 2. Framework classes and representatives.
Figure 3. Initial node features and their use: (A) node identifiers; (B) node attributes; (C) vacuous.
Figure 4. Indistinguishable nodes.
Figure 5. Some expressivity relations.

