A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhance transparency and interpretability in cybersecurity
- PMID: 40040929
- PMCID: PMC11877648
- DOI: 10.3389/frai.2025.1526221
Abstract
The rise of sophisticated cyber threats has spurred advancements in Intrusion Detection Systems (IDS), which are crucial for identifying and mitigating security breaches in real time. Traditional IDS often rely on complex machine learning algorithms that, despite their high accuracy, lack transparency, creating a "black box" effect that can hinder analysts' understanding of their decision-making processes. Explainable Artificial Intelligence (XAI) offers a promising solution by providing interpretability and transparency, enabling security professionals to better understand, trust, and optimize IDS models. This paper presents a systematic review of the integration of XAI in IDS, focusing on enhancing transparency and interpretability in cybersecurity. Through a comprehensive analysis of recent studies, this review identifies commonly used XAI techniques, evaluates their effectiveness within IDS frameworks, and examines their benefits and limitations. Findings indicate that rule-based and tree-based XAI models are preferred for their interpretability, although trade-offs with detection accuracy remain a challenge. Furthermore, the review highlights critical gaps in standardization and scalability, emphasizing the need for hybrid models and real-time explainability. The paper concludes with recommendations for future research, suggesting improvements in XAI techniques tailored to IDS, standardized evaluation metrics, and ethical frameworks that prioritize both security and transparency. This review aims to inform researchers and practitioners about current trends and future opportunities in leveraging XAI to enhance IDS effectiveness, fostering a more transparent and resilient cybersecurity landscape.
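As a concrete illustration of the rule- and tree-based models the review finds preferred for interpretability, the sketch below trains a shallow decision tree on synthetic flow-level features and prints its decision rules verbatim. This is a minimal sketch assuming scikit-learn; the feature names, thresholds, and labeling rule are invented placeholders, not drawn from any study covered by the review.

```python
# Minimal sketch: an interpretable tree-based IDS classifier whose decision
# rules can be printed for an analyst to audit. All features and labels here
# are synthetic placeholders, not taken from the reviewed studies.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic flow-level features loosely modeled on common IDS datasets
# (flow duration, bytes sent, failed-login count).
X = np.column_stack([
    rng.exponential(2.0, n),      # flow duration (seconds)
    rng.exponential(500.0, n),    # bytes sent
    rng.poisson(0.2, n),          # failed login attempts
])
# Toy ground truth: repeated failed logins or very large transfers -> attack.
y = ((X[:, 2] >= 2) | (X[:, 1] > 2500)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# A shallow tree deliberately trades some accuracy for rules short enough
# to audit, mirroring the interpretability/accuracy trade-off noted above.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
print(export_text(
    clf, feature_names=["duration_s", "bytes_sent", "failed_logins"]
))
```

The printed rule list is compact enough for a security analyst to inspect directly, which is the property the review associates with rule- and tree-based XAI; deeper or ensemble detectors would instead need post-hoc explainers such as SHAP or LIME, reintroducing the black-box concern the abstract describes.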
Keywords: cyber threats; explainable artificial intelligence; intrusion detection systems; machine learning; model explainability; model interpretability; systematic review.
Copyright © 2025 Mohale and Obagbuwa.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.