A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhance transparency and interpretability in cybersecurity
- PMID: 40040929
- PMCID: PMC11877648
- DOI: 10.3389/frai.2025.1526221
Abstract
The rise of sophisticated cyber threats has spurred advances in Intrusion Detection Systems (IDS), which are crucial for identifying and mitigating security breaches in real time. Traditional IDS often rely on complex machine learning algorithms that, despite their high accuracy, lack transparency, creating a "black box" effect that can hinder analysts' understanding of their decision-making processes. Explainable Artificial Intelligence (XAI) offers a promising solution by providing interpretability and transparency, enabling security professionals to better understand, trust, and optimize IDS models. This paper presents a systematic review of the integration of XAI in IDS, focusing on enhancing transparency and interpretability in cybersecurity. Through a comprehensive analysis of recent studies, this review identifies commonly used XAI techniques, evaluates their effectiveness within IDS frameworks, and examines their benefits and limitations. Findings indicate that rule-based and tree-based XAI models are preferred for their interpretability, although trade-offs against detection accuracy remain a challenge. Furthermore, the review highlights critical gaps in standardization and scalability, emphasizing the need for hybrid models and real-time explainability. The paper concludes with recommendations for future research, suggesting improvements in XAI techniques tailored for IDS, standardized evaluation metrics, and ethical frameworks that prioritize security and transparency. This review aims to inform researchers and practitioners about current trends and future opportunities in leveraging XAI to enhance IDS effectiveness, fostering a more transparent and resilient cybersecurity landscape.
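The abstract's finding that rule-based and tree-based models are favored for interpretability can be illustrated with a minimal sketch: a shallow decision tree trained on synthetic network-flow features, with its fitted decision path exported as human-readable if/else rules. This sketch assumes scikit-learn; the feature names, labels, and thresholds below are hypothetical stand-ins, not drawn from the reviewed studies.

```python
# Minimal sketch of an inherently interpretable, tree-based IDS classifier.
# Assumes scikit-learn; features, labels, and thresholds are synthetic
# illustrations, not taken from the reviewed studies.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(seed=0)

# Hypothetical flow features: duration (s), bytes transferred, failed logins.
X = rng.random((500, 3)) * np.array([10.0, 1e6, 5.0])
y = (X[:, 2] > 3.0).astype(int)  # toy rule: many failed logins => intrusion

# A shallow tree keeps the fitted model small enough to read end to end.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the tree as if/else rules: the "explanation" an analyst
# can audit directly, in contrast to a black-box model's opaque scores.
print(export_text(clf, feature_names=["duration", "bytes", "failed_logins"]))
```

The printed rules expose exactly which feature thresholds drive each alert, which is the transparency property the review attributes to rule- and tree-based XAI models.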
Keywords: cyber threats; explainable artificial intelligence; intrusion detection systems; machine learning; model explainability; model interpretability; systematic review.
Copyright © 2025 Mohale and Obagbuwa.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Similar articles
- Explainable artificial intelligence for botnet detection in internet of things. Sci Rep. 2025 Mar 4;15(1):7632. doi: 10.1038/s41598-025-90420-6. PMID: 40038372. Free PMC article.
- An Intrusion Detection System over the IoT Data Streams Using eXplainable Artificial Intelligence (XAI). Sensors (Basel). 2025 Jan 30;25(3):847. doi: 10.3390/s25030847. PMID: 39943488. Free PMC article.
- Explainable Artificial Intelligence in Radiological Cardiovascular Imaging-A Systematic Review. Diagnostics (Basel). 2025 May 31;15(11):1399. doi: 10.3390/diagnostics15111399. PMID: 40506971. Free PMC article. Review.
- A literature review of artificial intelligence (AI) for medical image segmentation: from AI and explainable AI to trustworthy AI. Quant Imaging Med Surg. 2024 Dec 5;14(12):9620-9652. doi: 10.21037/qims-24-723. Epub 2024 Nov 29. PMID: 39698664. Free PMC article. Review.
- Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence. Bioengineering (Basel). 2024 Apr 12;11(4):369. doi: 10.3390/bioengineering11040369. PMID: 38671790. Free PMC article. Review.
Cited by
- Early warning score and feasible complementary approach using artificial intelligence-based bio-signal monitoring system: a review. Biomed Eng Lett. 2025 Jun 25;15(4):717-734. doi: 10.1007/s13534-025-00486-4. eCollection 2025 Jul. PMID: 40621610. Free PMC article. Review.