Front Artif Intell. 2025 Jan 28;8:1526221.
doi: 10.3389/frai.2025.1526221. eCollection 2025.

A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhance transparency and interpretability in cybersecurity


Vincent Zibi Mohale et al. Front Artif Intell. 2025.

Abstract

The rise of sophisticated cyber threats has spurred advancements in Intrusion Detection Systems (IDS), which are crucial for identifying and mitigating security breaches in real time. Traditional IDS often rely on complex machine learning algorithms that, despite their high accuracy, lack transparency, creating a "black box" effect that can hinder analysts' understanding of their decision-making processes. Explainable Artificial Intelligence (XAI) offers a promising solution by providing interpretability and transparency, enabling security professionals to better understand, trust, and optimize IDS models. This paper presents a systematic review of the integration of XAI in IDS, focusing on enhancing transparency and interpretability in cybersecurity. Through a comprehensive analysis of recent studies, this review identifies commonly used XAI techniques, evaluates their effectiveness within IDS frameworks, and examines their benefits and limitations. Findings indicate that rule-based and tree-based XAI models are preferred for their interpretability, though trade-offs against detection accuracy remain a challenge. Furthermore, the review highlights critical gaps in standardization and scalability, emphasizing the need for hybrid models and real-time explainability. The paper concludes with recommendations for future research directions, suggesting improvements in XAI techniques tailored for IDS, standardized evaluation metrics, and ethical frameworks prioritizing security and transparency. This review aims to inform researchers and practitioners about current trends and future opportunities in leveraging XAI to enhance IDS effectiveness, fostering a more transparent and resilient cybersecurity landscape.
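To illustrate why rule-based models are valued for interpretability in IDS, the following is a minimal, hypothetical sketch (not drawn from the reviewed studies): a detector whose verdict is accompanied by the exact rules that fired, so an analyst can audit every decision. The feature names and thresholds are illustrative assumptions.

```python
# Hypothetical rule-based intrusion detector: each verdict carries a
# human-readable explanation (the list of rules that fired). Feature
# names and thresholds are illustrative, not taken from the paper.

RULES = [
    ("high connection rate", lambda f: f["conn_per_sec"] > 100),
    ("unusual destination port", lambda f: f["dst_port"] not in {22, 80, 443}),
    ("oversized payload", lambda f: f["payload_bytes"] > 10_000),
]

def classify(features):
    """Return (verdict, explanation): the verdict plus the rules that fired."""
    fired = [name for name, predicate in RULES if predicate(features)]
    verdict = "intrusion" if len(fired) >= 2 else "benign"
    return verdict, fired

verdict, why = classify(
    {"conn_per_sec": 250, "dst_port": 6667, "payload_bytes": 512}
)
print(verdict, why)  # two rules fire, so the flow is flagged as an intrusion
```

Unlike a black-box classifier, the explanation here is the model itself: the analyst sees precisely which conditions triggered the alert, at the cost of the expressiveness a deep model would offer.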

Keywords: cyber threats; explainable artificial intelligence; intrusion detection systems; machine learning; model explainability; model interpretability; systematic review.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1. PRISMA flowchart.

