IEEE Trans Neural Syst Rehabil Eng. 2025;33:1863-1875.
doi: 10.1109/TNSRE.2025.3558730. Epub 2025 May 15.

Derivative-Guided Dual-Attention Mechanisms in Patch Transformer for Efficient Automated Recognition of Auditory Brainstem Response Latency

Yin Liu et al. IEEE Trans Neural Syst Rehabil Eng. 2025.

Abstract

Accurate recognition of auditory brainstem response (ABR) wave latencies is essential for clinical practice but remains a subjective and time-consuming process. Existing AI approaches struggle with generalization, model complexity, and the semantic sparsity that comes from analyzing single sampling points. This study introduces the Derivative-Guided Patch Dual-Attention Transformer (Patch-DAT), a novel, lightweight, and generalizable deep learning (DL) model for the automated recognition of latencies for waves I, III, and V. Patch-DAT divides the ABR time series into overlapping patches to aggregate semantic information, better capturing local temporal patterns. Meanwhile, leveraging the fact that ABR waves occur at zero crossings of the first derivative, Patch-DAT incorporates a first-derivative-guided dual-attention mechanism to model global dependencies. Trained and validated on large-scale, diverse datasets from two hospitals, Patch-DAT (with a size of 0.36 MB) achieves accuracies of 92.29% and 98.07% at 0.1 ms and 0.2 ms error scales, respectively, on a held-out test set. It also performs well on an independent dataset, with accuracies of 88.50% and 95.14%, demonstrating strong generalization across clinical settings. Ablation studies highlight the contributions of the patching strategy and dual-attention mechanisms. Compared to previous state-of-the-art DL models, Patch-DAT shows superior accuracy and reduced complexity, making it a promising solution for objective recognition of ABR latencies. Additionally, we systematically investigate how sample size and data heterogeneity affect model generalization, underscoring the importance of large, diverse datasets for training robust DL models. Future work will focus on expanding dataset diversity and improving model interpretability to strengthen clinical relevance.
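The abstract's two core ideas, overlapping patch embedding of the ABR time series and a first-derivative zero-crossing signal used to guide attention, can be sketched in a few lines of PyTorch. The sketch below is an illustration only and not the authors' implementation: the patch length, stride, embedding size, the additive-bias scheme, and the use of a single biased attention branch (rather than the paper's full dual-attention design) are all assumptions made for clarity.

# Minimal sketch (not the authors' code) of overlapping patch embedding and a
# derivative-guided attention bias, assuming hypothetical sizes and a single
# biased attention branch instead of the full dual-attention mechanism.

import torch
import torch.nn as nn
import torch.nn.functional as F


class OverlappingPatchEmbed(nn.Module):
    """Split a 1-D signal into overlapping patches and project each to d_model."""

    def __init__(self, patch_len=16, stride=8, d_model=64):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.proj = nn.Linear(patch_len, d_model)

    def forward(self, x):                                      # x: (batch, length)
        patches = x.unfold(-1, self.patch_len, self.stride)    # (batch, n_patches, patch_len)
        return self.proj(patches)                              # (batch, n_patches, d_model)


def derivative_bias(x, patch_len=16, stride=8, scale=1.0):
    """Additive attention bias favoring patches that contain a zero crossing
    of the first derivative (where ABR wave peaks occur)."""
    dx = x[:, 1:] - x[:, :-1]                                  # finite-difference derivative
    zc = ((dx[:, :-1] * dx[:, 1:]) < 0).float()                # 1 where the derivative changes sign
    zc = F.pad(zc, (1, 1))                                     # pad back to the original length
    per_patch = zc.unfold(-1, patch_len, stride).sum(-1)       # zero crossings per patch
    return scale * per_patch                                   # (batch, n_patches)


class DerivativeGuidedAttention(nn.Module):
    """Self-attention over patch tokens with the derivative bias added to the scores."""

    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, tokens, bias):                           # bias: (batch, n_patches)
        b, n, _ = tokens.shape
        # Broadcast the per-patch (key) bias over heads and query positions.
        mask = bias[:, None, None, :].expand(b, self.attn.num_heads, n, n)
        mask = mask.reshape(b * self.attn.num_heads, n, n)
        out, _ = self.attn(tokens, tokens, tokens, attn_mask=mask)
        return self.norm(tokens + out)


if __name__ == "__main__":
    x = torch.randn(2, 256)                                    # two synthetic ABR traces
    tokens = OverlappingPatchEmbed()(x)
    bias = derivative_bias(x)
    print(DerivativeGuidedAttention()(tokens, bias).shape)     # (2, n_patches, 64)

In this sketch the bias simply counts derivative zero crossings per patch and adds that count to the attention scores of the corresponding keys; the paper's actual guidance mechanism may differ.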
