UNCONSTRAINED DYSFLUENCY MODELING FOR DYSFLUENT SPEECH TRANSCRIPTION AND DETECTION
- PMID: 40625646
- PMCID: PMC12233912
- DOI: 10.1109/asru57964.2023.10389771
Abstract
Dysfluent speech modeling requires time-accurate and silence-aware transcription at both the word and phonetic levels. However, current research in dysfluency modeling focuses primarily on either transcription or detection, and the performance of each remains limited. In this work, we present an unconstrained dysfluency modeling (UDM) approach that addresses both transcription and detection in an automatic and hierarchical manner. By providing a comprehensive end-to-end solution, UDM eliminates the need for extensive manual annotation. Furthermore, we introduce a simulated dysfluent dataset called VCTK++ to enhance the capabilities of UDM in phonetic transcription. Our experimental results demonstrate the effectiveness and robustness of the proposed methods on both transcription and detection tasks.
Keywords: detection; dysfluent speech; transcription.
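To make the notion of "time-accurate and silence-aware" transcription concrete, the sketch below shows one plausible way such output could be represented and consumed. This is a hypothetical illustration, not the paper's actual data format or detection method: the segment structure, labels, and `find_repetitions` helper are all assumptions introduced here for clarity. The key idea it demonstrates is that silences and repeated phonemes are kept in the transcript rather than collapsed, so dysfluencies such as sound repetitions remain detectable downstream.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One time-aligned unit of a phonetic transcript (hypothetical format)."""
    label: str    # phoneme label, or "SIL" for silence
    start: float  # start time in seconds
    end: float    # end time in seconds

def find_repetitions(segments):
    """Flag consecutive identical non-silence labels (possibly separated by
    silence) as candidate sound repetitions -- a toy stand-in for detection."""
    flags = []
    prev = None
    for seg in segments:
        if seg.label != "SIL":
            if prev is not None and seg.label == prev.label:
                flags.append((prev, seg))
            prev = seg
    return flags

# Example: "p- please" -- a repeated /P/ separated by a short pause.
segs = [Segment("P", 0.00, 0.10), Segment("SIL", 0.10, 0.25),
        Segment("P", 0.25, 0.35), Segment("L", 0.35, 0.45),
        Segment("IY", 0.45, 0.60), Segment("Z", 0.60, 0.70)]
print(len(find_repetitions(segs)))  # -> 1 (one candidate repetition)
```

Because the silence between the two /P/ segments is represented explicitly rather than discarded, the repetition survives alignment and can be flagged; a transcript that collapsed silences and merged adjacent identical phonemes would hide it.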