Community-Supported Shared Infrastructure in Support of Speech Accessibility

Mark Hasegawa-Johnson et al. J Speech Lang Hear Res. 2024 Nov 7;67(11):4162-4175. doi: 10.1044/2024_JSLHR-24-00122. Epub 2024 Sep 26.

Abstract

Purpose: The Speech Accessibility Project (SAP) intends to facilitate research and development in automatic speech recognition (ASR) and other machine learning tasks for people with speech disabilities. The purpose of this article is to introduce the project as a resource for researchers, including a baseline analysis of the first released data package.

Method: The project aims to facilitate ASR research by collecting, curating, and distributing transcribed U.S. English speech from people with speech and/or language disabilities. Participants record speech from their places of residence by connecting their personal computers, cell phones, and, if needed, assistive devices to the SAP web portal. All samples are manually transcribed, and 30 per participant are annotated using differential diagnostic pattern dimensions. For the purposes of ASR experiments, the participants have been randomly assigned to a training set, a development set for controlled testing of a trained ASR, and a test set to evaluate ASR error rate.
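
As an illustration of this speaker-level partitioning, the minimal Python sketch below assigns whole speakers, rather than individual recordings, to the three sets, so that no participant's speech appears in more than one partition. The function name, split fractions, random seed, and speaker IDs are hypothetical; the abstract states only that assignment was random at the participant level.

    import random

    def split_speakers(speaker_ids, dev_frac=0.1, test_frac=0.1, seed=0):
        """Randomly assign whole speakers to train/dev/test partitions,
        so no speaker's recordings leak across partitions."""
        ids = list(speaker_ids)
        random.Random(seed).shuffle(ids)
        # Hypothetical split fractions; the abstract does not give the
        # actual proportions used by the SAP.
        n_test = int(len(ids) * test_frac)
        n_dev = int(len(ids) * dev_frac)
        return {
            "test": ids[:n_test],
            "dev": ids[n_test:n_test + n_dev],
            "train": ids[n_test + n_dev:],
        }

    # Hypothetical IDs; the 2023-10-05 package has 211 train/dev speakers
    # plus 42 additional test speakers.
    splits = split_speakers([f"spk{i:03d}" for i in range(253)])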

Results: The SAP 2023-10-05 Data Package contains the speech of 211 people with dysarthria as a correlate of Parkinson's disease, and the associated test set contains 42 additional speakers. A baseline ASR, with a word error rate of 3.4% for typical speakers, transcribes test speech with a word error rate of 36.3%. Fine-tuning reduces the word error rate to 23.7%.
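
Word error rate here is the standard edit-distance metric: the minimum number of word substitutions, deletions, and insertions needed to turn the ASR hypothesis into the reference transcript, divided by the number of reference words. The following self-contained Python sketch computes it by dynamic programming; the example sentences are invented for illustration.

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """(substitutions + deletions + insertions) / reference word count,
        computed via standard dynamic-programming edit distance over words."""
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between ref[:i] and hyp[:j]
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,           # deletion
                              d[i][j - 1] + 1,           # insertion
                              d[i - 1][j - 1] + cost)    # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    print(word_error_rate("set an alarm for six", "set alarm for sick"))  # 0.4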

Conclusions: Preliminary findings suggest that a large corpus of dysarthric and dysphonic speech has the potential to significantly improve speech technology for people with disabilities. By providing these data to researchers, the SAP intends to significantly accelerate research into accessible speech technology.

Supplemental material: https://doi.org/10.23641/asha.27078079.


Figures

Figure 1. Best published word error rates (%), up to each specified year, of automatic speech recognizers tested on standard, widely available corpora of dysarthric speech (orange, top curve) and nondysarthric speech (blue, bottom curve). ASR = automatic speech recognition.
Figure 2. The differential diagnostic pattern dimensions in the study of Darley et al. (1969) are a set of 41 Likert scales, each describing a dimension of disability audible in an utterance. A total of 5,342 utterances in the 2023-10-05 SAP data package (an average of 25.3 per participant) were rated using differential diagnostic pattern dimensions. This figure shows histograms of those 5,342 utterances, as rated using 10 of the differential diagnostic pattern dimensions.
Figure 3. Examples of utterances with low, moderate, and high intelligibility loss. (a) Utterance with low intelligibility loss, but highly breathy voice: "My favorite book … " (b) Utterance with moderate intelligibility loss, glottalized on the first word: "Set an alarm … " (c) Utterance with high intelligibility loss: "How's the traffic … " Note the drop in loudness by 20 dB following the first syllable, the sonorance of the two fricatives, and the reduction of the /r/ in "traffic" to a /w/. (d) Utterance with high intelligibility loss and high prevalence of repeated phonemes: "Create a (g-) grocery sh-." Note approximately 10 attempts to produce the /g/ in grocery, from 1.56 s to 2.50 s.
Figure 4. Examples of unusually fast and unusually slow speech. (a) Unusually fast, hypoarticulated speech: "That is the way that they keep elephants at the circus, you know" (16 canonical syllables, 14 produced syllables, 2 s). (b) Unusually slow, creaky speech: "Set a reminder … " (four syllables in 2 s).
Figure 5. Average word error rate on the unshared prompts spoken by each person in the test corpus, plotted as a function of the number of words in the corresponding reference transcript. WER = word error rate.

