Biology (Basel). 2024 Jun 5;13(6):416. doi: 10.3390/biology13060416.

Predictors of Speech-in-Noise Understanding in a Population of Occupationally Noise-Exposed Individuals

Guillaume Andéol et al.

Abstract

Understanding speech in noise is particularly difficult for individuals occupationally exposed to noise, owing to a mix of noise-induced auditory lesions and the energetic masking of speech signals. For years, monitoring conventional audiometric thresholds has been the usual method of checking and preserving auditory function. Recently, suprathreshold deficits, notably difficulties in understanding speech in noise, have pointed out the need for new monitoring tools. The present study aims to identify the most important variables that predict speech-in-noise understanding, in order to suggest a new method of monitoring hearing status. Physiological (distortion products of otoacoustic emissions, electrocochleography) and behavioral (amplitude and frequency modulation detection thresholds, conventional and extended high-frequency audiometric thresholds) variables were collected in a population of individuals with a relatively homogeneous occupational noise exposure. These variables were used as predictors in a statistical model (random forest) to predict the scores of three different speech-in-noise tests and a self-report of speech-in-noise ability. The extended high-frequency threshold appears to be the best predictor and is therefore an interesting candidate for a new way of monitoring noise-exposed professionals.
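For illustration, a minimal Python sketch of the modeling approach described above: a random forest regressor predicting a speech-in-noise score from audiometric, physiological, and behavioral predictors. The file name and column names are hypothetical placeholders, not the study's actual data or pipeline.

```python
# Hypothetical sketch: random forest prediction of a speech-in-noise score.
# File path and column names are illustrative, not the study's data.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("predictors.csv")  # hypothetical file: one row per participant
X = df[["ehf_threshold_right", "pta_left", "am_detection_500Hz_60SL",
        "fm_detection_500Hz_60SL", "dpoae_2000Hz_left", "wave_I_amplitude_80nHL"]]
y = df["consonant_identification_score"]

# Random forest regression, as in the study's statistical model
rf = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="neg_mean_squared_error")
print("Cross-validated MSE:", -scores.mean())
```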

Keywords: amplitude modulation detection; distortion products of otoacoustic emissions; electrocochleography; extended high frequency; frequency modulation detection; hearing questionnaire; speech in noise.


Conflict of interest statement

Authors Nihaad Paraouty and Nicolas Wallaert were employed by the company iAudiogram—My Medical Assistant SAS. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure A1
Amplitude modulation detection threshold as a function of sensation level and carrier frequency. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.
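The figures above and below share the same boxplot convention (median bar, interquartile-range box, whiskers capped at 1.5 times the interquartile range, individual points overlaid). A minimal Python sketch of that convention, using random placeholder data rather than the study's measurements:

```python
# Sketch of the boxplot convention used throughout the figures.
# Data are random placeholders, not study results.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = [rng.normal(loc=m, scale=3, size=30) for m in (-6, -3, 0)]  # placeholder thresholds

fig, ax = plt.subplots()
ax.boxplot(data, whis=1.5, showfliers=False)   # whiskers within 1.5 x IQR
for i, d in enumerate(data, start=1):          # overlay individual points
    jitter = rng.uniform(-0.1, 0.1, d.size)
    ax.scatter(np.full_like(d, i) + jitter, d, s=10, alpha=0.6)
ax.set_xlabel("Condition")
ax.set_ylabel("Detection threshold (dB)")
plt.show()
```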
Figure A2
Frequency modulation detection threshold as a function of sensation level and carrier frequency. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.
Figure A3
Wave I amplitude as a function of click level and ear. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.
Figure A4
Electrocochleography wave I slope as a function of the ear. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.
Figure A5
DPOAE as a function of frequency and ear. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.
Figure A6
Upper frequency bound below which 99% of the total spectral power of the speech signals is contained, for the three speech corpora. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual speech sounds. The dashed line marks 8000 Hz.
Figure A7
Ratio of acoustical power contained in high vs. low frequencies, with a cutoff frequency of 8 kHz, for the three speech corpora. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual speech sounds. Most speech sounds have at least 20 dB more energy in the low (below 8 kHz) than in the high frequencies.
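As a hedged sketch of the two spectral metrics shown in Figures A6 and A7, the following Python function computes, for a mono speech waveform, (1) the frequency below which 99% of the total spectral power lies and (2) the dB ratio of power above vs. below an 8 kHz cutoff. The exact computation used in the study may differ.

```python
# Sketch of the spectral metrics in Figures A6 and A7 (assumed implementation).
import numpy as np

def spectral_metrics(signal, fs, cutoff_hz=8000.0, fraction=0.99):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cum = np.cumsum(spectrum) / np.sum(spectrum)          # cumulative power fraction
    upper_bound = freqs[np.searchsorted(cum, fraction)]   # 99% power bound (Hz)
    low = spectrum[freqs < cutoff_hz].sum()               # power below the cutoff
    high = spectrum[freqs >= cutoff_hz].sum()             # power above the cutoff
    hf_lf_ratio_db = 10.0 * np.log10(high / low)          # negative when low frequencies dominate
    return upper_bound, hf_lf_ratio_db
```

A ratio of about -20 dB or lower for a given speech sound corresponds to the observation above that most sounds carry at least 20 dB more energy below 8 kHz than above it.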
Figure 1
Distribution of the age of participants. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Each dot shows the age of one participant.
Figure 2
Exterior view of the mobile hearing laboratory.
Figure 3
Interior view of the mobile hearing laboratory. At the center right, a video screen displays images of the participants seated in the four booths. At the center left are four portable “follower” computers with fold-down screens, to which the screen, keyboard, and mouse of each booth are connected. Beneath them, at the bottom center, is the “leader” computer, whose screen is visible, along with the screens of the “follower” computers.
Figure 4
Audiometric thresholds as a function of frequency for the left and right ears (N = 70). The black line shows the median; the gray area shows the interquartile range.
Figure 5
Performance on each speech-in-noise audiometry test; each dot shows the result of one participant. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range.
Figure 6
Main predictors of the consonant identification score. Importance is measured as the increase in mean square error (MSE) for the nine most important variables. The larger the value, the more important the variable. See Table 1 for abbreviations.
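An illustrative sketch of variable importance measured as an increase in MSE, in the spirit of the random-forest importance reported in Figures 6, 8, 10 and 12; this uses permutation-style importance on toy data, and the study's exact implementation may differ.

```python
# Toy sketch: permutation-style importance as an increase in MSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 6))                      # 70 participants, 6 predictors (toy data)
y = 2 * X[:, 0] + rng.normal(scale=0.5, size=70)  # outcome driven mostly by predictor 0

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
result = permutation_importance(rf, X, y, scoring="neg_mean_squared_error",
                                n_repeats=30, random_state=0)
# importances_mean = mean increase in MSE when each predictor is shuffled
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"predictor {idx}: MSE increase = {result.importances_mean[idx]:.3f}")
```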
Figure 7
Scatter plots of the nine most important predictors of the consonant identification score. (A). Right ear EHF threshold. (B). Amplitude modulation detection threshold at 60 dB SL at 500 Hz. (C). Left ear 8000 Hz threshold. (D). Left ear pure tone average. (E). Left ear wave I amplitude at 80 dB nHL. (F). Years of motorcycling. (G). Best ear pure tone average. (H). Frequency modulation detection threshold at 60 dB SL at 500 Hz. (I). Left ear EHF threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit. The gray region indicates the 95% confidence interval of the regression line.
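A minimal Python sketch of the per-panel analysis used in these scatter-plot figures: a Spearman correlation with its p-value, plus a linear fit drawn only when the correlation is significant. Data and variable names are placeholders, not the study's.

```python
# Sketch of one scatter-plot panel: Spearman correlation + linear fit if p < 0.05.
import numpy as np
from scipy.stats import spearmanr, linregress

rng = np.random.default_rng(1)
ehf_threshold = rng.normal(30, 10, size=70)                 # hypothetical predictor
score = 0.5 * ehf_threshold + rng.normal(scale=5, size=70)  # hypothetical outcome

rho, p_value = spearmanr(ehf_threshold, score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}, n = {len(score)}")

if p_value < 0.05:                     # fit a regression line only for significant correlations
    fit = linregress(ehf_threshold, score)
    print(f"linear fit: slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}")
```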
Figure 8
Word-in-noise recognition. Importance is measured as the increase in mean square error for the nine most important variables. The larger the value, the more important the variable in the model. See Table 1 for abbreviations.
Figure 9
Scatter plots of the nine most important predictors of the word-in-noise recognition threshold. (A). Frequency modulation detection threshold at 60 dB SL at 500 Hz. (B). Right ear EHF threshold. (C). History of hearing pathology. (D). Left ear 1000 Hz threshold. (E). Left ear EHF threshold. (F). Years of motorcycling. (G). Left ear DPOAE at 2000 Hz. (H). Left ear DPOAE at 1000 Hz. (I). Left ear 2000 Hz threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit. The gray region indicates the 95% confidence interval of the regression line.
Figure 10
French matrix test. Importance is measured as the increase in mean square error for the nine most important variables. The larger the value, the more important the variable in the model.
Figure 11
Scatter plots of the nine most important predictors of the French matrix test (FrMatrix) score. (A). Right ear EHF threshold. (B). Years of motorcycling. (C). Right ear DPOAE at 3000 Hz. (D). Amplitude modulation detection threshold at 60 dB SL at 500 Hz. (E). History of hearing pathology. (F). Right ear 4000 Hz threshold. (G). Right ear 125 Hz threshold. (H). Amplitude modulation detection threshold at 60 dB SL at 500 Hz. (I). Left ear EHF threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit. The gray region indicates the 95% confidence interval of the regression line.
Figure 12
Speech-in-noise pragmatic scale. Importance is measured as the increase in mean square error for the nine most important variables. The larger the value, the more important the variable in the model.
Figure 13
Scatter plots of the nine most important predictors of the speech-in-noise pragmatic scale. (A). Years of motorcycling. (B). Frequency modulation detection threshold at 60 dB SL at 500 Hz. (C). Right ear pure tone average. (D). Best ear pure tone average. (E). Right ear 8000 Hz threshold. (F). Left ear EHF threshold. (G). Left ear DPOAE at 3000 Hz. (H). Age. (I). Right ear 1000 Hz threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit. The gray region indicates the 95% confidence interval of the regression line.
Figure 14
Scatter plots showing the correlations between the three speech-in-noise tests: (A). Consonant identification vs. French matrix test. (B). Words-in-noise recognition vs. French matrix test. (C). Consonant identification vs. words-in-noise recognition. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit. The gray region indicates the 95% confidence interval of the regression line.
Figure 15
Scatter plots showing the correlations between the speech-in-noise pragmatic scale and the three speech-in-noise tests. (A). Consonant identification vs. speech-in-noise pragmatic scale. (B). Words-in-noise recognition vs. speech-in-noise pragmatic scale. (C). French matrix test vs. speech-in-noise pragmatic scale. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit. The gray region indicates the 95% confidence interval of the regression line.
