Noise & Health  
ORIGINAL ARTICLE  
Year : 2019  |  Volume : 21  |  Issue : 98  |  Page : 7-16
Selected Cognitive Factors Associated with Individual Variability in Clinical Measures of Speech Recognition in Noise Amplified by Fast-Acting Compression Among Hearing Aid Users
Yumba WK

Department of Behavioral Sciences and Learning, Linköping University, Linköping; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden

Date of Submission03-Nov-2018
Date of Acceptance20-Dec-2019
Date of Web Publication19-Feb-2020
 
  Abstract 


Objective: Previous work examining speech recognition in challenging listening environments has revealed large individual variability among both normal-hearing and hearing-impaired listeners. Although this is clinically very important, no consensus has yet been reached about which factors best explain the individual variability in speech recognition ability among hearing aid users when the speech signal is degraded. This study aimed to examine differences in hearing sensitivity and cognitive ability between listeners with good and poor speech recognition abilities. Materials and Methods: A total of 195 experienced hearing aid users (33–80 years) were grouped by higher or lower speech recognition ability based on their performance on the Hagerman sentences task in multi-talker babble processed with a fast-acting compression algorithm. They completed a battery of cognitive tests, a hearing-in-noise test, and auditory threshold measurements. Results: The two groups differed significantly on cognitive tests of working memory, cognitive processing speed, and attentional shifting, but not on the attentional inhibition test or the non-verbal intelligence test. Conclusions: Listeners with poor speech recognition abilities exhibit poorer cognitive abilities than those with better speech recognition abilities, which places them at a disadvantage and/or makes them more susceptible to signal modifications (introduced by fast-acting compression signal processing), resulting in limited benefit from hearing aid strategies. The findings may have implications for the selection of hearing aid signal processing strategies in rehabilitation.

Keywords: Cognition, hearing aid users, individual variability, speech recognition ability

How to cite this article:
Yumba WK. Selected Cognitive Factors Associated with Individual Variability in Clinical Measures of Speech Recognition in Noise Amplified by Fast-Acting Compression Among Hearing Aid Users. Noise Health 2019;21:7-16

How to cite this URL:
Yumba WK. Selected Cognitive Factors Associated with Individual Variability in Clinical Measures of Speech Recognition in Noise Amplified by Fast-Acting Compression Among Hearing Aid Users. Noise Health [serial online] 2019 [cited 2023 Dec 9];21:7-16. Available from: https://www.noiseandhealth.org/text.asp?2019/21/98/7/278383



  Introduction


In daily communication, persons with hearing loss commonly encounter large variation in the speech signal, which may influence their performance.[1],[2],[3] High individual variability exists in speech recognition in adverse listening situations.[4] Several investigations have suggested that this variation is partly attributable to hearing sensitivity and to cognitive abilities such as working memory, cognitive processing speed, and attentional inhibition.[1–3],[5–10] Although this is clinically very important, no consensus has yet been reached about which factors best explain the individual variability in speech recognition ability among hearing aid users when the speech signal is degraded. This study aimed to examine differences in hearing sensitivity and cognitive ability between listeners with good and poor speech recognition abilities. A number of studies have shown that listeners with comparable degrees and configurations of sensorineural hearing loss frequently exhibit different abilities to recognize speech in challenging listening conditions.[5],[6],[7],[8],[9],[10],[11] Other investigators have observed a link between individuals' cognitive abilities and the variability in speech recognition performance in adverse listening conditions.[5],[12],[13],[14],[15],[16] For example, Lunner and Sundewall-Thorén[17] showed that listeners who struggled more with speech recognition in noise tasks when fast-acting compression was used had poorer cognition, whereas listeners with better cognition showed greater ability to recognize speech in adverse listening situations with fast-acting compression. Such individual susceptibility to adverse listening conditions, such as noise and/or signal degradations created by a hearing aid signal-processing strategy, may explain why persons with hearing loss receive different degrees of benefit from hearing aids.
In other words, listeners with lower cognitive ability exhibit high susceptibility to noise and/or speech signal modifications, which may offset or cancel the intended benefit from hearing aids and result in poorer speech recognition performance.[10],[18] Cognitive ability can be thought of as a set of mental resources used when dealing with various tasks.[16] A recent model by Rönnberg and colleagues, the Ease of Language Understanding (ELU) model, proposes the relative importance of cognitive abilities that may influence individual variability in both unaided and aided speech recognition performance in challenging listening situations, for persons with normal hearing, hearing-impaired persons, and deaf sign language users.[19],[20] According to the ELU model, under more difficult listening conditions (e.g., caused by background noise, hearing loss, or distortions from hearing aid signal processing), when the phonological form of the speech input does not match the phonological representations stored in long-term memory, listeners must rely to a greater extent on their cognitive abilities to unlock the meaning of the message. In those situations, individuals with smaller cognitive capacity may be at a disadvantage, resulting in poorer performance than those with larger cognitive capacity. By contrast, under favourable listening conditions (i.e., matched conditions), listeners rely to a lesser extent on their cognitive abilities, as the speech input is rapidly and implicitly matched to the phonological representations stored in long-term memory.

In support of this model, several studies have shown that cognitive abilities such as working memory, cognitive processing speed, and executive function are associated with the intelligibility of speech in degraded listening environments.[3],[16],[17],[18] Similarly, Vaughn and colleagues found that, after controlling for the effects of hearing sensitivity, working memory and cognitive processing speed were the dominant factors associated with speech recognition performance in older adult listeners.[22] In a review of twenty research studies evaluating the factors accounting for individual differences in speech recognition in noise among hearing-impaired individuals, Akeroyd found that an individual's peripheral hearing sensitivity was the primary factor predicting the variability observed in speech recognition in noise performance, with cognitive abilities as a secondary factor, whereas measures of general ability such as IQ tests were the least effective predictors.[5]

In a similar investigation, Humes and colleagues[8] examined speech recognition in 50 elderly listeners aged 63 to 83 years, using a wide range of speech materials (nonsense syllables, monosyllabic words, sentences) both in quiet and in noise (at presentation levels of 70 and 90 dB SPL). They found that hearing loss was the most important factor associated with the variability in speech recognition performance, explaining 70–75% of the total variance, while auditory processing and cognitive function accounted for little or no additional variance.[23]

Tamati and colleagues[3] evaluated the sensory, perceptual, and neurocognitive differences between good and poor listeners on a new high-variability sentence recognition test for adverse listening situations, the Perceptually Robust English Sentence Test Open-set (PRESTO). A total of 40 young, normal-hearing adults were tested on PRESTO. The authors reported that the two groups did not differ generally in self-reported hearing difficulties in real-world listening environments; however, listeners with poor PRESTO performance reported significantly greater difficulty understanding speakers in a public place. Listeners with better PRESTO performance were more accurate on gender discrimination and regional dialect categorisation than those with poor PRESTO performance, but there was no significant difference in talker discrimination accuracy or response time. In addition, the two groups differed significantly on verbal working memory measures, but not on the inhibition task or on the measure of non-verbal intelligence. The current study, in contrast to Tamati et al., focused on experienced older adult hearing aid users. In another study, Ohlenforst, Souza, and MacDonald,[24] in support of earlier work, demonstrated that listeners with high working memory capacity performed significantly better on speech intelligibility tasks than those with lower working memory capacity when fast-acting compression was applied. They suggested that individual variability in working memory capacity plays an important role in determining the benefit of hearing aid signal processing algorithms (see also Souza et al.[25]).

The purpose of the study

The present study examined individual differences between a higher speech recognition ability group and a lower speech recognition ability group on cognitive tests, general ability (IQ test), the Swedish HINT test, and hearing thresholds among hearing-impaired listeners. The two groups were created based on listeners' performance on the Hagerman sentence recognition task in multi-talker babble, using fast-acting compression. Why do hearing-impaired listeners vary in their aided speech recognition abilities? Evidence from previous studies suggests that the combination of a hearing aid signal processing algorithm such as fast-acting compression with a multi-talker babble background may create a particularly adverse listening environment for hearing-impaired listeners,[12],[18],[21],[26],[27] as a result of the greater speech signal distortions it can cause. Moreover, a number of studies support the idea that hearing-impaired listeners with higher cognitive capacity perform better on speech recognition in adverse listening conditions because they are less susceptible to noise.[16],[17],[25] Those findings have been interpreted as reflecting a greater susceptibility to speech signal modification for listeners with lower cognitive ability, which offsets or cancels the intended benefit for some hearing aid users and results in poorer speech recognition performance.[10],[11],[12],[13],[14],[15],[16],[17],[18] We hypothesized that the two groups would differ in their overall cognitive abilities and in their susceptibility to noise and to the speech signal modifications created by fast-acting compression. We expected that listeners with higher speech recognition ability would show greater working memory capacity, faster cognitive processing speed, better executive functioning, and better hearing-in-noise ability, but not better general ability (IQ), compared to those with lower speech recognition ability.
Moreover, as susceptibility to speech signal distortions is related to listeners' cognitive abilities, we also expected listeners with lower speech recognition ability to be more susceptible to the speech signal distortions created by fast-acting compression and multi-talker babble, showing less benefit from hearing aid signal processing (fast-acting compression).


  Materials and methods


Participants

One hundred and ninety-five participants, all native Swedish speakers (51 women and 82 men), aged 33–80 years (mean = 60.75 years, SD = 8.89), drawn from a larger investigation (Rönnberg et al., 2016), were included in the study. All participants had bilateral, symmetrical, mild to moderate sensorineural hearing loss and were recruited from the audiology clinic of the university hospital of Linköping, Sweden. The pure-tone average hearing threshold for both ears at 0.5, 1, 2, and 4 kHz was 39.23 dB HL (SD = 19.64). The inclusion criteria were: bilateral fitting with digital hearing aids with common features such as wide dynamic range compression, noise reduction, and directional microphones, and a minimum of 1 year of hearing aid use at the time of testing. The participants had no history of otological problems or psychological disorders. The study was approved by the regional ethics committee (Dnr: 55-09 T122-09). All participants gave informed consent.

Cognitive tests

Reading span test

The Reading Span test was used to measure working memory capacity.[28],[29] The test was designed to tap working memory capacity in terms of simultaneous storage and processing. A total of 28 sentences were presented on a computer screen at a rate of one word or word pair every 800 msec. Half of the sentences were absurd (e.g., "The car drinks milk") and half were normal (e.g., "The farmer builds his house"). After an entire sentence was shown, participants were asked to judge whether the sentence made sense or not. The test comprised two blocks each of two, three, four, and five sentences. After each block, participants were asked to recall either the first or the last word of each sentence in the presented set. The score was the total number of items correctly recalled, irrespective of serial order.

Semantic word pair span test

A semantic word-pair span test[30] was also used to tap participants' working memory capacity. Unlike the reading span test, this test includes no syntactic elements in its processing and storage components. Sequences of word pairs (such as "Bun, Hippo") were displayed on a computer screen at a rate of 800 msec per word. Half of the words denoted living things (e.g., "cat") and half non-living things (e.g., "paper"). List length varied from 2 to 5 word pairs, with three trials per length. For each pair, participants pressed the left (green) button if the living thing appeared on the left side and the right (red) button if it appeared on the right side, indicating the position of the word representing the living thing. After each sequence of word pairs, participants were asked to say aloud all the words that had been presented on either the left or the right, in the correct order of presentation. The experimenter recorded the total number of words correctly recalled. The maximum total score is 42 points.

Rapid Automatized Naming test (RAN)

The Rapid Automatized Naming test was designed to measure cognitive processing speed, automaticity of naming, activation of working memory for processing and monitoring, and the ability to shift between visual dimensions and semantic fields.[31] The test consisted of three timed subtests in which participants named the color (black, blue, red, or yellow), the shape (circle, square, line, or triangle), or a combined color-shape (e.g., black square) of items presented on paper in eight rows of five (40 items per subtest). In the first subtest, participants were asked to rapidly name 40 randomly sequenced colored squares (black, blue, red, yellow). In the second subtest, they named each of 40 shapes rendered in black (circle, square, line, triangle). In the third subtest, participants were instructed to rapidly name the color first and then the shape of 40 stimulus combinations (e.g., black square, blue circle). Participants were told to proceed as fast and as accurately as they could, and the total time (in seconds) to complete each subtest was recorded. The entire test required 3 to 5 minutes, and the total time (in seconds) for rapid, automatic naming of the 40 visually presented items was recorded digitally and used in the statistical analysis.[31]

Physical matching task

The physical matching task was used to measure participants' general processing speed.[29],[32] A series of lists of 16 letters each were displayed on the computer screen, each list for 30 seconds. Participants were asked to judge whether two letters presented simultaneously had the same physical shape (e.g., A–A) or not (e.g., A–a), responding by pressing the "yes" button for the same shape and the "no" button otherwise during a 1750 msec interval after the presentation of each letter pair. Processing speed was quantified by reaction time (in milliseconds). The order of the trials was randomized across participants.

Shifting test

The number-letter task was used to measure the executive function of attention shifting.[33] The materials consisted of a series of number-letter pairs (e.g., "7 g") displayed one at a time in one of the four corners of the computer screen. The numbers were odd (e.g., 1, 3, 5) or even (2, 4, 6) digits, and the letters were either capital (e.g., A, M, E, P) or lowercase (e.g., a, m, k, y). When a number-letter pair appeared in one of the two top corners of the screen, participants pressed the "even" or "odd" button to indicate whether the number was even or odd. When the pair appeared in one of the lower corners, they pressed the "capital" or "small" button to indicate whether the letter was capital or lowercase. The number-letter pairs were presented in a clockwise rotation.

Inhibiting test

The stop-signal task was used to measure the executive function of inhibition.[34] A series of digits was displayed on the computer screen. Participants were instructed to press the space bar every time a digit other than 3 was displayed, but to withhold the response when the digit 3 was displayed. Inhibiting ability was quantified by the total number of prepotent responses (i.e., to the digit 3) correctly inhibited.

Intelligence test

Raven's Standard Progressive Matrices were used to assess participants' non-verbal intelligence.[35] The test consisted of three subtests (A, D, and E), each containing 12 items with six multiple-choice alternatives, administered without time limit. In each subtest, participants responded by marking one of the six alternatives with a pencil on a scoring sheet. Subtest A was used for practice, with feedback from the experimenter. Subtests D and E were administered on paper without feedback or time limit. Non-verbal intelligence was quantified as the sum of the scores on subtests D and E. The maximum total score is 24 points.

Speech recognition in noise tests

The current study used two types of speech material: the Hagerman sentences test[36] and the Swedish hearing-in-noise test (HINT).[32] The Hagerman sentences are Swedish five-word, low-redundancy, low-context sentences with a fixed structure of proper noun, verb, numeral, adjective, and noun, for example, "Randy had six new books." The sentences were presented in randomized order via a loudspeaker at a constant level of 65 dB SPL against a noise background. The noise level was adjusted automatically using an adaptive method to estimate the SNRs corresponding to 50% and 80% performance.[37] The Swedish HINT sentences have relatively higher semantic redundancy and consist of three to seven words, for example, "The young lady drives a car" and "He had a book in his hand." The test sentences are presented in noise using an adaptive procedure to estimate the speech recognition threshold at which 50% of the sentences are correctly repeated, an approach that avoids ceiling and floor effects.
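The adaptive tracking idea behind both tests can be sketched in outline. The following is an illustrative 1-up/1-down staircase, not the actual Hagerman or HINT algorithm (the study used interleaved word- or sentence-based scoring); the step size, scoring rule, and function name are assumptions for illustration only.

```python
def adaptive_srt(responses, start_snr=0.0, step=2.0):
    """Sketch of a 1-up/1-down adaptive SNR track. `responses` is a list of
    booleans (True = sentence repeated correctly); the noise level moves
    after each trial so the track converges on the SNR for 50% correct.
    Returns the trial-by-trial SNRs and the mean over the second half of
    the track as the speech recognition threshold (SRT)."""
    snr = start_snr
    track = []
    for correct in responses:
        track.append(snr)
        # Correct response -> make the task harder (lower the SNR);
        # incorrect -> easier (raise it). This rule converges on 50%.
        snr = snr - step if correct else snr + step
    second_half = track[len(track) // 2:]
    srt = sum(second_half) / len(second_half)
    return track, srt
```

A listener who alternates correct and incorrect responses hovers around a fixed SNR, which is exactly the 50% point this rule estimates.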

Procedure

The data were collected in three sessions of nearly 3 hours each as part of a larger investigation.[30] The data reported in the present study were collected during the first and third sessions. In the first session, background data and PTA4 were recorded and the cognitive tests and the general ability (IQ) test were administered; in the third session, the speech recognition in noise tests (Hagerman sentences test and HINT) were administered.

Testing took place in a sound-treated booth. Participants were seated individually in a double-walled sound-attenuating chamber one meter from the loudspeaker. A brief practice session was carried out to familiarize each participant with the speech stimuli and response tasks; one list of ten sentences was presented for each of the Hagerman and HINT materials. During the experimental session, three lists of ten sentences each were used for the Hagerman sentences test and two lists of ten sentences for the HINT. The sentence stimuli were presented one at a time in randomized order at a constant level of 65 dB SPL, and the participant was asked to repeat as many words as possible from each sentence. For the Hagerman sentences, the noise level was adjusted automatically by a standard algorithm that employs an interleaved procedure based on word scores to estimate individual SNRs for 50% and 80% performance.[37] For the HINT materials, the noise level was adapted based on sentence scores to estimate the SNR for 50% performance.[32]

Study design and data analysis

The current study used an extreme-group design to examine differences in sensory, perceptual, and neurocognitive abilities between hearing-impaired listeners with good and poor performance on the Hagerman sentence recognition test under adverse listening conditions. Two speech recognition ability groups were formed, one with low and one with high speech recognition ability, based on scores on the Hagerman sentences test presented in four-talker babble and processed with fast-acting compression. The four participants who scored at the median of 2.055 dB were excluded, as they could not be assigned unambiguously to either group. Average aggregate Hagerman sentence scores were calculated: the mean score in the low-ability group was 3.90 dB (SD = 1.433) and in the high-ability group 0.738 dB (SD = 1.303); note that a lower SNR indicates better recognition.
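The extreme-group split described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes a mapping of participant IDs to aggregate Hagerman SNR scores, and it excludes exact-median scores to mirror the four median-scoring participants dropped in the study.

```python
import statistics

def extreme_groups(scores):
    """scores: mapping of participant id -> aggregate Hagerman SNR (dB).
    A lower SNR means speech was recognized in harder noise, i.e. better
    ability. Participants scoring exactly at the median are excluded,
    since they cannot be assigned unambiguously to either group."""
    med = statistics.median(scores.values())
    high = [pid for pid, s in scores.items() if s < med]  # better recognizers
    low = [pid for pid, s in scores.items() if s > med]   # poorer recognizers
    return high, low
```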

The Hagerman sentences test is a well-established high-variability sentence recognition test for adverse listening conditions in clinical settings.[4],[30],[36],[38] This particular adverse listening condition (Hagerman sentences presented in four-talker babble with fast-acting compression) was selected based on our previous results,[21] which showed that this combination was the most difficult listening condition (a major source of speech signal distortion) for hearing-impaired listeners. Differences between the higher and lower speech recognition ability groups on the neurocognitive tests, the Swedish HINT test, and hearing thresholds were examined using a series of independent-samples t-tests. There was a significant age difference between the two groups, t(190) = −3.722, P < 0.001: the group that performed better on the Hagerman sentences (four-talker babble with fast-acting compression) was significantly younger (M = 58.80 years, SD = 8.86) than the group that performed more poorly (M = 63.08 years, SD = 6.86). Significance levels were set at P < 0.05 and P < 0.01 (two-tailed). All analyses were performed using the SPSS statistical package, version 23.0 for Windows.
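For reference, the independent-samples (Student's) t statistic underlying these group comparisons can be computed as below. The analyses themselves were run in SPSS; this hand-rolled pooled-variance version only makes the computation explicit, and the function name is an illustrative assumption.

```python
import math

def independent_t(x, y):
    """Student's two-sample t with pooled variance; returns (t, df)."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((a - m1) ** 2 for a in x) / (n1 - 1)  # sample variance, group 1
    v2 = sum((b - m2) ** 2 for b in y) / (n2 - 1)  # sample variance, group 2
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2  # t statistic and degrees of freedom
```

The degrees of freedom, n1 + n2 − 2, match the df values reported in the Results (e.g., 190 or 191 after listwise exclusions from the 191 grouped participants).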


  Results


Average aggregate Hagerman sentence scores were calculated under the various aided listening conditions. The mean Hagerman sentence score in the low-ability group was 3.90 dB (SD = 1.433) and in the high-ability group 0.738 dB (SD = 1.303). Means and standard deviations for the hearing variables and hearing-in-noise measures (SNR in dB) are shown in [Table 1] and [Table 2].
Table 1 Mean values and standard deviations for the fluid intelligence test and cognitive ability measures for the listener groups (higher and lower speech recognition ability)

Table 2 Mean values and standard deviations for hearing sensitivity scores (dB HL) and the HINT test (SNR in dB) for the listener groups (higher and lower speech recognition ability)



Cognitive abilities

Inhibition test

Independent-samples t-tests between the two groups on the inhibition test revealed no significant difference between the higher and lower speech recognition ability groups, t(185) = −0.97, P = 0.332. This suggests that the two groups performed similarly in response accuracy and in the ability to inhibit irrelevant information.

Shifting test

The t-test showed a significant difference between the higher and lower speech recognition ability groups, t(183) = −2.022, P = 0.045, with the higher speech recognition ability group (M = 157.89, SD = 433.52) outperforming the lower ability group (M = 1709.35, SD = 436.49).

Rapid automatized naming test

An independent-samples t-test on the RAN test revealed that the higher speech recognition ability group performed better than the lower ability group, t(131) = −2.45, P = 0.015, suggesting that the higher ability group had faster cognitive processing speed (M = 57.48, SD = 11.49) than the lower ability group (M = 62.97, SD = 11.96).

Physical matching test

The t-test showed a significant difference between the higher and lower speech recognition ability groups, t(190) = −2.45, P = 0.045, with the higher ability group (M = 943.25, SD = 184.50) outperforming the lower ability group (M = 1002.23, SD = 219.42).

Reading span test

An independent-samples t-test on the reading span test revealed a significant difference between the higher and lower speech recognition ability groups, t(191) = 2.17, P = 0.031: the higher ability group had greater working memory capacity (M = 16.63, SD = 3.59) than the lower ability group (M = 15.45, SD = 3.94).

Semantic word-pair span test

An independent-samples t-test on the semantic word-pair span test showed a significant difference between the two speech recognition ability groups, t(191) = 2.42, P = 0.016: the higher ability group had greater working memory capacity (M = 17.82, SD = 5.32) than the lower ability group (M = 17.28, SD = 5.70).

Intelligence test

An independent-samples t-test was carried out on the average Raven's matrices scores of the higher and lower speech recognition ability groups. No significant difference between the listener groups was found on either subtest or on the overall non-verbal IQ score.

Hearing sensitivity

Independent-samples t-tests were carried out between the groups on PTA4 for the left ear, right ear, better ear, and worse ear. The t-tests showed significant differences between the two groups on PTA4 left ear, t(191) = −5.51, P < 0.001; PTA4 right ear, t(191) = −5.51, P < 0.001; PTA4 better ear, t(191) = −5.77, P < 0.001; and PTA4 worse ear, t(191) = −5.35, P < 0.001. Listeners with good speech recognition ability thus exhibited better hearing sensitivity than those with poorer speech recognition ability.

The Hearing-in-noise test

The Swedish HINT test

An independent-samples t-test was carried out on the Swedish hearing-in-noise test. The t-test revealed that participants with higher speech recognition ability were significantly more accurate than those with lower ability on the hearing-in-noise task, t(191) = −6.24, P < 0.001.


  Discussion


In this study, hearing aid users were divided into two groups (higher and lower speech recognition ability) based on speech recognition scores obtained with Hagerman sentences presented in multi-talker babble and processed by fast-acting compression. Recent studies have shown that speech signal distortions caused by the combination of a fluctuating noise background, such as multi-talker babble, and fast-acting compression may be a major source of signal modifications that result in poorer speech recognition performance for some listeners with hearing loss.[10],[12],[21],[24] The aim of this study was to evaluate the individual variability between listeners with lower and higher speech recognition ability in adverse listening conditions. Overall, our findings show that listeners with lower speech recognition ability performed more poorly than those with higher ability on almost all cognitive tasks and on the hearing-in-noise task. In contrast, the two groups did not differ on the executive function task of inhibition or on the non-verbal intelligence (IQ) task. The data suggest that listeners with good speech recognition ability have faster cognitive processing speed, greater working memory capacity, better hearing-in-noise ability, and better shifting ability than those with poorer speech recognition ability. Successful aided speech recognition in noise thus requires speech processing ability associated with better cognitive ability, but not with non-verbal intelligence. The findings also indicate that listeners with better speech recognition abilities exhibit greater hearing-in-noise ability and better hearing sensitivity than those with poorer speech recognition ability. These findings agree with previous work, even though different test materials and methods were used.[3],[5],[6],[16],[18],[23]

Supporting evidence has shown that individual variability in speech recognition in noise is related to individual cognitive abilities. Here, we considered the relationship between cognition and speech recognition in multi-talker babble with fast-acting compression. Recognizing speech in noise requires listeners to rapidly process the incoming speech signal, extracting meaning from the message and maintaining that information for integration with later speech input.[19] We might therefore expect speech recognition to rely on the listener's cognitive ability, placing listeners with poor cognitive ability at a disadvantage.[2],[16],[20] In addition, several studies investigating the relationship between cognition and speech recognition in noise have suggested that listeners with poor cognitive ability have difficulty recognizing speech in adverse listening conditions and appear more susceptible to speech signal distortions caused by background noise and by the signal processing artifacts introduced by some hearing aids.[12],[17] This elevated susceptibility to signal modification may be responsible for the reduced amplification benefit experienced by listeners with poor speech recognition ability.

Similar to previous studies, we interpret our data to suggest, first, that listeners with higher speech recognition ability are also likely to have better cognitive capacity than those with lower speech recognition ability. Second, listeners with higher speech recognition ability seemed relatively better at benefitting from fast-acting compression applied to noisy speech, which allows them to cope with the distortions that fast-acting compression introduces. This may suggest that listeners with poorer speech recognition ability are less able to adapt to rapid changes in the modified speech signal, which may offset or cancel the intended benefits of hearing aid signal processing algorithms. Several studies have shown that individual differences in cognitive abilities are important when evaluating both the ability to recognize speech in noise and the benefit of hearing aid signal processing algorithms among hearing-impaired listeners. Gatehouse and colleagues observed, for example, that individuals with greater cognitive abilities outperformed those with lesser cognitive abilities in terms of benefit from fast-acting compression.[13],[14] Taken together with these earlier observations, it appears that listeners with lower speech recognition ability performed more poorly on the Hagerman sentences presented in multi-talker babble and processed with fast-acting compression because of their poorer cognitive abilities. This might explain why listeners with poor speech recognition ability may be unable to match the acoustic signal, as altered by fast-acting compression's modification of the speech envelope, to stored information in the lexicon.[17],[20]

With regard to working memory capacity, our finding indicates that, as expected, listeners with lower speech recognition ability performed more poorly on working memory tasks than those with higher speech recognition ability. This agrees with previous studies suggesting that individuals with smaller working memory capacity obtain poorer intelligibility scores for speech processed with hearing aid fast-acting compression.[12],[16],[17],[24] This relationship has been observed in multi-talker babble conditions. Other investigators have shown a relationship between working memory capacity and listeners' responses to speech signal distortions caused by hearing aid signal processing algorithms.[12] These observations suggest that the relative benefits of fast-acting compression are reduced in individuals with lower working memory capacity. As shown here, listeners with poorer speech recognition ability are also likely to exhibit poorer working memory capacity. It is possible that listeners with poorer ability have more difficulty recognizing speech in noise, which makes them more vulnerable to hearing aid signal processing distortions; indeed, listeners with poorer speech recognition ability are more sensitive to aggressive fast-acting compression.[18] Similar to previous work, one may interpret these results to suggest that listeners with higher speech recognition ability can better store and process information instantaneously, which enables them to cope with the distortions introduced by fast-acting compression.[12]

Our results also reveal that listeners with poorer speech recognition ability have slower cognitive processing speed than those with better speech recognition ability. It is interesting that both the RAN task and the physical matching task are good predictors of the variability in speech recognition ability among experienced hearing aid users. This may suggest that listeners with poorer speech recognition ability have slower cognitive processing speed.[39] It is possible that, for listeners with limited speech recognition ability, the capacity to process speech rapidly and accurately is affected when more challenging listening conditions are created by signal degradations (i.e., from multi-talker babble and fast-acting compression).[3],[10],[21],[25],[27] This may suggest that listeners with poorer speech recognition ability have a reduced ability to benefit from hearing aid fast-acting compression.[10] This pattern is consistent with earlier observations[10],[22] and indicates that cognitive processing speed is crucial for speech recognition in noise among persons with hearing impairment.

Researchers have demonstrated that executive functions are very useful for speech recognition in adverse listening situations.[20] Similar to earlier work by Vaughan and colleagues,[22] in the present study listeners with poor speech recognition ability exhibited lower attentional shifting ability than those with better speech recognition ability. This may suggest that listeners with good shifting ability can adapt quickly to speech signal changes (as modified by fast-acting compression) and can more easily recognize speech in various listening conditions. Moreover, good shifting ability allows individuals to rapidly choose and carry out a strategy for adapting to changing listening conditions. This agrees with previous work suggesting that the ability to shift back and forth between multiple tasks plays an important role in individual differences in speech recognition in adverse listening environments for hearing-impaired individuals.[20]

Several studies have shown that good attentional and inhibitory abilities enable listeners to attend selectively and accurately to detailed information in the target utterance and to adapt quickly to different speakers.[16],[30] Contrary to our expectations, listeners with lower and higher speech recognition abilities did not differ significantly in their attentional and inhibitory abilities, which is in agreement with previous studies.[3],[25],[40] This may suggest that the two groups have similar abilities to focus on and attend to the target speaker and content while suppressing competition from the background speakers and content. It thus appears that the attentional and inhibitory abilities measured by the stop-signal task in the present study do not play an important role in the variability of speech recognition performance in multi-talker babble with fast-acting compression. One possible explanation is that the inhibitory test material was not difficult enough for experienced hearing aid users, possibly because of their previous exposure to similar material.[41]

Similar to the inhibitory test, no differences were observed between listeners with lower and higher speech recognition abilities on the Raven test, which was employed to assess non-verbal intelligence. This is consistent with previous studies suggesting that individual differences in speech recognition in noise are not associated with an overall difference in non-verbal intelligence.[3] However, the present finding contrasts with work by Humes,[42] who observed that non-verbal intelligence and aging accounted for 12.7% of the variance. In our data, the effects of non-verbal intelligence on speech recognition performance were similar in listeners with lower and higher speech recognition ability.

Regarding individual variability in the ability to hear speech in noisy environments, our findings reveal that listeners with higher speech recognition ability performed better than those with lower ability on the hearing-in-noise task. This may suggest that the ability to hear degraded speech is related to the listener's susceptibility to noise: listeners with good hearing-in-noise ability are likely to be less susceptible to noise and/or signal modifications.[10] It is possible that better speech recognition ability enables a listener to process and hear speech relatively efficiently even when the signal is modified by background noise or hearing aid signal processing algorithms.[18] Similarly, in the present study, listeners with higher speech recognition ability exhibited higher hearing sensitivity than those with lower ability when speech was processed by fast-acting compression in noisy listening conditions. This may indicate that individual differences in hearing sensitivity play an important role in determining the benefits of hearing aid signal processing algorithms when listening in noisy situations in everyday communication. Taken together, these observations suggest that the combination of good speech recognition ability and good hearing-in-noise ability may be important for benefit when speech is processed by fast-acting compression in noisy environments.[16],[25]


  Conclusions


The present study, together with earlier studies,[3],[5],[6],[12],[23],[43] provides further evidence that individual variability in cognitive processing speed and working memory relates to the variability observed in speech recognition in challenging listening conditions, here measured with Hagerman sentences in multi-talker babble processed by fast-acting compression. Our findings suggest that hearing aid users with poorer speech recognition abilities exhibit poorer cognitive abilities, which places them at a disadvantage and/or makes them more susceptible to signal modifications compared with those with better speech recognition abilities. The findings may have implications for the selection of hearing aid signal processing strategies in rehabilitation.

Acknowledgments

The author thanks Rachel Jane Ellis for her insightful comments on an earlier version of this manuscript; Jerker Rönnberg and Henrik Danielsson for allowing the use of part of the n200 project data for this manuscript under a data transfer agreement; Mathias Hällgren (Department of Technical Audiology, Linköping University) for his technical support; Tomas Bjuvmar and Helena Torlofson (Hearing Clinic, University Hospital, Linköping) and Elaine Ng (Swedish Institute for Disability Research, Linköping University) for their assistance in data collection; and Thomas Karlsson, Björn Lidestam, Shahram Moradi, and Emil Holmer for their valuable support and assistance with this manuscript.

Financial support and sponsorship

A Linnaeus Centre Hearing and Deafness (grant number 349-2007-8654) from the Swedish Research Council supported this work.

Conflicts of interest

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.



 
  References

1. Lunner T, Rudner M, Rosenbom T, et al. Using speech recall in hearing aid fitting and outcome evaluation under ecological test conditions. Ear Hear 2016;37:145S-154S.
2. Rudner M. Cognitive spare capacity as an index of listening effort. Ear Hear 2016;37:69S-76S.
3. Tamati TN, Gilbert JL, Pisoni DB. Some factors underlying individual differences in speech recognition on PRESTO: a first report. J Am Acad Audiol 2013;24:616-34. doi:10.3766/jaaa.24.7.10.
4. Larsby B, Hällgren M, Lyxell B. The interference of different background noises on speech processing in elderly hearing-impaired subjects. Int J Audiol 2008;47:S83-S90.
5. Akeroyd MA. Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol 2008;47:S53-71. doi:10.1080/14992020802301142.
6. Hwang JS, Kim KH, Lee JH. Factors affecting sentence-in-noise recognition for normal hearing listeners and listeners with hearing loss. J Audiol Otol 2017;21:81-7.
7. Humes LE, Dubno JR, Gordon-Salant S, et al. Central presbycusis: a review and evaluation of the evidence. J Am Acad Audiol 2012;23:635-66.
8. Humes LE, Watson BU, Christensen LA, et al. Factors associated with individual differences in clinical measures of speech recognition among the elderly. J Speech Hear Res 1994;37:465-74.
9. Crandell C. Individual differences in speech recognition ability: implications for hearing aid selection. Ear Hear 1991;12:100-8.
10. Lunner T. Cognitive function in relation to hearing aid use. Int J Audiol 2003;42:S49-58. doi:10.3109/14992020309074624.
11. Plomp R. A signal-to-noise ratio model for the speech reception threshold of the hearing impaired. J Speech Hear Res 1986;29:146-54.
12. Arehart KH, Souza P, Baca R, Kates JM. Working memory, age, and hearing loss: susceptibility to hearing aid distortion. Ear Hear 2013;34:251-60.
13. Gatehouse S, Naylor G, Elberling C. Benefits from hearing aids in relation to the interaction between the user and the environment. Int J Audiol 2003;42:S77-85. doi:10.3109/14992020309074627.
14. Gatehouse S, Naylor G, Elberling C. Linear and nonlinear hearing aid fittings. Patterns of benefit. Int J Audiol 2006;45:130-52. doi:10.1080/14992020500429518.
15. Foo C, Rudner M, Rönnberg J, et al. Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. J Am Acad Audiol 2007;18:618-31. doi:10.3766/jaaa.18.7.8.
16. Rudner M, Rönnberg J, Lunner T. Working memory supports listening in noise for persons with hearing impairment. J Am Acad Audiol 2011;22:156-67. doi:10.3766/jaaa.22.3.4.
17. Lunner T, Sundewall-Thorén E. Interaction between cognition, compression, and listening conditions: effects on speech-in-noise performance in a two-channel hearing aid. J Am Acad Audiol 2007;18:539-52.
18. Souza P, Sirow L. Relating working memory to compression parameters in clinically fit hearing aids. Am J Audiol 2014;23:394-401.
19. Rönnberg J, Rudner M, Foo C, et al. Cognition counts: a working memory system for ease of language understanding (ELU). Int J Audiol 2008;47:S99-S105. doi:10.1080/14992020802301167.
20. Rönnberg J, Lunner T, Zekveld AA, Sörqvist P, Danielsson H, et al. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013;7:31. doi:10.3389/fnsys.2013.00031.
21. Yumba WK. Cognitive processing speed, working memory, and the intelligibility of hearing aid-processed speech in persons with hearing impairment. Front Psychol 2017;8:1308. doi:10.3389/fpsyg.2017.01308.
22. Vaughan N, Storzbach D, Furukawa I. Sequencing versus nonsequencing working memory in understanding of rapid speech by older listeners. J Am Acad Audiol 2006;17:506-18.
23. Humes LE, Kidd GR, Lentz JJ. Auditory and cognitive factors underlying individual differences in aided speech understanding among older adults. Front Syst Neurosci 2013;7.
24. Ohlenforst B, Souza P, MacDonald E. Exploring the relationship between working memory, compressor speed and background noise characteristics. Ear Hear 2016;37:137-43. doi:10.1097/AUD.0000000000000240.
25. Souza P, Arehart KH, Shen J, Anderson M, Kates JM. Working memory and intelligibility of hearing-aid processed speech. Front Psychol 2015;6:526.
26. Ng EHN, Rudner M, Lunner T, et al. Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. Int J Audiol 2013;52:433-41. doi:10.3109/14992027.2013.776181.
27. Yumba WK. Impact of background noise, hearing aid noise reduction and fast-acting compression strategies on intelligibility of speech for hearing-impaired listeners. Acta Otorhinolaryngol Ital 2018;38:1-11. doi:10.14639/0392-100X-1978.
28. Daneman M, Carpenter PA. Individual differences in working memory and reading. J Verbal Learn Verbal Behav 1980;19:450-66.
29. Rönnberg J. Cognitive and communicative function: the effects of chronological age and "handicap age". Eur J Cogn Psychol 1990;2:253-73.
30. Rönnberg J, Rudner M, Lunner T, et al. Hearing loss, cognition and speech understanding: the n200 study. Int J Audiol 2016;55:623-42.
31. Wiig EH, Nielsen NP, Minthon L, Warkentin S. AQT: A quick test of cognitive speed. San Antonio, TX: Pearson/PsychCorp; 2002.
32. Hällgren M, Larsby B, Arlinger S. A Swedish version of the Hearing In Noise Test (HINT) for measurement of speech recognition. Int J Audiol 2006;45:227-37.
33. Monsell S. Control of mental processes. In: Bruce V (Ed), Unsolved mysteries of the mind: Tutorial essays in cognition. Hove: Erlbaum, Taylor & Francis; 1996. pp. 93-148.
34. Logan GD. On the ability to inhibit thought and action: A user's guide to the stop signal paradigm. In: Dagenbach D, Carr TH (Eds), Inhibitory Processes in Attention, Memory, and Language. San Diego, CA: Academic Press; 1994. pp. 189-239.
35. Raven J. Raven Progressive Matrices. In: McCallum RS (Ed), Handbook of Nonverbal Assessment. Boston, MA: Springer; 2003.
36. Hagerman B, Kinnefors C. Efficient adaptive methods for measurements of speech perception thresholds in quiet and noise. Scand Audiol 1995;24:71-7.
37. Brand T. Analysis and optimization of psychophysical procedures in audiology. Oldenburg: Bibliotheks- und Informationssystem der Universität Oldenburg; 2000.
38. Larsby B, Hällgren M, Lyxell B, et al. Cognitive performance and perceived effort in speech processing tasks: effects of different noise backgrounds in normal-hearing and hearing-impaired subjects. Int J Audiol 2005;44:131-43. doi:10.1080/14992020500057244.
39. Wingfield A. Cognitive factors in auditory performance: context, speed of processing and constraints of memory. J Am Acad Audiol 1996;7:175-82.
40. Neher T. Relating hearing loss and executive functions to hearing aid users' preference for, and speech recognition with, different combinations of binaural noise reduction and microphone directionality. Front Neurosci 2014;8:391.
41. Rudner M, Foo C, Rönnberg J, Lunner T. Cognition and aided speech recognition in noise: specific role for cognitive factors following nine-week experience with adjusted compression settings in hearing aids. Scand J Psychol 2009;50:405-18.
42. Humes LE. Factors underlying speech recognition performance in elderly hearing aid wearers. J Acoust Soc Am 2002;112:1112-32.
43. Souza P, Arehart KH. Robust relationship between reading span and speech recognition in noise. Int J Audiol 2015;54:705-13.

Correspondence Address:
Wycliffe K Yumba
Department of Behavioural Sciences and Learning, Linköping University, SE-581 83 Linköping University, SE-581 83 Linköping
Sweden


DOI: 10.4103/nah.NAH_59_18




 
 
    Tables

  [Table 1], [Table 2]

This article has been cited by:
1. Kestens K, Degeest S, Keppler H. The views and experience of audiologists working in Flemish hearing aid centers concerning cognition within audiological practice. Am J Audiol 2022.
2. Yumba WK. Influences of listener gender and working memory capacity on speech recognition in noise for hearing aid users. Speech Lang Hear 2020.