P23, Session 1 (Thursday 9 January 2025, 15:25–17:30)
Examining the effect of prior knowledge on speech processing in cochlear implant users: A visual world paradigm study
Cochlear implants (CIs) help to restore hearing in individuals with severe to profound hearing loss. While many CI users understand speech well in quiet environments, speech understanding becomes challenging when multiple people speak simultaneously. Previous studies in normal-hearing listeners have shown that knowing in advance who is going to speak and where to listen can improve speech understanding (Kitterick et al., 2010, JASA 127:2498–2508, doi:10.1121/1.3327507) and that prior knowledge of a speaker’s location can reduce cognitive load during speech processing (Koelewijn et al., 2015, Hear Res 323:81–90, doi:10.1016/j.heares.2015.02.004). However, it is unknown to what extent CI users can benefit from such information and how the presence or absence of prior knowledge affects cognitive load during speech processing.
The aim of this study is to examine whether CI users benefit from information about the spatial position and the voice of a target talker when the target is presented against a competing talker.
To examine speech-on-speech masking at a fine-grained temporal level, we use the Visual World Paradigm (VWP; Tanenhaus et al., 1995, Science 268:1632–1634, doi:10.1126/science.7777863; Abdel-Latif et al., under review), which builds on the finding that gaze fixations and speech processing are closely linked in time. We employ the VWP using matrix sentences from the Oldenburg Sentence Test (OLSA; Wagener et al., 1999, Z Audiol 38:44–56). Following Meister et al. (2020, JASA 147:EL19, doi:10.1121/10.0000499), two competing OLSA sentences are presented simultaneously, with the target sentence indicated by the keyword “Stephen.” Participants are instructed to direct their gaze to icons representing the target sentence and to verbally recall the sentence after a retention period. Two conditions are compared: one with and one without a priori information about the target talker’s voice or spatial position.
Preliminary results on gaze fixations (as a proxy for attention), pupil size (as a proxy for cognitive load), and speech intelligibility are reported and discussed.
Funding: This project is funded by the Deutsche Forschungsgemeinschaft (ME2751/6-1).