SPIN2025: The Best of British!

P29: Session 1 (Thursday 9 January 2025, 15:25-17:30)
Multimodal communicative signals facilitating communicative success for hard-of-hearing individuals in noisy contexts

Anna Palmann, James Trujillo
University of Amsterdam, Netherlands

Linda Drijvers
Donders Institute, Radboud University, Nijmegen, Netherlands

Floris Roelofsen
University of Amsterdam, Netherlands

Communication is inherently multimodal, requiring interlocutors to combine signals from the auditory modality with those from the visual modality. If, however, one of these modalities is impaired, as is the case for people who are hard of hearing, compensation frequently occurs in the other modality. It is therefore expected that people with hearing impairments differ from normal-hearing individuals in their use and processing of co-speech visual signals, e.g., gestures, head movements, or facial expressions. To date, it remains unclear to what extent people who are hard of hearing rely on visual information in communication with each other, for example by using sign-supported speech, lipreading, or the enhancement of kinematic gesture and head features. In dyadic interactions, where speakers accommodate each other to facilitate mutual understanding, the production and comprehension of these signals are intertwined. In this research, we focus primarily on the production of multimodal communicative signals, and indirectly also capture comprehension, by investigating the role these signals play in communicative success in interactions between hard-of-hearing individuals in noisy contexts.

In a preliminary online survey, we collected data from hard-of-hearing individuals in the Netherlands. We inquired about their clinical history, their use of visual communication and hearing aids, communicative contexts that are perceived as challenging, and strategies for successful communication in such situations. Using a mixed-methods approach, we found, among other things, that although most participants regularly use hearing aids, these devices are still perceived as performing poorly at filtering speech from background noise. Thus, communication difficulties in noisy contexts persist even among hearing aid users. To facilitate communicative success despite these challenges, participants reported a variety of strategies, ranging from pre-planning the situational set-up to, most importantly, modifying acoustic and kinematic features during the conversation.

Building on these preliminary self-report findings, we will further investigate which multimodal communicative signals emerge in background noise and which of them are associated with communicative success in a naturalistic dialogue setting. To this end, dyads of hard-of-hearing individuals and control dyads of normal-hearing individuals will engage in both free and task-based dialogue while exposed to social and non-social background noise. In addition to analyzing acoustic and linguistic features of speech, we will use motion capture to analyze different aspects of gesture and head kinematics, facial expressions, and body posture. Using a data-driven machine learning approach, we will then assess which multimodal communicative features play a role in successful communication, hypothesizing that communicative success and the production of multimodal communicative signals differ both between dyad groups and between types of background noise. With this research, we hope to provide insights into how to make communication easier and more successful for hard-of-hearing individuals, especially in noisy contexts.
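As an illustration of what such a data-driven analysis could look like, the sketch below (Python, scikit-learn) ranks a set of multimodal features by permutation importance as predictors of trial-level communicative success. This is a minimal sketch under assumed feature names and simulated data, not the authors' actual pipeline; every variable name here is hypothetical.

    # Minimal sketch, not the study's pipeline: rank hypothetical multimodal
    # features as predictors of per-trial communicative success using a
    # random forest and permutation importance (all names/data illustrative).
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_trials = 400

    # Hypothetical per-trial features (motion-capture, acoustic, video-based).
    features = pd.DataFrame({
        "gesture_peak_velocity":   rng.normal(1.0, 0.3, n_trials),
        "head_movement_amplitude": rng.normal(0.5, 0.2, n_trials),
        "speech_intensity_db":     rng.normal(65.0, 5.0, n_trials),
        "articulation_rate":       rng.normal(4.5, 0.8, n_trials),
        "facial_expressivity":     rng.normal(0.4, 0.15, n_trials),
    })

    # Simulated binary outcome: did the trial end in communicative success?
    logit = (1.5 * features["gesture_peak_velocity"]
             + 0.1 * features["speech_intensity_db"] - 8.0)
    success = (rng.random(n_trials) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        features, success, random_state=0)
    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X_train, y_train)

    # Permutation importance: mean drop in accuracy when one feature is shuffled.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=30, random_state=0)
    ranking = sorted(zip(features.columns, result.importances_mean),
                     key=lambda pair: -pair[1])
    for name, importance in ranking:
        print(f"{name:26s} {importance:+.3f}")

In practice, an analysis of this kind would also need to account for the grouped structure of dyadic data, for example via dyad-level cross-validation or mixed-effects models, which the sketch above omits.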
