SPIN2025: The Best of British!

Session 1 (Thursday 9 January 2025, 15:25-17:30)
The third Clarity Enhancement Challenge

Jon Barker
University of Sheffield, UK

John Culling
Cardiff University, UK

Trevor Cox
University of Salford, UK

Michael Akeroyd, Graham Naylor
University of Nottingham, UK

Simone Graetzer
University of Salford, UK

Jennifer Firth
University of Nottingham, UK

Jianyuan Sun
University of Sheffield, UK

The Clarity Enhancement Challenges (CECs) seek to facilitate the development of novel processing techniques for improving the intelligibility of speech in noise for hearing-aid users through a series of signal-processing challenges. Each challenge provides entrants with a set of stimuli for developing and testing their algorithms. Evaluations are conducted with similar but unseen material. Algorithms may use unlimited processing resources, but must be causal in the sense that the output at time t must be independent of the input at t + 5 ms (i.e., in use, the algorithm could introduce a lag of no more than 5 ms). The performance of the algorithms is assessed using objective measures of speech intelligibility and subjective measures conducted with a panel of hearing-impaired listeners.

In CEC3, three tasks were prepared, each increasing aspects of realism relative to CEC2. For Task 1, the synthesised sixth-order ambisonic room impulse responses of CEC2 were replaced with real sixth-order ambisonic impulse responses, recorded in rooms using the em64 from MH Acoustics. As in CEC2, virtual ambisonic sources were processed to generate six hearing-aid input signals, three on each side of a head that rotates towards the target source. These impulse responses were used to generate virtual target sources within the room with up to three interfering sounds. For Task 2, the hearing-aid input signals were recorded directly using microphones in hearing-aid shells during a listening task with target and interfering sounds presented from loudspeakers; concurrent head-orientation data were recorded using infrared motion tracking. For Task 3, real, often mobile, interfering sounds were recorded directly in sixth-order ambisonics using the em64 (1) at various roadside locations, (2) at a railway station and (3) at a drinks party.
Ambisonic impulse responses were also recorded to add target voices to these scenes, and the hearing-aid microphone signals were again generated for a moving head. The performance of the algorithms was assessed using algorithmic estimates of speech intelligibility and subjective measures conducted with a panel of hearing-impaired listeners. The enhanced signals from the challenge entrants showed that speech intelligibility, as measured using the Hearing-Aid Speech Perception Index (HASPI; Kates & Arehart, 2021, doi:10.1016/j.specom.2020.05.001), could be substantially improved in each of the tasks. Further evaluation of the output-signal intelligibility for hearing-impaired listeners is ongoing and will be reported at the meeting.
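The 5 ms causality constraint described above can be checked empirically: perturb the input signal beyond the permitted lookahead and verify that earlier output samples are unchanged. The sketch below is an illustrative Python check, not an official challenge tool; the function names, the 44.1 kHz sample rate and the toy moving-average "enhancer" are assumptions for the example.

```python
import numpy as np

def check_causality(process, n_samples=1000, fs=44100,
                    max_lookahead_ms=5.0, trials=20, seed=0):
    """Empirically test that process() output at index t does not depend on
    input beyond t + L, where L is the 5 ms lookahead budget at rate fs.
    A sketch only: a randomised check, not a proof of causality."""
    rng = np.random.default_rng(seed)
    lookahead = int(fs * max_lookahead_ms / 1000)  # 220 samples at 44.1 kHz
    x = rng.standard_normal(n_samples)
    y = process(x)
    for _ in range(trials):
        # Pick a time t, then replace everything beyond the allowed lookahead.
        t = int(rng.integers(0, n_samples - lookahead - 1))
        x2 = x.copy()
        x2[t + lookahead + 1:] = rng.standard_normal(n_samples - t - lookahead - 1)
        y2 = process(x2)
        # Output up to and including t must be unaffected.
        if not np.allclose(y[:t + 1], y2[:t + 1]):
            return False
    return True

def smoother(x, ahead=100):
    """Toy 'enhancer': moving average looking 200 samples back and
    `ahead` samples forward (100 samples is ~2.3 ms at 44.1 kHz)."""
    y = np.empty_like(x)
    for t in range(len(x)):
        y[t] = x[max(0, t - 200):t + ahead + 1].mean()
    return y
```

With the default `ahead=100` (within the 220-sample budget) the check passes, while a variant looking 500 samples ahead fails it, since its output at time t depends on input well beyond t + 5 ms.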

Last modified 2024-11-22 15:45:01