Journal: Journal of Speech, Language, and Hearing Research
Volume 53, Issue 1, Pages 18-33
Publisher: American Speech-Language-Hearing Association
DOI: 10.1044/1092-4388(2009/08-0140)
Keywords
speech recognition; listening effort; dual-task paradigm; audio and audiovisual speech cues in noise; processing capacity
Funding
- Canadian Institutes of Health Research (Funding Source: Medline)
Purpose: Using a dual-task paradigm, 2 experiments (Experiments 1 and 2) were conducted to assess differences in the amount of listening effort expended to understand speech in noise in audiovisual (AV) and audio-only (A-only) modalities. Experiment 1 used equivalent noise levels in both modalities; Experiment 2 equated speech recognition performance across modalities by increasing the noise in the AV relative to the A-only modality.

Method: Sixty adults were randomly assigned to Experiment 1 or Experiment 2. Participants performed speech and tactile recognition tasks separately (single task) and concurrently (dual task). The speech tasks were performed in both modalities. Accuracy and reaction time data were collected, as well as ratings of perceived accuracy and effort.

Results: In Experiment 1, speech recognition in the AV modality was rated as less effortful, and accuracy scores were higher, than in the A-only modality. In Experiment 2, reaction times were slower, tactile task performance was poorer, and rated listening effort increased in the AV relative to the A-only modality.

Conclusions: At equivalent noise levels, speech recognition performance was enhanced and subjectively less effortful in the AV than in the A-only modality. At equivalent accuracy levels, the dual-task performance decrements (for both tasks) suggest that the noisier AV modality was more effortful than the A-only modality.