Influence of speaking style adaptations and semantic context on the time course of word recognition in quiet and in noise

This study examines the effects of listener-oriented speaking styles and semantic context on online spoken word recognition using eyetracking. In Experiment 1, separate groups of listeners completed a word-identification-in-noise task and a pleasantness-rating task. Listeners heard sentences with high- versus low-predictability semantic contexts produced in infant-directed speech, clear speech, and conversational speech. Experiments 2 (in quiet) and 3 (in noise) investigated the time course of visual fixations to target objects as participants listened to the different speaking styles and contexts. Results from all experiments show that, relative to conversational speech, both infant-directed speech and clear speech improved word recognition for high-predictability sentences, in quiet as well as in noise. This indicates that the established advantages of infant-directed speech for young listeners cannot be attributed to affect alone; the acoustic enhancements of infant-directed speech benefit adult speech processing as well. Furthermore, in quiet (Experiment 2), lexical access was facilitated by contextual cues even in conversational speech, but in noise (Experiment 3), listeners reliably fixated the target only when contextual cues were combined with listener-adapted acoustic–phonetic cues. These findings suggest that both semantic cues and listener-oriented acoustic enhancements are needed to facilitate word recognition, especially in adverse listening conditions.
Source: Journal of Phonetics - Category: Speech-Language Pathology Source Type: research