Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid the recognition of specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight into these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio, and visual speech in quiet and in noise. We represented our speech stimuli in terms of their spectrograms and their phonetic features and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was … (Jun 9, 2021)
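To make the analysis described in the abstract concrete, below is a minimal sketch of quantifying stimulus encoding in EEG with CCA using scikit-learn. The array names, shapes, and feature counts are illustrative assumptions, not the paper's actual data or pipeline, and a real analysis would evaluate the canonical correlations on held-out data rather than the training set.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical data (assumptions for illustration): T time samples of
# time-lagged stimulus features (spectrogram bands and/or binary
# phonetic-feature indicators) paired with multichannel EEG.
rng = np.random.default_rng(0)
T = 5000
stim = rng.standard_normal((T, 80))   # e.g., 16 spectral bands x 5 lags
eeg = rng.standard_normal((T, 64))    # e.g., 64-channel EEG

# Fit CCA: find paired linear projections of stimulus and EEG that are
# maximally correlated over time.
cca = CCA(n_components=5)
cca.fit(stim, eeg)
stim_c, eeg_c = cca.transform(stim, eeg)

# Encoding strength: correlation between each pair of canonical variates.
corrs = [np.corrcoef(stim_c[:, k], eeg_c[:, k])[0, 1] for k in range(5)]
print("Canonical correlations:", np.round(corrs, 3))
```

The design choice CCA captures here is symmetry: rather than predicting EEG from the stimulus (a forward model) or the stimulus from EEG (a backward model), it learns projections of both representations jointly, which is convenient when comparing encoding strength across feature spaces such as spectrograms versus phonetic features.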