Preattentive processing of audio-visual emotional signals.

Föcker, J., Gondan, M. and Röder, B. (2011) Preattentive processing of audio-visual emotional signals. Acta Psychologica, 137 (1). pp. 36-47. ISSN 0001-6918

Full text not available from this repository.

Item Type: Article
Item Status: Live Archive


Previous research has shown that redundant emotional information in faces and voices leads to faster emotional categorization than incongruent emotional information, even when only one modality is attended. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than to interference at earlier, e.g. perceptual, processing stages. In Experiment 1, participants categorized the valence and rated the intensity of happy, sad, angry and neutral unimodal or bimodal face-voice stimuli. They were asked to rate either the facial or the vocal expression and to ignore the emotion expressed in the other modality. Participants responded faster and more accurately to emotionally congruent than to incongruent face-voice pairs in both the Attend Face and the Attend Voice conditions. Moreover, when attending to faces, emotionally congruent bimodal stimuli were processed more efficiently than unimodal visual stimuli. To examine the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects remained significant even in the absence of response conflicts. These results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

Additional Information: The final version of this article is available online at
Keywords: multisensory, faces, voices
Subjects: C Biological Sciences > C800 Psychology
Divisions: College of Social Science > School of Psychology
ID Code: 32881
Deposited On: 09 Aug 2018 15:14
