Cross-modal integration of affective facial expression and vocal prosody: an EEG study

Abstract

We have all experienced how a telephone conversation can be more challenging than speaking face to face. Understanding the intended meaning of a speaker’s words requires forming an impression of the speaker’s current mental state, including their beliefs, intentions, and emotional state (Sperber & Wilson, 1995). Facial expressions are an important source of this information. In this study, we asked at what point emotional information from faces is integrated with the auditory processing stream. We hypothesized that the N400 component, which is sensitive to meaning at a variety of levels (Lau, Phillips, & Poeppel, 2008; Van Berkum, Van Den Brink, Tesink, Kos, & Hagoort, 2008), would be affected by emotionally incongruent face/voice pairs.
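To make the hypothesis concrete, below is a minimal sketch of the kind of ERP comparison it implies, written with MNE-Python. The file name, event codes, electrode choice (Cz), and the 300–500 ms N400 window are illustrative assumptions, not details taken from the study itself.

```python
# Sketch: compare mean N400-window amplitude for congruent vs. incongruent
# face/voice pairs. All names and parameters below are hypothetical.
import mne

raw = mne.io.read_raw_fif("face_voice_eeg-raw.fif", preload=True)  # hypothetical file
raw.filter(0.1, 30.0)  # typical band-pass for ERP analysis

# Hypothetical event codes: 1 = congruent pair, 2 = incongruent pair
events = mne.find_events(raw)
event_id = {"congruent": 1, "incongruent": 2}

epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Average per condition, then take the mean amplitude in an assumed
# N400 window (300-500 ms) at a centro-parietal site.
for condition in event_id:
    evoked = epochs[condition].average()
    window = evoked.copy().pick("Cz").crop(0.3, 0.5).data
    print(f"{condition}: mean 300-500 ms amplitude at Cz = {window.mean():.2e} V")
```

Under the stated hypothesis, the incongruent condition would show a larger (more negative) amplitude in this window than the congruent condition.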

Publication
Poster presented at SALC III