Book contents
- Frontmatter
- Contents
- Figures
- Contributors
- Acknowledgments
- 1 On representing events – an introduction
- 2 Event representation in serial verb constructions
- 3 The macro-event property
- 4 Event representation, time event relations, and clause structure
- 5 Event representations in signed languages
- 6 Linguistic and non-linguistic categorization of complex motion events
- 7 Putting things in places
- 8 Language-specific encoding of placement events in gestures
- 9 Visual encoding of coherent and non-coherent scenes
- 10 Talking about events
- 11 Absent causes, present effects
- References
- Index
9 - Visual encoding of coherent and non-coherent scenes
Published online by Cambridge University Press: 01 March 2011
Summary
Introduction
Perceiving and talking about events taking place in the world around us is an essential part of our everyday life and crucial for social interaction with other human beings. Visual perception and language production are both involved in this complex cognitive behavior, and each has been investigated individually in numerous empirical studies. Extensive models have been provided for both domains (see Hoffmann 2000; Levelt 1989, for overviews). But an integrative approach to the interface between vision and speaking, to "seeing for speaking," is still lacking. Psycholinguists have only recently begun to investigate experimentally how visual encoding and linguistic encoding interact when we describe events and their protagonists or participants (see Henderson and Ferreira 2004b). These studies have answered some questions but raised many more, both general and specific:
- How does visual encoding of events evolve; how detailed are the representations of the visual world generated at various points during visual encoding?
- How is visual encoding linked to stages of linguistic encoding for speaking?
- Is the visual encoding of an event influenced by the linguistic task that subjects have to perform in experiments (e.g., describing scenes with full sentences vs. naming individual scene actors)?
- Is visual encoding influenced by the type of stimulus – in particular, are there differences between line drawings and naturalistic stimuli?
- Does the encoding of (parts of) coherent scenes differ from the encoding of (parts of) scenes in which objects, animals, or people do not interact in ways that could be straightforwardly interpreted as meaningful, coherent action?
In: Event Representation in Language and Cognition, pp. 189–215. Publisher: Cambridge University Press. Print publication year: 2010.