Book contents
- Frontmatter
- Contents
- List of figures
- List of tables
- List of contributors
- Acknowledgements
- 1 Why different, why the same? Explaining effects and non-effects of modality upon linguistic structure in sign and speech
- Part I Phonological structure in signed languages
- Part II Gesture and iconicity in sign and speech
- Part III Syntax in sign: Few or no effects of modality
- Part IV Using space and describing space: Pronouns, classifiers, and verb agreement
- 13 Pronominal reference in signed and spoken language: Are grammatical categories modality-dependent?
- 14 Is verb agreement the same crossmodally?
- 15 The effects of modality on spatial language: How signers and speakers talk about space
- 16 The effects of modality on BSL development in an exceptional learner
- 17 Deictic points in the visual–gestural and tactile–gestural modalities
- Index
- References
17 - Deictic points in the visual–gestural and tactile–gestural modalities
Published online by Cambridge University Press: 22 September 2009
Summary
Introduction
A Deaf-Blind person has only one channel through which conventional language can be communicated, and that channel is touch. Thus, if a Deaf-Blind person uses signed language for communication, he must place his hands on top of the signer's hands and follow them as they form various handshapes and move through the signing space. A sign language such as American Sign Language (ASL), generally perceived through vision, must in this case be perceived through touch.
Given that contact between the signer's hands and the receiver's hands is necessary for the Deaf-Blind person to perceive a signed language, we may wonder about the absence of the nonmanual signals of visual–gestural language (e.g. eyebrow shifts, head orientation, eye gaze). These elements play a significant role in the grammar of signed languages, often licensing particular word orders and syntactic structures. One of the central questions motivating this study was how the absence of such nonmanual elements might influence the form that tactile–gestural language takes.
Thus, this study began as an effort to describe the signed language production of Deaf-Blind individuals, with a focus on areas where nonmanual signals would normally be used in visual–gestural language. A review of the narrative data, however, quickly made it evident that the Deaf-Blind subjects did not use nonmanual signals in their signed language production.
Type: Chapter
Information: Modality and Structure in Signed and Spoken Languages, pp. 442–468. Publisher: Cambridge University Press. Print publication year: 2002.