
Gesture or sign? A categorization problem

Published online by Cambridge University Press:  26 April 2017

Corrine Occhino
Affiliation:
Department of Linguistics, University of New Mexico, Albuquerque, NM 87131-0001. cocchino@unm.edu
Sherman Wilcox
Affiliation:
Department of Linguistics, University of New Mexico, Albuquerque, NM 87131-0001. wilcox@unm.edu http://www.unm.edu/~wilcox

Abstract

Goldin-Meadow & Brentari (G-M&B) rely on a formalist approach to language, leading them to seek objective criteria by which to distinguish language and gesture. This results in the assumption that gradient aspects of signs are gesture. Usage-based theories challenge this view, maintaining that all linguistic units exhibit gradience. Instead, we propose that the distinction between language and gesture is a categorization problem.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2017 


The relationship between signed languages and gesture poses a thorny problem. Goldin-Meadow & Brentari (G-M&B) bring to bear important contributions regarding how and what to call the “gestural” and “linguistic” pieces of this puzzle. We agree with their suggestion that speech and gesture should be considered an integrated multimodal construction. Where we disagree is with their assumptions, first that this dichotomy is itself categorical (we contend it is not), second that language (signed or spoken) is wholly categorical while gesture is wholly gradient, and third, that the (putative) gradient elements of signed languages are therefore gesture.

False dichotomies, arising from false assumptions, lead to false conclusions. The world presented by G-M&B is one of a clear dichotomy between categorical, discrete, countable, invariable, and stable on the one hand (i.e., language), and gradient, uncountable, variable, and idiosyncratic on the other (i.e., gesture).

This dichotomy is too simplistic to describe gesture. Studies of co-speech gesture have called into question the assumption that gesture is holistic. Calbris (1990), for example, showed that quotable gestures in French can be decomposed into meaningful units of handshape, location, and movement. Gesture is also clearly not wholly idiosyncratic: Núñez and Sweetser (2006) have shown that metaphorically motivated co-speech gestures referring to the past or the future have highly regular forms. The question is to what extent gestures, functioning within a multimodal system alongside speech, become entrenched within speakers and conventionalized across the speech community. As G-M&B point out, when taken out of this multimodal and multifunctional context, gestures become more language-like (Singleton et al. 1993). Thus, we have a gradient from gesture to language.

The dichotomy is also too simplistic to describe language. G-M&B cite morphology as an exemplar of discreteness. Hay and Baayen (2005), however, showed that people's behavior in experimental tasks judging morphological complexity is not categorical. They concluded that gradedness is part and parcel of grammar.

G-M&B's dichotomies are the historical remnants of structuralist/formalist approaches. These approaches assume an almost exclusive reliance on digital representations composed of discrete and listable symbols; the division of language into separate, building-block components such as phonetics, phonology, lexicon, and morphology; and a default assumption of classical categories with strict boundaries, as opposed to prototype categories with degrees of membership. The dichotomies arose because these approaches set up a distinction between an idealized mental object and actual usage, whether langue versus parole or competence versus performance. This ideal linguistic object “consists of well-defined discrete categories and categorical grammaticality criteria,” while “real language can be highly variable, gradient, and rich in continua” (Bod et al. 2003, p. 1).

Usage-based approaches to language (Bybee 2001; 2010; Langacker 2008) move beyond these dichotomies, leading to a more cognitively sound view of language and its mental representation. As Bybee (2010, p. 2) noted, “All types of units proposed by linguists show gradience, in the sense that there is a lot of variation within the domain of the unit (different types of words, morphemes, syllables) and difficulty setting the boundaries of that unit.” Langacker (2008, p. 13) concluded that the world of discrete units and sharp boundaries has been imposed on language, rather than discovered in its use.

Usage-based approaches take language in use as the source material from which language users construct grammars. Rather than assuming a priori categorical and nongradient building blocks that are rendered fuzzy and gradient when performed, usage-based approaches contend that networks with varying levels of complexity, specificity, and schematicity emerge as language users extract the commonality in multiple experiences. G-M&B point to the high variability of location, for example, in agreeing verbs, and argue that location is therefore gestural. A usage-based approach would suggest that variability of locations in verb agreement constructions leads to schematic representations in signers’ grammars. These schematic locations exist alongside more specific elements of the construction – for example, the handshape. When highly schematic elements such as location are also highly productive, as is the case for agreeing verbs, the result is high variability when these constructions are put to innovative use.

If criteria such as discreteness versus gradience cannot be used to categorize elements of use as language versus gesture, how can this determination be made? Typologists identify categories across languages in terms of shared function (Croft 2001). But identifying shared function across speech-gesture constructions and sign constructions is not easy. As G-M&B admit, researchers are still using hearing speakers’ gestures, as determined by hearing researcher judgment, as a guide. The approach is to categorize certain elements of a usage event as speech and others as gesture, then to search in signed languages for forms similar to those categorized as gesture in spoken language. The danger lies in making the unwarranted assumption that similar forms share the same function. Recent brain studies suggest the contrary. Newman et al. (2015) found that lifelong experience with a visual language alters the neural network, so that gesture is processed more like language in native signers – what is gesture for a hearing person is language for a deaf person.

Classifying a particular usage event as language or gesture is a categorization task. When making a categorization judgment, people compare a structure extracted from experience and stored in memory to a new experience. To the extent that the new experience is judged to be similar to the stored experience, it is categorized as an instance of that structure. When categorization is applied to language constructions, speakers and signers are, in effect, making grammaticality judgments.

Whether intentionally or not, assumptions have been carried forward from structuralist/formalist theories that impede our ability to understand the nature of signed and spoken language and their relation to gesture. Although G-M&B offer an excellent case that speech and gesture are inseparable parts of an integrated system, we are not convinced that the elements they classify as gesture in spoken language function as gesture in signed languages.

References

Bod, R., Hay, J. & Jannedy, S. (2003) Probabilistic linguistics. MIT Press.
Bybee, J. (2001) Phonology and language use. Cambridge University Press.
Bybee, J. (2010) Language, usage and cognition. Cambridge University Press.
Calbris, G. (1990) The semiotics of French gestures. Indiana University Press.
Croft, W. (2001) Radical construction grammar: Syntactic theory in typological perspective. Oxford University Press.
Hay, J. B. & Baayen, R. H. (2005) Shifting paradigms: Gradient structure in morphology. Trends in Cognitive Science 9(7):342–48.
Langacker, R. W. (2008) Cognitive grammar: A basic introduction. Oxford University Press.
Newman, A. J., Supalla, T., Fernandez, N., Newport, E. L. & Bavelier, D. (2015) Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture. Proceedings of the National Academy of Sciences of the United States of America 112(37):11684–89.
Núñez, R. E. & Sweetser, E. (2006) With the future behind them: Convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science 30(3):401–50.
Singleton, J. L., Morford, J. P. & Goldin-Meadow, S. (1993) Once is not enough: Standards of well-formedness in manual communication created over three different timespans. Language 69(4):683–715.