8 - Syntax
from Part IV - Graph-Based Natural Language Processing
Published online by Cambridge University Press: 01 June 2011
Summary
This chapter covers the use of graph-theoretical algorithms for tasks in syntax, including part-of-speech tagging using graphs that encode word and tag dependencies; dependency parsing using minimum spanning trees; prepositional-phrase attachment using word-dependency distributions induced from random-walk models; and co-reference resolution using graph-clustering and min-cut algorithms.
Part-of-Speech Tagging
Part-of-speech tagging is the task of automatically assigning a part of speech to each word in a text. For instance, given the sentence "This is a book," a part-of-speech tagger identifies "this" as a pronoun, "is" as a verb, "a" as a determiner, and "book" as a noun. Part-of-speech tagging is required by almost every text-processing task, including word-sense disambiguation, parsing, and semantic analysis. Because tagging is one of the first processing steps in such applications, its accuracy directly affects the accuracy of all subsequent steps.
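The task can be illustrated with a minimal lookup-based tagger for the example sentence above. This is only a toy sketch with an invented word-to-tag lexicon; real taggers disambiguate words using their surrounding context rather than a fixed map.

```python
# Toy lexicon mapping each word of the example to a single tag.
# Assumed for illustration; real words are tag-ambiguous ("book"
# can also be a verb, as in "book a flight").
LEXICON = {"this": "PRON", "is": "VERB", "a": "DET", "book": "NOUN"}

def tag(sentence):
    # Assign each lowercased token its lexicon tag, "UNK" if unseen.
    return [(w, LEXICON.get(w.lower(), "UNK")) for w in sentence.split()]

print(tag("This is a book"))
# [('This', 'PRON'), ('is', 'VERB'), ('a', 'DET'), ('book', 'NOUN')]
```

A context-sensitive tagger replaces the dictionary lookup with a decision that also inspects neighboring words, which is precisely what the machine-learning and graph-based approaches discussed next provide.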
Although most part-of-speech taggers developed to date rely on machine-learning algorithms with features drawn from the context surrounding an ambiguous word, alternative approaches also exist, such as unsupervised tagging based on clustering algorithms. In particular, Biemann's part-of-speech tagger (2006c) is based on the idea of word co-occurrence. First, a bipartite graph of words that appear next to one another is built; then the second power of that graph's connectivity matrix is computed, thereby connecting words that appear in the same contexts. Hence, words like red, green, and blue fall into the same cluster because they are distributionally similar (Lee 1997).
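The graph construction behind this idea can be sketched as follows. The corpus and NumPy-based computation here are assumptions for illustration, not Biemann's implementation: we build a word-adjacency (co-occurrence) graph, then square its connectivity matrix so that words sharing neighbors become connected even when they never co-occur directly.

```python
import numpy as np

# Tiny illustrative corpus (assumed example, not from the chapter).
corpus = [
    "the red car", "the green car", "the blue car",
    "the red house", "the green house", "the blue house",
]

# Index the vocabulary.
words = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(words)}
n = len(words)

# Adjacency matrix A of the graph of words appearing next to one another.
A = np.zeros((n, n), dtype=int)
for s in corpus:
    toks = s.split()
    for a, b in zip(toks, toks[1:]):
        A[idx[a], idx[b]] += 1
        A[idx[b], idx[a]] += 1

# Second power of the connectivity matrix: entry (i, j) counts
# two-step paths between words i and j, i.e. shared neighbors,
# linking words that occur in the same contexts.
A2 = A @ A

# "red", "green", and "blue" all precede "car"/"house", so they are
# strongly connected in A2 although they never co-occur directly.
print(A2[idx["red"], idx["green"]])   # positive
print(A[idx["red"], idx["green"]])    # 0: no direct co-occurrence
```

Clustering the rows of the squared matrix (e.g., with any standard graph-clustering algorithm) then groups distributionally similar words such as the color terms above.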
Publisher: Cambridge University Press. Print publication year: 2011.