
User-guided system development in Interactive Spoken Language Education

Published online by Cambridge University Press:  26 March 2001

ERIC ATWELL
Affiliation:
University of Leeds, Yorkshire, UK; e-mail: e.s.atwell@leeds.ac.uk, p.a.howarth@leeds.ac.uk, d.c.souter@leeds.ac.uk
PETER HOWARTH
Affiliation:
University of Leeds, Yorkshire, UK; e-mail: e.s.atwell@leeds.ac.uk, p.a.howarth@leeds.ac.uk, d.c.souter@leeds.ac.uk
CLIVE SOUTER
Affiliation:
University of Leeds, Yorkshire, UK; e-mail: e.s.atwell@leeds.ac.uk, p.a.howarth@leeds.ac.uk, d.c.souter@leeds.ac.uk
PATRIZIO BALDO
Affiliation:
DIDA*EL S.r.l., Milan, Italy; e-mail: baldo@didael.it
ROBERTO BISIANI
Affiliation:
Università di Milano Bicocca, Italy; e-mail: bisiani@disco.unimib.it, dario.pezzotta@disco.unimib.it
DARIO PEZZOTTA
Affiliation:
Università di Milano Bicocca, Italy; e-mail: bisiani@disco.unimib.it, dario.pezzotta@disco.unimib.it
PATRIZIA BONAVENTURA
Affiliation:
Universitaet Hamburg, Germany; e-mail: pbonaven@informatik.uni-hamburg.de, menzel@informatik.uni-hamburg.de
WOLFGANG MENZEL
Affiliation:
Universitaet Hamburg, Germany; e-mail: pbonaven@informatik.uni-hamburg.de, menzel@informatik.uni-hamburg.de
DANIEL HERRON
Affiliation:
Microsoft, Cambridge, UK; e-mail: dherron@microsoft.com
RACHEL MORTON
Affiliation:
Entropic Cambridge Research Laboratory Ltd, Cambridge, UK; e-mail: rim@entropic.co.uk
JUERGEN SCHMIDT
Affiliation:
Ernst Klett Verlag, Stuttgart, Germany; e-mail: j.a.schmidt@klett-mail.de

Abstract

This paper is a case study of user involvement in the requirements specification for project ISLE: Interactive Spoken Language Education. Developers of Spoken Language Dialogue Systems (SLDS) should involve users from the outset, particularly if the aim is to develop novel solutions for a generic target application area or market. As well as target end-users, SLDS developers should identify and consult ‘meta-level’ domain experts with expertise in human-to-human dialogue in the target domain. In our case, English language teachers and publishers provided generic knowledge of learners' dialogue preferences; other applications have analogous domain language experts. These domain language experts can help to pin down a domain-specific sublanguage which fits the constraints of current speech recognition technology: linguistically naive end-users may expect unconstrained conversational English, but in practice, dialogue interactions have to be constrained in vocabulary and syntax. User consultation also highlighted a need to consider how to integrate speech input and output with other modes of interaction and processing; in our case the input speech signal is processed by a speech recogniser and by stress and mispronunciation detectors, and the output responses are text and graphics as well as speech. This suggests a need to revisit the definition of ‘dialogue’: other SLDS developers should also consider the merits of multimodality as an adjunct to pure spoken language dialogue, particularly given that current systems are not capable of accurately handling unconstrained English.
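To make the pipeline described above concrete, the following is a minimal Python sketch of how a multimodal turn might be orchestrated: speech input passed through a recogniser constrained to a domain-specific sublanguage and through a simple mispronunciation check, with feedback returned as text, graphics and speech. All class and function names here are illustrative assumptions for exposition; this is not the ISLE implementation, which wraps real speech recognition and prosodic analysis components.

```python
# Hypothetical sketch of a constrained, multimodal SLDS turn (not ISLE code).
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Feedback:
    """Multimodal response: text and graphics as well as speech."""
    text: str
    graphics: list[str] = field(default_factory=list)  # e.g. highlighted words, stress plots
    speech: str | None = None                          # prompt to pass to a synthesiser


class ConstrainedRecogniser:
    """Stands in for a recogniser restricted to a domain-specific vocabulary."""

    def __init__(self, vocabulary: set[str]):
        self.vocabulary = vocabulary

    def recognise(self, audio_tokens: list[str]) -> list[str]:
        # Words outside the constrained sublanguage are flagged as out-of-vocabulary.
        return [w if w in self.vocabulary else "<OOV>" for w in audio_tokens]


def detect_mispronunciation(words: list[str], expected: list[str]) -> list[int]:
    """Return the positions where the learner's utterance diverges from the prompt."""
    return [i for i, (w, e) in enumerate(zip(words, expected)) if w != e]


def run_turn(audio_tokens: list[str], expected: list[str],
             recogniser: ConstrainedRecogniser) -> Feedback:
    words = recogniser.recognise(audio_tokens)
    errors = detect_mispronunciation(words, expected)
    if not errors:
        return Feedback(text="Well done!", speech="Well done!")
    return Feedback(
        text=f"Check the words at positions {errors}.",
        graphics=[f"highlight:{expected[i]}" for i in errors],
        speech="Listen again and repeat.",
    )


if __name__ == "__main__":
    recogniser = ConstrainedRecogniser({"i", "would", "like", "a", "ticket", "to", "london"})
    prompt = ["i", "would", "like", "a", "ticket", "to", "london"]
    learner = ["i", "would", "like", "a", "tiket", "to", "london"]  # simulated recogniser input
    print(run_turn(learner, prompt, recogniser))
```

The design choice the sketch illustrates is the one the abstract argues for: rather than attempting unconstrained conversational English, the recogniser operates over a restricted vocabulary agreed with domain language experts, and the system's corrective feedback is delivered through several coordinated modalities rather than speech alone.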

Type
Research Article
Copyright
2000 Cambridge University Press
