Book contents
- Frontmatter
- Contents
- Series editors' preface
- Preface
- 1 Introduction
- 2 Historical background
- 3 Validity
- 4 Positivistic designs
- 5 Naturalistic designs
- 6 Quantitative data gathering and analysis
- 7 Qualitative data gathering and analysis
- 8 Combining positivistic and naturalistic program evaluation
- 9 Conclusions
- References
- Author index
- Subject index
Series editors' preface
Published online by Cambridge University Press: 05 October 2012
Summary
Program evaluation is important and difficult work in any field, and language education is no exception. The goal is sometimes to evaluate a program's effectiveness in absolute terms, sometimes to assess its quality against that of comparable programs, sometimes both. In ideal circumstances, evaluations receive cooperation from all parties and provide useful information to insiders on how their work can be improved, while offering accountability to outside stakeholders, such as host institutions, governments, and financial sponsors, as well as to students.
Circumstances are often less than ideal, however. Whether insiders or outsiders themselves, evaluators may be expected to employ recognized instruments and procedures, such as standardized proficiency tests and inferential statistics, for gathering and interpreting data. At the same time, they must also adapt to unique local conditions, where, for instance, objective measures yielding quantifiable data may be unwelcome, unusable, or unavailable. Worse, some stakeholders may have incompatible goals, conflicting interests in the outcome of an evaluation, or differing views about how it should be conducted. For example, an evaluation requiring full staff cooperation may be commissioned by a host institution, such as a university, with the aim of using the results to justify an already determined policy change with significant potential fallout for program staff, including job losses. In such cases, training in conflict resolution may seem as useful as knowledge of applied linguistics, and the evaluator can easily end up taking sides or trying to play the role of mediator between warring parties.
- Type: Chapter
- Information: Language Program Evaluation: Theory and Practice, pp. ix–x
- Publisher: Cambridge University Press
- Print publication year: 1995