3 - Composition: designing for needs
Published online by Cambridge University Press: 05 April 2022
Summary
• Scaling and scope: the differences between small ‘n’ and large ‘n’ evaluation
• Deciding on ‘core’ information needs and sourcing available secondary evidence
• Identifying evidence gaps and using primary evidence collection to fill these
• Using hybrid evaluation designs to build robustness and triangulate sources
• Proportionality and cost-effectiveness in evaluation designs
• Taking early account of data protection and data security needs
• Anticipating the analysis requirements and implications.
Introduction
With the foundations set, attention turns from compilation to composition. This is where evaluators start to get to grips with the wider design issues that come ahead of the specific choices to be made about methodology. Attention to detail at this composition stage is important and makes the difference between underpinning and undermining subsequent choices on what evidence to collect, where and how to collect it, and how to interpret it – issues returned to separately for process, economic and impact evaluation in Chapters 4, 5 and 6.
Scaling and scope
The starting point in composition is to consider the likely scale of the evaluation, building on judgements already made in compilation (Chapter 2). Scale is much more than an issue of ‘how big for the budget?’, and, ideally, decisions on scale should lead decisions on budget and not vice versa. In tackling scale, evaluators make a distinction between:
• small ‘n’ evaluation
• large ‘n’ evaluation
Here, ‘small’ and ‘large’ are relative to the nature of the intervention and its maturity, and ‘n’ relates to the overall size and scope of participation in what is being evaluated. A small ‘n’ evaluation, for example, might look at a trial of a new online application or assessment process for a particular beneficiary group. The trial might be carried out over a short period, perhaps in one city and with only a handful of applicants, with the evaluation aimed at providing early evidence of how well the new online system is working compared with an existing advisor- and office-based system. As with most small ‘n’ evaluations, this might be expected to produce findings quite speedily, looking at individuals’ experiences in some depth, perhaps by covering all or most of the few trial participants. The same intervention may later need a large ‘n’ evaluation once lessons have been learned from the trial, changes have been made and the new process has been rolled out nationally to replace or run alongside the ‘old’ system.
Demystifying Evaluation: Practical Approaches for Researchers and Users, pp. 43–62. Publisher: Bristol University Press. Print publication year: 2017.