Book contents
- Frontmatter
- Dedication
- Contents
- PREFACE
- 1 Introduction
- 2 Workload Data
- 3 Statistical Distributions
- 4 Fitting Distributions to Data
- 5 Heavy Tails
- 6 Correlations in Workloads
- 7 Self-Similarity and Long-Range Dependence
- 8 Hierarchical Generative Models
- 9 Case Studies
- 10 Summary and Outlook
- Appendix Data Sources
- Bibliography
- Index
1 - Introduction
Published online by Cambridge University Press: 05 March 2015
Summary
Performance evaluation is a basic element of experimental computer science. It is used to compare design alternatives when building new systems, to tune parameter values of existing systems, and to assess capacity requirements when setting up systems for production use. Lack of adequate performance evaluation can lead to bad decisions, which result either in an inability to accomplish mission objectives or an inefficient use of resources. A good evaluation study, in contrast, can be instrumental in the design and realization of an efficient and useful system.
There are three main factors that affect the performance of a computer system:
- The system's design.
- The system's implementation.
- The workload to which the system is subjected.
The first two factors are typically covered in some depth in vocational training and academic computer science curricula. Courses on data structures and algorithms provide the theoretical background for a solid design, and courses on computer architecture and operating systems provide case studies and examples of successful designs. Courses on performance-oriented programming and on object-oriented design, as well as programming labs, provide the working knowledge required to create and evaluate implementations. But there is typically little or no coverage of performance evaluation methodology in general, and of workload modeling in particular.
Regrettably, performance evaluation is similar to many other endeavors in that it follows the GIGO principle: garbage-in-garbage-out. Evaluating a system with the wrong workloads will most probably lead to irrelevant results, which cannot be relied on. This motivates the quest for the “correct” workload model [716, 256, 653, 19, 731, 103, 235, 635]. It is the goal of this book to help propagate the knowledge and experience that have accumulated in the research community regarding workload modeling and to make it accessible to practitioners of performance evaluation.
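The sensitivity of evaluation results to the choice of workload can be illustrated with a small simulation sketch (not from the text; all parameters here are illustrative assumptions). A single-server FCFS queue is driven by two synthetic workloads with the *same* mean service demand and the same arrival process, differing only in the service-time distribution: exponential versus heavy-tailed Pareto. The measured mean waiting times can differ substantially, so a study that used only one of these workloads would reach different conclusions than one using the other.

```python
import random

def simulate_fcfs(service_times, mean_interarrival=1.0, seed=1):
    """Simulate a single-server FCFS queue with Poisson arrivals;
    return the mean waiting time over all jobs."""
    rng = random.Random(seed)
    t = 0.0            # arrival time of the current job
    server_free = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for s in service_times:
        t += rng.expovariate(1.0 / mean_interarrival)
        wait = max(0.0, server_free - t)   # time spent queued
        total_wait += wait
        server_free = max(server_free, t) + s
    return total_wait / len(service_times)

rng = random.Random(42)
n = 100_000

# Workload A: exponential service times with mean 0.5.
exp_jobs = [rng.expovariate(2.0) for _ in range(n)]

# Workload B: Pareto service times with the SAME mean 0.5
# (shape alpha=2.2, scale xm = 0.5*(alpha-1)/alpha), sampled by
# inverse-transform: xm / U^(1/alpha) for uniform U in (0,1).
alpha = 2.2
xm = 0.5 * (alpha - 1.0) / alpha
par_jobs = [xm / (1.0 - rng.random()) ** (1.0 / alpha) for _ in range(n)]

print("exponential workload, mean wait:", simulate_fcfs(exp_jobs))
print("heavy-tailed workload, mean wait:", simulate_fcfs(par_jobs))
```

Both workloads give the server a utilization of about 0.5, yet the heavy-tailed one typically produces a markedly larger mean wait, because queueing delay depends on the second moment of the service times, not just their mean. This is exactly the kind of distributional detail that workload modeling (Chapters 3 and 5) is concerned with.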
To read more: Although performance evaluation in general and workload modeling in particular are typically not given much consideration in vocational and academic curricula, there has nevertheless been much research activity in this area.
- Publisher: Cambridge University Press
- Print publication year: 2015