Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- I Introduction to Queueing
- II Necessary Probability Background
- III The Predictive Power of Simple Operational Laws: “What-If” Questions and Answers
- IV From Markov Chains to Simple Queues
- V Server Farms and Networks: Multi-server, Multi-queue Systems
- 14 Server Farms: M/M/k and M/M/k/k
- 15 Capacity Provisioning for Server Farms
- 16 Time-Reversibility and Burke's Theorem
- 17 Networks of Queues and Jackson Product Form
- 18 Classed Network of Queues
- 19 Closed Networks of Queues
- VI Real-World Workloads: High Variability and Heavy Tails
- VII Smart Scheduling in the M/G/1
- Bibliography
- Index
14 - Server Farms: M/M/k and M/M/k/k
from V - Server Farms and Networks: Multi-server, Multi-queue Systems
Published online by Cambridge University Press: 05 February 2013
Summary
In today's high-volume world, almost no websites, compute centers, or call centers consist of just a single server. Instead, a “server farm” is used. A server farm is a collection of servers that work together to handle incoming requests. Each request might be routed to a different server, so that servers “share” the incoming load. From a practical perspective, server farms are often preferable to a single “super-fast” server because of their low cost (many slow servers are cheaper than a single fast one) and their flexibility (it is easy to increase or decrease capacity as needed by adding or removing servers). These practical features have made server farms ubiquitous.
In this chapter, we study server farms where there is a single queue of requests and where each server, when free, takes the next request off the queue to work on. Specifically, there are no queues at the individual servers. We defer discussion of models with queues at the individual servers to the exercises and later chapters.
The two systems we consider in this chapter are the M/M/k system and the M/M/k/k system. In both, the first “M” indicates that we have memoryless interarrival times, and the second “M” indicates memoryless service times. The third field denotes that k servers share a common pool of arriving jobs. For the M/M/k system, there is no capacity constraint, and this common pool takes the form of an unbounded FCFS queue, as shown later in Figure 14.3, where each server, when free, grabs the job at the head of the queue to work on.
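The key performance quantities for these two systems are classical: for the M/M/k, the Erlang-C formula gives the probability that an arriving job finds all k servers busy and must queue; for the M/M/k/k, the Erlang-B formula gives the probability that an arrival finds all k servers busy and is blocked (lost). As a sketch of where the chapter is headed, the snippet below computes both via the standard numerically stable recursion for Erlang-B. The function names `erlang_b` and `erlang_c` are my own labels, not the book's notation.

```python
def erlang_b(k: int, R: float) -> float:
    """Blocking probability for an M/M/k/k system.

    R = lambda/mu is the offered load. Uses the standard recursion
    B(0) = 1,  B(i) = R*B(i-1) / (i + R*B(i-1)),
    which avoids the overflow-prone factorials in the closed form.
    """
    b = 1.0
    for i in range(1, k + 1):
        b = R * b / (i + R * b)
    return b


def erlang_c(k: int, R: float) -> float:
    """Probability that an arrival must queue in an M/M/k system.

    Requires R < k (otherwise the unbounded queue grows without limit).
    Uses the standard identity relating Erlang-C to Erlang-B:
        C = k*B / (k - R*(1 - B)).
    """
    if R >= k:
        raise ValueError("M/M/k is unstable unless lambda/mu < k")
    b = erlang_b(k, R)
    return k * b / (k - R * (1.0 - b))
```

As a sanity check, with k = 1 the M/M/k reduces to the M/M/1 queue, where the probability that an arrival must wait is just the utilization rho, so `erlang_c(1, 0.5)` returns 0.5.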
- Performance Modeling and Design of Computer Systems: Queueing Theory in Action, pp. 253–268. Publisher: Cambridge University Press. Print publication year: 2013.