2 - Large-Scale File Systems and Map-Reduce
Summary
Modern Internet applications have created a need to manage immense amounts of data quickly. In many of these applications, the data is extremely regular, and there is ample opportunity to exploit parallelism. Important examples are:
(1) The ranking of Web pages by importance, which involves an iterated matrix-vector multiplication where the dimension is in the tens of billions, and
(2) Searches in “friends” networks at social-networking sites, which involve graphs with hundreds of millions of nodes and many billions of edges.
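The first example above, ranking by iterated matrix-vector multiplication, can be sketched in a few lines. This is a minimal single-machine illustration, not the book's method: `M` is a column-stochastic link matrix, and repeatedly multiplying a start vector by `M` converges to its principal eigenvector, the ranking. The chapter's point is that the same computation at Web scale, with dimension in the tens of billions, forces the distributed machinery discussed below.

```python
import numpy as np

def iterate(M, v, steps=50):
    # Repeated matrix-vector multiplication (power iteration).
    for _ in range(steps):
        v = M @ v
    return v

# A toy 3-page "web": entry M[i, j] is the probability of moving
# from page j to page i, so each column sums to 1.
M = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
v = np.ones(3) / 3          # start from the uniform distribution
rank = iterate(M, v)        # converges to [4/9, 2/9, 1/3]
```

At real scale the matrix is sparse and far too large for one machine, so each multiplication step is itself expressed as a distributed computation of the kind this chapter develops.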
To deal with applications such as these, a new software stack has developed. It begins with a new form of file system, which features much larger units than the disk blocks in a conventional operating system and also provides replication of data to protect against the frequent media failures that occur when data is distributed over thousands of disks.
On top of these file systems, higher-level programming systems are developing. Central to many of these is a programming system called map-reduce. Implementations of map-reduce enable many of the most common calculations on large-scale data to be performed on large collections of computers, efficiently and in a way that is tolerant of hardware failures during the computation.
Map-reduce systems are evolving and extending rapidly. We include in this chapter a discussion of generalizations of map-reduce, first to acyclic workflows and then to recursive algorithms. We conclude with a discussion of communication cost and what it tells us about the most efficient algorithms in this modern computing environment.
Chapter in *Mining of Massive Datasets*, pp. 18-52. Publisher: Cambridge University Press. Print publication year: 2011.