Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- Preface
- 1 Concurrent Processes
- 2 Basic Models of Parallel Computation
- 3 Elementary Parallel Algorithms
- 4 Designing Parallel Algorithms
- 5 Architectures of Parallel Computers
- 6 Message-passing Programming
- 7 Shared-memory Programming
- Solutions to Selected Exercises
- Glossary
- References
- Index
Preface
Published online by Cambridge University Press: 06 January 2017
Summary
Solving contemporary scientific and technological problems requires computers with a high speed of computation. Over the last 60 years this speed has increased 16 trillion (10¹²) times. In the 1950s the computation speed of the Univac 1 computer was about 1 kflop/s (a flop denotes a floating-point operation), and in 2015 China's supercomputer Tianhe-2 (Milky Way-2), containing 3 120 000 cores working in parallel, achieved a computation speed of more than 33 Pflop/s (a petaflop is one quadrillion, 10¹⁵, floating-point operations). Despite this significant increase in computational capability, researchers still simplify the models of the problems they study because numerical simulation would otherwise take too long. The demand for ever more computing power keeps growing, and this trend is expected to continue. There are several reasons for it. Models of the investigated phenomena and processes have become more complex, and larger amounts of data are being processed. The requirements on the accuracy of results are also rising, which entails a higher resolution of the models being developed. The fields in which large computing power has led to significant results include aeronautics, astrophysics, bioinformatics, chemistry, economics and trade, energy, geology and geophysics, materials science, climatology, cosmology, medicine, meteorology, nanotechnology, defense, and advanced engineering.
For example, the U.S. National Aeronautics and Space Administration (NASA) has investigated simulation problems related to space shuttle research missions [50]. An SGI Altix parallel computer with 10 240 processors, consisting of 20 nodes each holding 512 processors, installed at the NASA Ames Research Center, made it possible to simulate the pressure distribution around a space shuttle during flight. The computational fluid dynamics package used for this purpose served as a tool for designing the geometry of space shuttle components, that is, the launch vehicles and orbital units. Another group of problems tackled at NASA research centers concerned jet propulsion units. One of the tasks was to simulate the flow of liquid fuel supplied to a space shuttle main engine by a turbopump ([31], sect. 2.4).
In order to improve aircraft performance and safety, NASA conducts research into new aircraft technologies. One of the objectives is the accurate prediction of the aerodynamic and structural performance of rotorcraft designed for civil and military applications.
Introduction to Parallel Computing, pp. xxi-xxviii. Publisher: Cambridge University Press. Print publication year: 2017.