Tuning high-performance scientific codes: The use of performance models to control resource usage during data migration and I/O

J. Lee*, M. Winslett, X. Ma, S. Yu

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

5 Citations (Scopus)

Abstract

Large-scale parallel simulations are a popular tool for investigating phenomena ranging from nuclear explosions to protein folding. These codes produce copious output that must be moved to the workstation where it will be visualized. Scientists have a variety of tools to help them with this data movement, and often have several different platforms available to them for their runs. Thus questions arise such as, which data migration approach is best for a particular code and platform? Which will provide the best end-to-end response time, or lowest cost? Scientists also control how much data is output, and how often. From a scientific perspective, the more output the better; but from a cost and response time perspective, how much output is too much? To answer these questions, we built performance models for data migration approaches and verified them on parallel and sequential platforms. We use a 3D hydrodynamics code to show how scientists can use the models to predict performance and tune the I/O aspects of their codes.
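The abstract's core idea — using an analytical performance model to compare data-migration strategies before committing to a run — can be illustrated with a toy model. The sketch below is not the paper's actual model; the parameters (per-snapshot compute, write, and network-transfer times) and the two strategies (migrate everything after the run vs. overlap each snapshot's transfer with the next compute phase) are simplified assumptions for illustration only.

```python
# Toy end-to-end response-time model for two data-migration strategies.
# All parameters are hypothetical; times are in seconds per snapshot.

def sequential_migration(compute_s, write_s, transfer_s, snapshots):
    """Write every snapshot locally during the run, then migrate all
    output to the visualization workstation after the run finishes."""
    run_time = snapshots * (compute_s + write_s)
    migration_time = snapshots * transfer_s
    return run_time + migration_time

def overlapped_migration(compute_s, write_s, transfer_s, snapshots):
    """Migrate each snapshot over the network while the next timestep
    computes, hiding transfer cost behind computation where possible."""
    total = compute_s + write_s            # first step: nothing to overlap
    for _ in range(snapshots - 1):
        # transfer of the previous snapshot overlaps this step's compute
        total += max(compute_s, transfer_s) + write_s
    total += transfer_s                    # last transfer cannot be hidden
    return total

# Example: 4 snapshots, 10 s compute, 2 s write, 5 s transfer each.
seq = sequential_migration(10, 2, 5, 4)    # 68 s
ovl = overlapped_migration(10, 2, 5, 4)    # 53 s: transfers mostly hidden
```

A model like this lets a scientist answer the abstract's questions quantitatively: for a given platform's compute/transfer ratio and a given output frequency, which strategy minimizes end-to-end response time, and at what output volume does I/O start to dominate.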

Original language: English
Pages: 181-195
Number of pages: 15
Publication status: Published - 2001
Externally published: Yes
Event: 2001 International Conference on Supercomputing - Sorrento, Italy
Duration: 17 Jun 2001 → 21 Jun 2001

Conference

Conference: 2001 International Conference on Supercomputing
Country/Territory: Italy
City: Sorrento
Period: 17/06/01 → 21/06/01

