Recovering transient data: Automated on-demand data reconstruction and offloading for supercomputers

Sudharshan Vazhkudai*, Xiaosong Ma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

It has become a national priority to build and use PetaFlop supercomputers. The dependability of such large systems has been recognized as a key issue that can impact their usability. Even with smaller, existing machines, failures are the norm rather than the exception. Research has shown that storage systems are the primary source of faults leading to supercomputer unavailability. In this paper, we envision two mechanisms, namely on-demand data reconstruction and eager data offloading, to address the availability of job input/output data. These two techniques aim to allow parallel jobs and post-job processing tools to continue execution despite storage system failures in supercomputers. Fundamental to both approaches is the definition and acquisition of recovery-related parallel file system metadata, which is then coupled with transparent remote data accesses. Our approach attempts to maximize the utilization of precious supercomputer resources by improving the accessibility of transient job data. Further, the proposed methods are best-effort in nature and complement existing file system recovery schemes, which are designed for persistent data. Several of our previous studies demonstrate the feasibility of the proposed approaches.
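To make the coupling of recovery metadata with transparent remote access more concrete, the following is a minimal sketch in Python, not the paper's implementation. All names (RecoveryMetadata, read_with_reconstruction, the metadata fields) are hypothetical, and it assumes recovery metadata records, for each staged job input file, the remote source it came from, so a failed local read can be satisfied by re-fetching only the affected byte range.

```python
# Hypothetical sketch of on-demand data reconstruction for a staged job input file.
# Assumption: recovery metadata, captured at job-submission time, maps the local
# file to a remote source URI so that lost stripes can be re-fetched best-effort.

import urllib.request
from dataclasses import dataclass


@dataclass
class RecoveryMetadata:
    """Recovery-related metadata (assumed fields, for illustration only)."""
    local_path: str   # staged copy on the supercomputer's parallel file system
    remote_uri: str   # original data source, e.g., an archive or the user's home site


def read_with_reconstruction(meta: RecoveryMetadata, offset: int, length: int) -> bytes:
    """Try the local parallel file system first; on an I/O error or short read,
    transparently re-fetch the requested byte range from the remote source."""
    try:
        with open(meta.local_path, "rb") as f:
            f.seek(offset)
            data = f.read(length)
        if len(data) == length:
            return data
    except OSError:
        pass  # e.g., a storage-server failure made the local stripe unreadable

    # Best-effort reconstruction via an HTTP range request against the remote copy.
    req = urllib.request.Request(
        meta.remote_uri,
        headers={"Range": f"bytes={offset}-{offset + length - 1}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

In this sketch the running job never sees the storage failure: the read call returns the same bytes, only sourced remotely, which mirrors the paper's goal of letting jobs and post-job tools continue despite storage faults.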

Original language: English
Pages (from-to): 14-18
Number of pages: 5
Journal: Operating Systems Review (ACM)
Volume: 41
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2007
Externally published: Yes

Keywords

  • Data reconstruction
  • File system recovery
  • Supercomputer availability
