16:00 - 16:30
WfBench: Automated Generation of Scientific Workflow Benchmarks

Tainã Coleman, Loïc Pottier
University of Southern California, CA

Henri Casanova
University of Hawaiʻi at Mānoa, HI

Ketan Maheshwari, Sean Wilkinson, Frédéric Suter, Mallikarjun Shankar, Rafael Ferreira da Silva
Oak Ridge National Laboratory, TN

Justin Wozniak
Argonne National Laboratory, IL

The prevalence of scientific workflows with high computational demands calls for their execution on various distributed computing platforms, including large-scale leadership-class high-performance computing (HPC) clusters. To handle the deployment, monitoring, and optimization of workflow executions, many workflow systems have been developed over the past decade. Consequently, there is a need for workflow benchmarks that can evaluate the performance of these systems on current and future software stacks and hardware platforms.

We present a generator of realistic workflow benchmark specifications that can be translated into benchmark code to be executed with current workflow systems. Our approach generates workflow tasks with arbitrary performance characteristics (CPU, memory, and I/O usage) and with realistic task dependency structures based on those seen in production workflows. Experimental results show that the generated benchmarks are representative of production workflows, and a case study demonstrates their usefulness for evaluating the performance of workflow systems under different configuration scenarios.
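To illustrate the idea of a benchmark specification as described above, the sketch below generates a toy workflow: a random directed acyclic graph whose tasks carry synthetic CPU, memory, and I/O demands. This is a minimal, hypothetical stand-in written for this summary; it does not reproduce WfBench's actual API, schema, or dependency-structure models.

```python
import json
import random

def generate_benchmark_spec(num_tasks=8, edge_prob=0.3, seed=42):
    """Generate a toy workflow benchmark spec: a random DAG whose
    tasks carry synthetic CPU, memory, and I/O demands.
    (Hypothetical illustration, not the WfBench generator itself.)"""
    rng = random.Random(seed)
    tasks = []
    for i in range(num_tasks):
        # Edges only run from lower- to higher-numbered tasks,
        # so the dependency graph is acyclic by construction.
        parents = [f"task_{j}" for j in range(i) if rng.random() < edge_prob]
        tasks.append({
            "name": f"task_{i}",
            "parents": parents,
            "cpu_work": rng.randint(10**6, 10**8),    # abstract work units
            "memory_mb": rng.choice([256, 512, 1024]),
            "io_read_mb": round(rng.uniform(1.0, 100.0), 1),
            "io_write_mb": round(rng.uniform(1.0, 100.0), 1),
        })
    return {"name": "synthetic-benchmark", "tasks": tasks}

spec = generate_benchmark_spec()
print(json.dumps(spec, indent=2))
```

A real generator would additionally sample the dependency structure from patterns observed in production workflows (fan-out/fan-in stages, pipelines) rather than from uniform random edges, which is where the representativeness claimed above comes from.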