Software

The Scalable Heterogeneous Computing Benchmark Suite (SHOC)

The Scalable Heterogeneous Computing Benchmark Suite (SHOC) is a collection of benchmark programs testing the performance and stability of systems that use computing devices with non-traditional architectures for general-purpose computing, as well as the software used to program them. Its initial focus is on systems containing Graphics Processing Units (GPUs) and multi-core processors, and on the OpenCL programming standard. It can be used on clusters as well as individual hosts.

In addition to OpenCL-based benchmark programs, SHOC also includes Compute Unified Device Architecture (CUDA) versions of many of its benchmarks for comparison with their OpenCL counterparts.
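SHOC's actual benchmarks are OpenCL/CUDA programs; as a hedged illustration of the general shape of such a microbenchmark, the sketch below times repeated memory copies and reports the best observed bandwidth. The function name and buffer sizes are invented for this example, and it measures host memory with NumPy rather than device memory.

```python
# Illustrative sketch (not SHOC code) of a bandwidth microbenchmark:
# time a large memory copy over several trials and report the best rate.
import time
import numpy as np

def copy_bandwidth_gbs(n_bytes=64 * 1024 * 1024, trials=5):
    """Return the best observed copy bandwidth in GB/s over several trials."""
    src = np.ones(n_bytes, dtype=np.uint8)
    dst = np.empty_like(src)
    best = 0.0
    for _ in range(trials):
        t0 = time.perf_counter()
        np.copyto(dst, src)   # the "kernel" being benchmarked
        elapsed = time.perf_counter() - t0
        best = max(best, n_bytes / elapsed / 1e9)
    return best

if __name__ == "__main__":
    print(f"copy bandwidth: {copy_bandwidth_gbs():.2f} GB/s")
```

Taking the best of several trials, as above, is a common convention in bandwidth benchmarks because it filters out one-off interference from the OS scheduler.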

Contact: Jeff Vetter

URL: https://github.com/vetter/shoc/wiki

Computational Infrastructure for Nuclear Astrophysics (CINA) 2.1

Description: The Computational Infrastructure for Nuclear Astrophysics (CINA) is a cloud computing system that enables users to robustly simulate, share, store, analyze and visualize explosive nucleosynthesis events such as novae, X-ray bursts and core-collapse supernovae via a web-deliverable, easy-to-use Java application. Users can also upload, modify, merge, store and share the complex input data required by these simulations. Version 2.1 expands CINA's set of simulation types to include the cold rapid neutron-capture process (cold r-process), which takes place in the supernova environment. This release also allows users to generate 1D plots comparing simulation results to over a dozen observational datasets, as well as 1D plots displaying the fractional difference between simulation results and the available observations. The plotted data can then be downloaded in tabular form. In addition, version 2.1 enhances the Element Synthesis Animator tool by allowing users to restrict the proton number range of results plotted on the animated chart of the nuclides. This capability allows users to zoom in on a particular region of the nuclide chart and export images and movies of the selected region of interest.

Contact: Eric Lingerfelt, lingerfeltej@ornl.gov

URL: www.nucastrodata.org/infrastructure

SystemBurn

Description: SystemBurn is a software package designed to maximize the computational and I/O load on a system, with the goal of establishing an upper threshold for the power consumed by the system. SystemBurn provides a set of pluggable, mixed workloads to achieve this, including CPU, accelerator, network, and disk loads.
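SystemBurn's workloads are tuned C kernels; the sketch below is only a minimal illustration of the core idea behind a CPU load module, spinning every core on floating-point work for a fixed duration. The function names and the choice of workload are invented for this example.

```python
# Illustrative sketch (not SystemBurn's code) of a CPU load workload:
# keep every core busy with floating-point work for a fixed duration.
import math
import multiprocessing as mp
import time

def burn(duration_s):
    """Spin on floating-point work until duration_s seconds have elapsed."""
    end = time.perf_counter() + duration_s
    x = 0.0001
    while time.perf_counter() < end:
        for _ in range(10_000):        # batch work between clock checks
            x = math.sin(x) * math.cos(x) + 1.0001
    return x

def burn_all_cores(duration_s=1.0):
    """Run one burn worker per available core."""
    with mp.Pool(mp.cpu_count()) as pool:
        pool.map(burn, [duration_s] * mp.cpu_count())

if __name__ == "__main__":
    burn_all_cores(0.2)
```

A real load generator additionally mixes in accelerator, network, and disk workers, which is what SystemBurn's pluggable workload design provides.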

Contact: Josh Lothian

URL: https://github.com/jlothian/systemburn

Current Version: 3.1.0

DOWNLOAD

Attributions:
This work was supported by the United States Department of Defense and used resources of the Extreme Scale Systems Center at Oak Ridge National Laboratory.

SystemConfidence

Description: SystemConfidence implements a method for examining the latencies present in high-speed networks. Using high-precision timers, SystemConfidence measures communication between all nodes of a network and produces output suitable for plotting with gnuplot.
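The measurement pattern can be sketched as follows: take many high-precision timer samples, bin them into a histogram, and emit two-column text that gnuplot can plot directly. This sketch times back-to-back clock reads on one host purely for illustration; SystemConfidence itself times message exchanges between all node pairs, and the function names here are invented.

```python
# Illustrative sketch (not SystemConfidence itself): collect latency samples
# with a high-precision clock and bin them into a gnuplot-friendly histogram.
import time

def sample_latencies(n=10_000):
    """Measure back-to-back clock reads in nanoseconds; a network tool
    would time a message round-trip here instead."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        t1 = time.perf_counter_ns()
        samples.append(t1 - t0)
    return samples

def histogram(samples, bin_ns=50):
    """Bin samples into fixed-width buckets keyed by bucket start."""
    counts = {}
    for s in samples:
        b = (s // bin_ns) * bin_ns
        counts[b] = counts.get(b, 0) + 1
    return dict(sorted(counts.items()))

if __name__ == "__main__":
    hist = histogram(sample_latencies())
    for bin_start, count in hist.items():   # two-column gnuplot format
        print(bin_start, count)
```

The two-column output can then be rendered with a gnuplot command such as `plot 'hist.dat' with boxes`.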

Contact: Josh Lothian

URL: https://github.com/jlothian/sysconfidence

Current Version: 1.0.1

DOWNLOAD

Attributions:
This work was supported by the United States Department of Defense and used resources of the Extreme Scale Systems Center at Oak Ridge National Laboratory.

Unstructurally Banded Nonlinear Eigenvalue Software

Description: The package contains both MATLAB and C++ implementations of a Kublanovskaya-type method for solving unstructurally banded nonlinear eigenvalue problems.
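The package's own algorithm is not reproduced here. As a generic illustration of the Newton-type family to which Kublanovskaya's method belongs, the sketch below applies Newton's method to det T(λ) = 0, using the identity d/dλ log det T(λ) = tr(T(λ)⁻¹ T′(λ)); it ignores banded structure and is for small dense problems only.

```python
# Generic Newton iteration for a nonlinear eigenvalue problem T(lam) x = 0.
# Uses (d/dlam) log det T = trace(T^{-1} T'), so the Newton step on det T
# is -1 / trace(T^{-1} T').  Illustration only; this is not the
# Kublanovskaya-type algorithm implemented in the package.
import numpy as np

def newton_eig(T, Tprime, lam0, tol=1e-10, maxit=50):
    """Find lam with det T(lam) = 0 near the starting guess lam0."""
    lam = lam0
    for _ in range(maxit):
        step = 1.0 / np.trace(np.linalg.solve(T(lam), Tprime(lam)))
        lam -= step
        if abs(step) < tol:
            break
    return lam

if __name__ == "__main__":
    # Quadratic eigenvalue problem T(lam) = lam^2 I - K with K = diag(4, 9);
    # its eigenvalues are +/-2 and +/-3.
    K = np.diag([4.0, 9.0])
    lam = newton_eig(lambda l: l**2 * np.eye(2) - K,
                     lambda l: 2.0 * l * np.eye(2), 1.5)
    print(lam)  # converges to 2.0
```

A banded implementation such as the one in this package would replace the dense solve with a banded factorization to keep the cost linear in the matrix size.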

Contact: C. Kristopher Garrett

Current Version: 1.0

DOWNLOAD

Attributions:
Supported in part by NSF grants DMS-1115834 and DMS-1317330, and a Research Gift Grant from Intel Corporation.

INDDGO

Version 2.0 of INDDGO (Integrated Network Decomposition & Dynamic programming for Graph Optimization problems) has been released. This release includes major new functionality for calculating statistics on graphs. Blair Sullivan of NCSU leads the project; for this release, significant contributions were made by the ESSC Benchmarks team of Matt Baker, Sarah Powers, Jonathan Schrock, and Josh Lothian.

URL: https://github.com/bdsullivan/INDDGO

Gleipnir (svn r2568) (http://csrl.unt.edu/gleipnir)

Gleipnir is a memory tracing tool built as a plug-in for the Valgrind binary instrumentation framework. Gleipnir maps memory access information to source-code variables and structures, using debug information or user-defined dynamic regions declared through client-interface calls. The goal of Gleipnir is to expose application memory access behavior for optimization and analysis purposes. This release provides better support for MPI applications and larger codes.

Contact: Tomislav Janjusic, janjusict@ornl.gov

HERCULES 2.3 (r344)

HERCULES is an Open64-based, PROLOG-backed system for program constraint programming and for formulating customizable, user-level transformations. In addition to the core system, it offers a scanner for source-code patterns (hscan) and numerous transformation directives in an F90 compiler (hslf90).

Contact: Christos Kartsaklis, kartsaklisc@ornl.gov

TASMANIAN

The goal of the Predictive Methods Team (PMT) at Oak Ridge National Laboratory is to develop architecture-aware uncertainty quantification and data assimilation capabilities for applications that dominate the focus of the DOE mission. Examples include enhancing the reliability of smart energy grids, developing renewable energy technologies, analyzing the vulnerability of water and power supplies, understanding complex biological networks, estimating climate change, and designing safe and cost-effective current and future energy storage devices. As the complexity of these systems increases, scientists and engineers are relied upon to provide expert analysis, to inform decision makers concerning the behavior of predictive simulations and, more importantly, to assess the associated risk. Accomplishing this goal requires rigorous mathematical and statistical analysis of innovative, massively scalable stochastic algorithms. These approaches must overcome several challenges that arise when applying UQ methodologies to the science areas listed above, including:

  1. Detection and quantification of high-dimensional stochastic quantities of interest (QoIs) with a specified certainty;
  2. Reducing the computational burden required to perform rigorous UQ;
  3. Efficient strategies for UQ that exploit greater levels of parallelism provided by emerging many-core architectures;
  4. Systematic assimilation of the uncertainty in measured data for correcting model bias, calibrating parameter interrelations and improving confidence in predicted responses.

To overcome these challenges, the PMT at ORNL has developed the Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN (TASMANIAN), a feature-rich environment for extreme-scale UQ. TASMANIAN v1.0 includes our adaptive sparse grid stochastic collocation approaches, constructed from both global (hierarchical Lagrange) and local (multiresolution wavelet) basis functions. It also includes a massively scalable implementation of the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm for accelerating Markov Chain Monte Carlo (MCMC) convergence, which is used in all parameter estimation and Bayesian inference algorithms.
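DREAM runs multiple chains with differential-evolution proposals; as a minimal point of reference, the sketch below shows plain single-chain random-walk Metropolis sampling, the baseline that such schemes accelerate. This is an illustration only, not TASMANIAN's implementation, and the function names are invented for this example.

```python
# Minimal random-walk Metropolis sampler -- the single-chain baseline that
# DREAM-style multi-chain differential-evolution proposals accelerate.
# Illustration only; this is not TASMANIAN's implementation.
import math
import random

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    """Draw n_samples from the (unnormalized) density exp(log_density)."""
    rng = random.Random(seed)
    x, logp = x0, log_density(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)          # random-walk proposal
        logp_prop = log_density(prop)
        if math.log(rng.random()) < logp_prop - logp:  # accept/reject
            x, logp = prop, logp_prop
        samples.append(x)
    return samples

if __name__ == "__main__":
    # Sample a standard normal: log-density is -x^2/2 up to a constant.
    chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20_000)
    print(sum(chain) / len(chain))   # sample mean, close to 0
```

The weakness of this baseline is that a fixed isotropic proposal mixes slowly on correlated or multimodal posteriors; DREAM's proposals, built from differences of states in parallel chains, adapt to the target's shape and are what make the approach scale.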

Contact: Miroslav Stoyanov

URL: http://tasmanian.ornl.gov