The Scalable Heterogeneous Computing Benchmark Suite (SHOC)

The Scalable Heterogeneous Computing Benchmark Suite (SHOC) is a collection of benchmark programs testing the performance and stability of systems using computing devices with non-traditional architectures for general purpose computing, and the software used to program them. Its initial focus is on systems containing Graphics Processing Units (GPUs) and multi-core processors, and on the OpenCL programming standard. It can be used on clusters as well as individual hosts.

In addition to OpenCL-based benchmark programs, SHOC also includes a Compute Unified Device Architecture (CUDA) version of many of its benchmarks for comparison with the OpenCL version.

Contact: Jeff Vetter


Computational Infrastructure for Nuclear Astrophysics (CINA) 2.1

Description: The Computational Infrastructure for Nuclear Astrophysics (CINA) is a cloud computing system that enables users to robustly simulate, share, store, analyze and visualize explosive nucleosynthesis events such as novae, X-ray bursts and core-collapse supernovae via a web-deliverable, easy-to-use Java application. Users can also upload, modify, merge, store and share the complex input data required by these simulations. Version 2.1 expands CINA's set of simulation types to include the cold rapid neutron capture process (cold r-process), which takes place in the supernova environment. This release also allows users to generate 1D plots comparing simulation results to over a dozen observational datasets, as well as 1D plots displaying the fractional difference between simulation results and the available observations. The plotted data can then be downloaded in tabular form. In addition, version 2.1 enhances the Element Synthesis Animator tool by allowing users to restrict the proton number range of results plotted on the animated chart of the nuclides. This capability allows users to zoom in on a particular region of the nuclide chart and export images and movies of the selected region of interest.

Contact: Eric Lingerfelt



SystemBurn

Description: SystemBurn is a software package designed to maximize the computational and I/O load on a system, with the goal of providing an upper threshold for the power consumed by the system. SystemBurn provides a set of pluggable, mixed workloads to achieve this, including CPU, Accelerator, Network, and Disk loads.

Contact: Josh Lothian


Current Version: 3.1.0


This work was supported by the United States Department of Defense and used resources of the Extreme Scale Systems Center at Oak Ridge National Laboratory.


SystemConfidence

Description: SystemConfidence implements a method for examining the latencies present in high-speed networks. Using high-precision timers, SystemConfidence measures communication between all nodes of a network and produces output suitable for plotting with gnuplot.
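
The measurement idea can be sketched in a few lines: time an operation repeatedly with a high-resolution timer, then bin the samples into histogram rows that gnuplot can plot directly (e.g. with `plot 'hist.dat' with boxes`). The sketch below is a toy stand-in, not SystemConfidence's actual code; it times a local no-op in place of a network round trip, and all function names are invented:

```python
import time

def measure_latency(op, trials=1000):
    """Time an operation many times with a high-resolution timer."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    return samples

def histogram_for_gnuplot(samples, bins=20):
    """Bin latency samples into 'bin_center count' rows, one per line,
    which gnuplot can consume without further processing."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1e-12   # guard against identical samples
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    return "\n".join(f"{lo + (i + 0.5) * width:.3e} {c}"
                     for i, c in enumerate(counts))

# A trivial local operation stands in for a network ping here.
hist = histogram_for_gnuplot(measure_latency(lambda: sum(range(100))))
```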

Contact: Josh Lothian


Current Version: 1.0.1


This work was supported by the United States Department of Defense and used resources of the Extreme Scale Systems Center at Oak Ridge National Laboratory.

Unstructurally Banded Nonlinear Eigenvalue Software

Description: The package contains both Matlab and C++ implementations of a Kublanovskaya-type method for solving unstructurally banded nonlinear eigenvalue problems.
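
As an illustration of the general idea (not the package's actual implementation), Kublanovskaya-type methods apply Newton's iteration to the last diagonal entry of the R factor in a QR factorization of T(λ), which vanishes exactly when λ is an eigenvalue. A minimal NumPy sketch, using a finite-difference derivative and a small quadratic eigenvalue problem as an invented test case:

```python
import numpy as np

def kublanovskaya_sketch(T, lam, tol=1e-10, maxit=50, h=1e-7):
    """Newton iteration on r_nn(lam), the last diagonal entry of the R
    factor of a QR factorization of T(lam); r_nn vanishes exactly when
    T(lam) is singular, i.e. when lam is a nonlinear eigenvalue."""
    def r_nn(x):
        return np.linalg.qr(T(x), mode='r')[-1, -1]
    for _ in range(maxit):
        f = r_nn(lam)
        df = (r_nn(lam + h) - f) / h      # finite-difference derivative
        step = f / df
        lam = lam - step
        if abs(step) < tol:
            break
    return lam

# Small quadratic eigenvalue problem T(lam) = lam^2*M + lam*C + K with
# eigenvalues +-1 and +-2; starting from 1.5 the iteration finds 2.
M = np.eye(2)
C = np.zeros((2, 2))
K = np.diag([-1.0, -4.0])
T = lambda lam: lam ** 2 * M + lam * C + K
lam = kublanovskaya_sketch(T, 1.5)
```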

Contact: C. Kristopher Garrett

Current Version: 1.0


Supported in part by NSF grants DMS-1115834 and DMS-1317330, and a Research Gift Grant from Intel Corporation.


Version 2.0 of INDDGO (Integrated Network Decomposition & Dynamic programming for Graph Optimization problems) has been released. This release includes major new functionality for the calculation of statistics on graphs. Blair Sullivan of NCSU is primarily responsible for the project. For this release, significant contributions were made by the ESSC Benchmarks team, consisting of Matt Baker, Sarah Powers, Jonathan Schrock, and Josh Lothian.


Gleipnir (svn r2568)

Gleipnir is a memory tracing tool built as a plug-in for the Valgrind binary instrumentation framework. Gleipnir maps memory access information to source-code variables and structures, using debug information or user-defined dynamic regions registered through client-interface calls. The goal of Gleipnir is to expose application memory access behavior for optimization and analysis purposes. This release provides better support for MPI applications and larger codes.
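
The mapping from raw addresses back to named regions can be pictured as an interval lookup over registered address ranges. The sketch below is a toy illustration of that data structure, not Gleipnir's actual client interface; the class and method names are invented:

```python
import bisect

class RegionMap:
    """Toy address-to-name map: register named address ranges, then
    attribute individual accesses to the region that contains them."""
    def __init__(self):
        self.starts = []    # sorted region start addresses
        self.regions = []   # parallel list of (start, end, name)

    def register(self, name, start, size):
        i = bisect.bisect(self.starts, start)
        self.starts.insert(i, start)
        self.regions.insert(i, (start, start + size, name))

    def lookup(self, addr):
        i = bisect.bisect(self.starts, addr) - 1
        if i >= 0:
            lo, hi, name = self.regions[i]
            if lo <= addr < hi:
                return name
        return None   # access falls outside every registered region

m = RegionMap()
m.register("matrix_a", 0x1000, 256)
m.register("matrix_b", 0x2000, 256)
```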

Contact: Tomislav Janjusic

HERCULES 2.3 (r344)

HERCULES is an Open64-based, PROLOG-backed system for program constraint programming and for formulating customizable, user-level transformations. In addition to the core system, it offers a scanner for patterns in source code bases (hscan) and numerous transformation directives in an F90 compiler (hslf90).

Contact: Christos Kartsaklis


The goal of the Predictive Methods Team (PMT) at Oak Ridge National Laboratory is to develop architecture-aware uncertainty quantification (UQ) and data assimilation capabilities for applications that dominate the focus of the DOE mission. Examples include enhancement of the reliability of smart energy grids, development of renewable energy technologies, vulnerability analysis of water and power supplies, understanding complex biological networks, climate change estimation, and safe and cost-effective designs of current and future energy storage devices. As the complexity of these systems increases, scientists and engineers are relied upon to provide expert analysis and to inform decision makers concerning the behavior of, and more importantly to assess the risk associated with, predictive simulations. Accomplishing this goal requires rigorous mathematical and statistical analysis of innovative massively scalable stochastic algorithms. These approaches must overcome several challenges that arise when applying UQ methodologies to the science areas listed above, including:

  1. Detection and quantification of high-dimensional stochastic quantities of interest (QoIs) with a specified certainty;
  2. Reducing the computational burden required to perform rigorous UQ;
  3. Efficient strategies for UQ that exploit greater levels of parallelism provided by emerging many-core architectures;
  4. Systematic assimilation of the uncertainty in measured data for correcting model bias, calibrating parameter interrelations and improving confidence in predicted responses.

To overcome these challenges, the PMT at ORNL has developed the Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN (TASMANIAN), a feature-rich environment for extreme-scale UQ. TASMANIAN v1.0 includes our adaptive sparse grid stochastic collocation approaches, constructed from both global (hierarchical Lagrange) and local (multiresolution wavelet) basis functions. Moreover, we also include a massively scalable implementation of the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm for accelerating Markov Chain Monte Carlo (MCMC) convergence, used in all parameter estimation and Bayesian inference algorithms.
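
DREAM itself adds considerable machinery (multiple crossover probabilities, outlier handling, convergence diagnostics), but the differential-evolution proposal it builds on is simple to sketch: each chain jumps along the difference between two other randomly chosen chains' states, and the jump is accepted or rejected with the Metropolis rule. A minimal one-dimensional sketch on an invented toy target, not TASMANIAN's implementation:

```python
import math
import random

def de_mc(log_post, n_chains=6, n_steps=4000, eps=1e-4, seed=0):
    """Differential-evolution MCMC: each chain proposes a jump along the
    difference of two other chains' current states, then accepts or
    rejects it with the Metropolis rule."""
    rng = random.Random(seed)
    gamma = 2.38 / math.sqrt(2.0)           # standard scale for 1 dimension
    chains = [rng.uniform(-5.0, 5.0) for _ in range(n_chains)]
    samples = []
    for _ in range(n_steps):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = chains[i] + gamma * (chains[a] - chains[b]) + rng.gauss(0.0, eps)
            if math.log(rng.random() + 1e-300) < log_post(prop) - log_post(chains[i]):
                chains[i] = prop
            samples.append(chains[i])
    return samples

# Toy target: a standard normal log-density; the chains should settle
# around mean 0 with unit variance.
draws = de_mc(lambda x: -0.5 * x * x)
```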

Contact: Miroslav Stoyanov



Mahout is an open-source data mining project developed by Apache that can be used for recommendation, clustering, and classification.  This poster will focus on Mahout's ability to cluster numeric data, specifically nuclear reactor simulation data.  The simulation data is clustered by Mahout to investigate reactor performance, failure prediction, and similar processes.
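
Mahout's clustering runs as distributed Java/Hadoop jobs; as a language-neutral illustration of what k-means clustering of numeric data computes, here is a minimal Lloyd's-algorithm sketch. The data points are invented stand-ins for simulation readings, not reactor data:

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = points[:k]   # deterministic init, fine for this toy data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                     else centroids[j]   # keep an empty cluster's centroid
                     for j, cl in enumerate(clusters)]
    return centroids, clusters

# Two well-separated blobs of toy readings; k-means recovers the split.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(pts, 2)
```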

Contact: Ronald Allen

Virtual Integrated Battery Environment

VIBE, developed for the CAEBAT project, is designed to study coupled battery systems involving electrochemistry, thermal transport, and structural dynamics.

Contact: Srikanth Allu


The Tribal Build, Integrate, and Test System (TriBITS) is a framework built on top of the open-source CMake set of tools that is designed to handle large software development projects involving multiple independent development teams and multiple source repositories.  TriBITS also defines a complete software development, testing, and deployment system supporting processes consistent with modern agile software development best practices.  TriBITS was initially developed as a package-based CMake build and test system for the Trilinos project.  The system was later factored out as the reusable TriBITS system and adopted as the build architecture for the Consortium for the Advanced Simulation of Light Water Reactors (CASL) VERA code.  TriBITS was also adopted as the native build and test system for a number of CASL-related codes including the ORNL codes Exnihilo and SCALE in addition to a number of non-ORNL codes.

Contact: Roscoe Bartlett


The Simulation-based High-efficiency Advanced Reactor Prototyping (SHARP) software suite is a group of advanced physics and infrastructure codes aimed at accurately and effectively modeling nuclear reactor core systems. SHARP takes advantage of advanced single-physics codes for neutronics, thermal hydraulics, and structural mechanics while providing flexibility and extensibility through a solution transfer coupling system.

Contact: Andrew Bennett

Xolotl Plasma-Surface Interactions

Xolotl is an open-source plasma-surface interactions simulator that is currently in development. This poster will focus on the visualization interface recently added to Xolotl, which allows users to interpret results by plotting physical outputs as well as software performance-monitoring data.

Contact: Sophie Blondel


Researchers, sponsors and decision-makers increasingly recognize the need for more global, open and reproducible data-based science that can address scientific and societal challenges. In response to this need, the NSF DataONE project (led from the University of New Mexico) has created a robust, stable and powerful global cyberinfrastructure platform comprised of: (1) a federation of Member Nodes (data repositories located worldwide); (2) Coordinating Nodes that support replication, indexing, data discovery and other network services; and (3) an Investigator Toolkit that supports the entire research data life cycle from planning through analysis and preservation. DataONE creates methods for data interoperability across archives that are heterogeneous geographically, architecturally, and semantically through a well-designed, openly available and deployable set of service programming interfaces. Phase I of DataONE has focused on biological, ecological, and environmental sciences. DataONE contains over 400,000 data objects across 19 data repositories. It has 20,000 users and currently offers 13 investigator toolkit tools. Exploratory and visualization science use cases have included visual inter-comparison of climate models and continental-scale statistical prediction of bird migration and occurrence maps.

Contact: John Cobb

NiCE Reactor Analyzer plug-in

The Reactor Analyzer provides fast and easy nuclear reactor post-simulation data analysis in NiCE, the NEAMS Integrated Computational Environment. It currently supports light water reactors (LWRs) and sodium-cooled fast reactors, and it targets quantities of interest, such as aggregated fuel pin powers and flux.

Contact: Jordan Deyton


The Extreme-scale Simulator (xSim) is a performance investigation toolkit that permits running native high-performance computing applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design.

Contact: Christian Engelmann


JADE is an integrated development environment for creating and simulating quantum computing programs. It provides an extensible and reconfigurable environment for guiding the input and parsing of mathematical problems, and a simulation management system for launching jobs.
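
JADE's internals are not described here; as a minimal illustration of what a quantum-program simulator computes, the sketch below applies a Hadamard gate to a single-qubit state vector and reads off measurement probabilities. All names are illustrative, not JADE's API:

```python
import math

def apply_gate(gate, state):
    """Apply a 2x2 single-qubit gate matrix to a 2-amplitude state vector."""
    (a, b), (c, d) = gate
    return [a * state[0] + b * state[1],
            c * state[0] + d * state[1]]

# Hadamard gate: sends |0> to the equal superposition (|0> + |1>)/sqrt(2).
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = apply_gate(H, [1.0, 0.0])
probs = [x * x for x in state]   # measurement probabilities for |0>, |1>
```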

Contact: Travis Humble


QITKAT is a library of signal processing blocks tailored to quantum communication tasks. It builds on the open source GNU Radio framework for real-time, stream-based processing. QITKAT includes hardware interfaces for driving quantum communication experiments.

Contact: Travis Humble

Xolotl Plasma-Surface Interactions

Xolotl PSI is a simulator currently under development. This poster will focus on a particular feature of Xolotl, its performance-monitoring infrastructure, which supports an "always on" collection of performance data.

Contact: Crystal Jernigan


HERCULES is an Open64-based, PROLOG-backed system for program constraint programming and for formulating customizable, user-level transformations. In addition to the core system, it offers a scanner for patterns in source code bases (hscan) and numerous transformation directives in an F90 compiler (hslf90).

Contact: Christos Kartsaklis


Bellerophon is an n-tier software system developed to support CHIMERA, a production-level HPC application that simulates core-collapse supernova events at the petascale. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform data analysis in near real-time, address software engineering tasks, and access multiple elements of the team's workflow management needs from a cross-platform, web-deliverable desktop application.

Contact: Eric Lingerfelt

DOE Accelerated Climate Modeling for Energy Workflow

The DOE BER is launching a new project called Accelerated Climate Modeling for Energy. The workflow component has the goal of increasing the rate of discovery by reducing scientists' effort in setting up and running experiments, increasing the amount of science completed per computational allocation, and providing repeatability of experiments. To accomplish this, we are building an automated workflow that takes a model configuration, sets it up and runs it on OLCF's Titan, and handles all of the data transfer, publication, diagnostics, and provenance automatically.

Contact: Benjamin Mayer


The Scalable runTime Component Infrastructure (STCI) is a modular runtime environment used to support HPC system software and tools. The STCI library includes building blocks and customizable Agents to launch, monitor and manage parallel jobs on Linux clusters and Cray supercomputers.

Contact: Thomas Naughton


The Cluster Command and Control (C3) tools are a suite of cluster tools developed at ORNL that are useful for both administration and application support. The suite includes tools for cluster-wide command execution, file distribution and gathering, process termination, remote shutdown and restart.

Contact: Thomas Naughton


Programming with big data in R is a collection of R packages that simplify the scaling of advanced data analytics in R on platforms ranging from small clusters to supercomputers. Our packages provide infrastructure to use and develop advanced parallel analytics that scale to tens of thousands of cores on supercomputers but also provide simple parallel solutions for multicore laptops.

Contact: George Ostrouchov


Advanced Multi-Physics Package

Simulations of multiphysics systems are becoming increasingly important in many science application areas. We describe the design and capabilities of the Advanced Multi-Physics (AMP) package designed with multi-domain multi-physics applications in mind. AMP currently builds powerful and flexible deterministic multiphysics simulation algorithms from lightweight operator, solver, linear algebra, material database, discretization, and meshing interfaces through a package agnostic framework.

Contact: Bobby Philip

Data Transfer Kit

The Data Transfer Kit (DTK) is a library providing parallel data transfer services for multiphysics problems. The library is designed to provide scalable algorithms to exchange fields in coupled physics simulations. DTK is being developed as part of the broader coupling toolkit generated by the CASL program for modeling and simulation of light water reactors.

Contact: Stuart Slattery

MPI Fault Tolerance

ORNL is working on a reference implementation of the ongoing proposal for MPI fault tolerance. The current proposal focuses on User-Level Failure Mitigation (ULFM), a flexible approach providing process fault tolerance by allowing the application to react to failures while maintaining a minimal execution path in failure-free executions. The focus is on returning control to the application by avoiding deadlocks due to failures within the MPI library. We consider the proposed set of functions to constitute a minimal basis that allows libraries and applications to increase their fault tolerance capabilities by supporting additional types of failures, and to build other desired strategies and consistency models to tolerate faults. The reference implementation developed at ORNL is based on the Scalable runTime Component Infrastructure (STCI), developed at ORNL, as well as an extension of the Open MPI implementation (extension led by UTK).

Contact: Geoffroy Vallée

Common Communications Interface

The CCI project is an open-source communication interface that aims to provide a simple and portable API, high performance, scalability for the largest deployments, and robustness in the presence of faults. Targeted towards high performance computing (HPC) environments as well as large data centers, CCI can provide a common network abstraction layer (NAL) for persistent services as well as general interprocess communication. As a result, CCI is suitable for high performance communication on a local area network (LAN), as well as for the terabit data transfer over dedicated wide area networks (WAN) such as DOE's ESNet infrastructure.

Contact: Geoffroy Vallée


The NEAMS Integrated Computational Environment (NiCE) is a broad-purpose GUI-based environment for simplifying common tasks across many domains of computational science. As part of a pilot project with Argonne National Lab over the latter half of 2013, NiCE added support for Sodium-cooled Fast Reactor (SFR) simulations in the form of input generation, simulation launching, and intuitive post-simulation analysis tools. This poster will focus on the back-end design and data structures implemented in NiCE to make this SFR modeling project successful.

Contact: Anna Wojtowicz