LDRD Projects

A hierarchical regional modeling framework for decadal-scale hydro-climatic predictions and impact assessments

Team Members: Moetasim Ashfaq (PI), Shih-Chieh Kao, Abdoul Oubeidillah, Rui Mei, Danielle Touma, Syeda Absar, Valentine Anantharaj

Probabilistic prediction of climate change at the decadal scale, and quantification of its impacts on natural and human systems, is a great scientific challenge, and one that has significant potential to inform decision-making regarding the management of climate risk. Because of their policy relevance, 8-10 global modeling groups will undertake decadal-scale prediction experiments of the global climate as part of the fifth phase of the Coupled Model Intercomparison Project (CMIP5), whose results will be an integral part of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR5). While a new generation of General Circulation Models (GCMs) has improved substantially in the simulation of the large-scale response to climate forcing, GCM resolution and accuracy are still inadequate to fully understand the response of regional-to-local-scale processes, particularly those governing hydro-climatic extremes. To overcome these limitations, we will develop a hierarchical regional modeling framework to substantially enhance quantitative understanding of decadal-scale climate change at national and sub-national levels, and of its potential implications for energy, water resources, and other sectors. This framework will use a suite of Earth system models and statistical techniques to downscale predictions from a multi-model ensemble of IPCC-AR5 global models to an ultra-high horizontal resolution of 4 km over the United States and South Asia.
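As an illustration of the statistical side of such a framework, the sketch below shows empirical quantile mapping, one common bias-correction step used when downscaling coarse GCM output toward a high-resolution observational record. It is a minimal example on synthetic data; the function name, the gamma-distributed precipitation series, and the parameter values are assumptions for illustration only, not the project's actual downscaling chain.

```python
import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_future):
    """Map future GCM values onto the observed distribution via
    empirical quantile matching (a common statistical-downscaling step)."""
    # Percentile of each future value within the historical GCM distribution
    quantiles = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Look up the same percentiles in the observed record
    return np.quantile(obs_hist, quantiles)

# Illustrative synthetic data (not project output): coarse GCM precipitation
# biased low relative to a high-resolution observation-based record.
rng = np.random.default_rng(0)
gcm_hist   = rng.gamma(2.0, 2.0, size=1000)   # historical GCM precip, mm/day
obs_hist   = rng.gamma(2.0, 3.0, size=1000)   # gridded observations, mm/day
gcm_future = rng.gamma(2.0, 2.2, size=365)    # one year of a decadal prediction

downscaled = quantile_map(gcm_hist, obs_hist, gcm_future)
print(downscaled.mean(), gcm_future.mean())
```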

Integrated Computational Modeling and Innovative Processing to Eliminate Rare Earths from Wrought Magnesium Alloys

ORNL Team Members: Balasubramaniam Radhakrishnan, Zhili Feng, Ryan Dehoff, Sarma Gorti, Robert Patton, William Peter, Amit Shyam and Srdjan Simunovic

Project Description: The project seeks to integrate micromechanical material models, process models, and experimental data through a multi-objective optimization tool to design a microstructure that optimizes two conflicting material properties. The integrated approach addresses a critical need in the microstructural design of structural materials for vehicle applications, where conflicting properties have to be optimized, and provides the necessary framework to enable innovation in processing and the exploration of new micro-mechanical concepts. The integrated approach will be used to develop a rare-earth-free wrought magnesium alloy that meets or exceeds the current strength and ductility specifications of rare-earth-containing alloys. The approach will target three innovative processes – friction stir extrusion, ultrasonic roll bonding, and additive roll bonding – to introduce mechanical alloying of magnesium with titanium and thereby provide additional twin and slip systems in the alloy in the ultra-fine grain size range. Mechanical alloying will offer the potential to utilize nanotwins as structural barriers to dislocation flow as well as to allow glide of dislocations at the twin-matrix interface - a novel concept that has demonstrated a simultaneous increase in strength and ductility in ultra-fine-grained copper.

Results and Accomplishments: An initial benchmark problem was defined that relates the material microstructure to two of its attributes, the grain orientation and grain misorientation distributions, and couples them with a genetic algorithm that optimizes both attributes. The problem was successfully solved. It was then extended to couple a material micro-mechanics code, which computes two material properties (strength and ductility) as functions of the above microstructural attributes, with the genetic algorithm. The two codes were run in parallel, and optimization of strength and ductility was demonstrated. The optimization approach is being extended to other optimization methods available in the DAKOTA computational toolkit. Methods have also been developed to represent the input texture for optimization studies using orientation distribution functions that describe strong and weak basal textures in magnesium alloys. The crystal plasticity code used for the property computations has been extended to the AZ31 alloy, incorporating additional twin and slip systems as demonstrated in previous mechanical alloying studies of Ti with Mg. An additional glide system along the twin-matrix interface has also been introduced to study the influence of slip along that interface on the strength and ductility of AZ31. On the experimental side, a number of severe-deformation processing experiments were performed to demonstrate mechanical alloying of Ti with the Mg/AZ31 alloy. Friction stir processing (FSP) gave a clear demonstration of Ti solubility in AZ31 and has been selected as the process of choice moving forward.
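The sketch below shows the shape of the two-objective genetic-algorithm loop described above: candidate designs are scored on two conflicting objectives, non-dominated designs are retained, and mutated copies refill the population. It is a minimal illustration with a toy analytic surrogate standing in for the crystal-plasticity code; the "texture sharpness" parameter, the strength/ductility formulas, and all numbers are assumptions, and this is not the project's DAKOTA coupling.

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate(x):
    """Toy surrogate for the micro-mechanics code: strength rises and
    ductility falls with a 'texture sharpness' parameter x in [0, 1]."""
    strength  = 200.0 + 150.0 * x        # MPa (illustrative)
    ductility = 25.0 - 18.0 * x**2       # % elongation (illustrative)
    return strength, ductility

def pareto_front(objs):
    """Indices of non-dominated designs when both objectives are maximized."""
    front = []
    for i, oi in enumerate(objs):
        dominated = any((oj[0] >= oi[0] and oj[1] >= oi[1]) and oj != oi
                        for oj in objs)
        if not dominated:
            front.append(i)
    return front

# Simple generational loop: keep the Pareto set, refill with mutated copies.
pop = rng.random(20)
for gen in range(50):
    objs = [evaluate(x) for x in pop]
    parents = pop[pareto_front(objs)]
    n_new = 20 - len(parents)
    children = np.clip(parents[rng.integers(len(parents), size=n_new)]
                       + 0.05 * rng.standard_normal(n_new), 0.0, 1.0)
    pop = np.concatenate([parents, children])

final_objs = [evaluate(x) for x in pop]
for i in pareto_front(final_objs):
    print(f"x={pop[i]:.2f}  strength={final_objs[i][0]:.0f} MPa  "
          f"ductility={final_objs[i][1]:.1f} %")
```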

A novel uncertainty quantification paradigm for enabling massively scalable predictions of complex stochastic simulations

Chris Baker, Tom Evans and Clayton Webster (PI)

We propose a transformational methodology for the efficient, accurate, and robust computation of statistical quantities of interest (QoIs), i.e., the information used by engineers and decision makers, that are determined from solutions of complex stochastic simulations. Our objective is to effect a transition from current stochastic polynomial approximation techniques, which lack the ability to easily harness powerful high-performance computing (HPC) architectures, to a truly predictive science; that is, to develop an innovative approach for uncertainty quantification (UQ) that provides quantitative bounds on the applicability of extreme-scale calculations. This remains a fundamental difficulty in most applications that dominate the focus of the DOE mission. Guided by this grand challenge in UQ, we propose a rigorous mathematical procedure for exploiting multicore extreme parallelism by simultaneously propagating numerous realizations of a multi-dimensional, multi-resolution adaptive sparse grid stochastic collocation approach through both the complex physics and the solver stack in a novel semi-intrusive fashion. This paradigm builds on existing progress in generic programming to selectively couple ensembles of the hierarchical decomposition while enabling advanced recycling and block solver techniques. This massively scalable UQ framework will be made available through a recently developed ORNL Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN (TASMANIAN) and will be demonstrated by quantifying a moderately large number of uncertainties in a radiation transport calculation using the highly visible ORNL Denovo neutronics code.
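To make the non-intrusive collocation idea concrete, the sketch below evaluates a stand-in "forward model" at Gauss-Legendre nodes in a single random dimension and recovers the mean and variance of the QoI by quadrature; each node is an independent solve, which is what makes ensemble propagation naturally parallel. This is a minimal one-dimensional illustration under assumed inputs; it does not show the adaptive sparse-grid machinery of TASMANIAN or the semi-intrusive solver coupling.

```python
import numpy as np

def model(y):
    """Stand-in for an expensive forward solve with uncertain input y."""
    return np.exp(-0.5 * y) + 0.1 * y**2

# Gauss-Legendre collocation for y ~ Uniform(-1, 1): each node is an
# independent forward solve, so this loop is trivially parallel
# (shown in serial for clarity).
nodes, weights = np.polynomial.legendre.leggauss(9)
evals = np.array([model(y) for y in nodes])

# Statistics of the quantity of interest (weights sum to 2 on [-1, 1],
# and the uniform density is 1/2).
mean = np.sum(weights * evals) / 2.0
var  = np.sum(weights * evals**2) / 2.0 - mean**2
print(mean, var)
```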

Model-Inspired Science Priorities for Evaluating Tropical Ecosystem Response to Climate Change

ORNL team members: Forrest Hoffman (PI), Richard Norby, Xiaojuan Yang, Lianhong Gu, David Weston, and Jitendra Kumar

Carbon cycling in tropical forests and the feedbacks from tropical ecosystems to the climate system are critical uncertainties in current-generation Earth System Models (ESMs) that must be resolved to more reliably project global responses to climate change. The objectives of this ORNL Laboratory Directed Research and Development (LDRD) project are to provide model improvements and initial model experiments and analyses that define the critical science objectives for a future planned DOE Next Generation Ecosystem Experiments (NGEE) project focused on tropical ecosystems, and to provide guidance for an intensive campaign of structured observations and manipulative experiments. Leaf gas exchange measurements of tropical plant species, being performed at the Smithsonian Tropical Research Institute (STRI) in Panama, will drive improvements in how ESMs represent photosynthesis and phosphorus limitations to carbon cycling. Model improvements based on these data will be implemented into the Community Land Model (CLM4), and experiments using the improved model will define the relative sensitivity of tropical forests to elevated carbon dioxide, climate warming, and drought. Cluster analysis that combines current-generation models with large-scale climate and geophysical observations will provide quantitative delineation of the tropical regions that are most important for intensive observations and modeling. Together, these model products will guide the development of an experimental framework, define critical field experiments, and initiate the iterative process of model-experiment interaction.
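The sketch below illustrates the kind of multivariate cluster analysis mentioned above: grid cells described by a few climate variables are standardized and grouped with k-means, so that contiguous "ecoregion-like" clusters can guide site selection. The two synthetic fields, their ranges, and the choice of four clusters are assumptions for illustration; the project's analysis would use real model output and observations and likely more variables.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for gridded fields over a tropical domain
# (e.g., mean annual temperature and precipitation per grid cell).
rng = np.random.default_rng(2)
n_cells = 5000
temperature   = rng.normal(26.0, 1.5, n_cells)    # deg C (illustrative)
precipitation = rng.gamma(4.0, 500.0, n_cells)    # mm/yr (illustrative)

# Standardize so both variables contribute comparably, then cluster
# grid cells into a small number of groups.
features = np.column_stack([temperature, precipitation])
features = (features - features.mean(axis=0)) / features.std(axis=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for k in range(4):
    print("cluster", k, "cells:", int(np.sum(labels == k)))
```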

Toward Scalable Algorithms for Kinetic Equations: A New Hybrid Approach to Capturing Multiscale Phenomena

ORNL team members: Cory Hauck, Yulong Xing, and Jun Jia

The goal of this LDRD project is to design and implement a hybrid method for the efficient solution of multi-scale kinetic equations, which play a fundamental role in many areas of mathematical physics. Kinetic models are mesoscopic: on one hand, they provide a detailed and accurate description of particle-based systems in regimes where traditional fluid dynamic models are either invalid or simply not available. On the other hand, they can incorporate important features from a molecular- or quantum-level description that cannot be modeled directly because of the prohibitive computational cost. Consequently, kinetic models are an indispensable tool in the mathematical description of gas dynamics, plasma physics, multiphase flow, and radiative transport - all key components of current and future energy generation systems. Moreover, in recent years, kinetic theory has emerged as an important analytical tool in other applications, including traffic and network models, computational biology and chemistry, and self-organization of complex systems.

Even with exascale resources, the full resolution of all dynamical scales in a kinetic equation is generally not possible. Rather, a robust simulation capability will only be achieved through the advent of new numerical methods that exploit the mathematical structure in the equations. In particular, details of mesoscopic processes that do not affect macroscopic behavior can be approximated by reduced models with significant savings, both in floating-point operations and in memory storage and transfer. The mathematical challenge is how to derive these models and incorporate them into a rigorous mathematical framework that guarantees accurate solutions. The proposed hybrid method involves an intelligent separation of the kinetic equation based on dynamical scales; a closure method for approximation of functions using limited data; modern, low-memory, high-order discretization techniques; and implementation on GPUs. The resulting algorithms will be useful for a variety of application specialists, particularly in high performance computing environments.
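One small ingredient of such a scale-based separation is an indicator that flags where a fluid closure is adequate and where a full kinetic treatment is needed. The sketch below uses a local Knudsen-like number built from the density gradient; the indicator form, the threshold, and the density profile are assumptions for illustration only, not the project's hybrid algorithm.

```python
import numpy as np

def hybrid_flags(rho, dx, mean_free_path, threshold=0.05):
    """Return True where a kinetic description is flagged as necessary,
    False where a fluid closure should suffice, using an assumed local
    Knudsen-like indicator Kn = lambda * |grad(rho)| / rho."""
    grad_rho = np.gradient(rho, dx)
    knudsen = mean_free_path * np.abs(grad_rho) / rho
    return knudsen > threshold

# Illustrative density profile with a sharp front (e.g., a thin shock layer).
x = np.linspace(0.0, 1.0, 200)
rho = 1.0 + 0.9 * np.tanh((x - 0.5) / 0.01)
flags = hybrid_flags(rho, dx=x[1] - x[0], mean_free_path=1e-3)
print("kinetic cells:", int(flags.sum()), "of", flags.size)
```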

A comprehensive theoretical/numerical tool for electron transport in mesoscale-heterostructures

ORNL team members: Xiaoguang Zhang (PI), Mina Yoon, An-Ping Li, Don Nicholson

Modeling electron transport in electronic devices using macroscopic equations is a mature industry that generates billions of dollars per year. First-principles, quantum-mechanical modeling of atomic-scale transport has seen significant progress over the past decade. We identify a gap between these two length scales - the mesoscale, where effects that appear in the macroscopic equations become dominant only in nanoscale devices because they depend on some power of a characteristic device length. One such phenomenon is space-charge-limited currents (SCLCs), a central feature of energy-related devices such as light-emitting diodes and organic solar cells, which entail carrier injection. The critical voltage for the onset of injection is proportional to the square of the device length. State-of-the-art commercial modeling tools do not treat such phenomena adequately. In particular, the available theory of SCLCs in the presence of traps either neglects or treats phenomenologically key physical effects such as the interplay between dopants and traps, the Frenkel effect, and inter-trap tunneling (relevant at high trap concentrations). Our goal is to develop a set of comprehensive simulation tools for modeling mesoscopic electronic devices that exhibit a strong SCLC effect, in particular quasi-zero-, one-, or two-dimensional structures, based on rigorous physics formulations of the pertinent problems. Such a toolset will provide a flexible foundation on which to build future energy research programs, e.g., battery modeling where the charge carriers are ions, and is likely to attract industry interest.
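For orientation, the sketch below evaluates the textbook trap-free SCLC relation (the Mott-Gurney law), J = (9/8) eps mu V^2 / L^3, to show how strongly the current depends on device length; it is a minimal illustration with assumed material parameters, not the trap-resolved theory the project aims to develop.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mott_gurney_current(voltage, length, eps_r=3.0, mobility=1e-8):
    """Trap-free space-charge-limited current density (A/m^2) for an
    insulating layer of thickness `length` (m); eps_r and mobility are
    illustrative values for an organic semiconductor."""
    return 9.0 / 8.0 * eps_r * EPS0 * mobility * voltage**2 / length**3

for L_nm in (50, 100, 200):
    J = mott_gurney_current(voltage=2.0, length=L_nm * 1e-9)
    print(f"L = {L_nm} nm  ->  J = {J:.3e} A/m^2")
```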

Hardware/Software Resilience Co-Design Tools for Extreme-scale High-Performance Computing

ORNL team members: Christian Engelmann (PI) and Thomas Naughton

The path to exascale computing poses several research challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Resilience, i.e., providing efficiency and correctness in the presence of faults, is one of the most important exascale computer science challenges as systems scale up in component count (100,000-1,000,000 nodes with 1,000-10,000 cores per node by 2020) and component reliability decreases (7 nm technology with near-threshold voltage operation by 2020). Several high-performance computing (HPC) resilience technologies have been developed. However, there are currently no tools, methods, or metrics to compare them and to identify the cost/benefit trade-off between the key system design factors: performance, resilience, and power consumption. This work focuses on developing a resilience co-design toolkit with definitions, metrics, and methods to evaluate the cost/benefit trade-off of resilience solutions, identify hardware/software resilience properties, and coordinate interfaces/responsibilities of individual hardware/software components. The primary goal of this project is to provide the tools and data HPC vendors need to decide on future architectures and to enable direct feedback to them on emerging resilience threats.
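As a small example of the kind of cost/benefit metric such a toolkit would formalize, the sketch below applies the standard Young/Daly first-order model for checkpoint/restart: it balances checkpoint cost against expected rework after failures to pick a checkpoint interval and estimate the resulting overhead. The 60 s checkpoint cost and 4-hour system MTBF are assumed numbers for illustration, and this formula is a well-known baseline, not the project's toolkit.

```python
import math

def daly_optimal_interval(checkpoint_cost, mtbf):
    """Young/Daly first-order optimum for compute time between checkpoints (s)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def expected_overhead(interval, checkpoint_cost, mtbf):
    """Rough fraction of wall time lost to writing checkpoints plus
    re-computation after failures (first-order model)."""
    return checkpoint_cost / interval + (interval + checkpoint_cost) / (2.0 * mtbf)

# Illustrative numbers: 60 s to write a checkpoint, system MTBF of 4 hours.
C, M = 60.0, 4 * 3600.0
tau = daly_optimal_interval(C, M)
print(f"checkpoint every {tau / 60:.1f} min, "
      f"overhead ~{expected_overhead(tau, C, M):.1%}")
```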

Stochastic parameterization of the influence of subgrid scale land heterogeneity on convection in a climate model

ORNL team members: Richard Archibald, Salil Mahajan (PI), Jiafu Mao, Ben Mayer, Daniel S. McKenna, Dan Ricciuto (PI), Xiaoying Shi, Clayton Webster.

The convective parameterization is a crucial component of a climate model's representation of the climate system. State-of-the-science climate model simulations of precipitation tend to be too widespread and to lack the intense episodes of precipitation found in observations. Our hypothesis is that model precipitation fails to capture the influence of small-scale landscape heterogeneity and, to a lesser extent, dynamic surface variability. We propose to test this hypothesis and, at the same time, develop a stochastic parameterization of the surface fluxes that accounts for the influence of landscape heterogeneity. In the first stage of this project we will couple variants of the Community Land Model (CLM) to the Single Column Atmosphere Model (SCAM) to create a basic deterministic land-atmosphere simulator and characterize this system. Then, by degrees, we will introduce increasing levels of stochastic parameterization informed by knowledge of landscape heterogeneity and dynamic variability. We will conduct initial sensitivity tests by comparing column-model simulations and, where available, high-resolution regional simulations. Ultimately the goal is to introduce the stochastic parameterization into a fully three-dimensional version of CAM and test model sensitivity against regional precipitation statistics.
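The sketch below shows one simple way such a stochastic parameterization could perturb a grid-mean surface flux: draw ensemble members from a distribution whose spread is diagnosed from subgrid landscape heterogeneity. The Gaussian form, the function name, and the flux values are assumptions for illustration only, not the parameterization the project will develop.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_surface_flux(mean_flux, subgrid_std, n_members=10):
    """Draw surface-flux perturbations whose spread reflects subgrid
    landscape heterogeneity, for use as stochastic forcing of the
    convection scheme (assumed Gaussian for simplicity)."""
    return rng.normal(mean_flux, subgrid_std, size=n_members)

# Illustrative: grid-mean sensible heat flux of 120 W/m^2, with a 25 W/m^2
# standard deviation diagnosed from high-resolution land-surface tiles.
members = stochastic_surface_flux(mean_flux=120.0, subgrid_std=25.0)
print(members.round(1))
```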

Improved Metagenomic Analysis with Confidence Quantification for Biosurveillance of Novel and Man-made Threats

ORNL team members: Chongle Pan (PI), Loren Hauser, Miriam Land, and Richard Stouder

Genetically engineered and naturally occurring novel pathogens are current blind spots of biosurveillance. Such threats confound or defeat standard biosurveillance techniques based on PCR and immunological assays. This challenge can be addressed by using a metagenomics approach, in which all microorganisms in a field sample are directly sequenced together. The genome of a pathogen, even if it is altered by natural recombination or genetic engineering, can be computationally reconstructed and distinguished from background microorganisms. The pathogen's genome provides information on its full malicious capability, evolutionary origin, and genetic manipulation history. Although next-generation sequencing technologies are ready for biosurveillance deployment, the informatics needed to extract information from the vast amount of sequencing data still requires significant improvements in terms of detection accuracy, analysis speed, and information content. The objective of this proposal is to develop an integrated informatics solution for applying metagenomics to accurately detect novel or man-made pathogens and comprehensively characterize their genetic capabilities in a short turnaround time. The key technical innovation of the proposed work is the construction of probabilistic error models of next-generation sequencing data and the quantification of statistical confidence in threat detection and characterization. The proposed informatics solution will be systematically tested and evaluated for different biosurveillance scenarios using real-world and simulated metagenomic data.
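One elementary ingredient of any probabilistic error model for sequencing data is the conversion of per-base Phred quality scores into an error probability. The sketch below computes the probability that an entire read is error-free from its quality string; this is the standard Phred relation applied to illustrative scores, not the project's detection statistics.

```python
import numpy as np

def prob_read_error_free(phred_scores):
    """Probability that a read contains no sequencing errors, given per-base
    Phred qualities Q, where the per-base error probability is 10**(-Q/10)."""
    error_probs = 10.0 ** (-np.asarray(phred_scores, dtype=float) / 10.0)
    return float(np.prod(1.0 - error_probs))

# Illustrative 10-base read with qualities between Q20 and Q40.
print(prob_read_error_free([30, 35, 40, 20, 30, 30, 38, 40, 25, 30]))
```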

Towards a Resilient and Scalable Infrastructure for Big Data

ORNL Team Members: Brad Settlemyer (PI), Sarp Oral, David Dillow, and Feiyi Wang

Commercial data-intensive computing and simulation science have divergent requirements in a number of areas, but both share the need for a resilient and scalable infrastructure for extreme-scale I/O. To address the requirements of these key drivers of computing platforms before the end of the decade, we must conduct research on novel software/hardware architectures and establish our laboratory as a key contributor to the state of the art in distributed I/O systems that satisfy the performance, capacity, and resiliency requirements of the Big Data era. In this project, we propose the initial research required to design and evaluate a scalable object storage infrastructure that addresses the fundamental issues present in any Big Data system -- resiliency and performance of I/O at scale. To this end, we propose the following research:

  1. New asynchronous storage programming models and advanced storage scheduling schemes that improve scalability and performance, provide quality of service guarantees, and support both scientific analysis techniques and modern loosely-coupled analysis codes;
  2. New data location and replication schemes that ensure data access is mapped to devices that serve the target workloads with high performance, while also duplicating important data to durable media in a timely manner so that data remains available even when portions of the storage system are unavailable (e.g., due to outages or scheduled maintenance) -- a minimal placement sketch follows this list; and
  3. An evaluation of techniques for constructing hierarchical storage systems composed of multiple media types, including magnetic disk, tape, and emerging non-volatile memory technologies such as flash, along with an evaluation of candidate architectures against diverse Big Data workloads.
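As referenced in item 2, the sketch below shows one common way to map objects to storage servers with N-way replication: consistent hashing on a ring with virtual nodes, so that copies land on distinct servers and placement shifts only locally when a server is removed. The node names, replica count, and ring parameters are assumptions for illustration; this is a generic technique, not the storage design the project will produce.

```python
import hashlib
from bisect import bisect_right

# Illustrative object-storage servers (names are hypothetical).
NODES = [f"oss{i:02d}" for i in range(8)]

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

# Place each node at several points on a hash ring; virtual nodes smooth load.
RING = sorted((_hash(f"{node}#{v}"), node) for node in NODES for v in range(64))

def place(object_id, replicas=3):
    """Return `replicas` distinct nodes responsible for this object, walking
    clockwise around the ring from the object's hash position."""
    start = bisect_right(RING, (_hash(object_id), ""))
    chosen = []
    i = start
    while len(chosen) < replicas:
        node = RING[i % len(RING)][1]
        if node not in chosen:
            chosen.append(node)
        i += 1
    return chosen

print(place("checkpoint-000123.chunk42"))
```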