The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Basic and applied research programs are focused on computational sciences, intelligent systems, and information technologies.

Our mission includes applying advanced computing systems to important national priorities, working cooperatively with U.S. industry to enable efficient, cost-competitive design, and working with universities to enhance science education and scientific awareness. Our researchers are finding new ways to solve problems beyond the reach of most computers and are putting powerful software tools into the hands of students, teachers, government researchers, and industrial scientists.


Nageswara Rao Part of Team Recognized with Best Paper Award (December 14, 2016)

CSMD researcher Nageswara Rao's paper "Experimental Analysis of File Transfer Rates Over Wide-Area Dedicated Connections" received a best paper award at the 18th IEEE International Conference on High Performance Computing and Communications (HPCC 2016). The conference is a forum for engineers and scientists in academia, industry, and government to present and discuss new ideas, research results, applications, and experience across all aspects of high-performance computing and communications. IEEE HPCC 2016 is sponsored by IEEE, the IEEE Computer Society, and the IEEE Technical Committee on Scalable Computing (TCSC).

Other authors include Qiang Liu, Satyabrata Sen, Gregory Hinkel, Neena Imam, Ian Foster, Rajkumar Kettimuthu, Bradley Settlemyer, Chase Q. Wu, and Daqing Yun.

New Genomics Pipeline Combines AWS, Local HPC, and Supercomputing (September 22, 2016)

Declining DNA sequencing costs and the rush to do whole genome sequencing (WGS) of large cohort populations – think 5,000 subjects now, but many more thousands soon – present a formidable computational challenge to researchers attempting to make sense of large cohort datasets. No single architecture is best. This month researchers report developing a hybrid approach that combines cloud (AWS), local high-performance computing (LHPC) clusters, and supercomputers.

Their fascinating paper, "A hybrid computational strategy to address WGS variant analysis in >5000 samples," spells out in some detail the obstacles associated with using each resource and how to divide the work to maximize throughput and minimize cost. Computational resources used included Amazon AWS; a 4,000-core in-house cluster at Baylor College of Medicine; the IBM POWER-based Blue BioU at Rice University; and Rhea at Oak Ridge National Laboratory (ORNL). DNAnexus was also a collaborator.

Read the full story [HERE].

Emilio Ramirez sees better outcomes

Just a few years ago, Emilio Ramirez spent his days operating and adjusting settings to optimize thermal performance at a Central California bioenergy power plant.

Emilio, a California native who is now a University of Tennessee doctoral candidate working with Oak Ridge National Laboratory, acquired his first experience with bioenergy as the lead operator of a biomass circulating fluidized-bed combustion facility. Although the work was steady and intriguing, Emilio knew he wanted to contribute to bioenergy on a national scale.

Read the full story [HERE].

OLCF Researchers Scale R to Tackle Big Science Data Sets (July 6, 2016)

Sometimes lost in the discussion around big data is the fact that big science has long generated huge data sets. "In fact, large-scale simulations that run on leadership-class supercomputers work at such high speeds and resolution that they generate unprecedented amounts of data. The size of these datasets—ranging from a few gigabytes to hundreds of terabytes—makes managing and analyzing the resulting information a challenge in its own right," notes an article posted on the Oak Ridge National Laboratory site yesterday.

Now, a group at ORNL's Oak Ridge Leadership Computing Facility has developed pbdR (Programming with Big Data in R), a distributed-computing extension of the R language that is helping OLCF users cut large data sets down to size.

Read more here - https://www.hpcwire.com/2016/07/06/olcf-researchers-scale-r-tackle-big-science-data-sets/

Science community scrutinizes p-values, reproducibility (April 7, 2016)

P-values have been in the news lately, with at least one journal announcing that it would no longer publish papers containing them because the statistics were too often used to support lower-quality research.

The American Statistical Association has released a statement on p-values, an unprecedented action in the ASA's 177-year history, says the Computer Science and Mathematics Division's George Ostrouchov. The statement is currently available as an open-access accepted manuscript in The American Statistician and may still undergo minor edits.

George says, "From a statistician's perspective, p-values and statistical significance summarize a complex and narrow situation rendering their interpretation to be highly context dependent. Unfortunately, in our quest to simplify things, they are often used to incorrectly justify broader statements, leading to the current reproducibility crisis in many fields. My favorite commentary on the ASA statement is at FiveThirtyEight."
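The reproducibility point George raises is easy to demonstrate numerically. A minimal simulation (illustrative only; the sample size, trial count, and seed are arbitrary choices) shows that screening many hypotheses that are all null at the conventional p < 0.05 threshold still flags about 5% of them as "significant":

```python
# Illustrates one driver of the reproducibility problem discussed above:
# testing many TRUE-NULL hypotheses at p < 0.05 still yields
# "significant" findings about 5% of the time.
import random
import statistics
from math import erf, sqrt

random.seed(42)

def two_sample_p(n=30):
    """Approximate two-sided p-value for a difference in means between two
    samples drawn from the SAME distribution, so the null is true."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    se = sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))  # normal tail

trials = 2000
false_hits = sum(two_sample_p() < 0.05 for _ in range(trials))
print(false_hits / trials)   # close to 0.05 even with zero real effects
```

Run enough screens like this across a field and some of those false hits get published, which is exactly the broader-claim misuse the statement warns about.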

More articles and blogs about the ASA statement can be found at Following up on the ASA Statement on P-Values and Statistical Significance.

ORNL researchers may continue to use p-values.

Nageswara Rao's Work Recognized with Best Paper Award (March 16, 2016)

Nageswara Rao's paper "Performance Comparison of SDN Solutions for Switching Dedicated Long-Haul Connections" received a best paper award (one of two) at the International Symposium on Advances in Software Defined Networking and Network Functions Virtualization (SOFTNETWORKING 2016), held February 21–25 in Lisbon, Portugal.

Read the full citation [here].

CSMD Researcher Recognized (February 24, 2016)

Dr. Satyabrata Sen was elevated to the grade of IEEE Senior Member.

Dr. Sen is a research scientist in the Center for Engineering Systems Advanced Research group, where he works on statistical detection and estimation algorithms for radar signal processing and network fusion techniques. His research interests are in the areas of statistical signal processing, compressive sensing, asynchronous distributed tracking, network fusion, and their applications in radar, communications, and sensor arrays. He has co-authored 33 research articles in top IEEE journals and international conferences and has contributed to two book chapters.

Dr. Sen is the recipient of the 2016 Sidney D. Drell Academic Award from the Intelligence and National Security Alliance (INSA) for his contributions to the U.S. Intelligence Community, and of the Eugene P. Wigner Fellowship at ORNL from February 2011 to January 2013 as an outstanding early-career professional.

Simunovic part of team awarded patent by USPTO (January 19, 2016)

Srdjan Simunovic (along with Donald L. Erdman III, Vlastimil Kunc, and Yanli Wang) was awarded patent number 9,239,277 by the USPTO. The patent, titled "Material mechanical characterization method for multiple strains and strain rates," pertains to systems and methods for characterizing the mechanical properties of materials subjected to multiple strains and strain rates.

This invention was made with government support under Contract No. DE-AC05-00OR22725 awarded by the U.S. Department of Energy.

Read the full citation [here].

Process variation threatens to slow down and even pause chip miniaturization (January 5, 2016)

For the past several decades, the processor industry has enjoyed the benefits of chip miniaturization and the exponential increase in the number of on-chip transistors predicted by Moore's law. However, as process technology scales to smaller feature sizes, precise control of fabrication processes has become increasingly difficult. As a result, 'process variation' (PV), which refers to the deviation of parameters from their nominal specifications, has greatly worsened.

In his paper "A Survey of Architectural Techniques for Managing Process Variation," Sparsh Mittal investigates the impact of PV, along with strategies for mitigating it, across a wide range of system architectures: in CPUs and GPUs; in processor components (cache, main memory, processor core); in memory technologies (SRAM, DRAM, eDRAM, and non-volatile memories such as PCM and resistive RAM); and in both 2D and 3D processors.

Read more at: http://phys.org/news/2016-01-variation-threatens-chip-miniaturization.html#jCp

CSMD's Sen receives Drell Academic Award (December 17, 2015)

Satya Sen of the Computer Science and Mathematics Division has been awarded the 2016 Sidney D. Drell Academic Award by the Intelligence and National Security Alliance (INSA), an organization of the public, private, and academic sectors within the national security and intelligence communities. Satya came to ORNL as a Wigner Fellow and is currently a member of CSMD's Complex Systems group. The award recognizes his contributions to the Department of Homeland Security Domestic Nuclear Detection Office's Intelligent Radiation Sensing System projects. It is named for Sidney Drell, a Fermi Award-winning theoretical physicist, arms control expert, and professor emeritus at the Stanford Linear Accelerator Center.

Maier Named Fellow of the American Physical Society (November 18, 2015)

Thomas Maier, a senior research staff member in ORNL's Computer Science and Mathematics Division and in the Center for Nanophase Materials Sciences, focuses on many-body theory of correlated electron systems including unconventional superconductors, multilayers and nanostructures. He was cited by APS's Division of Condensed Matter Physics for "numerical and phenomenological calculations that have provided insight into cuprate and iron-pnictide superconductors."

Thomas was selected for the honor by the APS Council of Representatives and will be formally recognized at the APS's March meeting.

HPC Leading Institutes Announce Formation of the UCX Consortium to Expand Collaboration within HPC Community
(November 16, 2015)

Group Aims to Expedite Advances in High Performance Computing Worldwide; Takes the Next Step toward Achieving Exascale Performance

The OpenUCX community today, under the leadership of Oak Ridge National Laboratory, announced its intention to form the UCX Consortium, an industry group focused on the proliferation and continued evolution of the UCX High-Performance Computing Communication Framework. The Consortium is being formed to increase collaboration among government laboratories, universities, and commercial businesses within the HPC community, expanding the possibilities for discovery and advancement. Members of the Consortium will benefit from shared knowledge and resources and from fast, flexible access to a wide range of utilities and communication directives. They will also gain a production-grade, low-level, flexible communication software environment that can serve as a vehicle for revolutionary research and a foundation for innovation.

Read the full press release [here].

Agarwal awarded patent by USPTO (November 15, 2015)

Pratul Agarwal was awarded patent number 9,195,795 by the USPTO. The patent, titled "Identification and modification of dynamical regions in proteins for alteration of enzyme catalytic effect," describes an inexpensive and efficient method that uses computer simulations, in combination with available experimental data, to build suitable models and investigate enzyme activity.

Read the full citation [here].

New ORNL Paper: Survey on Asymmetric Multicore Processors (November 14, 2015)

ORNL researcher Sparsh Mittal has authored a new paper entitled "A Survey of Techniques for Architecting and Managing Asymmetric Multicore Processors." Now accepted for ACM Computing Surveys 2015, the survey reviews nearly 125 papers.

Modern computing systems have become highly diverse in terms of their workloads, usage patterns, scale, and optimization objectives. Hence, even a highly optimized "monolithic core" cannot simultaneously meet such diverse and often conflicting requirements. To address this challenge, asymmetric multicore processors have been proposed, which feature cores of different types (e.g., big and LITTLE) in the same processor. The Qualcomm Snapdragon 810, Samsung Exynos 5 Octa, and Nvidia Tegra X1 are some examples of asymmetric multicores.

Download the paper [here].


CORE-Direct Wins R&D100 Award (November 13, 2015)

Manjunath Gorentla Venkata and Pavel Shamis receive the R&D100 Award at the awards ceremony in Las Vegas

CORE-Direct (Collectives Offload Resource Engine) was developed with Mellanox Technologies, with Pavel Shamis as the ORNL team leader.

Oak Ridge National Laboratory's CORE-Direct is an application acceleration and scaling technology available within the InfiniBand HCA ecosystem for HPC, big data, and data center applications. CORE-Direct software and hardware are available from Mellanox. The technology accelerates the main determinant of performance and scalability in parallel applications: group data exchange operations. To achieve this, it adds software and hardware capabilities to offload and execute data exchange operations on the network, abstract the memory hierarchies on the node, and provide a powerful abstraction for applications, offering a novel and comprehensive solution. A testament to this is the technology's wide and successful adoption: more than 28% of the supercomputers on the Top500 list of the world's fastest supercomputers use CORE-Direct technology.

Widely recognized as the "Oscars of Invention," the R&D 100 Awards identify and celebrate the top technology products of the year. Past winners have included sophisticated testing equipment, innovative new materials, chemistry breakthroughs, biomedical products, consumer items, and advances in high-energy physics. The R&D 100 Awards span industry, academia, and government-sponsored research.

This research was done in the context of the Department of Energy Office of Science's FastOS program, which is focused on exploratory work in operating systems and runtimes for supercomputers at the petascale and beyond.

CAEBAT III Kick-off Held in DC (November 3, 2015)

On Nov. 3, 2015, a joint kickoff for the third phase of the Computer Aided Engineering for Batteries (CAEBAT) program was held in Washington, DC.

The kickoff for the third phase of CAEBAT also marks the start of the ORNL-led Consortium for Advanced Battery Simulation (CABS), a collaboration between ORNL, Lawrence Berkeley National Laboratory (LBNL) and Sandia National Laboratories (SNL). CABS is a three-year, $1.525M/yr. effort.

Two major barriers to increasing battery energy density and power, increasing safety, and reducing cost are: (1) insufficient understanding of the underlying physical phenomena that limit battery performance and safety, particularly the role of microstructure, and (2) a lack of validated predictive simulation tools. CABS will address (1) by developing new experiments for the properties with the largest uncertainties and new validated models that allow researchers to explore battery response under both normal and abusive conditions, and will address (2) by deploying increasingly capable and computationally efficient releases of the Open Architecture Software (OAS) and components of the Virtual Integrated Battery Environment (VIBE), developed as part of CAEBAT 1.

CABS will operate as an integrated partnership. LBNL will provide data for properties and validation of microstructure models. SNL will perform high-resolution microstructure simulations. ORNL will develop and perform new mechanics experiments, develop homogenized, layer-resolved, and microstructure models, and will deploy software components through VIBE/OAS while enhancing its extensibility and improving computational performance through implementation of new hybrid / adaptive methods and other numerical improvements. ORNL will also serve as the lead institution, performing overall management.

Please contact John Turner for more information.

Rao Receives Patent (September 11, 2015)

Nagi Rao received a patent for his work "Failure detection in high-performance clusters and computers using chaotic map computations."

This patent relates to failure detection and more particularly to fault detection of computing machines that utilize multiple processor cores and accelerators.

A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating-point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures, and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphics processing unit is configured to detect a computing component failure, memory element failure, and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
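The core idea can be sketched in a few lines. This is an illustrative toy (one logistic map on one value, with a hypothetical fault injection), not the patented implementation, which spans cores, accelerators, memory, and interconnects: because a chaotic map amplifies any tiny arithmetic deviation, two healthy units started from the same seed must produce matching trajectories, and any fault quickly shows up as divergence.

```python
def logistic_trajectory(seed, steps, fault_at=None):
    """Iterate the chaotic logistic map x -> 4x(1-x); optionally inject a
    tiny perturbation at step `fault_at` to model a faulty computation."""
    x = seed
    traj = []
    for n in range(steps):
        x = 4.0 * x * (1.0 - x)
        if fault_at is not None and n == fault_at:
            x += 1e-12          # a wrong FPU result, vastly smaller than x
        traj.append(x)
    return traj

def detect_fault(reference, candidate, tol=1e-9):
    """Return the first step where the trajectories diverge, else None."""
    for n, (a, b) in enumerate(zip(reference, candidate)):
        if abs(a - b) > tol:
            return n
    return None

ref = logistic_trajectory(0.3, 100)
ok = logistic_trajectory(0.3, 100)
bad = logistic_trajectory(0.3, 100, fault_at=40)

print(detect_fault(ref, ok))    # None: a healthy unit matches the reference
print(detect_fault(ref, bad))   # divergence flagged shortly after step 40
```

The chaotic map's exponential sensitivity is what makes the comparison cheap: even a one-bit error grows to a detectable difference within a few dozen iterations.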

Opportunities for Nonvolatile Memory Systems in Extreme-Scale High Performance Computing (November 2015)

A paper from CSMD's Future Technologies Group, titled "Opportunities for Nonvolatile Memory Systems in Extreme-Scale High Performance Computing," has been highlighted on the HPCwire website. The paper discusses scaling challenges for DRAM and also explores the scope for emerging non-volatile memories such as Flash and STT-RAM.

Oak Ridge National Laboratory to Co-Lead DOE's New HPC for Manufacturing Program (September 24, 2015)

Oak Ridge National Laboratory (ORNL) is collaborating with Lawrence Livermore and Lawrence Berkeley National Laboratories (LLNL and LBNL) on a new US Department of Energy (DOE) program designed to fund and foster public-private R&D projects that enhance US competitiveness in clean energy manufacturing.

"The Manufacturing Demonstration Facility has worked with numerous industry partners to overcome challenges in areas of advanced manufacturing, and ORNL is excited by the prospect of extending and accelerating this success through modeling and simulation," said John Turner, Group Leader for Computational Engineering and Energy Sciences and ORNL lead for HPC4Mfg. He added, "We look forward to collaborating with colleagues at LLNL and LBNL, and with industry partners, to apply our computational expertise to challenging clean energy manufacturing problems." Read the article [here].

ACME continues to soar with yet another award (September 15, 2015)

Launched in 2014, ACME is a multi-laboratory initiative to harness the power of supercomputers like ORNL's Titan and Argonne's Mira to develop fully coupled state-of-the-science Earth system models for climate change research and scientific and energy applications. Eight national labs, the National Center for Atmospheric Research, four academic institutions, and one private sector company are collaborating on the 10-year project. Researchers from ORNL's Climate Change Science Institute (CCSI) were instrumental in developing and defending the project plan and are leading or co-leading several project teams, including the teams responsible for new land model development (Peter Thornton), assessing and improving model performance on high-performance computing platforms (Patrick Worley), and developing and evaluating simulation workflow tools (Kate Evans).

CSMD staff mentor HVA student (September 15, 2015)

Hardin Valley Academy's Feldman ready to shed light on climate change
Following an internship at Oak Ridge National Laboratory (where he was mentored by CSMD's Kate Evans), 18-year-old Sam Feldman not only has chosen his profession - he's passionate about helping to reverse the effects of man-made climate change.
"That's when I first found out about climate change research, and that got me interested," Feldman, a Hardin Valley Academy 2015 graduate and advanced placement honors student, said. "All the data we studied pointed toward it being a reality. And the fact that ice sheets are melting [at the earth's poles]. ... I specifically studied ice sheets so I learned a lot about how they are currently melting."

Pavel Shamis (ORNL) and Gilad Shainer (Mellanox) announce the UCX Unified Communication X Framework. (September 15, 2015)

UCX is a collaboration between industry, laboratories, and academia to create an open-source, production-grade communication framework for data-centric and HPC applications.

Jay Jay Billings Interview (September 15, 2015)

CSMD researcher Jay Jay Billings was the subject of the latest User Spotlight interview which is part of the Eclipse Newsletter. The Eclipse technology is a vendor-neutral, open development platform supplying frameworks and exemplary, extensible tools (the "Eclipse Platform").
To read the full interview please go [here].

ORNL Announces the Land Ice Verification and Validation Tool Kit (LIVVKIT)

Kate Evans

On July 15, 2015, developers from Oak Ridge National Laboratory (ORNL) and Los Alamos National Laboratory (LANL) released the Land Ice Verification and Validation toolkit (LIVVkit), which was developed within the DOE ASCR/BER SciDAC project titled Prediction of Ice Sheet and Climate Evolution at Extreme Scales (PISCEES). With their collaborators at LANL, the ORNL team created the first capability to systematically evaluate the continental-scale Community Ice Sheet Model (CISM) using an advanced Python environment with a fully interactive website.

Please go [here] to read the full release.

Science Highlights

Approach Developed for Automatically Characterizing Parallel Application Communication Patterns (November 2015)

P.C. Roth, J.S. Meredith, J.S. Vetter

Researchers developed an approach for automatically recognizing and concisely representing the communication patterns of parallel applications that use the Message Passing Interface (MPI). Characterizing parallel application communication patterns requires considerable expertise but greatly simplifies tasks such as proxy application validation, performance problem diagnosis, and debugging. This automated approach significantly reduces the expertise necessary for effective characterization.

To enable characterization of parallel application communication patterns by non-experts, we developed an approach for automatically recognizing and parameterizing communication patterns in MPI-based applications. Beginning with a communication matrix that indicates how much data each process transferred to every other process during the application's run, we use an automated search to recognize communication patterns within this matrix. At each search step, we recognize patterns from a pattern library in the communication matrix. Using a technique similar to astronomy's "sky subtraction," when we recognize a pattern we remove it from the matrix and apply our recognition approach recursively to the resulting matrix. Because more than one pattern might be recognized at each search step, the search produces a search results tree whose paths between root and leaves represent collections of patterns recognized in the original matrix. The path that accounts for most of the original communication matrix's traffic corresponds to the collection of patterns that best explains the application's communication behavior. We implemented our approach in a tool called AChax that was highly effective in recognizing the communication patterns in a synthetic communication matrix and the regular communication patterns in matrices obtained from the LAMMPS molecular dynamics and LULESH shock hydrodynamics applications.
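A greatly simplified sketch of the subtraction idea follows. This is a greedy, two-pattern toy rather than AChax's full search tree, and the pattern library and synthetic matrix are invented for illustration: find the largest multiple of a library pattern contained in the communication matrix, subtract it, and repeat on the residual.

```python
import numpy as np

P = 8  # number of MPI ranks in this toy example

def ring_pattern(P):
    """Each rank sends one unit to its right neighbor (periodic)."""
    M = np.zeros((P, P))
    for r in range(P):
        M[r, (r + 1) % P] = 1.0
    return M

def bcast_pattern(P, root=0):
    """Root sends one unit to every other rank."""
    M = np.zeros((P, P))
    M[root, :] = 1.0
    M[root, root] = 0.0
    return M

LIBRARY = {"ring": ring_pattern(P), "bcast": bcast_pattern(P)}

def recognize(matrix):
    """Greedy 'sky subtraction': repeatedly find the largest multiple of a
    library pattern that fits under the matrix, remove it, and recurse."""
    found = []
    residual = matrix.copy()
    progress = True
    while progress:
        progress = False
        for name, pat in LIBRARY.items():
            mask = pat > 0
            # Largest scale s such that s * pattern fits under the residual.
            scale = (residual[mask] / pat[mask]).min()
            if scale > 1e-9:
                residual -= scale * pat
                found.append((name, scale))
                progress = True
    return found, residual

# Synthetic matrix: a 1000-byte ring exchange plus a 64-byte broadcast.
M = 1000 * ring_pattern(P) + 64 * bcast_pattern(P)
patterns, residual = recognize(M)
print(patterns)          # the ring and broadcast components, with scales
print(residual.sum())    # 0.0: the two patterns fully explain the traffic
```

The real tool explores multiple recognition orders as a tree and keeps the path explaining the most traffic; the greedy loop above shows only the subtract-and-recurse step.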

Funding for this work was provided by the Office of Advanced Scientific Computing Research, U.S. Department of Energy. The work was performed at ORNL.

P.C. Roth, J.S. Meredith, J.S. Vetter, "Automated Characterization of Parallel Application Communication Patterns," 24th International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC-2015), Portland, Oregon, June 2015.

Explicit Integration with GPU Acceleration for Large Kinetic Networks (November 2015)

Benjamin Brock, Andrew Belt, Jay Jay Billings, and Mike Guidry

Researchers were able to achieve a 13X performance boost by migrating to GPUs. This increase in performance makes it possible to solve kinetic systems in parallel on GPUs.

We demonstrate the first implementation of recently developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions, coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve on the order of 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were previously intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
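As a rough illustration of why explicit methods can survive stiffness, the sketch below applies an asymptotic-style explicit update, y_new = (y + dt*F_plus) / (1 + dt*k), one family of fast explicit kinetic integrators, to a hypothetical two-species network. The species, rates, and step size are invented and far simpler than the 150-species network in the paper.

```python
# Toy stiff kinetics: the reversible reaction A <=> B with fast rates.
# For dy/dt = F_plus - k*y, the asymptotic update divides by (1 + dt*k)
# instead of adding dt*(F_plus - k*y), so it stays stable even when
# dt*k >> 1, where forward Euler (here dt*k = 100) would blow up.

def asymptotic_step(A, B, k1, k2, dt):
    """One explicit asymptotic update for A <=> B (k1: A->B, k2: B->A)."""
    A_new = (A + dt * k2 * B) / (1.0 + dt * k1)   # F_plus = k2*B, loss = k1*A
    B_new = (B + dt * k1 * A) / (1.0 + dt * k2)
    return A_new, B_new

k1 = k2 = 1.0e4          # fast rates: dt*k = 100, far past Euler's limit
dt = 1.0e-2
A, B = 1.0, 0.0          # start with everything in species A

for _ in range(1000):
    A, B = asymptotic_step(A, B, k1, k2, dt)

print(A, B)              # both near the 0.5/0.5 equilibrium; A + B conserved
```

With equal rates this update conserves A + B exactly and relaxes to the correct equilibrium, which is the kind of behavior that lets explicit schemes replace implicit solves on GPUs.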

This work was performed at ORNL and UTK.

Publication: Benjamin Brock, Andrew Belt, Jay Jay Billings, and Mike Guidry, "Explicit integration with GPU acceleration for large kinetic networks", Journal of Computational Physics, 302 (2015)

Trace-Driven Memory Access Pattern Recognition in Computational Kernels (November 2015)

Christos Kartsaklis and Tomislav Janjusic (ORNL); EunJung Park and John Cavazos (University of Delaware)

Researchers developed a methodology and tool to classify memory access patterns from traces.

Classifying memory access patterns is paramount to selecting the right set of optimizations and determining the parallelization strategy. Static analyses suffer from ambiguities present in source code, which modern compilation techniques, such as profile-guided optimization, alleviate by observing runtime behavior and feeding it back into the compilation flow. We implemented a dynamic analysis technique for recognizing memory access patterns, with application to the stencil domain.
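A toy version of the idea, assuming nothing about the actual tool's classifier: label a recorded address trace by its dominant stride, the simplest signal a trace-driven analysis can extract.

```python
# Minimal trace-driven classification: compute the address deltas of a
# memory trace and label it sequential, strided, or irregular. Addresses
# and the 90% dominance threshold are invented for illustration.
from collections import Counter

def classify_trace(addresses, element_size=8):
    """Label a memory-address trace by its dominant stride."""
    deltas = [b - a for a, b in zip(addresses, addresses[1:])]
    if not deltas:
        return "empty"
    stride, count = Counter(deltas).most_common(1)[0]
    if count / len(deltas) < 0.9:      # no single stride dominates
        return "irregular"
    if stride == element_size:
        return "sequential"
    return f"strided({stride})"

base = 0x1000
seq = [base + 8 * i for i in range(64)]            # unit-stride scan
col = [base + 8 * 1024 * i for i in range(64)]     # column walk of a matrix
rnd = [base, base + 40, base + 8, base + 96, base + 16, base + 200]

print(classify_trace(seq))   # sequential
print(classify_trace(col))   # strided(8192)
print(classify_trace(rnd))   # irregular
```

Real stencil traces mix several strides per loop nest, which is why the paper's approach matches traces against richer pattern templates rather than a single dominant delta.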

This work was performed at ORNL using OLCF resources.

Publication: EunJung Park, Christos Kartsaklis, Tomislav Janjusic, and John Cavazos. "Trace-Driven Memory Access Pattern Recognition in Computational Kernels", in WOSC 2015: 2nd Workshop on Optimizing Stencil Computations, in conjunction with SPLASH 2014, Portland, OR, October 2014.

BEAM Introduces "Push-button" Execution of Dynamically Generated HPC Data Analysis Workflows at OLCF, NERSC, and CADES (November 2015)

E. J. Lingerfelt, S. Jesse, A. Belianinov, E. Endeve, M. Shankar, M. B. Okatan, O. Ovchinikov, C. T. Symons, R. K. Archibald

The Bellerophon Environment for Analysis of Materials (BEAM) has added the capability for instrument scientists at IFIM and CNMS to perform and monitor near real-time data analysis by dynamically generating and executing HPC workflows on Titan at OLCF, Edison and Hopper at NERSC, and the Pileus compute cluster at CADES.

This work will accelerate scientific discovery by enabling IFIM and CNMS users to perform robust data analysis in only a fraction of the time required using today's techniques. BEAM users will be able to directly utilize DOE HPC compute and data resources to efficiently integrate and analyze complex multi-modal data with no previous experience as a computer programmer or HPC user.

Work was performed at ORNL, IFIM, CNMS, OLCF, and NERSC and is supported by the LDRD program of Oak Ridge National Laboratory.

Poster: E. J. Lingerfelt, S. Jesse, A. Belianinov, E. Endeve, M. Shankar, M. B. Okatan, O. Ovchinikov, C. T. Symons, R. K. Archibald, "Unifying In Silico and Empirical Experiments in CADES: Scalable Analysis of High-dimensional Nanophase Materials Imaging Data with BEAM", 2015 Smoky Mountains Computational Sciences and Engineering Conference, Sept. 2, 2015, Gatlinburg, TN.

Black Carbon Aerosols Induced Northern Hemisphere Tropical Expansion (November 2015)

M. Kovilakam and S. Mahajan

In a suite of experiments forced with a range of Black Carbon aerosols (BC) within estimated uncertainty bounds, we analyzed the impact of BC on the expansion of the Tropics using a variety of metrics that quantify the tropical extent. These experiments suggest that tropical expansion increases nearly linearly with increasing BC forcing, owing to the relative warming of the mid-latitudes by increased absorption of solar radiation by BC.

Global Climate Models (GCMs) underestimate the observed trend in tropical expansion. Recent studies partly attribute this to black carbon aerosols (BC), which are poorly represented in GCMs. We conduct a suite of idealized experiments with the Community Atmosphere Model (CAM4) coupled to a slab ocean model, forced with increasing BC concentrations that cover a large swath of the estimated range of current BC radiative forcing while maintaining their spatial distribution. The Northern Hemisphere (NH) tropics expand polewards nearly linearly as BC radiative forcing increases (0.70° W⁻¹ m²), indicating that a realistic representation of BC could reduce GCM biases. We find support for the mechanism whereby BC-induced midlatitude tropospheric heating shifts the maximum meridional tropospheric temperature gradient polewards, resulting in tropical expansion. We also find that the NH poleward tropical edge is nearly linearly correlated with the location of the intertropical convergence zone (ITCZ), which shifts northwards in response to increasing BC.

Publication: Kovilakam M. and S. Mahajan (2015): Black carbon aerosols induced Northern Hemisphere Tropical Expansion, Geophysical Research Letters, 42, 4964-4972, doi:10.1002/2015GL064559

An Energy-Stable Convex Splitting for the Phase-Field Crystal Equation (November 2015)

KAUST research group, Oak Ridge National Laboratory

Researchers developed a discretization for the phase-field crystal equation that guarantees mass and energy conservation while maintaining second-order accuracy in space and time. They proved that conservation is achieved discretely and demonstrated this in two- and three-dimensional simulations.

The phase-field crystal equation, a parabolic, sixth-order and nonlinear partial differential equation, has generated considerable interest as a possible solution to problems arising in molecular dynamics. Nonetheless, solving this equation is not a trivial task, as energy dissipation and mass conservation need to be verified for the numerical solution to be valid. This work addresses these issues, and proposes a novel algorithm that guarantees mass conservation, unconditional energy stability and second-order accuracy in time. Numerical results validating our proofs are presented, and two- and three-dimensional simulations involving crystal growth are shown, highlighting the robustness of the method.
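For reference, the phase-field crystal equation is commonly written as the conserved gradient flow of a Swift-Hohenberg-type free energy. This is the standard dimensionless form from the literature; the paper's notation may differ slightly:

```latex
\mathcal{F}[\phi] \;=\; \int_\Omega \left[\, \frac{\phi}{2}\left(-\epsilon + \left(1+\nabla^2\right)^{2}\right)\phi \;+\; \frac{\phi^4}{4} \,\right] \mathrm{d}\Omega,
\qquad
\frac{\partial\phi}{\partial t} \;=\; \nabla^2 \frac{\delta\mathcal{F}}{\delta\phi}
\;=\; \nabla^2\!\left[\phi^3 + (1-\epsilon)\phi + 2\nabla^2\phi + \nabla^4\phi\right].
```

The outer Laplacian makes the dynamics sixth order and mass conserving, and the energy-stable convex splitting referred to above separates the contractive and expansive parts of this free energy so that each time step provably dissipates the discrete energy.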

Publication: P. Vignal, L. Dalcin, D.L. Brown, N. Collier, V.M. Calo, An energy-stable convex splitting for the phase-field crystal equation, Computers & Structures Volume 158, 1 October 2015, pp 355–368.

A Continuation Multilevel Monte Carlo Algorithm (November 2015)

KAUST research group, Oak Ridge National Laboratory

Researchers designed a multilevel Monte Carlo algorithm for stochastic differential equations in which the number of levels, the grid resolutions, and the number of samples per level are determined so as to minimize computational cost while satisfying constraints on the bias and statistical error. The parameters of the numerical models used to estimate costs and errors are dynamically calibrated based on information obtained as the process continues. We demonstrate that our algorithm runs anywhere from 2 to 12 times faster than standard techniques.
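A fixed-parameter sketch of the underlying multilevel telescoping estimator follows; the continuation algorithm additionally calibrates its bias and variance models on the fly and chooses the levels and sample counts adaptively, whereas the SDE, parameters, and sample counts below are arbitrary choices for illustration.

```python
# Multilevel Monte Carlo for E[X_T] of a geometric Brownian motion
# (chosen only because its exact mean e^{mu*T} is known for checking).
# E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with coarse and fine paths
# on each level driven by the SAME Brownian increments.
import random
import math

random.seed(1)
mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0   # GBM parameters (arbitrary)

def coupled_payoffs(level):
    """Euler-Maruyama estimate of X_T on a fine grid of 2^level steps and,
    for level > 0, on the half-resolution grid sharing the Brownian path."""
    n_fine = 2 ** level
    h = T / n_fine
    xf, xc = x0, x0
    dw_pair = 0.0
    for i in range(n_fine):
        dw = random.gauss(0.0, math.sqrt(h))
        xf += mu * xf * h + sigma * xf * dw
        dw_pair += dw
        if i % 2 == 1:               # two fine steps make one coarse step
            xc += mu * xc * 2 * h + sigma * xc * dw_pair
            dw_pair = 0.0
    return xf, xc

def mlmc(max_level, samples_per_level):
    """Sum the level-0 estimate and the coarse-to-fine corrections."""
    total = 0.0
    for level, n in zip(range(max_level + 1), samples_per_level):
        s = 0.0
        for _ in range(n):
            fine, coarse = coupled_payoffs(level)
            s += fine - (coarse if level > 0 else 0.0)
        total += s / n
    return total

# Sample counts shrink geometrically: the correction variance does too.
est = mlmc(5, [20000, 10000, 5000, 2500, 1250, 625])
print(est, math.exp(mu * T))   # estimate vs. the exact mean e^{mu*T}
```

Because most samples are taken on the cheap coarse levels and only a few on the expensive fine ones, the total cost is far below that of plain Monte Carlo at the finest resolution, which is the effect the 2x-12x speedup quantifies.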

Publication: N. Collier, A. Haji-Ali, F. Nobile, E. von Schwerin, R. Tempone, A continuation multilevel Monte Carlo algorithm, BIT Numerical Mathematics, June 2015, Volume 55, Issue 2, pp. 399–432.

Efficient Storage of Data to HPSS (November 2015)

Benjamin Mayer

The researcher reduced, by a factor of 1,000, the number of metadata records needed to track stored climate-science simulation data. Because climate data represents a significant portion of Titan computing output, this reduction in records can have a significant impact on HPSS operations.

The ACME project is a major consumer of cycles on world-class capability computing systems such as Titan. In the process of doing science on these large-scale machines, simulation data are generated and need to be archived for future analysis. The system that satisfies this archival need is the High Performance Storage System (HPSS). Among the many details of HPSS operation, one of the limiting factors in how much data the system can handle is the number of files it must track, the so-called metadata operations. Storing data to HPSS can also be a time-consuming process for a user, and a complicated one given the interaction between project requirements and HPSS system requirements.

To alleviate these issues, we have codified the archiving of simulation files into a tar file, along with the logic for reliable storage to HPSS. These operations are performed by submitting jobs to the queuing system of the Data Transfer Nodes, which allows very large file sets to be processed and outcomes and potential faults to be recorded automatically. The queuing system also allows large numbers of jobs to be submitted without risk of overloading the system, since it limits how many queued processes run concurrently, a safeguard that manual execution lacks.

This program will be used as part of a larger whole to provide efficient, reliable, low-effort, and low-cost storage of critical simulation files for the ACME project, while remaining adaptable to other projects. It will also serve as the base for the data-handling portion of a larger automated-workflow effort inside ACME.
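The bundling step can be sketched as follows. This is a minimal illustration with a hypothetical `archive_run` helper, not the project's actual tooling: many files from one simulation run become a single tar plus a checksum manifest, so the archive system tracks one metadata record instead of thousands (the HPSS transfer and queue submission are omitted):

```python
import hashlib
import json
import tarfile
from pathlib import Path

def archive_run(run_dir, out_dir):
    """Bundle every file under run_dir into one tar archive.
    A sha256 manifest is written alongside it so the bundle can be
    verified before and after transfer to the archive system.
    Returns (path to tar, manifest dict)."""
    run_dir, out_dir = Path(run_dir), Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    tar_path = out_dir / f"{run_dir.name}.tar"
    manifest = {}
    with tarfile.open(tar_path, "w") as tar:
        for f in sorted(run_dir.rglob("*")):
            if f.is_file():
                rel = str(f.relative_to(run_dir))
                manifest[rel] = hashlib.sha256(f.read_bytes()).hexdigest()
                tar.add(f, arcname=f"{run_dir.name}/{rel}")
    # sanity-check the archive before handing it to a transfer job
    with tarfile.open(tar_path) as tar:
        assert len(tar.getmembers()) == len(manifest)
    (out_dir / f"{run_dir.name}.manifest.json").write_text(json.dumps(manifest))
    return tar_path, manifest
```

In a workflow like the one described, a batch job on a Data Transfer Node would run this bundling step and then push the single tar file to HPSS, recording the outcome so failed transfers can be retried.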

This work is sponsored by DOE via the ACME project.

Fidelity of Climate Extremes in High Resolution Global Climate Simulations (November 2015)

Mahajan S., K. J. Evans, M. Branstetter, V. Anantharaj and J. K. Leifeld

Researchers at ORNL developed a regionalization framework to quantify extreme events. Using the framework, they demonstrated that simulations with the high-resolution global climate model capture stationary and non-stationary precipitation extremes, as represented by Generalized Extreme Value (GEV) distributions, better than the low-resolution version of the model. The framework is implemented in parallel, allowing quick analysis of global climate extremes in ultra-high-resolution global climate models and speeding up data analysis by several orders of magnitude. The parameters of the GEV distributions of precipitation extremes are found to be better represented in the high-resolution simulations.

Precipitation extremes have tangible societal impacts. Here, we assess whether current state-of-the-art global climate model simulations at high spatial resolutions (0.35°×0.35°) capture the observed behavior of precipitation extremes in the past few decades over the continental US. We design a correlation-based regionalization framework to quantify precipitation extremes, where samples of extreme events for a grid box may also be drawn from neighboring grid boxes with statistically equal means and statistically significant temporal correlations. We model precipitation extremes with Generalized Extreme Value (GEV) distribution fits to time series of annual maximum precipitation. Non-stationarity of extremes is captured by including a time-dependent parameter in the GEV distribution. Our analysis reveals that the high-resolution model substantially improves the simulation of stationary precipitation extreme statistics, particularly over the Northwest Pacific coastal region and the Southeast US. Observational data exhibit significant non-stationary behavior of extremes only over some parts of the Western US, with declining trends in the extremes. While the high-resolution simulations improve upon the low-resolution model in simulating this non-stationary behavior, the trends are statistically significant only over some of those regions.
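The block-maxima GEV fit described above can be sketched with SciPy. This is a minimal stationary illustration, not the paper's parallel implementation, and the time-dependent parameter used for non-stationarity is omitted; function names are illustrative:

```python
import numpy as np
from scipy.stats import genextreme

def fit_gev(annual_max):
    """Fit a stationary GEV distribution to a series of annual maxima
    (block maxima), e.g. yearly maximum daily precipitation at one
    grid box. Returns (shape c, location, scale) in SciPy's sign
    convention, where c = -xi relative to the usual GEV shape xi."""
    c, loc, scale = genextreme.fit(np.asarray(annual_max))
    return c, loc, scale

def return_level(c, loc, scale, T=20):
    """T-year return level: the value exceeded on average once every
    T years, i.e. the (1 - 1/T) quantile of the fitted GEV."""
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
```

Comparing fitted GEV parameters (and derived return levels) between model output and observations, grid box by grid box, is the kind of per-region diagnostic the framework parallelizes.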

Publication: Mahajan S., K. J. Evans, M. Branstetter, V. Anantharaj and J. K. Leifeld (2015): Fidelity of precipitation extremes in high resolution global climate simulations, Procedia Computer Science

Towards a Science of Tumor Forecast for Clinical Oncology (November 2015)

T. Yankeelov, V. Quaranta, K.J. Evans, and E. Rericha

Researchers provided a numerical-weather-prediction perspective on the development of phenomenological models for predicting tumor growth. They also gave examples of the potential benefits, including using available patient data as an initial state to predict outcomes and testing the sensitivity of predicted outcomes to candidate therapies.

Researchers propose that the quantitative cancer biology community make a concerted effort to apply lessons from weather forecasting to develop an analogous methodology for predicting and evaluating tumor growth and treatment response. Currently, the time course of tumor response is not predicted; instead, response is only assessed post hoc by physical examination or imaging methods. This fundamental practice within clinical oncology limits optimization of a treatment regimen for an individual patient, as well as real-time determination of whether the choice was in fact appropriate. This is especially frustrating at a time when a panoply of molecularly targeted therapies is available, and precision genetic or proteomic analyses of tumors are an established reality. By learning from the methods of weather and climate modeling, we submit that the forecasting power of biophysical and biomathematical modeling can be harnessed to hasten the arrival of a field of predictive oncology. With a successful methodology toward tumor forecasting, it should be possible to integrate large tumor-specific datasets of varied types and effectively defeat cancer one patient at a time.

Publication: T. Yankeelov, V. Quaranta , K.J. Evans, and E. Rericha (2015). Towards a Science of Tumor Forecast for Clinical Oncology. Cancer Research 75:918-23, doi:10.1158/0008-5472.CAN-14-2233. DOE co-author: Katherine Evans, ORNL


December 5, 2017 - Koby Hayashi: The CP Decomposition: Efficient Algorithms and Application to Neuro-Imaging