The Computational Mathematics activities include developing and deploying computational and applied mathematical capabilities for modeling, simulating, and predicting complex phenomena of importance to the Department of Energy (DOE). A particular focus is on developing novel scalable algorithms that exploit emerging high-performance computing resources for scientific discovery as well as the decision sciences.

Discrete Math

Many complex systems of importance to DOE consist of networks of discrete components - for example, cyber networks, the electric power grid, social networks (whose behavior can drive energy demand), and biological (e.g., gene regulatory or metabolic) networks. Furthermore, the networks of interest to DOE are often very large, containing hundreds of millions of elements or more, and can range from relatively static in structure (e.g., the power grid) to rapidly changing and evolving (e.g., social networks). Mathematical analysis and models of complex networks can be used to computationally explore many practical questions about the performance and behavior of real systems. Much of the existing research on complex networks has focused primarily on the identification of important invariant properties, including degree distribution, clustering coefficients, shortest paths, and connectivity. However, these properties tend to measure either very "local" interactions or very "global" properties, and thus often fail to capture the dynamics of processes on the network, where the coupling between local structure and network-wide phenomena is critical. The discrete mathematics research at ORNL currently focuses on elucidating and utilizing this "intermediate scale" structure in complex networks to enable scalable analysis and expand the types of queries that can be run on massive graphs/datasets. This includes work on developing new algorithms and implementations for solving graph optimization problems using tree decompositions and dynamic programming on parallel HPC architectures; integrating structural graph theory constructs with ideas from hyperbolic geometry and metric spaces to better define an intermediate-scale "skeleton" for networks; and developing approaches to problems in dimensionality reduction that take advantage of network structure to avoid densification of sparse graph data.
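As a small illustration of the "local" invariants mentioned above, the sketch below computes the local clustering coefficient of a node from plain adjacency sets (the toy graph is hypothetical, chosen only for the example):

```python
# Local clustering coefficient: the fraction of a node's neighbor pairs
# that are themselves connected -- a prototypical "local" graph invariant.
from itertools import combinations

def clustering_coefficient(adj, node):
    """adj: dict mapping node -> set of neighbors (undirected graph)."""
    neighbors = adj[node]
    k = len(neighbors)
    if k < 2:
        return 0.0  # fewer than two neighbors: no pairs to close
    links = sum(1 for u, v in combinations(neighbors, 2) if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy graph: a triangle {a, b, c} plus a pendant vertex d attached to a.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
```

For node "a", one of its three neighbor pairs is connected, giving a coefficient of 1/3; for "b", both neighbors are connected, giving 1.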
ORNL is also represented on the steering committee for the Graph 500 benchmark, which aims to provide performance metrics for data-intensive supercomputing applications to guide hardware and software design for supporting these increasingly important discrete problems.

Kinetic Theory

Kinetic modeling is a fundamental tool in the analysis and simulation of large particle-based systems. In a kinetic description, such systems are characterized by a positive distribution function that gives the number of particles with a given momentum at a given point in space and time. It is a mean-field description that, on the one hand, improves the accuracy of popular fluid models, which are not valid away from equilibrium, and, on the other hand, can incorporate important features of finer-scale models, such as molecular dynamics and quantum mechanics, without necessarily resolving these finer scales.

Traditional applications for kinetic modeling include dilute gases, plasmas, radiative transport, and multiphase flow. New and emerging applications include semi-classical descriptions of quantum mechanics, traffic and network modeling, and biological processes of self-organization. In all of these applications, the kinetic equations share a remarkably similar mathematical structure, featuring convection in phase space (with velocity/momentum advection and field-driven acceleration) and collision terms with integral operators.
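Written out, this shared structure takes the schematic form (the symbols here are chosen for illustration):

```latex
\frac{\partial f}{\partial t} + v \cdot \nabla_x f + \frac{F}{m} \cdot \nabla_v f = Q(f)
```

where $f(x, v, t)$ is the distribution function, the $v \cdot \nabla_x f$ term is phase-space advection, the $F$ term is field-driven acceleration, and $Q(f)$ is an integral collision operator (e.g., of Boltzmann type).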

The large phase space associated with the kinetic description has, in the past, made simulations impractical in most settings. However, recent advances in computing resources and numerical algorithms are making kinetic models more tractable. At Oak Ridge National Laboratory, the goal of the mathematics program is to continue and accelerate this trend by designing algorithms that leverage the computational power of the world's fastest computers. Particular emphasis is currently on three areas: (i) moment models, which track the evolution of a handful of weighted averages in the momentum space; (ii) asymptotic preserving methods, which capture asymptotic limits of kinetic equations without having to resolve unnecessary microscopic dynamics; and (iii) hybrid models that combine low-resolution and high-resolution models of particle systems in a single, efficient framework.

Linear Algebra

The aim of the Numerical Linear Algebra research program is to develop key numerical linear algebra techniques in support of large-scale computational science applications of importance to the Office of Science. The project is developing algorithms that minimize data movement and global synchronization in preparation for exascale computing, with impact on applications in fusion, materials, and nuclear energy. Technology roadmaps predict that power and data movement will be critical issues on the path to exascale computing. The project re-evaluates and adapts out-of-core (or external-memory) algorithms and the latest developments in communication-avoiding algorithms to minimize data movement and communication in numerical linear algebra. In the near future, parallel out-of-core algorithms such as LU, QR, and Cholesky factorization can be adapted to minimize the amount of costly data movement between GPU and CPU. This would allow the efficient solution of significant problems that are larger than the available GPU device memory on the Cray XK6 Titan supercomputer.
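As a sketch of the tile-by-tile access pattern that out-of-core and GPU-staged factorizations rely on (not the project's actual implementation), here is a minimal blocked Cholesky in NumPy; each step touches only one block column plus the trailing matrix, which is exactly the data movement these algorithms aim to minimize:

```python
import numpy as np

def blocked_cholesky(A, bs):
    """Right-looking blocked Cholesky of a symmetric positive-definite
    matrix, processed in bs-by-bs tiles. Each step works on one block
    column -- the unit an out-of-core (or GPU-resident) variant would
    stage through fast memory."""
    n = A.shape[0]
    A = A.copy()
    for k in range(0, n, bs):
        e = min(k + bs, n)
        # factor the diagonal tile
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])
        if e < n:
            # triangular solve for the panel below the diagonal tile
            A[e:, k:e] = np.linalg.solve(A[k:e, k:e], A[e:, k:e].T).T
            # trailing-matrix update: the communication-heavy step
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
    return np.tril(A)
```

In a communication-avoiding or out-of-core setting, the trailing update is the step whose data traffic dominates, so scheduling and tile size are chosen to amortize it.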

High-order Accurate Solvers

The objective of this research is to investigate the combination of high-order accurate Krylov deferred correction (KDC) and discontinuous Galerkin (DG) methods for high-performance computing. For time-dependent problems with long simulation times, high-order schemes are usually more efficient than lower-order ones because they have very low numerical diffusion and allow for a coarse spatial mesh. C. W. Shu has shown that high-order schemes can resolve solution structures that are impractically expensive to obtain with low-order ones. Recently, high-order numerical schemes have attracted increasing attention for exascale computing because of their high computational intensity and efficient use of memory.
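A minimal illustration of why high order pays off (generic forward Euler versus classical RK4 on y' = -y, not the KDC/DG schemes themselves): at the same step count, the fourth-order scheme is many orders of magnitude more accurate, so it can afford far coarser steps for the same error:

```python
import math

def euler(f, y0, t_end, n):
    """First-order forward Euler with n uniform steps."""
    h = t_end / n
    y = y0
    for _ in range(n):
        y += h * f(y)
    return y

def rk4(f, y0, t_end, n):
    """Classical fourth-order Runge-Kutta with n uniform steps."""
    h = t_end / n
    y = y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y

# Model problem y' = -y, y(0) = 1, exact solution e^{-t}.
f = lambda y: -y
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 1.0, 20) - exact)  # O(h) error
err_rk4 = abs(rk4(f, 1.0, 1.0, 20) - exact)      # O(h^4) error
```

With the same 20 steps, the first-order error is around 1e-2 while the fourth-order error is below 1e-7.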

Scalable stencil-based solver algorithms (Contact Ralf Deiterding)

This research concentrates on developing highly scalable algorithms and modular software in support of numerical methods for solving partial differential equations with stencil-based discretization approaches. Stencil codes require only nearest-neighbor information to perform a numerical update and are thereby amenable to efficient and hybrid parallelization. Focusing on structured patch-based kernels, the discretizations presently considered include finite difference, shock-capturing finite volume, and lattice Boltzmann methods. Explicit as well as implicit matrix-free schemes are investigated. A current primary emphasis lies on increasing the scalability of multi-level techniques such as multigrid and patch-based adaptive mesh refinement by reducing the number and volume of communication steps, for instance by incorporating asynchronous iteration ideas. In addition, we develop embedded boundary and level set algorithms that allow the application of scalable and dynamically adaptive stencil codes in geometrically complex energy-related multi-physics scenarios.
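The nearest-neighbor character of such updates can be seen in a minimal 5-point Jacobi sweep for the Laplace equation (a toy sketch, not the project's codes); each interior value depends only on its four neighbors, so the grid decomposes naturally into patches that exchange one-cell-wide halos:

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep of the 5-point Laplace stencil on the interior.
    Each updated cell reads only its four nearest neighbors, which is
    what makes patch decomposition with halo exchange possible."""
    v = u.copy()  # boundary rows/columns are left untouched
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Laplace problem on a small grid: top edge held at 1, other edges at 0.
u = np.zeros((16, 16))
u[0, :] = 1.0
for _ in range(2000):
    u = jacobi_step(u)
```

In a parallel patch-based code each rank would apply the same sweep to its own patch after receiving the one-cell halo from its neighbors; multigrid and adaptive mesh refinement build hierarchies on top of exactly this kernel.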

Multiresolution and Adaptive Numerical Environment for Scientific Simulations (MADNESS) (Contact George Fann)

Numerical modeling software and solvers are key components of successful and accurate simulations for effective prediction and analysis. The results help us improve our scientific theories as well as our product and engineering designs. A representative example of the most successful traditional software and solver environments is MATLAB, whose core is based on matrix and vector operations. To represent multiple interacting physics, complicated geometries, and their interfaces, the user is still required to choose the appropriate representations, code interactions, and solution methodologies. The underlying conversion of the equations and geometries, their discretizations and representations, as well as the overall solution algorithm, still depend on the user.

Developments in advanced mathematics and computer science at ORNL have resulted in an integrated approach that rigorously represents the mathematics underlying the hierarchy of physics models within an effective run-time environment for scalable and accurate computation on today's most advanced architectures. A recent paradigm shift allows working directly with representations of functions and operators and their approximations, so the user can concentrate on the appropriate solution algorithms.

The MADNESS Project's research focuses on two themes: the mathematical methods and the run-time system that supports the computation. The first seeks to represent and apply operators (integral and differential) to functions in the most accurate, effective, and compact form, dynamically and adaptively, in a way compatible with the description of the boundary conditions and geometries. The second provides a software environment to apply the mathematical operations and solution methodologies in the most time-effective manner, solving problems to the user's desired accuracy in the fastest time while achieving high performance and scalability on modern high-performance computers. Most engineering software today achieves at most 10-15% of the peak performance of the processors.

The MADNESS software has been applied to discover new results in computational chemistry and materials, nuclear structure, molecular particle-fluid interactions, and laser-induced molecular reactions. More applications are under investigation and development. MADNESS runs, out of the box, on systems ranging from heterogeneous (accelerator- and GPU-equipped) Linux laptops, desktops, and clusters to leadership-class supercomputers such as Crays and the IBM Blue Gene.

Our new mathematical approach incorporates traditional adaptive pseudo-spectral element-based methods and more recent analysis-based methods in a mathematical framework that guarantees the precision of the computation with a commensurate workload and time to solution. Representations of functions, operators, and geometry (in 1-6 dimensions) are derived in a rigorous, adaptive, and compact form with provably guaranteed precision. Certain common operators, such as derivatives of functions, as well as Green's functions and convolutions for the solution of integral equations (e.g., the Poisson equation used in fluid dynamics and electrostatics), are precomputed and tabulated for fast application. The MADNESS software environment is based on a local adaptive pseudo-spectral approach that combines time and space adaptivity as required. Hierarchies of nonlinear and multidimensional physics equations at different scales are decomposed in a rigorous manner to compute the appropriate contributions. Our research is on developing the most compact and scalable representations for functions and operators, from highly oscillatory, singular, and discontinuous cases to complex boundaries with nonlinear interactions. Interfaces to external linear solver libraries are available.
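A one-dimensional caricature of precision-guaranteed adaptive refinement (not MADNESS's multiwavelet machinery): bisect an interval until a linear interpolant matches the function at the midpoint to within a tolerance, so resolution concentrates automatically near sharp features. The test function and tolerance are illustrative choices:

```python
import math

def adaptive_intervals(f, a, b, tol, depth=0, max_depth=30):
    """Recursively bisect [a, b] until a linear interpolant through the
    endpoints matches f at the midpoint to within tol (a crude local
    error indicator). Returns the list of leaf intervals."""
    mid = 0.5 * (a + b)
    linear = 0.5 * (f(a) + f(b))  # interpolant value at the midpoint
    if abs(f(mid) - linear) < tol or depth >= max_depth:
        return [(a, b)]
    return (adaptive_intervals(f, a, mid, tol, depth + 1, max_depth)
            + adaptive_intervals(f, mid, b, tol, depth + 1, max_depth))

# A function with a sharp internal layer near x = 0.3: the refinement
# clusters there while smooth regions keep coarse cells.
f = lambda x: math.tanh(50.0 * (x - 0.3))
cells = adaptive_intervals(f, 0.0, 1.0, 1e-3)
```

The resulting mesh is strongly non-uniform: cells near the layer are orders of magnitude smaller than those in the flat regions, the 1-D analogue of how adaptive multiresolution representations keep the workload commensurate with the requested precision.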

The research in run-time environments has resulted in a user-friendly, high-level representation of the physics and the associated mathematical equations for ease of programming and debugging. A mini-compiler operates on the user's code, breaking down the mathematical operations into a distributed, task-based computing model with interprocessor scheduling and data-dependency analysis (via a directed acyclic graph) to maximize throughput by reducing processor stalling. This is a software-distributed version of a local processor's hardware threading model. Users can invoke distributed run-time load balancing. Work is underway on dynamic work stealing as well as on resilience and fault tolerance.
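A minimal sketch of dependency-driven task execution in the spirit described above (a few lines of Python futures, not the MADNESS runtime): tasks are submitted in dependency order and each task blocks on its predecessors' futures, so independent branches of the graph run concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks, deps, workers=4):
    """tasks: name -> callable taking predecessor results as arguments;
    deps: name -> list of predecessor names. Returns name -> result.
    Predecessors are submitted first, so the FIFO work queue respects
    topological order and the blocking waits cannot deadlock."""
    futures = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        def submit(name):
            if name not in futures:
                preds = [submit(d) for d in deps.get(name, [])]
                futures[name] = pool.submit(
                    lambda fn=tasks[name], ps=preds:
                        fn(*[p.result() for p in ps]))
            return futures[name]
        for name in tasks:
            submit(name)
        return {n: f.result() for n, f in futures.items()}

# Hypothetical diamond-shaped graph: load -> (square, double) -> add.
results = run_dag(
    tasks={"load": lambda: 3,
           "square": lambda x: x * x,
           "double": lambda x: 2 * x,
           "add": lambda a, b: a + b},
    deps={"square": ["load"], "double": ["load"],
          "add": ["square", "double"]},
)
```

Here "square" and "double" depend only on "load" and can execute on different workers at the same time, while "add" is scheduled as soon as both of its inputs resolve.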



MADNESS (Multiresolution ADaptive NumErical Scientific Simulation) is a framework for scientific simulation in many dimensions using adaptive multiresolution methods in multiwavelet bases. Computation in many dimensions (3, 6, and higher) is made practical through the use of separated representations of functions and/or operators. Scientific applications are composed in terms of functions and operators using the Python programming language. The high-level Python object model is supported by C/C++ and Fortran routines for efficient execution. The current implementation is a proof-of-principle prototype that will be freely distributed in the near future.

OLCF researchers part of team recognized for work on software package

A team led by ORNL's Robert Harrison has been awarded an R&D 100 Award by R&D Magazine. The team's award stems from the development of the Multiresolution Adaptive Numerical Environment for Scientific Simulations, or MADNESS, submitted and developed by ORNL with a team led by Harrison and George Fann. The team also includes Rebecca Hartman-Baker, also a member of the Oak Ridge Leadership Computing Facility's (OLCF's) Scientific Computing Group. MADNESS is a free, open-source, general-purpose, user-friendly software package for the development of scientific simulations on machines ranging from laptops to massively parallel supercomputers. MADNESS utilizes the latest parallel computing and solution methodologies to solve many-dimensional integral and differential equations accurately and precisely for real-world problems. MADNESS provides a new platform for scientists and engineers to easily create new applications with assurance in the exactness of their results. Through the 2011 INCITE program, Harrison and his team have an allocation of 75 million hours on OLCF's Jaguar to model chemical catalysts and interfaces at the petascale. Each year R&D Magazine selects 100 technological products that highlight the most outstanding advances in technology. These awards, sometimes referred to as the "Academy Awards of Science," are chosen by an expert panel of independent judges and the editors of R&D Magazine. This is Harrison's second R&D 100 Award. As the principal architect of NWChem, a chemistry code that was included in the software package MS3, Harrison was part of an R&D 100 team in 1999. "I want to congratulate this year's R&D 100 award winners," Energy Secretary Steven Chu said. "The Department of Energy's national laboratories are at the forefront of innovation, and it is gratifying to see their work recognized once again. The cutting-edge research and development done in our national labs is helping to meet our energy challenges, strengthen our national security and enhance our economic competitiveness."


Uncertainty Quantification

A truly predictive science for many problems of scientific and national interest requires advanced mathematical and computational tools that explain how the uncertainties ubiquitous in all modeling efforts affect our predictions and understanding of complex phenomena. This remains a fundamental difficulty in most applications central to the DOE mission. Examples include enhancing the reliability of smart energy grids, developing renewable energy technologies, analyzing the vulnerability of water and power supplies, understanding complex biological networks, estimating climate change, and designing and licensing current and future nuclear reactors. Often these systems describe physical and biological processes exhibiting highly nonlinear, or worse, discontinuous/bifurcating phenomena across a diverse set of length and/or time scales. Moreover, predictive simulation of these systems requires significantly more computational effort than high-fidelity deterministic simulation, particularly when the random input data (coefficients, forcing terms, initial and boundary conditions, geometry, etc.) are affected by large amounts of uncertainty. For these high-dimensional stochastic problems, simulation code verification, model validation, and uncertainty quantification (UQ) are indispensable tasks required to justify a predictive capability in a mathematically and scientifically rigorous manner.


ORNL's research focuses on the development of several transformational methodologies related to the efficient, accurate, and robust computation of statistical quantities of interest, i.e., the information used by engineers and decision makers, that are determined from the solutions of complex stochastic simulations. Our objective is to harness powerful HPC architectures through innovative approaches to UQ that provide quantitative bounds on the applicability of extreme-scale calculations. We are currently concentrating on the development of rigorous mathematical procedures for combating the curse of dimensionality and for exploiting extreme multicore parallelism. These methods are based on multi-dimensional, multi-resolution, adaptive sparse stochastic approximations, both non-intrusive and semi-intrusive. The latter paradigm builds on existing progress in generic programming to selectively couple ensembles of our hierarchical decomposition while enabling advanced recycling and block solver techniques. Finally, our massively scalable UQ framework is being made available through an ORNL Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN (TASMANIAN).
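As a sketch of what "non-intrusive" means in practice (plain Monte Carlo sampling rather than the adaptive sparse-grid methods of TASMANIAN; the forward model is hypothetical): the deterministic solver is treated as a black box, evaluated at sampled realizations of the uncertain inputs, and statistics of the quantity of interest are post-processed afterwards:

```python
import random
import statistics

def model(k, f):
    """Hypothetical black-box forward model: deflection of a linear
    spring with stiffness k under load f (u = f / k). In real UQ this
    would be a full deterministic simulation code."""
    return f / k

# Non-intrusive UQ never modifies the solver: sample the random inputs,
# run the deterministic code once per sample, post-process statistics.
random.seed(0)
samples = []
for _ in range(20000):
    k = random.gauss(10.0, 0.5)    # uncertain stiffness coefficient
    f = random.uniform(0.9, 1.1)   # uncertain load (boundary data)
    samples.append(model(k, f))

mean_u = statistics.fmean(samples)  # expected quantity of interest
std_u = statistics.stdev(samples)   # its uncertainty
```

Sparse-grid collocation replaces the random samples with a small set of structured quadrature nodes, attacking the slow convergence of Monte Carlo; the workflow around the black-box solver is the same.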

(Contact Miroslav Stoyanov)
