Events

Workshops and Conferences

Beyond Lithium Ion VIII (June 2-4, 2015)

Sreekanth Pannala and Jason Zhang

Significant advances in electrical energy storage could revolutionize the energy landscape. For example, widespread adoption of electric vehicles could greatly reduce dependence on finite petroleum resources, reduce carbon dioxide emissions and provide new scenarios for grid operation. Although electric vehicles with advanced lithium ion batteries have been introduced, further breakthroughs in scalable energy storage, beyond current state-of-the-art lithium ion batteries, are necessary before the full benefits of vehicle electrification can be realized.

Motivated by these societal needs and by the tremendous potential for materials science and engineering to provide necessary advances, a consortium comprising IBM Research and five U.S. Department of Energy National Laboratories (National Renewable Energy, Argonne, Lawrence Berkeley, Pacific Northwest, and Oak Ridge) will host a symposium June 2-4, 2015, at Oak Ridge National Laboratory. This is the eighth in a series of conferences that began in 2009.

Visit the site [here].

Numerical and Computational Developments to Advance Multiscale Earth System Models (MSESM) (June 1-3, 2015)

Kate Evans

Substantial recent development of Earth system models has enabled simulations that capture climate change and variability at ever finer spatial and temporal scales. In this workshop we seek to showcase recent progress on the computational development needed to address the new complexities of climate models and their multi-scale behavior to maximize efficiency and accuracy. This workshop brings together computational and domain Earth scientists to focus on Earth system models at the largest scales for deployment on the largest computing and data centers.

Topics include, but are not limited to:

Visit the site [here].


ASCR Workshop on Quantum Computing for Science (February 17-18, 2015)

Travis Humble

At the request of the Department of Energy's (DOE) Office of Advanced Scientific Computing Research (ASCR), this program committee has been tasked with organizing a workshop to assess the viability of quantum computing technologies for meeting the computational requirements of DOE's science and energy mission and to identify the potential impact of these technologies. As part of the process, the program committee is soliciting community input in the form of position papers. The program committee will review these position papers and, based on fit of expertise and interest, invite selected contributors to participate in the workshop, currently planned for February 17-18, 2015, in Bethesda, MD.

Visit the site [here].

OpenSHMEM User Group - OUG2014 (October 7)

The OpenSHMEM User Group (OUG 2014) is a user meeting dedicated to the promotion and advancement of all aspects of the OpenSHMEM API and its tools ecosystem. The goal of the meeting is to discuss and present ongoing user experiences, research, implementations, and tools that use the OpenSHMEM API. Particular emphasis will be given to ongoing research that can enhance the OpenSHMEM specification to leverage emerging hardware while addressing application needs.
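For readers new to the programming model under discussion, the sketch below is a minimal OpenSHMEM example in C, assuming an OpenSHMEM 1.2-style library (shmem_init/shmem_finalize; older implementations expose start_pes instead). Each processing element (PE) writes its rank into a symmetric variable on its right-hand neighbor with a one-sided put.

#include <stdio.h>
#include <shmem.h>

int main(void)
{
    shmem_init();                      /* initialize the OpenSHMEM runtime */
    int me   = shmem_my_pe();          /* this PE's rank */
    int npes = shmem_n_pes();          /* total number of PEs */

    static int dest = -1;              /* static => symmetric, remotely accessible */
    int src = me;

    /* One-sided put: write "src" into "dest" on the neighboring PE. */
    shmem_int_put(&dest, &src, 1, (me + 1) % npes);

    shmem_barrier_all();               /* ensure all puts have completed */
    printf("PE %d received %d from PE %d\n", me, dest, (me + npes - 1) % npes);

    shmem_finalize();
    return 0;
}

With typical implementations the example is compiled with the oshcc wrapper and launched with oshrun (for example, oshrun -np 4 ./ring); wrapper names vary by implementation.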

Visit the site [here].

DOE Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting (June 16-20)

John Turner and Sreekanth Pannala

The DOE Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting was held on June 16-20, 2014, in Washington, D.C. John Turner, Group Leader of Computational Engineering and Energy Sciences (CEES), and Sreekanth Pannala, Distinguished Staff Member in CEES, attended. Dr. Pannala presented a summary of the Open Architecture Software (OAS) developed for the Computer-Aided Engineering for Batteries (CAEBAT) project and described related activities such as the development of a common XML-based input format for battery simulation tools and a common "battery state" file format to facilitate transfer of information between simulation tools.

ORNL Software Expo (May 7)

Jay Billings and Dasha Gorin

There are programmers, researchers, and engineers all over the ORNL campus developing software or modifying existing programs, either to better meet their own goals or to help others reach theirs. Because we are so spread out, many scientists are unaware of others' projects, possibly missing out on important opportunities to collaborate or otherwise ease their burden. The Computer Science Research Group would like to provide a chance for everyone to come together and remedy this. We will be hosting a poster session in the JICS atrium on Wednesday, May 7th, from 9:00am-12:00pm. Presenters will be showcasing posters and/or live demos of their projects. Anyone working at ORNL, from interns to senior staff members, may register to present a (non-classified) project at www.csm.ornl.gov/expo; the deadline to register is April 16th. Non-presenting attendees do not need to register. Please join us; this is not only a great networking opportunity, but also a celebration of ORNL's diverse programming community!

Visit the site [here].

Spring Durmstrang Review (March 25-26)

The spring review for the Durmstrang project managed by the ESSC was held on March 25-26 in Maryland. Durmstrang is a DoD/ORNL collaboration in extreme scale high performance computing. The long term goal of the project is to support the achievement of sustained exascale processing on applications and architectures of interest to both partners. Steve Poole, Chief Scientist of CSMD, presented the overview and general status update at the spring review. Benchmarks R&D discussion was facilitated by Josh Lothian, Matthew Baker, Jonathan Schrock, and Sarah Powers of ORNL; Languages and Compilers R&D discussion was facilitated by Matthew Baker, Oscar Hernandez, Pavel Shamis, and Manju Venkata of ORNL; I/O and FileSystems R&D discussion was facilitated by Brad Settlemyer of ORNL; Networking R&D discussion was facilitated by Nagi Rao, Susan Hicks, and Paul Newman of ORNL; Power Aware Computing R&D discussion was facilitated by Chung-Hsing Hsu of ORNL; System Schedulers R&D discussion was facilitated by Greg Koenig, Tiffany Mintz, and Sarah Powers of ORNL. A special panel on Networking R&D was also convened to discuss best practices and path forward. Panelists included both DoD and ORNL members. The topics of discussion during the executive session of the review included continued funding/growth of the program, task progression, and development of performance metrics for the project.

SOS 18 (March 17-20)

On March 17-20, 2014, Jeff Nichols, Al Geist, Buddy Bland, Barney Maccabe, Jack Wells, and John Turner attended the 18th Sandia-Oak Ridge-Switzerland workshop (SOS18) [1]. The SOS workshops are co-organized by James Ang at Sandia National Laboratories, John Turner at ORNL, and Thomas Schulthess at the Swiss National Supercomputing Centre (CSCS). The theme this year was "Supercomputers as scientific instruments", and a number of presentations examined this analogy extensively. John Turner, Computational Engineering and Energy Sciences Group Leader, conveyed the following impressions: (1) Python is ubiquitous, (2) Domain Specific Languages (DSLs) are no longer considered exotic, (3) proxy apps / mini-apps continue to gain popularity as a mechanism for domain developers to interact with computer scientists, and (4) there is increased willingness on the part of code teams to consider a full re-write of some codes, but funding for such activities remains unclear. Presentations can be obtained from the SOS18 web site [2].

[1] http://www.cscs.ch/sos18/index.html
[2] http://www.cscs.ch/sos18/agenda/index.html

2014 Gordon Research Conference on Batteries (March 9-14)

John Turner

On March 9-14, Computational Engineering and Energy Sciences Group Leader John Turner attended the 2014 Gordon Research Conference on Batteries [1] in Ventura, CA. Dr. Turner presented a poster titled "3D Predictive Simulation of Battery Systems" on behalf of the team working on battery simulation: Sreekanth Pannala, Srikanth Allu, Srdjan Simunovic, Sergiy Kalnaus, Wael Elwasif, and Jay Jay Billings. The work was funded through the Vehicle Technologies (VT) program office within EERE [2] as part of the CAEBAT program [3]. This program, led by NREL and including industry and university partners, is developing computational tools for the design and analysis of batteries. CSMD staff are leading development of the shared computational infrastructure used across the program.

[1] http://www.grc.org/programs.aspx?year=2014&program=batteries
[2] DOE Office of Energy Efficiency and Renewable Energy (EERE)
[3] The Computer-Aided Engineering for Batteries (CAEBAT) program (http://www.nrel.gov/vehiclesandfuels/energystorage/caebat.html)

Advisory Council Review (March 6-7)

The third annual meeting of the Oak Ridge National Laboratory Computing and Computational Sciences Directorate (CCSD) advisory committee was convened March 6-7 to focus on two key areas of Directorate activities. The primary activities were to review CCSD's recent developments in computational and applied mathematics and CCSD's geospatial data science program and its impact on problems of national and global significance.

In the course of the review, CSMD researchers presented five posters covering their work in computational and applied mathematics.

Developing U.S. Phenoregions from Remote Sensing

Jitendra Kumar, Forrest Hoffman, and William Hargrove

Variations in vegetation phenology can be a strong indicator of ecological change or disturbance. Phenology is also strongly influenced by seasonal, interannual, and long-term trends in climate, making identification of changes in forest ecosystems a challenge. The normalized difference vegetation index (NDVI), a remotely sensed measure of greenness, provides a proxy for phenology. NDVI for the conterminous United States (CONUS), derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m resolution, was used in this study to develop phenological signatures of ecological regimes called phenoregions. By applying an unsupervised, quantitative data mining technique to NDVI measurements taken every eight days over the entire MODIS record, annual maps of phenoregions were developed. This technique produces a prescribed number of prototypical phenological states to which every location belongs in any year. Since the data mining technique is unsupervised, individual phenoregions are not identified with an ecologically understandable label. Therefore, we applied the MAPCURVES method to associate individual phenoregions with maps of biomes, land cover, and expert-derived ecoregions. By applying spatial overlays with various maps, this "label-stealing" method exploits the knowledge contained in other maps to identify properties of our statistically derived phenoregions.
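The poster abstract above does not name the specific clustering algorithm; as a rough illustration of the kind of unsupervised technique described, the sketch below applies k-means clustering in C to synthetic annual NDVI traces (46 eight-day composites per cell). The data, the number of cells, and the number of prototype states (K) are made-up placeholders, not values from the study.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NCELLS 500   /* number of map cells (synthetic placeholder) */
#define NSTEPS 46    /* eight-day NDVI composites per year */
#define K      5     /* prescribed number of prototype phenological states */
#define NITER  25    /* k-means iterations */

static double ndvi[NCELLS][NSTEPS];   /* one annual NDVI trace per cell */
static double centroid[K][NSTEPS];    /* prototype phenological states */
static double sum[K][NSTEPS];
static int    label[NCELLS], count[K];

static double dist2(const double *a, const double *b)
{
    double d = 0.0;
    for (int t = 0; t < NSTEPS; ++t) { double x = a[t] - b[t]; d += x * x; }
    return d;
}

int main(void)
{
    const double PI = 3.14159265358979;

    /* Synthetic data: seasonal greenness curves with cell-dependent amplitude plus noise. */
    srand(42);
    for (int i = 0; i < NCELLS; ++i) {
        double amp = 0.2 + 0.6 * (i % K) / (double)K;
        for (int t = 0; t < NSTEPS; ++t) {
            double season = sin(2.0 * PI * t / NSTEPS);
            ndvi[i][t] = amp * (season > 0.0 ? season : 0.0)
                       + 0.02 * rand() / (double)RAND_MAX;
        }
    }

    /* Initialize prototypes from the first K cells. */
    for (int k = 0; k < K; ++k)
        for (int t = 0; t < NSTEPS; ++t)
            centroid[k][t] = ndvi[k][t];

    for (int iter = 0; iter < NITER; ++iter) {
        /* Assignment step: each cell joins its nearest prototype state. */
        for (int i = 0; i < NCELLS; ++i) {
            int best = 0;
            double bestd = dist2(ndvi[i], centroid[0]);
            for (int k = 1; k < K; ++k) {
                double d = dist2(ndvi[i], centroid[k]);
                if (d < bestd) { bestd = d; best = k; }
            }
            label[i] = best;
        }
        /* Update step: each prototype becomes the mean trace of its members. */
        for (int k = 0; k < K; ++k) {
            count[k] = 0;
            for (int t = 0; t < NSTEPS; ++t) sum[k][t] = 0.0;
        }
        for (int i = 0; i < NCELLS; ++i) {
            count[label[i]]++;
            for (int t = 0; t < NSTEPS; ++t) sum[label[i]][t] += ndvi[i][t];
        }
        for (int k = 0; k < K; ++k)
            if (count[k] > 0)
                for (int t = 0; t < NSTEPS; ++t)
                    centroid[k][t] = sum[k][t] / count[k];
    }

    /* Report how many cells fall into each prototype state ("phenoregion"). */
    for (int k = 0; k < K; ++k)
        printf("prototype state %d: %d cells\n", k, count[k]);

    return 0;
}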

3D Virtual Vehicle

Sreekanth Pannala and John Turner

Advances in transportation technologies, sensors, and onboard computers continue to push the efficiency of vehicles, resulting in an exponential increase in the design parameter space. This expansion spans the entire vehicle and includes individual components such as combustion engines, electric motors, power electronics, energy storage, and waste heat recovery, as well as weight and aerodynamics. The parameter space has become so large that manufacturers do not have the computational resources or software tools to optimize vehicle design and calibration. This expanded flexibility in vehicle design and control, in addition to stringent CAFE standards, is driving the need for new high-fidelity vehicle simulations, optimization methods, and self-learning control methods. The biggest opportunity for improvements in vehicle fuel economy is improved integration and optimization of vehicle subsystems. The current industry approach is to optimize individual subsystems using detailed computational tools and to optimize the vehicle system with a combination of low-order, map-based simulations and physical prototype vehicles. Industry is very interested in reducing dependence on prototype vehicles because of the significant investment in cost and time they require. With increasingly aggressive fuel economy standards, emissions regulations, and unprecedented growth in vehicle technologies, the current approach is simply not sufficient to meet these challenges, and the growth in technologies has led to an exponential expansion of the parameter and calibration space. Advanced modeling and simulation through a virtual vehicle framework can facilitate accelerated development of vehicles through rapid exploration and optimization of the parameter space, while providing guidance for more focused experimental studies. Each of the component areas requires HPC resources, and an integrated system-level treatment will likely require exascale computing.

Networking and Communications Research and Development

Pavel Shamis, Brad Settlemyer, Nagi Rao, Thomas Naughton, and Manju Gorentla

The Universal Common Communication Substrate (UCCS) is a communication middleware that aims to provide a high-performing, low-level communication substrate for implementing parallel programming models. UCCS aims to deliver a broad range of communication semantics such as active messages, collective operations, puts, gets, and atomic operations. This enables implementation of one-sided and two-sided communication semantics to efficiently support both PGAS (OpenSHMEM, UPC, Co-Array Fortran, etc.) and MPI-style programming models. The interface is designed to minimize software overheads and provide direct access to network hardware capabilities without sacrificing productivity. This was accomplished by forming and adhering to the following goals:
- Provide a universal network abstraction with an API that addresses the needs of parallel programming languages and libraries.
- Provide a high-performance communication middleware by minimizing software overheads and taking full advantage of modern network technologies with communication-offloading capabilities.
- Enable network infrastructure for upcoming parallel programming models and network technologies.

Compute and Data Environment for Science (CADES)

Galen Shipman

The Compute and Data Environment for Science (CADES) provides ORNL research and development activities with a flexible and elastic compute and data infrastructure. The initial deployment consists of over 5 petabytes of high-performance storage, nearly half a petabyte of scalable NFS storage, and over 1,000 compute cores integrated into a high-performance Ethernet and InfiniBand network. This infrastructure, based on OpenStack, provides a customizable compute and data environment for a variety of use cases including large-scale omics databases, data integration and analysis tools, data portals, and modeling/simulation frameworks. These services can be composed to provide end-to-end solutions for specific science domains.

Co-designing Exascale

Scott Klasky and Jeffrey Vetter

Co-design refers to a computer system design process in which scientific problem requirements influence architecture design and technology, and constraints inform the formulation and design of algorithms and software. To ensure that future architectures are well suited for DOE target applications and that major DOE scientific problems can take advantage of emerging computer architectures, major ongoing research and development centers of computational science need to be formally engaged in the hardware, software, numerical methods, algorithms, and applications co-design process. The co-design methodology requires the combined expertise of vendors, hardware architects, system software developers, domain scientists, computer scientists, and applied mathematicians working together to make informed decisions about features and tradeoffs in the design of the hardware, software, and underlying algorithms. CSMD is a Co-PI organization on all three ASCR Co-design Centers: the Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), the Center for Exascale Simulation of Advanced Reactors (CESAR), and the Center for Exascale Simulation of Combustion in Turbulence (ExaCT). Read more about the ASCR Co-design centers [here].

OpenSHMEM Workshop (March 4-6)

Oscar Hernandez, Pavel Shamis, and Jennifer Goodpasture

The OpenSHMEM workshop for the Extreme Scale Systems Center (ESSC) was held in Annapolis, Maryland, on March 4-6. The OpenSHMEM workshop is an annual event dedicated to the promotion and advancement of parallel programming with the OpenSHMEM programming interface and to helping shape its future direction. It is the premier venue to discuss and present the latest developments, implementation technology, tools, trends, recent research ideas, and results related to OpenSHMEM and its use in applications. This year's workshop also emphasized the future direction of OpenSHMEM and related technologies, tools, and frameworks, with a particular focus on future extensions for OpenSHMEM and hybrid programming on platforms with accelerators. Topics of interest for the conference included (but were not limited to):
- Experiences in OpenSHMEM applications in any domain
- Extensions to and shortcomings of the current OpenSHMEM specification
- Hybrid heterogeneous or many-core programming with OpenSHMEM and other languages or APIs (i.e., OpenCL, OpenACC, CUDA, OpenMP)
- Experiences in implementing OpenSHMEM on new architectures
- Low-level communication layers to support OpenSHMEM or other PGAS languages/APIs
- Performance evaluation of OpenSHMEM or OpenSHMEM-based applications
- Power / energy studies of OpenSHMEM
- Static analysis and verification tools for OpenSHMEM
- Modeling and performance analysis tools for OpenSHMEM and/or other PGAS languages/APIs
- Auto-tuning or optimization strategies for OpenSHMEM programs
- Runtime environments and schedulers for OpenSHMEM
- Benchmarks and validation suites for OpenSHMEM
The workshop had participation from the DoD, the DOE Office of Science, and other national laboratories, including Argonne National Laboratory and Sandia National Laboratories. Visit the site [here].

2014 Oil & Gas High-Performance Computing Workshop (March 6)

David Bernholdt, Scott Klasky, David Pugmire, Suzy Tichenor

On 6 March, Rice University in Houston hosted the seventh annual meeting on the computing and information technology challenges and needs of the oil and gas industry. CSMD researchers figured prominently on the day's program, which included 31 presentations and an additional 31 research posters, and attracted over 500 registered participants. Computer Science Research Group Leader David Bernholdt gave a plenary talk titled "Some Assembly Required? Thoughts on Programming at Extreme Scale", which covered a broad range of issues in programming future systems, including programming models and languages, resilience, and the engineering of scientific software. Scientific Data Group Leader Scott Klasky gave a talk titled "Extreme Scale I/O using the Adaptable I/O System (ADIOS)" in the "Programming Models, Libraries, and Tools" track, which introduced ADIOS to the oil and gas industry audience by describing the problems of data-driven science and the approach taken by ADIOS to extreme-scale data processing. Finally, David Pugmire, a member of the Scientific Data Group, gave a talk titled "Visualization of Very Large Scientific Data" in the "Systems Infrastructure, Facilities, and Visualization" track, which addressed the challenges of current and future HPC systems for large-scale visualization and analysis. Suzy Tichenor, Director of Industrial Partnerships for the Computing and Computational Sciences Directorate, also participated in the workshop.

Prevent Three-Eyed Fish: Analyze Your Nuclear Reactor with Eclipse (March 1)

Jordan Deyton and Jay Jay Billings

Jordan Deyton co-presented with Jay Jay Billings at EclipseCon North America 2014 on Wednesday, March 19. The talk demonstrated the NEAMS Integrated Computational Environment (NiCE) and its post-simulation reactor analysis plugin called the Reactor Analyzer, which supports Light Water and Sodium-cooled Fast Reactor analysis.

Software Productivity for Extreme-Scale Science Workshop (January 13-14)

The ASCR Workshop on Software Productivity for Extreme-Scale Science (SWP4XS) was held 13-14 January 2014 in Rockville, MD. The meeting was organized by researchers from ANL, LBNL, LANL, LLNL, ORNL (David Bernholdt, CSR/CSMD), and the Universities of Alabama and Southern California at the behest of the US Department of Energy Office of Advanced Scientific Computing Research, to bring together computational scientists from academia, industry, and national laboratories to identify the major challenges of large-scale application software productivity on extreme-scale computing platforms.

The focus of the workshop was on assessing the needs of computational science software in the age of extreme-scale multicore and hybrid architectures, examining the scientific software lifecycle and infrastructure requirements for large-scale code development efforts, and exploring potential contributions and lessons learned that software engineering can bring to HPC software at scale. Participants were asked to identify short- and long-term challenges of scientific software that must be addressed in order to significantly improve the productivity of emerging HPC computing systems through effective scientific software development processes and methodologies.

The workshop included more than 70 participants, including ORNL researchers Ross Bartlett (CEES/CSMD), Al Geist (CTO/CSMD), Judy Hill (SciComp/NCCS), and Jeff Vetter (FT/CSMD), in addition to organizer Bernholdt. Participants contributed 35 position papers in advance of the workshop, and the workshop itself included 19 presentations, a panel discussion, and three sets of breakout sessions, most of which are archived on the workshop's web site (http://www.orau.gov/swproductivity2014/).

An outcome of the workshop will be a report that articulates and prioritizes productivity challenges and recommends both short- and long-term research directions for software productivity for extreme-scale science.

Society for Industrial and Applied Mathematics Annual Meeting

The SIAM Annual Meeting is the largest applied mathematics conference held each year. Guannan Zhang and Miroslav Stoyanov organized a mini-symposium on "Recent Advances in Numerical Methods for Partial Differential Equations with Random Inputs", a rapidly growing field of great importance to computational science. There were 12 invited speakers from both national laboratories and academia, including ORNL, Argonne National Laboratory, Florida State University, the University of California, the University of Pittsburgh, Virginia Tech, the University of Minnesota, and Auburn University. The mini-symposium was well attended by an even wider variety of researchers and gave participants an opportunity to discuss their current work as well as future development of the field.

Durmstrang-2 Review

The Fall review for the Durmstrang-2 project was held on September 10-11 in Maryland. Durmstrang-2 is a DoD/ORNL collaboration in extreme scale high performance computing. The long term goal of the project is to support the achievement of sustained exascale processing on applications and architectures of interest to both partners. The Durmstrang-2 project is managed from the Extreme Scale Systems Center (ESSC) of CCSD.

Steve Poole, Chief Scientist of CSMD, presented the overview and general status update at the Fall review. Benchmarks R&D discussion was facilitated by Josh Lothian, Matthew Baker, Jonathan Schrock, and Sarah Powers of ORNL; Languages and Compilers R&D discussion was facilitated by Matthew Baker, Oscar Hernandez, Pavel Shamis, and Manju Venkata of ORNL; I/O and FileSystems R&D discussion was facilitated by Brad Settlemyer of ORNL; Networking R&D discussion was facilitated by Nagi Rao, Susan Hicks, Paul Newman, Neena Imam, and Yehuda Braiman of ORNL; Power Aware Computing R&D discussion was facilitated by Chung-Hsing Hsu of ORNL; System Schedulers R&D discussion was facilitated by Greg Koenig and Sarah Powers of ORNL. A special panel on Lustre was also convened to discuss best practices and path forward. Panelists included both DoD and ORNL members. The topics of discussion during the executive session of the review included continued funding/growth of the program, task progression, and development of performance metrics for the project.
Upcoming events for ESSC include: OpenSHMEM Birds-of-a-Feather session at Supercomputing 2013, OpenSHMEM booth at Supercomputing 2013, and OpenSHMEM Workshop (date to be announced).

Dr. Tiffany Mintz (Computer Science Research Group) is the ORNL R&D staff member who recently joined the ESSC team.

CAEBAT Annual Review

The Computer-Aided Engineering for Batteries (CAEBAT) program [1] is funded through the Vehicle Technologies (VT) program office within the DOE Office of Energy Efficiency and Renewable Energy (EERE). This program, led by NREL and including industry and university partners, is developing computational tools for the design and analysis of batteries with improved performance and lower cost. CSMD staff in the Computational Engineering and Energy Sciences (CEES) and Computer Science (CS) groups are leading development of the shared computational infrastructure used across the program, known as the Open Architecture Software (OAS), as well as defining standards for input and battery "state" representations [2].

On Aug. 27, 2013, the ORNL team (Sreekanth Pannala, Srdjan Simunovic, Wael Elwasif, Sergiy Kalnaus, Jay Jay Billings, Taylor Patterson, and CEES Group Leader John Turner) hosted the CAEBAT Program Manager, Brian Cunningham, at ORNL. This visit served as an annual review for the ORNL CAEBAT effort, and provided a venue for the team to demonstrate progress in simulation capabilities, including an initial demonstration of the use of the NEAMS Integrated Computational Environment (NiCE) with OAS [3].

[1] http://www.nrel.gov/vehiclesandfuels/energystorage/caebat.html
[2] http://energy.ornl.gov/CAEBAT/
[3] http://sourceforge.net/apps/mediawiki/niceproject/index.php?title=CAEBAT

First OpenSHMEM Workshop: Experiences, Implementations and Tools
October 23-25

The OpenSHMEM workshop is an annual event dedicated to the promotion and advancement of parallel programming with the OpenSHMEM programming interface and to helping shape its future direction. It is the premier venue to discuss and present the latest developments, implementation technology, tools, trends, recent research ideas, and results related to OpenSHMEM and its use in applications. This year's workshop will also emphasize the future direction of OpenSHMEM and related technologies, tools, and frameworks. We will also focus on future extensions for OpenSHMEM and hybrid programming on platforms with accelerators. Although this is an OpenSHMEM-specific workshop, we welcome ideas used for other PGAS languages/APIs that may be applicable to OpenSHMEM.

Topics of interest for the conference include (but are not limited to):

http://www.csm.ornl.gov/workshops/openshmem2013/

Fourth Workshop on Data Mining in Earth System Science
June 5-7

CSMD researcher Forrest Hoffman organized the Fourth Workshop on Data Mining in Earth System Science (DMESS 2013; http://www.climatemodeling.org/workshops/dmess2013/) with co-conveners Jitendra Kumar (ORNL), J. Walter Larson (Australian National University, AUSTRALIA), and Miguel D. Mahecha (Max Planck Institute for Biogeochemistry, GERMANY). This workshop was held in conjunction with the 2013 International Conference on Computational Sciences (ICCS 2013; http://www.iccs-meeting.org/iccs2013/) in Barcelona, Spain, on June 5-7, 2013, and was chaired by J. Walter Larson. Richard T. Mills and Brian Smith of ORNL both presented papers in the DMESS 2013 session. These papers were published in volume 18 of Procedia Computer Science and are available at http://dx.doi.org/10.1016/j.procs.2013.05.411 and http://dx.doi.org/10.1016/j.procs.2013.05.408

Special Symposium on Phenology
April 14-18

CSMD researcher Forrest Hoffman co-organized a Special Symposium on Phenology with Bill Hargrove and Steve Norman (USDA Forest Service) and Joe Spruce (NASA Stennis Space Center) at the 2013 U.S.-International Association for Landscape Ecology Annual Symposium (US-IALE 2013; http://www.usiale.org/austin2013/), which was held April 14-18, 2013, in Austin, Texas. Hoffman also gave an oral presentation in this symposium, titled "Developing Phenoregion Maps Using Remotely Sensed Imagery", which described the application of a data mining algorithm to the entire record of MODIS satellite NDVI for the conterminous U.S. at 250 m resolution to delineate annual maps of phenological regions. In addition, Hoffman was a co-author on four other oral presentations at the US-IALE Symposium, including one by Jitendra Kumar (ORNL) that described an imputation technique for estimating tree suitability from sparse measurements.

SOS 17 Conference
March 25-28

Successful workshop on Big Data and High Performance Computing hosted by ORNL in Jekyll Island, Georgia

SOS is an invitation-only 2 1/2 day meeting held each year by Sandia National Laboratories, Oak Ridge National Laboratory, and the Swiss National Supercomputing Centre. This year it was hosted by ORNL in Jekyll Island, Georgia, on March 25-28, 2013.

The theme this year was "The intersection of High Performance Computing and Big Data." There were 40 speakers and panelists from around the world representing views from industry, academia, and national laboratories. The first day focused on the gaps between big computing and big data and the challenges of turning science data into knowledge. On the second day the talks and panels focused on where HPC and big data intersect and the state of big-data analysis software. The morning of the third day focused on the politics of big data including the issues of data ownership.

Findings of the meeting include the fact that large experimental facilities, such as CERN's Large Hadron Collider and the new telescopes coming online, already generate prodigious amounts of scientific data. The volume and speed at which data are generated require that the data be analyzed on the fly and that only a tiny fraction be kept; even so, the amount kept still comes to many petabytes. The attendees stressed how important provenance is to the use of the archived data by other researchers around the world. The majority of today's scientific data is of value only to the original researcher, because the data lacks the metadata required for others to use it. The talks and panels clearly showed the intersection of high performance computing and big data. They also showed that the converse is not necessarily true, i.e., big data (as defined by Google and Amazon) does not require high performance computing; these vendors and their customers are able to get their work done on large, distributed networks of independent PCs. The meeting was filled with lively discussion and provocative questions.

For those wanting to know more, the agenda and talks are posted on the SOS17 website: http://www.csm.ornl.gov/workshops/SOS17/

SIAM SEAS 2013 Annual Meeting
March 22-24

On March 22-24, Oak Ridge National Laboratory and the University of Tennessee hosted the 37th annual meeting of the SIAM Southeastern Atlantic Section. The meeting included approximately 160 registered participants, of which roughly 60 were students and 20 were from ORNL. There were four plenary talks, 24 mini-symposium sessions, seven contributed sessions, and a poster session. Awards were given to students for Best Paper and Best Poster presentations. Attendees were also given guided tours of the Graphite Reactor, the Spallation Neutron Source, and the National Center for Computational Sciences. The meeting was organized by Chris Baker (ORNL), Cory Hauck (ORNL), Jillian Trask (UT), Lora Wolfe (ORNL), and Yulong Xing (ORNL/UT).

Durmstrang-2
March 18-19

The semi-annual review for the Durmstrang-2 project was held on March 18-19 in Maryland. Durmstrang-2 is a DoD/ORNL collaboration in extreme scale high performance computing. The long term goal of the project is to support the achievement of sustained exascale processing on applications and architectures of interest to both partners. The Durmstrang-2 project is managed from the Extreme Scale Systems Center (ESSC) of CCSD.

Steve Poole, Chief Scientist of CSMD, presented the overview and general status update at the March review. Benchmarks R&D discussion was facilitated by Josh Lothian, Matthew Baker, Jonathan Schrock, and Sarah Powers of ORNL; Languages and Compilers R&D discussion was facilitated by Matthew Baker, Oscar Hernandez, Pavel Shamis, and Manju Venkata of ORNL; I/O and FileSystems R&D discussion was facilitated by Brad Settlemyer of ORNL; Networking R&D discussion was facilitated by Nagi Rao, Susan Hicks, Paul Newman, and Steve Poole of ORNL; Power Aware Computing R&D discussion was facilitated by Chung-Hsing Hsu of ORNL; System Schedulers R&D discussion was facilitated by Greg Koenig and Sarah Powers of ORNL. The topics of discussion during the executive session of the review included continued funding/growth of the program and development of performance metrics for the project.

APS 2013 March Meeting
March 18-22

The American Physical Society (APS) March Meeting is the largest physics meeting in the world, focusing on research from industry, universities, and major labs. Staff members of the Computational Chemical and Materials Sciences (CCMS) Group contributed 24 different talks to this year's meeting, held in Baltimore, MD (March 18-22, 2013).

Monojoy Goswami, Bobby G. Sumpter, "Morphology and Dynamics of Ion Containing Polymers using Coarse Grain Molecular Dynamics Simulation", Talk in Session T32: Charged and Ion Containing Polymers (March 21, 2013) APS National Meeting, Baltimore.

Debapriya Banerjee, Kenneth S. Schweizer, Bobby G. Sumpter, Mark D. Dadmun, "Dispersion of small nanoparticles in random copolymer melts", Talk in Session F32: Polymer Nanocomposites II (March 19, 2013) APS National Meeting, Baltimore.

Rajeev Kumar, Bobby G. Sumpter, S. Michael Kilbey II, "Charge regulation and local dielectric function in planar polyelectrolyte brushes", Talk in Session U32: Charged Polymers and Ionic Liquids (March 21, 2013) APS National Meeting, Baltimore.

Alamgir Karim, David Bucknall, Dharmaraj Raghavan, Bobby Sumpter, Scott Sides, "In-situ Neutron Scattering Determination of 3D Phase-Morphology Correlations in Fullerene -Polymer Organic Photovoltaic Thin Films", Talk in Session Y33: Organic Electronics and Photonics-Morphology and Structure I (March 22, 2013) APS National Meeting, Baltimore.

Geoffrey Rojas, P. Ganesh, Simon Kelly, Bobby G. Sumpter, John Schlueter, Petro Maksymovych, "Molecule/Surface Interactions and the Control of Electronic Structure in Epitaxial Charge Transfer Salts", Talk in Session U35: Search for New Superconductors III (March 21, 2013) APS National Meeting, Baltimore.

Geoffrey A. Rojas, P. Ganesh, Simon Kelly, Bobby G. Sumpter, John A. Schlueter, Petro Maksymovich, "Density Functional Theory studies of Epitaxial Charge Transfer Salts", Talk in Session N35: Search for New Superconductors III (March 20, 2013) APS National Meeting, Baltimore.

Arthur P. Baddorf, Qing Li, Chengbo Han, J. Bernholc, Humberto Terrones, Bobby G. Sumpter, Miguel Fuentes-Cabrera, Jieyu Yi, Zheng Gai, Peter Maksymovych, Minghu Pan, "Electron Injection to Control Self-Assembly and Disassembly of Phenylacetylene on Gold", Talk in Session C33: Organic Electronics and Photonics - Interfaces and Contacts (March 18, 2013) APS National Meeting, Baltimore.

Mina Yoon, Kai Xiao, Kendal W. Clark, An-Ping Li, David Geohegan, Bobby G. Sumpter, Sean Smith, "Understanding the growth of nanoscale organic semiconductors: the role of substrates", Talk in Session Z33: Organic Electronics and Photonics - Morphology and Structure II (March 22, 2013) APS National Meeting, Baltimore.

Chengbo Han, Wenchang Lu, Jerry Bernholc, Miguel Fuentes-Cabrera, Humberto Terrones, Bobby G. Sumpter, Jieyu Yi, Zheng Gai, Arthur P. Baddorf, Qing Li, Peter Maksymovych, Minghu Pan, "Computational Study of Phenylacetylene Self-Assembly on Au(111) Surface", Talk in Session C33: Organic Electronics and Photonics - Interfaces and Contacts (March 18, 2013) APS National Meeting, Baltimore.

Jaron Krogel, Jeongnim Kim, David Ceperley, "Prospects for efficient QMC defect calculations: the energy density applied to Ge self-interstitials", Talk in Session J24: Quantum Many-Body Systems and Methods I (March 19, 2013) APS National Meeting, Baltimore.

Kendal Clark, Xiaoguang Zhang, Ivan Vlassiouk, Guowei He,Gong Gu, Randall Feenstra, An-Ping Li, "Mapping the Electron Transport of Graphene Boundaries Using Scanning Tunneling Potentiometry", Talk in Session G6: CVD Graphene - Doping and Defects (March 19, 2013) APS National Meeting, Baltimore.

Gregory Brown, Donald M. Nicholson, Markus Eisenbach, Kh. Odbadrakh, "Wang-Landau or Statistical Mechanics", Talk in Session G6: Equilibrium Statistical Mechanics, Followed by GSNP Student Speaker Award (March 18, 2013) APS National Meeting, Baltimore.

Don Nicholson, Kh. Odbadrakh, German Samolyuk, G. Malcolm Stocks, "Calculated magnetic structure of mobile defects in Fe", Session Y16: Magnetic Theory II (March 22, 2013) APS National Meeting, Baltimore.

Khorgolkhuu Odbadrakh, Don Nicholson, Aurelian Rusanu, German Samolyuk, Yang Wang, Roger Stoller, Xiaoguang Zhang, George Stocks, "Coarse graining approach to First principles modeling of structural materials", Session A43: Multiscale modeling--Coarse-graining in Space and Time I (March 18, 2013) APS National Meeting, Baltimore.

M. G. Reuter & P. D. Williams, "The Information Content of Conductance Histogram Peaks: Transport Mechanisms, Level Alignments, and Coupling Strengths", Talk in Session R43: Electron Transfer, Charge Transfer and Transport Session (March 20, 2013) APS National Meeting, Baltimore.

Paul R. C. Kent, Panchapakesan Ganesh, Jeongnim Kim, Mina Yoon, Fernando Reboredo, "Binding and Diffusion of Li in Graphite: Quantum Monte Carlo Benchmarks and Validation of Van der Waals DFT", Talk in Session A5: Van der Waals Bonding in Advanced Materials - Materials Behavior (March 18, 2013) APS National Meeting, Baltimore.

Peter Staar, Thomas Maier, Thomas Schulthess, "DCA+: Incorporating self-consistently a continuous momentum self-energy in the Dynamical Cluster Approximation", Talk in Session N24, APS National Meeting, Baltimore.

Thomas Maier, Peter Hirschfeld, Douglas Scalapino, Yan Wang, Andreas Kreisel, "Pairing strength and gap functions in multiband superconductors: 3D effects", Talk in Session G37: Electronic Structure Methods II (March 20, 2013) APS National Meeting, Baltimore.

Thomas Maier, Yan Wang, Andreas Kreisel, Peter Hirschfeld, Douglas Scalapino, "Spin fluctuation theory of pairing in AFe2As2", Talk in Session G37: Electronic Structure Methods II (March 20, 2013) APS National Meeting, Baltimore.

Peter Hirschfeld, Andreas Kreisel, Yan Wang, Milan Tomic, Harald Jeschke, Anthony Jacko, Roser Valenti, Thomas Maier, Douglas Scalapino, "Pressure dependence of critical temperature of bulk FeSe from spin fluctuation theory", Talk in Session G37: Electronic Structure Methods II (March 20, 2013) APS National Meeting, Baltimore.

Markus Eisenbach, Junqi Yin, Don M. Nicholson, Ying Wai Li, "First principles calculation of finite temperature magnetism in Ni", Talk in Session C17: Magnetic Theory I (March 18, 2013), APS National Meeting, Baltimore.

Madhusudan Ojha, Don M. Nicholson, Takeshi Egami, "Ab-initio atomic level stresses in Cu-Zr crystal, liquid and glass phases", Talk in Session G42: Focus Session: Physics of Glasses and Viscous Liquids I (March 19, 2013), APS National Meeting, Baltimore.

Junqi Yin, Markus Eisenbach, Don Nicholson, "Spin-lattice coupling in BCC iron", Talk in Session T39: Metals Alloys and Metallic Structures (March 21, 2013), APS National Meeting, Baltimore.

German Samolyuk, Yuri Osetsky, Roger Stoller, Don Nicholson, George Malcolm Stocks, "The modification of core structure and Peierls barrier of the 1/2<111> screw dislocation in bcc Fe in the presence of Cr solute atoms", Talk in Session T39: Metals Alloys and Metallic Structures (March 21, 2013), APS National Meeting, Baltimore.

SIAM-CSE13
February 25 - March 1

CSMD had a strong showing at SIAM-CSE13, with over 25 presentations from staff members across the division. This conference is a leading conference in computational science and mathematics, drawing thousands of researchers from across the globe and supported jointly by NSF and DOE. Division scientists organized eight different mini-symposia with close to a hundred invited speakers in the areas of modern libraries (Christopher Baker), climate (Kate Evans), nuclear simulations (Bobby Philip), kinetic theory (Cory Hauck), hybrid architecture linear algebra (Ed D'Azevedo), UQ and stochastic inverse problems (Clayton Webster), and structural graph theory, sparse linear algebra, and graphical models (Blair Sullivan).

Seminars

December 12, 2014 - Jay Jay Billings: Eclipse ICE: ORNL's Modeling and Simulation User Environment

Abstract:
In the past several years, ORNL modeling and simulation projects have experienced an increased need for interactive, graphical user tools. The projects in question span advanced materials, batteries, nuclear fuels and reactors, nuclear fusion, quantum computing, and many others. They all require four tasks that are fundamental to modeling and simulation: creating input files, launching and monitoring jobs locally and remotely, visualizing and analyzing results, and managing data. This talk will present the Eclipse Integrated Computational Environment (ICE), a general-purpose open source platform that provides integrated tools and utilities for creating rich user environments. It will cover both the robust new infrastructure developed for modeling and simulation projects, such as new mesh editors and visualization tools, and the plugins for codes that are already supported by the platform and take advantage of these features. The design philosophy of the project will also be presented, as well as how the "n-body code problem" is solved by the platform. In addition to covering the services provided by the platform, this talk will also discuss ICE's place in the larger Eclipse ecosystem and how it became an Eclipse project. Finally, we will show how you can leverage it to accelerate your code deployment, use it to simplify your modeling and simulation project, or get involved in the development.

Bio: Jay Jay Billings is a member of the research staff in the Computer Science Research group and leader of the ICE team.

December 12, 2014 - Andrew Ross: Large scale Foundation Nurtured Collaboration

Abstract:
Software and data are crucial to almost all organizations, and Open Source Software and Open Data are a vital part of this. This presentation provides a glimpse of why an open approach to software and data results in far more than just free software and data, as measured in terms of freedoms and acquisition price. Collaboration across groups within large organizations and between organizations is hard. The Eclipse Foundation is the NFL of open collaborations: it provides governance structure, technology infrastructure, and many services to facilitate collaboration. This presentation will briefly examine this and how working groups hosted by the Eclipse Foundation are enabling collaboration for domains such as scientific R&D, the Internet of Things (IoT), location-aware technologies, and more. The results are important; examples include:

From this presentation, audience members will get a brief taste of some of the collaboration opportunities, how to learn more, and how to get involved.

Bio: Andrew Ross is Director of Ecosystem Development at the Eclipse Foundation, a vendor neutral not-for-profit. He is responsible for Eclipse's collaborative working groups including the LocationTech and Science groups which collaboratively develop software for location-aware systems and scientific research respectively. Prior to the Eclipse Foundation, Andrew was Director of Engineering at Ingres where his team developed advanced spatial support features for the relational database and many applications. Before Ingres, Andrew developed highly available Telecom solutions based on open source technologies for Nortel.

December 10, 2014 - Beth Plale: The Research Data Alliance: Progress and Promise in Global Data Sharing

Abstract:
The Research Data Alliance is coming up on 1.5 years old along the road to realizing its vision of "researchers and innovators openly sharing data across technologies, disciplines, and countries to address the grand challenges of society." RDA has grown tremendously in the last 1.5 years, from a handful of committed individuals to an organization with 1,600 members in 70 countries. As one who was part of the small group that got RDA off the ground and remains deeply engaged, I will introduce the Research Data Alliance, take stock of its impressive accomplishments to date, and highlight what I see as the opportunities it faces in realizing the grand goal RDA states so succinctly in its vision.

November 14, 2014 - Taisuke Boku: Tightly Coupled Accelerators: A very low latency communication system on GPU cluster and parallel programming

Accelerating devices such as GPUs, MICs, and FPGAs are among the most powerful computing resources for providing high performance/energy and high performance/space ratios across a wide range of large-scale computational science. On the other hand, the complexity of programming that combines various frameworks such as CUDA, OpenCL, OpenACC, OpenMP, and MPI is growing and seriously degrades programmability and productivity.

We have been developing the XcalableMP (XMP) parallel programming language for distributed-memory architectures, from PC clusters to MPPs, and enhancing its capability to cover accelerating devices for heterogeneous parallel processing systems. XMP is a PGAS-style language, and XMP-dev and XMP-ACC are its extensions for accelerating devices. We are also developing a new technology for direct inter-node GPU communication named the TCA (Tightly Coupled Accelerators) architecture, spanning the special-purpose network hardware up to the applications that this concept covers.

In this talk, I will introduce our on-going project which vertically integrates all these components toward the new generation of parallel accelerated computing.

BIO:
Prof. Taisuke Boku received his Master and PhD degrees from the Department of Electrical Engineering at Keio University. After his career as an assistant professor in the Department of Physics at Keio University, he joined the Center for Computational Sciences (formerly the Center for Computational Physics) at the University of Tsukuba, where he is currently the deputy director, the HPC division leader, and the system manager of supercomputing resources. He has been working there for more than 20 years on HPC system architecture, system software, and performance evaluation of various scientific applications. Over the years, he has played a central role in system development for CP-PACS (ranked number one in the TOP500 in 1996), FIRST (a hybrid cluster with a gravity accelerator), PACS-CS (a bandwidth-aware cluster), and HA-PACS (a high-density GPU cluster), representative supercomputers in Japan. He also contributed to the system design of the K Computer as a member of the architecture design working group in RIKEN and is currently a member of the operation advisory board of AICS, RIKEN. He received the ACM Gordon Bell Prize in 2011. His recent research interests include accelerated HPC systems and direct communication hardware/software for accelerators in HPC systems based on FPGA technology.

November 13, 2014 - Eric Lingerfelt: Accelerating Scientific Discovery with the Bellerophon Software System

Abstract:
We present an overview of a software system, Bellerophon, built to support a production-level HPC application called CHIMERA, which simulates the temporal evolution of core-collapse supernovae. Developed over the last 5 years at ORNL, Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its n-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a web-deliverable, cross-platform desktop application. Bellerophon has quickly evolved into the CHIMERA team's de facto work environment for analysis, artifact management, regression testing, and other workflow tasks. We will also present plans to expand utilization and encourage adoption by generalizing the system for new HPC applications and domains.

Bio:
Eric Lingerfelt is a technical staff member and software engineer in the ORNL Computer Science and Mathematics Division's Computer Science Research Group. Mr. Lingerfelt specializes in developing n-tier software systems with web-deliverable, highly-interactive client-side applications that allow users to generate, access, visualize, manipulate, and share complex sets of data from anywhere in the world. For over a decade, he has designed, developed, and successfully delivered multiple software systems to the US Department of Energy and other customers in the fields of nuclear astrophysics, Big Bang cosmology, core-collapse supernovae, isotope sales and distribution, environmental science, nuclear energy, theoretical nuclear science, and the oil and gas industry. He is a 2011 ORNL Computing and Computational Sciences Directorate Distinguished Contributor and the recipient of the 2013 CSMD Most Significant Technical Contribution Award. Mr. Lingerfelt received his B.S. in Mathematics and Physics from East Tennessee State University in 1998 and his M.S. in Physics from the University of Tennessee in 2002.

November 12, 2014 - John Springer: Discovery Advancements Through Data Analytics

Abstract:
The Purdue Discovery Advancements Through Analytics (D.A.T.A.) Laboratory seeks to address the computational challenges surrounding data analytics in the life, physical, and social sciences by focusing on the development and optimization of parallel codes that perform analytics. They complement these efforts by also examining the aspects of data analytics related to user adoption as well as the best practices pertaining to the management of associated metadata. In this seminar, the lead investigator in the D.A.T.A. Lab, Dr. John Springer, will discuss the lab's past and current efforts and will introduce the lab's planned activities.

Bio:
John Springer is an Associate Professor in Computer and Information Technology at Purdue University and the Lead Scientist for High Performance Data Management Systems at the Bindley Bioscience Center at Discovery Park. Dr. Springer's discovery efforts focus on distributed and parallel computational approaches to data integration and analytics, and he serves as the leader of the Purdue Discovery Advancements Through Analytics (D.A.T.A.) Laboratory.

November 10, 2014 - Christopher Rodrigues: High-Level Accelerator-Style Programming of Clusters with Triolet

Container libraries are popular for parallel programming due to their simplicity. Programs invoke library operations on entire containers, relying on the library implementation to turn groups of operations into efficient parallel loops and communication. However, their suitability for parallel programming on clusters has been limited because they carry only a small repertoire of parallel algorithm implementations under the hood.

In this talk, I will present Triolet, a high-level functional language for using a cluster as a computational accelerator. Triolet improves upon the generality of prior distributed container library interfaces by separating concerns of parallelism, loop nesting, and data partitioning. I will discuss how this separation is used to efficiently decompose and communicate multidimensional array blocks, as well as to generate irregular loop nests from computations with variable-size temporary data. These loop-building algorithms are implemented as library code. Triolet's compiler inlines and specializes library calls to produce efficient parallel loops. The resulting code often performs comparably to handwritten C.

For several compute-intensive loops running on a 128-core cluster (with 8 nodes and 16 cores per node), Triolet performs significantly faster than sequential C code, with performance ranging from slightly faster to 4.3× slower than manually parallelized C code. Thus, Triolet demonstrates that a library of container traversal functions can deliver cluster-parallel performance comparable to manually parallelized C code without requiring programmers to manage parallelism. Triolet carries lessons for the design of runtimes, compilers, and libraries for parallel programming using container APIs.

BIO:
Christopher Rodrigues got his Ph.D. in Electrical Engineering at the University of Illinois. He is one of the developers of the Parboil GPU benchmark suite. A computer architect by training, he has chased parallelism up the software stack, having worked on alias and dependence analysis, parallel programming for GPUs, statically typed functional language compilation, and the design of parallel libraries. He is interested in reducing the pain of writing and maintaining high-performance parallel code.

November 3, 2014 - Benjamin Lee: Statistical Methods for Hardware-Software Co-Design

Abstract: To pursue energy-efficiency, computer architects specialize and coordinate design across the hardware/software interface. However, coordination is expensive, with high non-recurring engineering costs that arise from an intractable number of degrees of freedom. I present the case for statistical methods to infer regression models, which provide tractability for complex design questions. These models estimate performance and power as a function of hardware parameters and software characteristics to permit coordinated design. For example, I show how to coordinate the tuning of sparse linear algebra with the design of the cache and memory hierarchy. Finally, I describe on-going work in using logistic regression to understand the root causes of performance tails and outliers in warehouse-scale datacenters.
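As a toy illustration of the regression idea described above (not Dr. Lee's actual methodology or data), the sketch below fits an ordinary least-squares model of runtime as a function of a single hardware parameter, using a made-up Amdahl-style feature (1/cores) and synthetic measurements.

#include <stdio.h>

/* Least-squares fit of runtime ~ b0 + b1 * (1/cores) on synthetic design points.
 * The data and the model form are placeholders chosen only for illustration. */
int main(void)
{
    const double cores[]   = { 1, 2, 4, 8, 16, 32 };
    const double runtime[] = { 96.0, 51.0, 28.5, 17.3, 11.6, 8.8 };  /* seconds, made up */
    const int n = 6;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        double x = 1.0 / cores[i];          /* hardware feature: inverse core count */
        sx  += x;
        sy  += runtime[i];
        sxx += x * x;
        sxy += x * runtime[i];
    }
    double b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* parallel component */
    double b0 = (sy - b1 * sx) / n;                         /* serial component   */

    printf("model: runtime = %.2f + %.2f / cores\n", b0, b1);
    printf("predicted runtime at 64 cores: %.2f s\n", b0 + b1 / 64.0);
    return 0;
}

Real co-design studies regress over many hardware parameters and software characteristics at once; the single-feature fit here is only meant to show the mechanics of inferring a predictive model from measured design points.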

BIO: Benjamin Lee is an assistant professor of Electrical and Computer Engineering at Duke University. His research focuses on scalable technologies, power-efficient architectures, and high-performance applications. He is also interested in the economics and public policy of computation. He has held visiting research positions at Microsoft Research, Intel Labs, and Lawrence Livermore National Lab. Dr. Lee received his B.S. in electrical engineering and computer science at the University of California, Berkeley and his Ph.D. in computer science at Harvard University. He did postdoctoral work in electrical engineering at Stanford University. He received an NSF Computing Innovation Fellowship and an NSF CAREER Award. His research has been honored as a Top Pick by IEEE Micro Magazine and has been honored twice as Research Highlights by Communications of the ACM.

October 28, 2014 - Robinson Pino: New Program Directions for Advanced Scientific Computing Research (ASCR)

October 24, 2014 - Qingang Xiong: Computational Fluid Dynamics Simulation of Biomass Fast Pyrolysis - From Particle Scale to Reactor Scale

Abstract: Fast pyrolysis, a prominent thermochemical conversion approach to produce bio-oil from biomass, has attracted increased interest. However, the fundamental mechanisms of biomass fast pyrolysis are still poorly understood, and the design, operation, and optimization of pyrolyzers are far from satisfactory because complicated multiphase flows are coupled with complex devolatilization processes. Computational fluid dynamics (CFD) is a powerful tool to investigate the underlying mechanisms of biomass fast pyrolysis and help optimize efficient pyrolyzers. In this presentation, I will describe my postdoctoral work on CFD of biomass fast pyrolysis at both the particle scale and the reactor scale. For the particle-scale CFD, the lattice Boltzmann method is used to describe the flow and heat transfer processes. The intra-particle gas flow is modeled by Darcy's law. A lumped multi-step reaction kinetics is employed to model the biomass decomposition. Through the particle-scale CFD, detailed information on the evolution of a biomass particle is obtained. The velocity, temperature, and species mass fractions inside and surrounding the particles are presented, and the evolution of particle shape and density is monitored. For the reactor-scale CFD, we use the so-called multi-fluid model to simulate the multiphase hydrodynamics, in which all phases are treated as interpenetrating continua. Volume-fraction-based mass, momentum, energy, and species conservation equations are employed to describe the density, velocity, temperature, and mass fraction fields. Various submodels are used to close the conservation equations. Using this model, fluidized-bed and auger reactors are modeled. Parametric and sensitivity studies on the effects of operating conditions, devolatilization schemes, and submodel selections are investigated. It is expected that these multi-scale CFD simulations will contribute significantly to improving the accuracy of industrial reactor modeling for biomass fast pyrolysis. Finally, I will discuss some of my ideas about future directions in the multi-scale CFD simulation of biomass thermochemical conversion.

Biography: Dr. Qingang Xiong is a postdoctoral research associate in the Department of Mechanical Engineering, Iowa State University. Dr. Xiong obtained his Ph.D. in Chemical Engineering from the Institute of Process Engineering, Chinese Academy of Sciences, in 2011. After graduation, Dr. Xiong spent half a year at the University of Heidelberg, Germany, as a software engineer conducting GPU-based high-performance computing for astrophysics. Dr. Xiong's research areas are computational fluid dynamics, CPU- and GPU-based parallel computing, heat and mass transfer, and biomass thermochemical conversion. Dr. Xiong has published more than 20 scientific papers and given more than 15 conference presentations. He serves as an editorial board member for several journals and as a chair at international conferences.

October 23, 2014 - Liang Zhou: Multivariate Transfer Function Design

Visualization and exploration of volumetric datasets have been an active area of research for over two decades. During this period, the volumetric datasets used by domain users have evolved from univariate to multivariate. These volumes are typically explored and classified via transfer function design and visualized using direct volume rendering. Multivariate transfer functions have emerged to improve classification results and to enable the exploration of multivariate volume datasets. In this talk, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) or two-dimensional (2D) transfer function spaces have been proposed; however, these methods work well on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have also been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. This method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides an easy-to-use, slice-based user interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then generated automatically by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all. Finally, real-world multivariate volume datasets are usually large, often exceeding the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.

October 20, 2014 - John Schmisseur: New HORIzONS: A Vision for Future Aerospace Capabilities within the University of Tennessee

Recently, issues of national interest including the planned DoD Pivot to the Pacific and assured large payload access to space have renewed commitment to the development of high-speed aerospace systems. As a result, many agencies, including the Air Force, are exploring new technology systems to facilitate operation in the hypersonic flight regime. One facet of the Air Force strategy in this area has been a reemphasis of hypersonic testing capabilities at the Arnold Engineering Development Complex (AEDC) and the establishment of an Air Force Research Laboratory scientific research group co-located at the complex. These recent events provide an opportunity for the University of Tennessee to support the Air Force and other agencies in the realization of planned high-speed capabilities while simultaneously establishing a precedent for the integration of contributions across the UT system.

The HORIzON center (High-speed Original Research & InnovatiON) at the University of Tennessee Space Institute (UTSI) has been established to address the current, intermediate and strategic challenges faced by national agencies in the development of high-speed/hypersonic capabilities. Specifically, the center will foster the development of world-class basic research capabilities in the region surrounding AEDC, create a culture of discovery and innovation integrating elements from academia, government and small business, and take the lead in the development of a rational methodology for the integration of large scale empirical and numerical data sets within a digital environment.

Dr. Schmisseur's presentation will provide the background and motivation that have driven the establishment of the HORIzON center and highlight a few of the center's major research vectors. He will be visiting ORNL to explore how contributions from DOE can be integrated within the HORIzON enterprise to support the achievement of our national goals in high-speed technology development.

October 14, 2014 - Krishna Chaitanya Gurijala: Shape-Based Analysis

Shape analysis plays a critical role in many fields, especially in medical analysis. Substantial research has been performed on shape analysis for manifolds. In contrast, shape-based analysis has received much less attention for volumetric data. It is not feasible to directly extend successful manifold shape analysis methods, such as heat diffusion, to volumes because of the huge computational cost. The work presented here addresses this problem with two approaches for shape analysis in volumes that not only capture shape information efficiently but also reduce the computational time drastically.

The first approach is a cumulative approach, called Cumulative Heat Diffusion, in which the heat diffusion is carried out by simultaneously considering all the voxels as sources. The cumulative heat diffusion is monitored by a novel operator called the Volume Gradient Operator, a combination of the well-known Laplace-Beltrami operator and a data-driven operator. Because the cumulative heat diffusion is computed by considering all the voxels, it is inherently dependent on the resolution of the data. We therefore propose a second, stochastic approach for shape analysis. In this approach the diffusion process is carried out using tiny massless particles termed shapetons. The shapetons are diffused in a Monte Carlo fashion across the voxels over a pre-defined distance (which serves as a single time step) to obtain the shape information, with the direction of propagation monitored by the volume gradient operator. Shapeton diffusion is a novel diffusion approach and is independent of the resolution of the data. Both approaches robustly extract features and objects based on shape.

Both shape analysis approaches are used in several medical applications such as segmentation, feature extraction, registration, transfer function design, and tumor detection. This work focuses primarily on the diagnosis of colon cancer. Colorectal cancer is the second leading cause of cancer-related mortality in the United States. Virtual colonoscopy is a viable non-invasive screening method, whereby a radiologist can explore a colon surface to locate precancerous polyps (protrusions/bumps on the colon wall) for removal. To facilitate efficient colon exploration, a robust and shape-preserving colon flattening algorithm is presented using the heat diffusion metric, which is insensitive to topological noise. The flattened colon surface provides effective colon exploration, navigation, polyp visualization, detection, and verification. In addition, the flattened colon surface is used to consistently register the supine and prone colon surfaces. Anatomical landmarks such as the taeniae coli, the flexures, and surface feature points are used in the colon registration pipeline, and this work presents techniques using heat diffusion to identify them automatically.

 

September 30, 2014 - Stanley Osher: What Sparsity and l1 Optimization Can Do For You

Sparsity and compressive sensing have had a tremendous impact in science, technology, medicine, imaging, machine learning and now, in solving multiscale problems in applied partial differential equations, developing sparse bases for elliptic eigenspaces. l1 and related optimization solvers are a key tool in this area. The special nature of this functional allows for very fast solvers: l1 actually forgives and forgets errors in Bregman iterative methods. I will describe simple, fast algorithms and new applications ranging from sparse dynamics for PDE and new regularization paths for logistic regression and support vector machines to optimal data collection and hyperspectral image processing. (Credit: Stanley Osher, jointly with many others.)
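
As a concrete illustration of how such l1 problems can be solved with very simple iterations, the sketch below implements the standard iterative shrinkage-thresholding algorithm (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1; it is a generic textbook example, not code from the speaker:

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Iterative shrinkage-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Recover a sparse vector from a few noisy random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
print("largest recovered entries:", np.argsort(-np.abs(x_hat))[:3])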

MORE ABOUT THE SPEAKER

Dr. Osher's awards and accomplishments are numerous and exceptionally remarkable; a few highlights include the following.

The Gauss prize citation summarized Dr. Osher's many achievements by stating that, "Stanley Osher has made influential contributions in a broad variety of fields in applied mathematics. These include high resolution shock capturing methods for hyperbolic equations, level set methods, PDE based methods in computer vision and image processing, and optimization. His numerical analysis contributions, including the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes and numerical schemes for Hamilton-Jacobi type equations have revolutionized the field. His level set contributions include new level set calculus, novel numerical techniques, fluids and materials modeling, variational approaches, high co-dimension motion analysis, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations; level set methods have been extremely influential in computer vision, image processing, and computer graphics. In addition, such new methods have motivated some of the most fundamental studies in the theory of PDEs in recent years, completing the picture of applied mathematics inspiring pure mathematics."

September 11, 2014 - Jeffrey Willert: Increased Efficiency and Functionality inside the Moment-Based Accelerated Thermal Radiation Transport Algorithm

Recent algorithm design efforts for thermal radiation transport (TRT) have included the application of "Moment-Based Acceleration" (MBA). These MBA algorithms achieve accurate solutions in a highly efficient manner by moving a large portion of the computational effort to a nonlinearly consistent low-order (reduced phase space) domain.

In this talk I will discuss recent improvements and advances in the MBA-TRT algorithm. We explore the use of Anderson Acceleration to solve the nonlinear low-order system as a replacement for a more traditional Jacobian-Free Newton-Krylov solver. Additionally, the MBA-TRT algorithm has struggled when error from Monte Carlo calculations builds up over several time steps; this error often corrupts the low-order system and may prevent convergence of the nonlinear solver. We attempt to remedy this by implementing a "Residual" Monte Carlo algorithm in which the stochastic error is greatly reduced for the same or lower computational cost. We conclude with a discussion of areas of future work.
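
For readers unfamiliar with Anderson Acceleration, the sketch below is a generic textbook implementation for a fixed-point problem x = g(x); it is illustrative only and not the production MBA-TRT solver. At each step it solves a small least-squares problem over the recent residual history and mixes the stored iterates accordingly.

import numpy as np

def anderson(g, x0, depth=5, tol=1e-10, max_iter=100):
    # Textbook Anderson acceleration for the fixed-point problem x = g(x).
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []                  # g-values and residuals g(x) - x
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x
        G_hist.append(gx)
        F_hist.append(f)
        m = min(depth, len(F_hist) - 1)
        if m == 0:
            x = gx                           # plain Picard step to start
        else:
            # Columns are differences between the newest and older history entries.
            dF = np.column_stack([F_hist[-1] - F_hist[-2 - i] for i in range(m)])
            dG = np.column_stack([G_hist[-1] - G_hist[-2 - i] for i in range(m)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma              # mixed update using the history
    return x

# Example: accelerate the scalar fixed-point iteration x = cos(x).
root = anderson(np.cos, np.array([1.0]))
print(root, np.cos(root))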

September 9, 2014 - Swen Boehm: STCI - A scalable approach for tools and runtimes

The ever-increasing complexity and scale of high-performance computing (HPC) systems and parallel scientific applications require the systems community to provide scalable and resilient communication substrates and run-time infrastructures. Two system research efforts will be presented, focusing on adaptation and customization of HPC runtimes as well as the usability of such systems. The Scalable runTime Component Infrastructure (STCI) will be introduced, a modular library that enables the implementation of new scalable and resilient HPC run-time systems. Its modular architecture eases adaptation to a particular HPC system. Additionally, STCI is based on the concept of "agents," which allows run-time services to be further customized; for instance, STCI's customizability was recently used to implement an MPMD-style execution model on top of STCI. Finally, "librte" will be presented: a unified runtime abstraction API that aims at improving the usability of HPC systems by providing an abstraction over various run-time systems such as Cray ALPS, PMI, ORTE, and STCI. "librte" is used by the Universal Common Communication Substrate (UCCS) and provides a simple and well-defined interface to tool developers.

September 9, 2014 - Ewa Deelman: Science Automation with the Pegasus Workflow Management System

Abstract sent on behalf of the speaker:
Scientific workflows allow scientists to declaratively describe potentially complex applications that are composed of individual computational components. Workflows also include a description of the data and control dependencies between the components. This talk will describe example workflows in various science domains including astronomy, bioinformatics, earthquake science, gravitational-wave physics, and others. It will examine the challenges faced by workflow management systems when executing workflows in distributed and high-performance computing environments. In particular, the talk will describe the Pegasus Workflow Management System developed at USC/ISI. Pegasus bridges the scientific domain and the execution environment by automatically mapping high-level workflow descriptions onto distributed resources. It locates the input data and computational resources necessary for workflow execution. It also restructures the workflow for performance and reliability reasons. Pegasus can execute workflows on a laptop, a campus cluster, grids, and clouds. It can handle workflows with a single task or millions of tasks and has been used to manage workflows accessing and generating terabytes of data. The talk will describe the capabilities of Pegasus and how it manages heterogeneous computing environments.

BIO FROM SPEAKER:
Ewa Deelman is a Research Associate Professor in the USC Computer Science Department and the Assistant Director of Science Automation Technologies at the USC Information Sciences Institute. Dr. Deelman's research interests include the design and exploration of collaborative, distributed scientific environments, with particular emphasis on workflow management as well as the management of large amounts of data and metadata. In 2007, Dr. Deelman edited a book, "Workflows in e-Science: Scientific Workflows for Grids," published by Springer. She is also the founder of the annual Workshop on Workflows in Support of Large-Scale Science, which is held in conjunction with the Supercomputing conference. In 1997 Dr. Deelman received her PhD in Computer Science from Rensselaer Polytechnic Institute.

August 29, 2014 - C. David Levermore: Coarsening of Particle Systems

ABSTRACT FROM SPEAKER:
Each particle in a simulation of a system of particles usually represents a huge number of real particles. We present a framework for constructing the dynamics of a so-called coarsened system of simulated particles. We build an approximate solution to the Liouville equation for the original system from the solution of an equation for the phase-space density of a smaller system. We do this with a Markov approximation within a Mori-Zwanzig formalism based upon a reference density. We then identify the evolution equation for the reduced phase-space density as the forward Kolmogorov equation of a Markov process. The original system governed by deterministic dynamics is then simulated with the coarsened system governed by this Markov process. Both Monte Carlo (MC) and molecular dynamics (MD) simulations can be viewed within this framework. More generally, the reduced dynamics can have elements of both MC and MD.

August 21, 2014 - Quan Long: Laplace method for optimal Bayesian experimental design with applications in impedance tomography and seismic source inversion

Abstract sent on behalf of the speaker:
The Laplace method is a widely used technique for approximating integrals in statistics. We analyze this method in the context of optimal Bayesian experimental design and extend it from the classical scenario, where the parameters can be completely determined by the experiment, to scenarios where an unidentifiable parametric manifold exists. We show that carrying out this approximation significantly accelerates the estimation of the expected Kullback-Leibler divergence. The developed methodology has been applied to the optimal experimental design of impedance tomography and seismic source inversion.
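
For reference, the classical Laplace approximation underlying this approach reads, for a function h with a unique interior maximum at \hat{\theta} in d dimensions,

\int_{\mathbb{R}^d} e^{N h(\theta)}\, d\theta \;\approx\; e^{N h(\hat{\theta})} \left(\frac{2\pi}{N}\right)^{d/2} \left[\det\!\left(-\nabla^2 h(\hat{\theta})\right)\right]^{-1/2},

which amounts to replacing the posterior by a Gaussian centered at the maximizer; in the experimental design setting this turns the inner integral in the expected Kullback-Leibler divergence into a closed-form expression (the notation here is schematic and ours, not necessarily the speaker's).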

August 18, 2014 - Pierre Gremaud: Impedance boundary conditions for flows on networks

Abstract sent on behalf of the speaker:

From hemodynamics to engineering applications, many flow problems are solved on networks. For feasibility reasons, computational domains are often truncated and outflow conditions have to be prescribed at the end of the domain under consideration.

We will show how to efficiently compute the impedance of specific networks and how to use this information as outflow boundary condition. The method is based on linearization arguments and Laplace transforms. The talk will focus on hemodynamics applications but we will indicate how to generalize the approach.

July 23, 2014 - Frédérique Laurent-Negre: High order moment methods for the description of spray: mathematical modeling and adapted numerical methods

Abstract sent on behalf of the speaker:
We consider a two-phase flow consisting of a dispersed phase of liquid droplets (a spray) in a gas flow. This type of flow occurs in many applications, such as two-phase combustion or solid propulsion. The spray is characterized by its distribution in size and velocity, which satisfies a Boltzmann-type equation. As an alternative to the Lagrangian methods commonly used for numerical simulations, we have developed Eulerian models that can account for the polydisperse character of sprays. They use moments in size and velocity of the distribution on fixed intervals of droplet size. These moments represent the number, the mass or the amount of surface area, the momentum ... of all droplets in a given size range. However, the space in which the moment vectors live becomes complex when high-order moments are considered. A key point of the numerical methods is then to ensure that the moment vector stays in this space. We study here some mathematical models derived from the kinetic model as well as high-order numerical methods specifically developed to preserve the moment space.

July 22, 2014 - Christos Kavouklis: Numerical Solution of the 3D Poisson Equation with the Method of Local Corrections

Abstract sent on behalf of the speaker:
We present a new version of the Method of Local Corrections, a low-communication algorithm for the numerical solution of the free-space Poisson equation on 3D structured grids. We assume a decomposition of the fine computational domain (which contains the global right-hand side, the charge) into a set of small disjoint cubic patches (e.g., of size 33^3). The Method of Local Corrections comprises three steps in which Mehrstellen discretizations of the Laplace operator are employed: (i) a loop over the fine disjoint patches and the computation of local potentials on sufficiently large extensions of those patches (downward pass); (ii) an inexpensive global Poisson solve on the associated coarse domain, with right-hand side computed by applying the coarse-mesh Laplacian to the local potentials of step (i); and (iii) a correction of the local solutions computed in step (i) on the boundaries of the fine disjoint patches, based on interpolating the global coarse solution, followed by a propagation of the corrections into the patch interiors via local Dirichlet solves (upward pass). Local solves in the downward pass and the global coarse solve are performed using the domain-doubling algorithm of Hockney. For the local solves in the upward pass we employ a standard DFT Dirichlet Poisson solver. In this new version of the Method of Local Corrections we take into account the local potentials induced by truncated Legendre expansions of degree P of the local charges (the original version corresponded to P=0). The result is an h-p scheme that is (P+1)-order accurate and involves only local communication: we only have to compute and communicate the coefficients of local Legendre expansions (for instance, 20 scalars per patch for expansions of degree P=3). Several numerical simulations are presented to illustrate the new method and demonstrate its convergence properties.

July 17, 2014 - Kody John Hoffman Law: Dimension-independent, likelihood-informed (DILI) MCMC (Markov chain Monte Carlo) sampling algorithms for Bayesian inverse problems

July 1, 2014 - Xubin (Ben) He: High Performance and Reliable Storage Support for Big Data

Abstract sent on behalf of the speaker:
Big data applications have imposed unprecedented challenges in data analysis, storage, organization, and understanding due to their heterogeneity, volume, complexity, and high velocity. These challenges face both computer systems researchers, who investigate new storage and computational solutions to support fast and reliable access to large datasets, and application scientists in various disciplines, who exploit these datasets of vital scientific interest for knowledge discovery. In this talk, I will discuss my research in data storage and I/O systems, particularly solid-state devices (SSDs) and erasure codes, which provide cost-effective solutions for big data management with high performance and reliability.

Bio:
Dr. Xubin He is a Professor and the Graduate Program Director of Electrical and Computer Engineering at Virginia Commonwealth University. He is also the Director of the Storage Technology and Architecture Research (STAR) lab. Dr. He received his PhD in Electrical and Computer Engineering from the University of Rhode Island, USA, in 2002 and both his MS and BS degrees in Computer Science from Huazhong University of Science and Technology, China, in 1997 and 1995, respectively. His research interests include computer architecture, reliable and high-availability storage systems, and distributed computing. He has published more than 80 refereed articles in prestigious journals such as IEEE Transactions on Parallel and Distributed Systems (TPDS), Journal of Parallel and Distributed Computing (JPDC), ACM Transactions on Storage, and IEEE Transactions on Dependable and Secure Computing (TDSC), and at various international conferences, including USENIX FAST, USENIX ATC, Eurosys, IEEE/IFIP DSN, IEEE IPDPS, MSST, ICPP, MASCOTS, and LCN. He was the general co-chair for IEEE NAS'2009 and program co-chair for MSST'2010, IEEE NAS'2008, and SNAPI'2007, and he has served as a proposal review panelist for NSF and in various chair and committee roles for many professional conferences in the field. Dr. He was a recipient of the ORAU Ralph E. Powe Junior Faculty Enhancement Award in 2004, the TTU Chapter Sigma Xi Research Award in 2010 and 2005, and the TTU ECE Most Outstanding Teaching Faculty Award in 2010. He holds one U.S. patent. He is a senior member of the IEEE and a member of the IEEE Computer Society and USENIX.

June 18, 2014 - Hari Krishnan: Enabling Collaborative Domain-Centric Visualization and Analysis in High Performance Computing Environments

Abstract and Bio sent on behalf of the speaker:
Multi-institutional, interdisciplinary domain science teams are increasingly commonplace in modern high performance computing (HPC) environments. Visualization tools, such as VisIt and ParaView, have traditionally focused more on improving the scalability, performance, and efficiency of algorithms than on enabling ease of use and collaborative functionality that complements the power of the HPC resources. In addition, visualization tools provide an algorithm-based infrastructure focusing on a diverse set of readers, plots, and operations rather than a higher-level, domain-specific set of capabilities when providing solutions to the scientific community. This strategy yields a higher return on investment, but increases complexity for the user community.

As larger, more diverse teams of scientists become more commonplace, they require applications tuned to get the most use out of heavily utilized, resource-constrained distributed HPC environments. Standard methods of visualization and data sharing pose significant challenges that detract from users' focus on scientific inquiry.

In this presentation I will highlight three new capabilities under development within VisIt to address these needs and enable domain scientists to refocus their efforts on more productive endeavors. These features include tailored visualization using a new PySide/PyQt infrastructure, a new parallel analysis framework supporting Python and R scripting, and a collaboration suite that allows sharing and communicating among a variety of display mediums, from mobile devices to visualization clusters. The goal is to enhance the experience of domain scientists by streamlining their work environment, providing easy access to a complex set of resources, and enabling collaboration, sharing, and communication among a diverse team.

Bio:
Hari Krishnan received his Ph.D. in computer science and works as a computer systems engineer in the visualization and graphics group at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization on HPC platforms and many-core architectures. He leads the development effort on several HPC-related projects, which include research on new visualization methods, optimizing scaling and performance on Cray machines, working on data-model-optimized I/O libraries, and enabling remote workflow services. He is also an active developer on several major open source projects, including VisIt, NiCE, and H5hut, and has developed plugins for Fiji/ImageJ.

 

May 20, 2014 - Weiran Sun: A Spectral Method for Linear Half-Space Kinetic Equations

Abstract sent on behalf of the speaker:
Half-space equations naturally arise in the boundary layer analysis of kinetic equations. In this talk we will present a unified proof of the well-posedness of a class of linear half-space equations with general incoming data. We will also show a spectral method to numerically resolve these types of equations in a systematic way. Our main strategy in both the analysis and the numerics consists of three steps: adding damping terms to the original half-space equation, using an inf-sup argument and even-odd decomposition to establish the well-posedness of the damped equation, and then recovering solutions to the original half-space equation. The accuracy of the damped equation is shown to be quasi-optimal, and the numerical error of approximations to the original equation is controlled by that of the damped equation. Numerical examples are shown for the isotropic neutron transport equation and the linearized BGK equation. This is joint work with Qin Li and Jianfeng Lu.

May 14, 2014 - Michael Bauer: Programming Distributed Heterogeneous Architectures with Logical Regions

Abstract and Bio sent on behalf of the speaker:
Modern supercomputers now encompass both heterogeneous processors and deep, complex memory hierarchies. Programming these machines currently requires expertise in an eclectic collection of tools (MPI, OpenMP, CUDA, etc.) that primarily focus on describing parallelism while placing the burden of data movement on the programmer. Legion is an alternative approach that provides extensive support for describing the structure of program data through logical regions. Logical regions can be dynamically partitioned into sub-regions giving applications an explicit mechanism for directly conveying information about locality and independence to the Legion runtime. Using this information, Legion automatically extracts task parallelism and orchestrates data movement through the memory hierarchy. Time permitting, we will discuss results from several applications including a port of S3D, a production combustion simulation running on Titan, the Department of Energy's current flagship supercomputer.

Bio:
Michael Bauer is a sixth year PhD student in computer science at Stanford University. His interests include the design and implementation of programming systems for supercomputers and distributed systems.

May 8, 2014 - Jerry McMahan: Bayesian Inverse Problems for Uncertainty Quantification: Prediction with Model Discrepancy and a Verification Framework

Abstract sent on behalf of the speaker:
Recent work in uncertainty quantification (UQ) has made it feasible to compute the statistical uncertainties of mathematical models in physics, biology, and engineering applications, offering added insight into how a model relates to the measurement data it represents. This talk focuses on two issues related to the reliability of UQ methods for model calibration in practice. The first issue concerns calibration of models having discrepancies with respect to the phenomena they model, when these discrepancies violate commonly employed statistical assumptions used to simplify computation. Using data from a vibrating beam as a case study, I will illustrate how these discrepancies can limit the accuracy of predictive simulation and discuss some approaches for reducing the impact of these limitations. The second issue concerns verifying the accurate implementation of computational algorithms for solving inverse problems in UQ. In this context, verification is particularly important, as the nature of the computational results makes detection of subtle implementation errors unlikely. I will present a collaboratively developed computational framework for verification of statistical inverse problem solvers and present examples of its use to verify the Markov Chain Monte Carlo (MCMC) based routines in the QUESO C++ library.
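
For readers less familiar with MCMC-based inverse solvers, the sketch below is a minimal random-walk Metropolis sampler for an unnormalized log-posterior; it is purely illustrative and far simpler than the QUESO routines discussed in the talk:

import numpy as np

def random_walk_metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    # Minimal random-walk Metropolis sampler for an unnormalized log-posterior.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept or reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy calibration problem: Gaussian likelihood around an observation of 2.0
# (noise standard deviation 0.3) with a standard normal prior on the parameter.
log_post = lambda th: -0.5 * ((2.0 - th[0]) / 0.3) ** 2 - 0.5 * th[0] ** 2
samples = random_walk_metropolis(log_post, x0=[0.0])
print("posterior mean estimate:", samples[2000:].mean())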

May 5, 2014 - Abhishek Kumar: Multiscale modeling of polycrystalline material for optimized property

May 1, 2014 - Eric Chung: Staggered Discontinuous Galerkin Methods

ABSTRACT FROM SPEAKER: In this talk, we will present staggered discontinuous Galerkin methods. These methods are based on piecewise polynomial approximation on staggered grids. The basis functions have to be carefully designed so that certain compatibility conditions are satisfied. Moreover, the use of staggered grids brings some advantages, such as optimal convergence and conservation. We will discuss the basic methodologies and applications to wave propagation and fluid flows.

April 7, 2014 - Tom Scogland: Runtime Adaptation for Autonomic Heterogeneous Computing

Heterogeneity is increasing at all levels of computing, most visibly with the rise of general-purpose computing on GPUs in everything from phones to supercomputers. More quietly, it is increasing with the rise of NUMA systems, hierarchical caching, OS noise, and a myriad of other factors. As heterogeneity becomes a fact of life at every level of computing, efficiently managing heterogeneous compute resources is becoming a critical task. In order to make the problem tractable, we must develop methods and systems that allow software to adapt to the hardware it finds within a given node at runtime. The goal is to make the complex functions of heterogeneous computing autonomic, handling load balancing, memory coherence, and other performance-critical factors in the runtime. This talk will discuss my research in this area, including the design of a work-sharing construct for CPU and GPU resources in OpenMP and automated memory reshaping/re-mapping for locality.

Dr. Scogland is a candidate for a postdoctoral position with the Computer Science Research Group.

April 4, 2014 - Alex McCaskey: Effects of Electron-Phonon Coupling in Single-Molecule Magnet Transport Junctions Using a Hybrid Density Functional Theory and Model Hamiltonian Approach

Recent experiments have shown that junctions consisting of individual single-molecule magnets (SMMs) bridged between two electrodes can be fabricated in three-terminal devices, and that the characteristic magnetic anisotropy of the SMMs can be affected by electrons tunneling through the molecule. Vibrational modes of the SMM can couple to electronic charge and spin degrees of freedom, and this coupling also influences the magnetic and transport properties of the SMM. The effect of electron-phonon coupling on transport has been extensively studied in small molecules, but not yet for junctions of SMMs. The goals of this talk are two-fold: to present a novel approach for studying the effects of this electron-phonon coupling in transport through SMMs that utilizes both density functional theory calculations and model Hamiltonian construction and analysis, and to present a software framework based on this hybrid approach for the simulation of transport across user-defined SMMs. The results of these simulations indicate a characteristic suppression of the current at low energies that is strongly dependent on the overall electron-phonon coupling strength and the number of molecular vibrational modes considered.

Mr. McCaskey is a candidate for a graduate position in the Computer Science Research Group.

March 26, 2014 - Steven Wise: Convergence of a Mixed FEM for a Cahn-Hilliard-Stokes System

Abstract and Bio sent on behalf of the speaker:
Co-Authors: Amanda Diegel and Xiaobing Feng
Abstract: In this talk I will describe a mixed finite element method for a modified Cahn-Hilliard equation coupled with a non-steady Darcy-Stokes flow that models phase separation and coupled fluid flow in immiscible binary fluids and di-block copolymer melts. I will focus both on numerical implementation issues for the scheme as well as the convergence analysis. The time discretization is based on a convex splitting of the energy of the equation. I will show that our scheme is unconditionally energy stable with respect to a spatially discrete analogue of the continuous free energy of the system and unconditionally uniquely solvable. We can show, in addition, that the phase variable is bounded in L^\infty(0,T;L^\infty) and the chemical potential is bounded in L^\infty(0,T;L^2), unconditionally in both two and three dimensions, for any finite final time T. In fact, the bounds in such estimates grow only (at most) linearly in T. I will prove that these variables converge with optimal rates in the appropriate energy norms in both two and three dimensions. Finally, I will discuss some extensions of the scheme to approximate solutions for diffuse interface flow models with large differences in density.

Bio:
Steven Wise is an associate professor of mathematics at the University of Tennessee. He specializes in fast adaptive nonlinear algebraic solvers for numerical PDE, numerical analysis, and scientific computing more broadly. Before coming to the University of Tennessee, he was a postdoc and visiting assistant professor of mathematics and biomedical engineering at the University of California, Irvine. He earned a PhD in engineering physics from the University of Virginia in 2003.

March 18, 2014 - Zhiwen Zhang: A Dynamically Bi-Orthogonal Method for Time-Dependent Stochastic Partial Differential Equation

We propose a dynamically bi-orthogonal method (DyBO) to study time-dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well known that the Karhunen-Loeve (KL) expansion minimizes the total mean squared error and gives the sparsest representation of stochastic solutions. However, the computation of the KL expansion can be quite expensive, since we need to form a covariance matrix and solve a large-scale eigenvalue problem. In this talk, we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced-model methods, our method constructs the reduced basis on the fly without the need to form the covariance matrix or compute its eigen-decomposition. We further present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. Several numerical experiments will be provided to demonstrate the effectiveness of the DyBO method.
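
Schematically (in our notation, not necessarily the speaker's), the representation at the heart of the method is a truncated bi-orthogonal expansion of the stochastic solution,

u(x, t; \omega) \approx \bar{u}(x, t) + \sum_{i=1}^{m} u_i(x, t)\, Y_i(t; \omega),

in which the spatial modes u_i are orthogonal in space and the stochastic coefficients Y_i are uncorrelated; DyBO evolves \bar{u}, the u_i, and the Y_i directly in time, which is why the covariance matrix and its eigen-decomposition never need to be formed.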

Bio:
Zhiwen Zhang is a postdoctoral scholar in the Department of Computing and Mathematical Sciences, California Institute of Technology. He graduated from the Department of Mathematical Sciences, Tsinghua University, in 2011, where he was awarded a Ph.D. in Applied Mathematics. From 2008 to 2009, he studied at the University of Wisconsin-Madison as a visiting student. His research interests lie in the applied analysis and numerical computation of problems arising from quantum chemistry, wave propagation, porous media, cell evolution, Bayesian updating, stochastic fluid dynamics, and random heterogeneous media.

March 4, 2014 - David Seal: Beyond the Method of Lines Formulation: Building Spatial Derivatives into the Temporal Integrator

Abstract: High-order solvers for hyperbolic conservation laws often fall into two disparate categories. On one hand, the method of lines formulation starts by discretizing the spatial variables, and then a system of ODEs is solved using an appropriate time integrator. On the other hand, Lax-Wendroff discretizations immediately convert Taylor series in time into discrete spatial derivatives. In this talk, we present generalizations of these methods, including high-order discontinuous Galerkin (DG) methods based on multiderivative time integrators, as well as high-order finite difference weighted essentially non-oscillatory (WENO) methods based on the Picard Integral Formulation (PIF) of the conservation law. Multiderivative time integrators are extensions of Runge-Kutta and Taylor methods. They reduce the overall storage required for a Runge-Kutta method, and they add flexibility to the Taylor-series-in-time methods by allowing new coefficients to be used at various stages. In the multiderivative DG method, "modified fluxes" are used to define high-order Riemann problems, similar to those defined in the generalized Riemann problem solvers incorporated in the Arbitrary DERivative (ADER) methods. The finite difference WENO method is based on a Picard Integral Formulation of the PDE, where we first integrate in time and then work on discretizing the temporal integral. The present formulation is automatically mass conservative, and therefore it introduces the possibility of modifying finite difference fluxes to accomplish tasks such as positivity preservation or reducing the number of expensive nonlinear WENO reconstructions. For now, we present results for a single-step version of the PIF-WENO method, which lends itself to incorporating adaptive mesh refinement technology. Results for one- and two-dimensional conservation laws are presented, and they indicate that the new methods compete well with the current state of the art.
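
The Picard Integral Formulation referred to above can be summarized schematically (our notation, not the talk's) as follows: integrating the conservation law q_t + f(q)_x = 0 over one time step gives

q(x, t^{n+1}) = q(x, t^{n}) - \partial_x \int_{t^n}^{t^{n+1}} f\big(q(x, t)\big)\, dt,

so the scheme works with the time-averaged flux F^n(x) = \frac{1}{\Delta t} \int_{t^n}^{t^{n+1}} f(q(x, t))\, dt and applies the WENO machinery to it; because the update is written as a conservative discretization of a single flux, mass conservation holds by construction.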

February 21, 2014 - Zhou Li: Harnessing high-resolution mass spectrometry and high-performance supercomputing for quantitative characterization of a broad range of protein post-translational modifications in a natural microbial community

Microbial communities populate and shape diverse ecological niches within natural environments. The physiology of organisms in natural consortia has been studied with community proteomics. However, little is known about how free-living microorganisms regulate protein activities through post-translational modifications (PTMs). Here, we harnessed high-performance mass spectrometry and supercomputing for identification and quantification of a broad range of PTMs (including hydroxylation, methylation, citrullination, acetylation, phosphorylation, methylthiolation, S-nitrosylation, and nitration) in microorganisms. Using an E. coli proteome as a benchmark, we identified more than 5,000 PTM events of diverse types and a large number of modified proteins that carried multiple types of PTMs. We applied this demonstrated approach to profiling PTMs in two growth stages of a natural microbial community growing in the acid mine drainage environment. We found that the multi-type, multi-site protein modifications are highly prevalent in free-living microorganisms. A large number of proteins involved in various biological processes were dynamically modified during the community succession, indicating that dynamic protein modification might play an important role in organismal response to changing environmental conditions. Furthermore, we found closely related, but ecologically differentiated bacteria harbored remarkably divergent PTM patterns between their orthologous proteins, implying that PTM divergence could be a molecular mechanism underlying their phenotypic diversities. We also quantified fractional occupancy for thousands of PTM events. The findings of this study should help unravel the role of PTMs in microbial adaptation, evolution and ecology.

February 14, 2014 - Celia E. Shiau: Probing fish-microbe interface for environmental assessment of clean energy

To preserve wildlife and natural resources for future generations, we face the grand challenge of effectively assessing and predicting the impact of current and future energy use. My overall goal is to probe the microbiome and host-microbe interface of fish populations, in order to evaluate environmental stress on aquatic life and resources. Current understanding of aquatic microbes in fresh and salt water is centered on free-living bacteria (independent of a host). I will discuss my work on the experimentally tractable fish model (Danio rerio) that can be applied to investigate the interaction between microbiota, host health, and environmental toxicants (such as mercury and other metalloids), and the aims of my Liane Russell fellowship research program. The findings will provide a framework for studies of other fish species, leveraging advanced imaging, metagenomics, bioinformatics, and neutron scattering. The proposed study promises to inform the potential use of fish microbes to solve energy and environmental challenges, thereby providing means for critical assessment of global energy impact.

February 6, 2014 - Susan Janiszewski: 3-connected, claw-free, generalized net-free graphs are hamiltonian

Given a family $\mathcal{F} = \{H_1, H_2, \dots, H_k\}$ of graphs, we say that a graph $G$ is $\mathcal{F}$-free if $G$ contains no subgraph isomorphic to any $H_i$, $i = 1,2,\dots, k$. The graphs in the set $\mathcal{F}$ are known as {\it forbidden subgraphs}. The main goal of this dissertation is to further classify pairs of forbidden subgraphs that imply a 3-connected graph is hamiltonian. First, the number of possible forbidden pairs is reduced by presenting families of graphs that are 3-connected and not hamiltonian. Of particular interest is the graph $K_{1,3}$, also known as the {\it claw}, as we show that it must be included in any forbidden pair. Second, we show that 3-connected, $\{K_{1,3}, N_{i,j,0}\}$-free graphs are hamiltonian for $i,j \ne 0, i+j \le 9$, and that 3-connected, $\{K_{1,3}, N_{3,3,3}\}$-free graphs are hamiltonian, where $N_{i,j,k}$, known as the {\it generalized net}, is the graph obtained by rooting vertex-disjoint paths of length $i$, $j$, and $k$ at the vertices of a triangle. These results, combined with previously known results, give a complete classification of generalized nets such that claw-free, net-free implies a 3-connected graph is hamiltonian.
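
As a small concrete illustration (ours, not the dissertation's), the sketch below checks by brute force whether a graph contains an induced copy of the claw $K_{1,3}$, i.e., a vertex with three pairwise non-adjacent neighbors, which is the usual reading of claw-freeness in hamiltonicity results:

from itertools import combinations

def has_induced_claw(adj):
    # adj maps each vertex to the set of its neighbors.
    # An induced K_{1,3} is a vertex with three pairwise non-adjacent neighbors.
    for v, nbrs in adj.items():
        for a, b, c in combinations(nbrs, 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return True
    return False

# The star K_{1,3} itself is a claw; the 4-cycle C_4 is claw-free.
claw = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_induced_claw(claw), has_induced_claw(c4))   # True False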

January 30, 2014 - Wei Guo: High order Semi-Lagrangian Methods for Transport Problems with Applications to Vlasov Simulations and Global Transport

Abstract and Bio sent on behalf of the speaker:
The semi-Lagrangian (SL) scheme for transport problems has gained increasing popularity in the computational science community due to its attractive properties. For example, the SL scheme, compared with the Eulerian approach, allows extra-large time steps by incorporating a characteristics-tracing mechanism, hence achieving great computational efficiency. In this talk, we will introduce a family of dimensional-splitting high-order SL methods coupled with high-order finite difference weighted essentially non-oscillatory (WENO) procedures and finite element discontinuous Galerkin (DG) methods. By performing dimensional splitting, the multi-dimensional problem is decoupled into a sequence of 1-D problems, which are much easier to solve numerically in the SL setting. The proposed SL schemes are applied to the Vlasov model arising from plasma physics and to global transport problems based on the cubed-sphere geometry from the operational climate model. We further introduce the integral deferred correction (IDC) framework to reduce the dimensional-splitting errors. The proposed algorithms have been extensively tested and benchmarked with classical problems in plasma physics, such as Landau damping, the two-stream instability, and the Kelvin-Helmholtz instability, and with global transport problems on the cubed sphere. This is joint work with Andrew Christlieb, Maureen Morton, Ram Nair, and Jing-Mei Qiu.

January 28, 2014 - Jeff Haack: Applications of computational kinetic theory

Abstract and Bio sent on behalf of the speaker:
Kinetic theory describes the evolution of a complex system of a large number of interacting particles. These models are used to describe systems in which the characteristic scales for interactions between particles are comparable to the characteristic length scales of the system. In this talk, I will discuss numerical computation for several applications of kinetic theory, including rarefied gas dynamics with applications toward re-entry, kinetic models for plasmas, and a biological model for swarm behavior. As kinetic models often involve a high-dimensional phase space as well as an integral operator modeling particle interactions, simulations have been impractical in many settings. However, recent advances in massively parallel computing are very well suited to solving kinetic models, and I will discuss how these resources are used in computing kinetic models and the new difficulties that arise when computing on these architectures.

January 24, 2014 - Roman Lysecky: Data-driven Design Methods and Optimization for Adaptable High-Performance Systems

Abstract and Bio sent on behalf of the speaker:
Research has demonstrated that runtime optimization and adaptation methods can achieve performance improvements over design-time-optimized system implementations. Furthermore, modern computing applications require a large degree of configurability and adaptability to operate on a variety of data inputs whose characteristics may change over time. In this talk, we highlight two runtime optimization methods for adaptable computing systems. We first highlight the use of runtime profiling and system-level performance and power estimation methods for estimating the speedup and power consumption of dynamically reconfigurable systems. We evaluate the accuracy and fidelity of the online estimation framework for dynamic configuration of computational kernels, with the goals of both maximizing performance and minimizing system power consumption. We further present an overview of the design framework and runtime reconfiguration methods supporting data-adaptable reconfigurable systems. Data-adaptable reconfigurable systems enable a flexible runtime implementation in which a system can transition the execution of tasks between different execution modalities, e.g., hardware and software implementations, while continuing to process data during the transition.

Bio:
Roman Lysecky is an Associate Professor of Electrical and Computer Engineering at the University of Arizona. He received his B.S., M.S., and Ph.D. in Computer Science from the University of California, Riverside in 1999, 2000, and 2005, respectively. His research interests focus on embedded systems, with emphasis on embedded system security, non-intrusive system observation methods for in-situ analysis of complex hardware and software behavior, runtime optimization methods, and design methods for precisely timed systems with applications in safety-critical and mobile health systems. He was awarded the Outstanding Ph.D. Dissertation Award from the European Design and Automation Association (EDAA) in 2006 for New Directions in Embedded Systems. He received a CAREER award from the National Science Foundation in 2009 and four Best Paper Awards from the ACM/IEEE International Conference on Hardware-Software Codesign and System Synthesis (CODES+ISSS), the ACM/IEEE Design Automation and Test in Europe Conference (DATE), the IEEE International Conference on Engineering of Computer-Based Systems (ECBS), and the International Conference on Mobile Ubiquitous Computing, Systems, Services (UBICOMM). He has coauthored five textbooks on VHDL, Verilog, C, C++, and Java programming. He is an inventor on one US patent. In 2008 and 2013, he received an award for Excellence at the Student Interface from the College of Engineering at the University of Arizona.

January 21, 2014 - Tuoc Van Phan: Some Aspects in Nonlinear Partial Differential Equations and Nonlinear Dynamics

This talk contains two parts:

Part I: We discuss the Shigesada-Kawasaki-Teramoto system of cross-diffusion equations for two competing species in population dynamics. We show that if there is self-diffusion in one species and no cross-diffusion in the other, then the system has a unique smooth solution for all time in bounded domains of any dimension. We obtain this result by deriving global W^(1,p)-estimates of Calderón-Zygmund type for a class of nonlinear reaction-diffusion equations with self-diffusion. These estimates are achieved by employing the Caffarelli-Peral perturbation technique together with a new two-parameter scaling argument.

Part II: We study a class of nonlinear Schrödinger equations in one dimensional spatial space with double-well symmetric potential. We derive and justify a normal form reduction of the nonlinear Schrödinger equation for a general pitchfork bifurcation of the symmetric bound state. We prove persistence of normal form dynamics for both supercritical and subcritical pitchfork bifurcations in the time-dependent solutions of the nonlinear Schrödinger equation over long but finite time intervals.

The talk is based on my joint work with Luan Hoang (Texas Tech University), Truyen Nguyen (University of Akron), and Dmitry Pelinovsky (McMaster University).

January 17, 2014 - John Dolbow: Recent advances in embedded finite element methods

This seminar will present recent advances in an emerging class of embedded finite element methods for evolving interface problems in mechanics. By embedded, we refer to methods that allow for the interface geometry to be arbitrarily located with respect to the finite element mesh. This relaxation between mesh and geometry obviates the need for remeshing strategies in many cases and greatly facilitates adaptivity in others. The approach shares features with finite-difference methods for embedded boundaries, but within a variational setting that facilitates error and stability analysis.

We focus attention on a weighted form of Nitsche's method that allows interfacial conditions to be robustly enforced. Classically, Nitsche's method provides a means to weakly impose boundary conditions for Galerkin-based formulations. With regard to embedded interface problems, some care is needed to ensure that the method remains well behaved in varied settings ranging from interfacial configurations resulting in arbitrarily small elements to problems exhibiting large contrast. We illustrate how the weighting of the interfacial terms can be selected to both guarantee stability and to guard against ill-conditioning. Various benchmark problems for the method are then presented.
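
As background, a standard (unweighted) Nitsche formulation for the Poisson model problem with Dirichlet data u = g on a boundary or interface \Gamma reads: find u_h in V_h such that, for all v_h in V_h,

\int_\Omega \nabla u_h \cdot \nabla v_h \, dx - \int_\Gamma (\partial_n u_h)\, v_h \, ds - \int_\Gamma (\partial_n v_h)\, u_h \, ds + \frac{\gamma}{h} \int_\Gamma u_h v_h \, ds = \int_\Omega f v_h \, dx - \int_\Gamma (\partial_n v_h)\, g \, ds + \frac{\gamma}{h} \int_\Gamma g v_h \, ds,

with \gamma > 0 a sufficiently large penalty parameter and h the local mesh size. This is a textbook form rather than the talk's weighted variant, whose essential modification is the choice of weights in the averaged flux terms so that stability and conditioning do not degrade for arbitrarily small cut elements.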

January 16, 2014 - Aziz Takhirov: Numerical analysis of the flows in Pebble Bed Geometries

Flows in complex geometries intermediate between free flows and porous media flows occur in pebble bed reactors and other industrial processes. Experience with Brinkman models has consistently shown that, even in simplified settings, accurate prediction of essential flow features depends on the impossible problem of meshing the pores. We discuss a new model to understand the flow and its properties in these geometries.
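
For context, a generic Brinkman system of the type referred to above (not necessarily the exact model discussed in the talk) reads

-\nabla \cdot (\mu_{\mathrm{eff}} \nabla \mathbf{u}) + \mu K^{-1} \mathbf{u} + \nabla p = \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0,

where K is the permeability and \mu_{\mathrm{eff}} an effective viscosity; the model interpolates between Stokes flow (large K) and Darcy flow (small \mu_{\mathrm{eff}}), which is what makes it attractive for geometries intermediate between free flow and porous media flow.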

January 13, 2014 - Pablo Seleson: Bridging Scales in Materials with Mesoscopic Models

Complex systems are often characterized by processes occurring at different spatial and temporal scales. Accurate predictions of quantities of interest in such systems are in many cases only feasible through multiscale modeling. In this talk, I will discuss the use of mesoscopic models as a means to bridge disparate scales in materials. Examples of mesoscopic models include nonlocal continuum models, based on integro-differential equations, that generalize classical continuum models based on partial differential equations. Nonlocal models possess length scales, which can be controlled for multiscale modeling. I will present two nonlocal models, peridynamics and nonlocal diffusion, and demonstrate how the inherent length scales in these models make it possible to bridge scales in materials.
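
As an example of the nonlocal, integro-differential models mentioned above, the bond-based peridynamic equation of motion can be written schematically as

\rho(\mathbf{x})\, \ddot{\mathbf{u}}(\mathbf{x}, t) = \int_{H_\delta(\mathbf{x})} \mathbf{f}\big(\mathbf{u}(\mathbf{x}', t) - \mathbf{u}(\mathbf{x}, t),\, \mathbf{x}' - \mathbf{x}\big)\, dV_{\mathbf{x}'} + \mathbf{b}(\mathbf{x}, t),

where the integration is over a neighborhood H_\delta(\mathbf{x}) of radius \delta, the horizon. The horizon is the built-in length scale referred to above: it can be tuned for multiscale modeling, and as \delta \to 0 the model recovers a classical (local) description for smooth deformations. (This is a standard textbook form, not necessarily the speaker's notation.)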

January 9, 2014 - Gung-Min Gie: Motion of fluids in the presence of a boundary

In most practical applications of fluid mechanics, it is the interaction of the fluid with the boundary that is most critical to understanding the behavior of the fluid. Physically important parameters, such as the lift and drag of a wing, are determined by the sharp transition the air makes from being at rest on the wing to flowing freely around the airplane near the wing. Mathematically, the behavior of such flows at small viscosity is modeled by the Navier-Stokes equations. In this talk, we discuss some recent results on the boundary layers of the Navier-Stokes equations under various boundary conditions.

January 6, 2014 - Christine Klymko: Central and Communicability Measures in Complex Networks: Analysis and Algorithms

Complex systems are ubiquitous throughout the world, both in nature and within man-made structures. Over the past decade, large amounts of network data have become available and, correspondingly, the analysis of complex networks has become increasingly important. One of the fundamental questions in this analysis is to determine the most important elements in a given network. Measures of node importance are usually referred to as node centrality, and measures of how well two nodes are able to communicate with each other are referred to as the communicability between pairs of nodes. Many measures of node centrality and communicability have been proposed over the years. Here, we focus on the analysis and computation of centrality and communicability measures based on matrix functions. First, we examine a node centrality measure based on the notion of total communicability, defined in terms of the row sums of the exponential of the adjacency matrix of the network. We argue that this is a natural metric for ranking nodes in a network, and we point out that it can be computed very rapidly even in the case of large networks. Furthermore, we propose a measure of the total network communicability, based on the total sum of node communicabilities, as a useful measure of the connectivity of the network as a whole. Next, we compare various parameterized centrality rankings based on the matrix exponential and matrix resolvent with degree and eigenvector centrality. The centrality measures we consider are exponential and resolvent subgraph centrality (defined in terms of the diagonal entries of the matrix exponential and matrix resolvent, respectively), total communicability, and Katz centrality (defined in terms of the row sums of the matrix resolvent). We demonstrate an analytical relationship between these rankings and the degree and subgraph centrality rankings, which helps to explain the observed robustness of these rankings on many real-world networks, even though the scores produced by the centrality measures are not stable.
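
As a small, self-contained illustration of the total communicability measure described above (TC(i) is the i-th row sum of exp(A)), the sketch below computes it for a toy graph without ever forming the matrix exponential explicitly; it is illustrative only and not the speaker's code:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Small toy graph given by an undirected edge list.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

# Total communicability: row sums of exp(A), computed as exp(A) applied to the
# all-ones vector, which scales to large sparse networks.
tc = expm_multiply(A, np.ones(n))
print("node ranking by total communicability:", np.argsort(-tc))
print("total network communicability:", tc.sum())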

December 19, 2013 - Adam Larios: New Techniques for Large-Scale Parallel Turbulence Simulations at High Reynolds Numbers

Abstract sent on behalf of the speaker:
Two techniques have recently been developed to handle large-scale simulations of turbulent flows. The first is a nonlinear, LES-type viscosity, which is based on the numerical violation of the local energy balance of the Navier-Stokes equations. This technique enjoys a numerical dissipation which remains vanishingly small in regions where the solution is smooth, only damping the flow in regions of numerical shock, allowing for increased accuracy at reduced computational cost. The second is a direction-splitting technique for projection methods, which unlocks new parallelism previously unexploited in fluid flows, and enables very fast, large-scale turbulence simulations.

December 16, 2013 - Tuoc Van Phan: Some Aspects in Nonlinear Partial Differential Equations and Nonlinear Dynamics

Abstract sent on behalf of the speaker:
This talk contains two parts:

Part I: We discuss the Shigesada-Kawasaki-Teramoto system of cross-diffusion equations of two competing species in population dynamics. We show that if there is self-diffusion in one species and no cross-diffusion in the other, then the system has a unique smooth solution for all time in bounded domains of any dimension. We obtain this result by deriving global W^(1,p)-estimates of Calderón-Zygmund type for a class of nonlinear reaction-diffusion equations with self-diffusion. These estimates are achieved by employing the Caffarelli-Peral perturbation technique together with a new two-parameter scaling argument.

Part II: We study a class of nonlinear Schrödinger equations in one spatial dimension with a symmetric double-well potential. We derive and justify a normal form reduction of the nonlinear Schrödinger equation for a general pitchfork bifurcation of the symmetric bound state. We prove persistence of normal form dynamics for both supercritical and subcritical pitchfork bifurcations in the time-dependent solutions of the nonlinear Schrödinger equation over long but finite time intervals.

The talk is based on my joint work with Luan Hoang (Texas Tech University), Truyen Nguyen (University of Akron), and Dmitry Pelinovsky (McMaster University).

December 13, 2013 - Rich Lehoucq: A Computational Spectral Graph Theory Tutorial

My presentation considers the research question of whether existing algorithms and software for the large-scale sparse eigenvalue problem can be applied to problems in spectral graph theory. I first provide an introduction to several problems involving spectral graph theory. I then provide a review of several different algorithms for the large-scale eigenvalue problem and briefly introduce the Anasazi package of eigensolvers.

December 10, 2013 - Jingwei Hu: Fast algorithms for quantum Boltzmann collision operators

The quantum Boltzmann equation describes the non-equilibrium dynamics of a quantum system consisting of bosons or fermions. The most prominent feature of the equation is a high-dimensional integral operator modeling particle collisions, whose nonlinear and nonlocal structure poses a great challenge for numerical simulation. I will introduce two fast algorithms for the quantum Boltzmann collision operator. The first one is a quadrature-based solver specifically designed for the collision operator in reduced energy space. Compared to the cubic complexity of direct evaluation, our algorithm runs in only linear complexity (optimal up to a logarithmic factor). The second one accelerates the computation of the full phase-space collision operator. It is a spectral algorithm based on a special low-rank decomposition of the collision kernel. Numerical examples, including an application to semiconductor device modeling, are presented to illustrate the efficiency and accuracy of the proposed algorithms.
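
One common way to write the operator in question (the Uehling-Uhlenbeck form, stated here schematically and up to normalization constants; notation assumed) is

    Q(f)(v) = \int_{R^3} \int_{S^2} B(v - v_*, \sigma) \big[ f' f'_* (1 + \theta f)(1 + \theta f_*) - f f_* (1 + \theta f')(1 + \theta f'_*) \big] \, d\sigma \, dv_*,

with \theta = +1 for bosons and \theta = -1 for fermions. The cubic terms are what distinguish it from the classical Boltzmann operator and add to the cost of a direct evaluation.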

December 6, 2013 - Jeongnim Kim: Analysis of QMC Applications on Petascale Computers

Continuum Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. The multiple forms of parallelism afforded by QMC algorithms and high compute-to-communication ratio make them ideal candidates for acceleration in the multi/many-core paradigm, as demonstrated by the performance of QMCPACK on various high-performance computing (HPC) platforms including Titan (Cray XK7) and Mira (IBM BlueGene Q).

The changes expected on future architectures - orders of magnitude higher parallelism, hierarchical memory and communication, and heterogeneous nodes - pose great challenges to application developers but also present opportunities to transform them to tackle new classes of problems. This talk presents core QMC algorithms and their implementations in QMCPACK on the HPC systems of today. The speaker will discuss the performance of typical QMC workloads to elucidate the critical issues to be resolved for QMC to fully exploit increasing computing powers of forthcoming HPC systems.

December 3, 2013 - Terry Haut: Advances on an asymptotic parallel-in-time method for highly oscillatory PDEs

In this talk, I will first review a recent time-stepping algorithm for nonlinear PDEs that exhibit fast (highly oscillatory) time scales. PDEs of this form arise in many applications of interest, and in particular describe the dynamics of the ocean and atmosphere. The scheme combines asymptotic techniques (which are inexpensive but can have insufficient accuracy) with parallel-in-time methods (which, alone, can yield minimal speedup for equations that exhibit rapid temporal oscillations). Examples are presented on the (1D) rotating shallow water equations in a periodic domain, which demonstrate significant parallel speedup is achievable.

In order to implement this time-stepping method for general spatial domains (in 2D and 3D), a key component involves applying the exponential of skew-Hermitian operators. To this end, I will next present a new algorithm for doing so. This method can also be used for solving wave propagation problems, which is of independent interest. This scheme has several advantages over standard methods, including the absence of any stability constraints in relation to the spatial discretization, and the ability to parallelize the computation in the time variable over as many characteristic wavelengths as resources permit (in addition to any spatial parallelization). I will also present examples on the linear 2D shallow water equations, as well as the 2D (variable coefficient) wave equation. In these examples, this method (in serial) is 1-2 orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.

December 3, 2013 - Galen Shipman: The Compute and Data Environment for Science (CADES)

In this talk I will discuss ORNL's Compute and Data Environment for Science. The Compute and Data Environment for Science (CADES) provides R&D with a flexible and elastic compute and data infrastructure. The initial deployment consists of over 5 petabytes of high-performance storage, nearly half a petabyte of scalable NFS storage, and over 1000 compute cores integrated into a high performance ethernet and InfiniBand network. This infrastructure, based on OpenStack, provides a customizable compute and data environment for a variety of use cases including large-scale omics databases, data integration and analysis tools, data portals, and modeling/simulation frameworks. These services can be composed to provide end-to-end solutions for specific science domains.

Galen Shipman is the Data Systems Architect for the Computing and Computational Sciences Directorate and Director of the Compute and Data Environment for Science at Oak Ridge National Laboratory (ORNL). He is responsible for defining and maintaining an overarching strategy and infrastructure for data storage, data management, and data analysis spanning from research and development to integration, deployment and operations for high-performance and data-intensive computing initiatives at ORNL. His current work includes addressing many of the data challenges of major facilities such as those of the Spallation Neutron Source (Basic Energy Sciences) and major data centers focusing on Climate Science (Biological and Environmental Research).

December 2, 2013 - Wei Ding: Klonos: A Similarity Analysis-Based Tool for Software Porting in High-Performance Computing

Porting applications to a new system is a nontrivial job in the HPC field. It is a very time-consuming, labor-intensive process, and the quality of the results will depend critically on the experience of the experts involved. In order to ease the porting process, a methodology is proposed to address an important aspect of software porting that receives little attention, namely, planning support. When a scientific application consisting of many subroutines is to be ported, the selection of key subroutines greatly impacts the productivity and overall porting strategy, because these subroutines may represent a significant feature of the code in terms of functionality, code structure, or performance. They may also serve as indicators of the difficulty and amount of effort involved in porting a code to a new platform. The proposed methodology is based on the idea that a set of similar subroutines can be ported with similar strategies and result in a similar-quality porting. By viewing subroutines as data and operator sequences, analogous to DNA sequences, various bioinformatics techniques may be used to conduct the similarity analysis of subroutines while avoiding the NP-complete complexities of other approaches. Other code metrics and cost-model metrics have been adapted for similarity analysis to capture internal code characteristics. Based on those similarity analyses, "Klonos," a tool for software porting, has been created. Experiments show that Klonos is very effective for providing a systematic porting plan that guides users in reusing similar porting strategies for similar code regions during their porting process.

November 20, 2013 - Chao Yang: Numerical Algorithms for Solving Nonlinear Eigenvalue Problems in Electronic Structure Calculation

The Kohn-Sham density functional theory (KSDFT) is the most widely used theory for studying electronic properties of molecules and solids. The main computational problem in KSDFT is a nonlinear eigenvalue problem in which the matrix Hamiltonian is a function of a number of eigenvectors associated with the smallest eigenvalues. The problem can also be formulated as a constrained energy minimization problem or as a nonlinear equation in which the unknown ground-state electron density satisfies a fixed-point map. Significant progress has been made in the last few years on understanding the mathematical properties of this class of problems. Efficient and reliable numerical algorithms have been developed to accelerate the convergence of nonlinear solvers. New methods have also been developed to reduce the computational cost in each step of the iterative solver. We will review some of these developments and discuss additional challenges in large-scale electronic structure calculations.
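
Schematically, and with spin and physical constants suppressed, the nonlinear eigenvalue problem has the form

    H(\rho)\,\psi_i = \varepsilon_i\,\psi_i, \quad i = 1, \dots, N, \qquad \rho(r) = \sum_{i=1}^{N} |\psi_i(r)|^2,

where H(\rho) = -\tfrac{1}{2}\Delta + V_{ion} + V_H(\rho) + V_{xc}(\rho) depends on the density \rho built from the N lowest eigenvectors. Self-consistent field (SCF) iteration solves the equivalent fixed-point problem \rho = F(\rho); roughly speaking, the acceleration techniques mentioned above target the outer fixed-point iteration, while the cost-reduction techniques target the work done in each step.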

November 15, 2013 - Christian Straube: Simulation of HPDC Infrastructure Attributes

High Performance Distributed Computing (HPDC) infrastructures use several data centers, High Performance Computing (HPC) and distributed systems, each built from manifold (often heterogeneous) compute, storage, interconnect, and other specialized subcomponents to provide their capabilities, i.e., well-defined functionality that is exposed to a user or application. Capabilities' quality can be described by attributes, e.g., performance, energy efficiency, or reliability. Hardware-related modifications, such as clock rate adaptation or interconnect throughput improvement, often induce two groups of effects on these attributes: the (by definition) positive intended effects and the mostly negative but unavoidable side effects. For instance, increasing a typical HPDC infrastructure's redundancy to address short-time breakdowns and to improve reliability (positive intended effect) simultaneously increases energy consumption and degrades performance due to redundancy overhead (negative side effects).

In this talk, I present Predictive Modification Effect Analysis (PMEA), which aims at avoiding harmful execution and costly but spare modification exploration by investigating in advance whether the (negative) side effects on attributes will outweigh the (positive) intended effects. The talk covers the fundamental concepts and basic ideas of PMEA and presents its underlying model. The model is straightforward and fosters fast development, even for complex HPDC infrastructures; it handles individual and open sets of attributes and their calculations, and it addresses effect cascading through the entire HPC infrastructure. Additionally, I will present a prototype of a simulation tool and describe some selected features in detail.

Bio:

Christian Straube is a Computer Science Ph.D. student at the Ludwig-Maximilians-University (LMU) in Munich, Germany since January 2012. His research interests include HPDC infrastructure and data center analysis, in particular planning, modification justification, as well as effect outweighing and cascading. During his time as Ph.D. student, he worked several months at the Leibniz Supercomputing Center, which operates the SuperMUC, a three Petaflop/s system that applies warm-water cooling. Prior to joining LMU as a Ph.D. student, Christian worked for several years in industry and academia as software engineer and project manager. He ran his own software engineering company for 10 years, and was (co-) founder of several IT related start-ups. He received a best paper award for a conference contribution to INFOCOMP 2012 and was subsequently invited as technical program member of INFOCOMP 2013. Christian holds a Diploma with Distinction in Computer Science from Ludwig-Maximilians-University in Munich with a minor in Medicine.

November 12, 2013 - Surya R. Kalidindi: Data Science and Cyberinfrastructure Enabled Development of Advanced Materials

Materials with enhanced performance characteristics have served as critical enablers for the successful development of advanced technologies throughout human history, and have contributed immensely to the prosperity and well-being of various nations. Although the core connections between the material's internal structure (i.e. microstructure), its evolution through various manufacturing processes, and its macroscale properties (or performance characteristics) in service are widely acknowledged to exist, establishing this fundamental knowledge base has proven effort-intensive, slow, and very expensive for a number of candidate material systems being explored for advanced technology applications. It is anticipated that the multi-functional performance characteristics of a material are likely to be controlled by a relatively small number of salient features in its microstructure. However, cost-effective validated protocols do not yet exist for fast identification of these salient features and establishment of the desired core knowledge needed for the accelerated design, manufacture and deployment of new materials in advanced technologies. The main impediment arises from lack of a broadly accepted framework for a rigorous quantification of the material's microstructure, and objective (automated) identification of the salient features in the microstructure that control the properties of interest.

Microstructure Informatics focuses on the development of data science algorithms and computationally efficient protocols capable of mining the essential linkages in large microstructure datasets (both experimental and modeling), and on building robust knowledge systems that can be readily accessed, searched, and shared by the broader community. Given the nature of the challenges faced in the design and manufacture of new advanced materials, this new emerging interdisciplinary field is ideally positioned to produce a major transformation in the current practices used by materials scientists and engineers. The novel data science tools produced by this emerging field promise to significantly accelerate the design and development of new advanced materials through their increased efficacy in gleaning and blending the disparate knowledge and insights hidden in "big data" gathered from multiple sources (including both experiments and simulations). This presentation outlines specific strategies for data science enabled development of advanced materials, and illustrates key components of the proposed overall strategy with examples.

November 11, 2013 - Hermann Härtig: A fast and fault tolerant microkernel-based system for exa-scale computing (FFMK)

FFMK is a recently started project funded by DFG's Exascale-Software program. It addresses three key scalability obstacles expected in future exa-scale systems: the vulnerability to system failures due to transient or permanent faults, the performance losses due to imbalances, and the noise due to unpredictable interactions between HPC applications and the operating system. To this end, we adapt and integrate well-proven technologies.

FFMK will combine Linux running in a light-weight virtual machine with a special-purpose component for MPI, both running side by side on L4. The objective is to build a fluid, self-organizing platform for applications that require scaling up to exa-scale performance. The talk will explain the assumptions and overall architecture of FFMK and then present a number of design decisions the team is currently facing. FFMK is a cooperation between Hebrew University's MosiX team, the HPC centers of Berlin and Dresden (ZIB, ZIH), and TU Dresden's operating systems group.

Bio:

After receiving his PhD from Karlsruhe University on an SMP-related topic, Hermann Härtig led a team at the German National Research Center (GMD) to build BirliX, a Unix lookalike designed to address high security requirements. He then moved to TU Dresden to lead the operating systems chair. His team was among the pioneers in building microkernels of the L4 family (Fiasco, Nova) and systems based on L4 (L4Re, DROPS, NIZZA). L4Re and Fiasco form the OS basis of the SIMKO 3 smart phone. Hermann Härtig is now the PI for FFMK.

October 17, 2013 - Marta D'Elia: Fractional differential operators on bounded domains as special cases of nonlocal diffusion operators

We analyze a nonlocal diffusion operator having as special cases the fractional Laplacian and fractional differential operators that arise in several applications, e.g. jump processes. In our analysis, a nonlocal vector calculus is exploited to define a weak formulation of the nonlocal problem. We demonstrate that the solution of the nonlocal equation converges to the solution of the fractional Laplacian equation on bounded domains as the nonlocal interactions become infinite. We also introduce Galerkin finite element discretizations of the nonlocal weak formulation and we derive a priori error estimates. Through several numerical examples we illustrate the theoretical results and we show that by solving the nonlocal problem it is possible to obtain accurate approximations of the solutions of fractional differential equations circumventing the problem of treating infinite-volume constraints.
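
As an illustration of the "special case" relationship (a sketch in assumed notation): the nonlocal diffusion operator acts as

    L u(x) = \int \big( u(y) - u(x) \big)\, \gamma(x, y)\, dy,

and the particular kernel choice \gamma(x, y) = c_{n,s} / |x - y|^{n + 2s}, taken over all of space in the principal-value sense, gives L u = -(-\Delta)^s u, the fractional Laplacian of order s in (0, 1). Restricting the interactions to a bounded neighborhood yields the bounded-domain operator analyzed in the talk, with the fractional case recovered as the interaction radius grows to infinity.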

October 15, 2013 - Tommy Janjusic: Framework for Evaluating Dynamic Memory Allocators including a new Equivalence Class based Cache-Conscious Dynamic Memory Allocator

Software applications' performance is hindered by a variety of factors, but most notably by the well-known CPU-memory speed gap (often known as the memory wall). This results in the CPU sitting idle waiting for data to be brought from memory to the processor caches. The addressing used by caches causes non-uniform accesses to various cache sets. The non-uniformity is due to several reasons, including how different objects are accessed by the code and how the data objects are located in memory. Memory allocators determine where dynamically created objects are placed, thus defining addresses and their mapping to cache locations. It is important to evaluate how different allocators behave with respect to the localities of the created objects. Most allocators use a single attribute, the size of an object, in making allocation decisions. Additional attributes, such as the placement with respect to other objects or a specific cache area, may lead to better use of cache memories. This talk discusses a framework that allows for the development and evaluation of new memory allocation techniques. At the root of the framework is a memory tracing tool called Gleipnir, which provides very detailed information about every memory access and relates it back to source-level objects. Using the traces from Gleipnir, we extended a commonly used cache simulator to generate detailed cache statistics: per function, per data object, and per cache line, and to identify specific data objects that conflict with each other. The utility of the framework is demonstrated with a new memory allocator known as an equivalence class allocator. The new allocator allows users to specify cache sets, in addition to object size, where the objects should be placed. We compare this new allocator with two well-known allocators, viz., the Doug Lea and Pool allocators.

October 8, 2013 - Sophie Blondel: NAT++: An analysis software for the NEMO experiment

The NEMO 3 detector aims to prove that the neutrino is a Majorana particle (i.e., identical to the antineutrino). It is mainly composed of a calorimeter and a wire chamber, the former measuring the time and energy of a particle and the latter reconstructing its track. NEMO 3 has taken data for 5 effective years with an event trigger rate of ~5 Hz, resulting in a total of 10e8 events to analyze. A C++-based software package, called NAT++, was created to calibrate and analyze these events. The analysis is mainly based on a time-of-flight calculation, which will be the focus of this presentation. Supplementing this classic analysis, a new tool named gamma-tracking has been developed in order to improve the reconstruction of the gamma energy deposits in the detector. The addition of this tool to the analysis pipeline leads to a 30% increase in statistics in certain channels of interest.

September 30, 2013 - Eric Barton: Fast Forward Storage and Input/Output (I/O)

Conflicting pressures drive the requirements for I/O and Storage at Exascale. On the one hand, an explosion is anticipated, not only in the size of scientific data models but also in their complexity and in the volume of their attendant metadata. These models require workflows that integrate analysis and visualization and new object-oriented I/O Application Programming Interfaces (APIs) to make application development tractable and allow compute to be moved to the data or data to the compute as appropriate. On the other hand, economic realities driving the architecture and reliability of the underlying hardware will push the limits on horizontal scale, introduce unavoidable jitter and make failure the norm. The I/O system will have to handle these as transparently as possible while providing efficient, sustained and predictable performance. This talk will describe the research underway in the Department of Energy (DOE) Fast Forward Project to prototype a complete Exascale I/O stack including, at the top level, an object-oriented I/O API based on HDF5; in the middle, a Burst Buffer and data layout optimizer based on PLFS (A Checkpoint Filesystem for Parallel Applications); and at the bottom, DAOs (Data Access Objects), transactional object storage based on Lustre.

September 25, 2013 - James Beyer: OPENMP vs OPENACC

A brief introduction to two accelerator programming directive sets with a common heritage: OpenACC 2.0 and OpenMP 4.0. After introducing the two directive sets, a side by side comparison of available features along with code examples will be presented to help developers understand their options as they begin programming for both Nvidia and Intel accelerated machines.
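
As a concrete point of comparison (a minimal sketch, not drawn from the talk itself; the function names are illustrative), the same vector-addition loop expressed in each directive set:

    /* OpenACC 2.0: offload the loop, copying a and b to the device and c back */
    void vadd_acc(int n, const float *a, const float *b, float *c)
    {
        #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }

    /* OpenMP 4.0: the closest equivalent target construct */
    void vadd_omp(int n, const float *a, const float *b, float *c)
    {
        #pragma omp target teams distribute parallel for map(to: a[0:n], b[0:n]) map(from: c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }

The mapping between the two is close for simple loops; the practical differences show up in the execution models (gangs/workers/vectors versus teams/threads), asynchronous execution, and how device data lifetimes are managed across regions.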

September 25, 2013 - Michael Wolfe: OPENACC 2.X AND BEYOND

The OpenACC API is designed to support high-level, performance portable, programming across a range of host+accelerator target systems. This presentation will start with a short discussion of that range, which provides a context for the features and limitations of the specification. Some important additions that were included in OpenACC 2.0 will be highlighted. New features currently under discussion for future versions of the OpenACC API and a summary of the expected timeline will be presented.

September 23, 2013 - Jun Jia: Accelerating time integration using spectral deferred correction

In this talk, we illustrate how to use the spectral deferred correction (SDC) to improve the time integration for scientific simulations. The SDC method combines a Picard integral formulation of the error equation, spectral integration and a user chosen low-order time marching method to form stable methods with arbitrarily high formal order of accuracy in time. The method could be either explicit or implicit, and it also provides the ability to adopt operator splitting while maintaining high formal order. At the end of the talk, we will show some applications using this technique.
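
A hedged sketch of one correction sweep, using forward Euler as the low-order method and assumed notation: given a provisional solution u^k at quadrature nodes t_m within the step, the next iterate is

    u^{k+1}_{m+1} = u^{k+1}_m + \Delta t_m \big[ f(u^{k+1}_m) - f(u^k_m) \big] + \int_{t_m}^{t_{m+1}} f\big( u^k(s) \big)\, ds,

where \Delta t_m = t_{m+1} - t_m and the integral is evaluated by spectral quadrature over all nodes of the step. Each sweep formally raises the order of accuracy by one, up to the order of the underlying quadrature, which is how arbitrarily high formal order is reached from a first-order base method.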

September 19, 2013 - Kenny Gross: Energy Aware Data Center (EADC) Innovations: Save Energy, Boost Performance

The global electricity consumption for enterprise and high-performance computing data centers continues to grow much faster than Moore's Law as data centers push into emerging markets, and as developed countries see explosive growth in computing demand as well as supraexponential growth in demand for exabyte (and now zettabyte) storage systems. The USDOE reported that data centers now consume 38 gigawatts of electricity worldwide, a number that is growing exponentially even during times of global economic slowdowns. Oracle has developed a suite of novel algorithmic innovations that can be applied nonintrusively to any IT servers and substantially reduces the energy usage and thermal dissipation for the IT assets (saving additional energy for the data center HVAC systems), while significantly boosting performance (and hence Return-On-Assets) for the IT assets, thereby avoiding additional server purchases (that would consume more energy). The key enabler for this suite of algorithmic innovations is Oracle's Intelligent Power Monitoring (IPM) telemetry harness (implemented in software...no hardware mods anywhere in the data center). IPM, when coupled with advanced pattern recognition, identifies and quantifies three significant nonlinear (heretofore 'invisible') energy-wastage mechanisms that are present in all enterprise and HPC computing assets today, including in low-PUE high-efficiency data centers: 1) leakage power in the CPUs (grows exponentially with CPU temperature), 2) aggregate fan-motor power inside the servers (grows with the cubic power of fan RPMs), and 3) substantial degradation of server energy efficiency by low-level ambient vibrations in the data center racks. This presentation shows how continuous system internal telemetry coupled with advanced pattern recognition technology that was developed for nuclear reactor applications by the presenter and his team back at Argonne National Lab in the 1990s are significantly cutting energy utilization while boosting performance for enterprise and HPC computing assets.

Speaker Bio Info:
------------------
Kenny Gross is a Distinguished Engineer for Oracle and team leader for the System Dynamics Characterization and Control team in Oracle's Physical Sciences Research Center in San Diego. Kenny specializes in advanced pattern recognition, continuous system telemetry, and dynamic system characterization for improving the reliability, availability, and energy efficiency of enterprise computing systems and for the datacenters in which the systems are deployed. Kenny has 220 US patents issued and others pending, 180 scientific publications, and was awarded a 1998 R&D 100 Award for one of the top 100 technological innovations of that year, for an advanced statistical pattern recognition technique that was originally developed for nuclear plant applications and is now being used for a variety of applications to improve the quality-of-service, availability, and optimal energy efficiency for enterprise and HPC computer servers. Kenny earned his Ph.D. in nuclear engineering from the U. of Cincinnati in 1977.

September 17, 2013 - Damien Lebrun-Grandie: Simulation of thermo-mechanical contact between fuel pellets and cladding in UO2 nuclear fuel rods

As the fission process heats up the fuel rods, the UO2 pellets stacked on top of each other swell both radially and axially, while the surrounding Zircaloy cladding creeps down, so that cladding and pellet eventually come into contact. This exacerbates chemical degradation of the protective cladding, and stresses may enable rapid propagation of cracks and thus threaten the integrity of the cladding. Along these lines, pellet-cladding interaction establishes itself as a major concern in fuel rod design and reactor core operation in light water reactors. Accurately modeling fuel behavior is challenging because the mechanical contact problem strongly depends on the temperature distribution, and the coupled pellet-cladding heat transfer problem, in turn, is affected by changes in geometry induced by body deformations and by the stresses generated at the contact interface.

Our work focuses on active set strategies to determine the actual contact area in high-fidelity coupled-physics fuel performance codes. The approach consists of two steps. In the first, we determine the boundary region on conventional finite element meshes where the contact conditions shall be enforced to prevent objects from occupying the same space. For this purpose, we developed and implemented an efficient parallel search algorithm for detecting mesh inter-penetration and vertex/mesh overlap. The second step deals with solving the mechanical equilibrium problem while enforcing the contact conditions computed in the first step. To do so, we developed a modified version of the multi-point constraint (MPC) strategy. While the original algorithm was restricted to the Jacobi-preconditioned conjugate gradient method, our MPC algorithm works with any other Krylov solver (and thus liberates us from the symmetry requirement). Furthermore, it does not place any restriction on the preconditioner used.
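
For context, the contact conditions enforced on the candidate boundary region are the standard non-penetration (Signorini) complementarity conditions, written here schematically with g the gap between the bodies and p the contact pressure (notation assumed):

    g \ge 0, \qquad p \ge 0, \qquad g\,p = 0,

i.e., either the surfaces are separated (g > 0) and no contact traction is transmitted (p = 0), or they touch (g = 0) and a compressive traction may be transmitted (p \ge 0). The active set is the portion of the boundary on which the constraint g = 0 holds, and it must be updated as the solution evolves.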

The multibody thermo-mechanical contact problem is tackled using modern numerics, with higher-order finite elements and a Newton-based monolithic strategy to handle both the nonlinearities (coming from the contact condition as well as from the temperature dependence of the fuel thermal conductivity, for instance) and the coupling between the various physics components (gap conductance sensitive to the clad-pellet distance, thermal expansion coefficient or Young's modulus affected by temperature changes, etc.).

We will present numerical examples for single-body and multi-body contact problems to demonstrate how the method performs.

September 5, 2013 - Jared Saia: How to Build a Reliable System Out of Unreliable Components

The first part of this talk will survey several decades of work on designing distributed algorithms that boost reliability. These algorithms boost reliability in the sense that they enable the creation of a reliable system from unreliable components. We will discuss practical successes of these algorithms, along with drawbacks. A key drawback is scalability: significant redundancy of resources is required in order to tolerate even one node fault. The second part of the talk will introduce a new class of distributed algorithms for boosting reliability. These algorithms are self-healing in the sense that they dynamically adapt to failures, requiring additional resources only when faults occur.

We will discuss two such self-healing algorithms. The first enables self-healing in an overlay network, even when an omniscient adversary repeatedly removes carefully chosen nodes. Specifically, the algorithm ensures that the shortest path between any pair of nodes never increases by more than a logarithmic factor, and that the degree of any node never increases by more than a factor of 3. The second algorithm enables self-healing with Byzantine faults, where an adversary can control t < n/8 of the n total nodes in the network. This algorithm enables point-to-point communication with an expected number of message corruptions that is O(t(log* n)^2). Empirical results show that this algorithm reduces bandwidth and computation costs by up to a factor of 70 when compared to previous work.

August 21, 2013 - Hank Childs: Hybrid Parallelism for Visualization and Analysis

Many of today's parallel visualization and analysis programs are designed for distributed-memory parallelism, but not for the shared-memory parallelism available on GPUs or multi-core CPUs. However, architectural trends on supercomputers increasingly contain more and more cores per node, whether through the presence of GPUs or through more cores per CPU node. To make the best use of such hardware, we must evaluate the benefits of hybrid parallelism - parallelism that blends distributed- and shared-memory approaches - for visualization and analysis's data-intensive workloads. With this talk, Hank explores the fundamental challenges and opportunities for hybrid parallelism with visualization and analysis, and discusses recent results that measure its benefit.

Speaker Bio:
Hank Childs is an assistant professor at the University of Oregon and a computer systems engineer at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization, high-performance computing, and the intersection of the two. He received the Department of Energy Career award in 2012 to research explorative visualization use cases on exascale machines. Additionally, Hank is one of the founding members of the team that developed the VisIt visualization and analysis software. He received his Ph.D. from UC Davis in 2006.

 

August 13, 2013 - Rodney O. Fox: Quadrature-Based Moment Methods for Kinetics-Based Flow Models

Kinetic theory is a useful theoretical framework for developing multiphase flow models that account for complex physics (e.g., particle trajectory crossings, particle size distributions, etc.) (1). For most applications, direct solution of the kinetic equation is intractable due to the high-dimensionality of the phase space. Thus a key challenge is to reduce the dimensionality of the problem without losing the underlying physics. At the same time, the reduced description must be numerically tractable and possess the favorable attributes of the original kinetic equation (e.g. hyperbolic, conservation of mass/momentum, etc.)

Starting from the seminal work of McGraw (2) on the quadrature method of moments (QMOM), we have developed a general closure approximation referred to as quadrature-based moment methods (3; 4; 5). The basic idea behind these methods is to use the local (in space and time) values of the moments to reconstruct a well-defined local distribution function (i.e. non-negative, compact support, etc.). The reconstructed distribution function is then used to close the moment transport equations (e.g. spatial fluxes, nonlinear source terms, etc.).
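
Schematically, and in assumed notation, the quadrature-based reconstruction represents the distribution as a sum of weighted delta functions whose weights and abscissas are fixed by the locally available moments:

    f(\xi) \approx \sum_{\alpha=1}^{N} w_\alpha\, \delta(\xi - \xi_\alpha), \qquad m_k = \int \xi^k f(\xi)\, d\xi \approx \sum_{\alpha=1}^{N} w_\alpha\, \xi_\alpha^k, \quad k = 0, \dots, 2N-1,

so that 2N transported moments determine N weights and N abscissas, which are then used to evaluate the unclosed terms (spatial fluxes, source terms) in the moment transport equations.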

In this seminar, I will present the underlying theoretical and numerical issues associated with quadrature-based reconstructions. The transport of moments in real space, and its numerical representation in terms of fluxes, plays a critical role in determining whether a moment set is realizable. Using selected examples, I will introduce recent work on realizable high-order flux reconstructions developed specifically for finite-volume schemes (6).

References
[1] MARCHISIO, D. L. & FOX, R. O. 2013 Computational Models for Polydisperse Particulate and Multiphase Systems, Cambridge University Press.
[2] MCGRAW, R. 1997 Description of aerosol dynamics by the quadrature method of moments. Aerosol Science and Technology 27, 255–265.
[3] DESJARDINS, O., FOX, R. O. & VILLEDIEU, P. 2008 A quadrature-based moment method for dilute fluid-particle flows. Journal of Computational Physics 227, 2514–2539.
[4] YUAN, C. & FOX, R. O. 2011 Conditional quadrature method of moments for kinetic equations. Journal of Computational Physics 230, 8216–8246.
[5] YUAN, C., LAURENT, F. & FOX, R. O. 2012 An extended quadrature method of moments for population balance equations. Journal of Aerosol Science 51, 1–23.
[6] VIKAS, V., WANG, Z. J., PASSALACQUA, A. & FOX, R. O. 2011 Realizable high-order finite-volume schemes for quadrature-based moment methods. Journal of Computational Physics 230, 5328–5352.

August 12, 2013 - Lucy Nowell: ASCR: Funding/ Data/ Computer Science

Dr. Lucy Nowell is a Computer Scientist and Program Manager for the Advanced Scientific Computing Research (ASCR) program office in the Department of Energy's (DOE) Office of Science. While her primary focus is on scientific data management, analysis and visualization, her portfolio spans the spectrum of ASCR computer science interests, including supercomputer architecture, programming models, operating and runtime systems, and file systems and input/output research. Before moving to DOE in 2009, Dr. Nowell was a Chief Scientist in the Information Analytics Group at Pacific Northwest National Laboratory (PNNL). On detail from PNNL, she held a two-year assignment as a Program Director for the National Science Foundation's Office of Cyberinfrastructure, where her program responsibilities included Sustainable Digital Data Preservation and Access Network Partners (DataNet), Community-based Data Interoperability Networks (INTEROP), Software Development for Cyberinfrastructure (SDCI) and Strategic Technologies for Cyberinfrastructure (STCI). At PNNL, her research centered on applying her knowledge of visual design, perceptual psychology, human-computer interaction, and information storage and retrieval to problems of understanding and navigating in very large information spaces, including digital libraries. She holds several patents in information visualization technologies.

Dr. Nowell joined PNNL in August 1998 after a career as a professor at Lynchburg College in Virginia, where she taught a wide variety of courses in Computer Science and Theatre. She also headed the Theatre program and later chaired the Computer Science Department. While pursuing her Master of Science and Doctor of Philosophy degrees in Computer Science at Virginia, she worked as a Research Scientist in the Digital Libraries Research Laboratory and also interned with the Information Access team at IBM's T. J. Watson Research Laboratories in Hawthorne, NY. She also has a Master of Fine Arts degree in Drama from the University of New Orleans and the Master of Arts and Bachelor of Arts degrees in Theatre from the University of Alabama.

August 8, 2013 - Carlos Maltzahn: Programmable Storage Systems

With the advent of open source parallel file systems, a new usage pattern emerges: users isolate subsystems of parallel file systems and put them in contexts not foreseen by the original designers; e.g., an object-based storage back end gets a new RESTful front end to become an Amazon Web Services S3-compliant key-value store, or a data placement function becomes a placement function for customer accounts. This trend shows a desire for the ability to use existing file system services and compose them to implement new services. We call this ability "programmable storage systems".

In this talk I will argue that designing programmability into storage systems has the following benefits: (1) we achieve greater separation of storage performance engineering from storage reliability engineering, making it possible to optimize storage systems in a wide variety of ways without risking years of investment in code hardening; (2) we create an environment that encourages people to create a new stack of storage system abstractions, both domain-specific and across domains, including sophisticated optimizers that rely on machine learning techniques; (3) we inform commercial parallel file system vendors on the design of low-level APIs for their products so that they match the versatility of open source storage systems without having to release their entire code into open source; and (4) we can use this historical opportunity to leverage the tension between the versatility of open source storage systems and the reliability of proprietary systems to lead the community of storage system designers.

I will illustrate programmable storage with an overview of programming abstractions that we have found useful so far, and if time permits, talk about "scriptable storage systems" and the interesting new possibilities of truly data-centered software engineering it enables.

Bio: Carlos Maltzahn is an Associate Adjunct Professor at the Computer Science Department of the Jack Baskin School of Engineering, Director of the UCSC Systems Research Lab and Director of the UCSC/Los Alamos Institute for Scalable Scientific Data Management at the University of California at Santa Cruz. Carlos Maltzahn's current research interests include scalable file system data and metadata management, storage QoS, data management games, network intermediaries, information retrieval, and cooperation dynamics.

Carlos Maltzahn joined UC Santa Cruz in December 2004 after five years at Network Appliance. He received his Ph.D. in Computer Science from the University of Colorado at Boulder in 1999, his M.S. in Computer Science in 1997, and his Univ. Diplom Informatik from the University of Passau, Germany in 1991.

August 7, 2013 - Tiffany M. Mintz: Toward Abstracting the Communication Intent in Applications to Improve Portability and Productivity

Programming with communication libraries such as the Message Passing Interface (MPI) obscures the high-level intent of the communication in an application and makes static communication analysis difficult to do. Compilers are unaware of communication libraries' specifics, leading to the exclusion of communication patterns from any automated analysis and optimizations. To overcome this, communication patterns can be expressed at higher-levels of abstraction and incrementally added to existing MPI applications. In this paper, we propose the use of directives to clearly express the communication intent of an application in a way that is not specific to a given communication library. Our communication directives allow programmers to express communication among processes in a portable way, giving hints to the compiler on regions of computations that can be overlapped with communication and relaxing communication constraints on the ordering, completion and synchronization of the communication imposed by specific libraries such as MPI. The directives can then be translated by the compiler into message passing calls that efficiently implement the intended pattern and be targeted to multiple communication libraries. Thus far, we have used the directives to express point-to-point communication patterns in C, C++ and Fortran applications, and have translated them to MPI and SHMEM.

August 2, 2013 - Alberto Salvadori: Multi-scale and multi-physics modeling of Li-ion batteries: a computational homogenization approach

There is great interest in developing the next generation of lithium-ion batteries with higher capacity and longer cycle life, in order to meet the increasingly demanding energy storage requirements of existing and future power-generation and energy-management systems. Industry and academia are looking for alternative materials, and Si is one of the most promising candidates for the active material because it has the highest theoretical specific energy capacity. It has emerged that the very large mechanical stresses associated with huge volume changes during Li intercalation/deintercalation are responsible for poor cyclic behavior and rapid fading of electrical performance. The present contribution aims at providing scientific contributions in this vibrant context.

The computational homogenization scheme is here tailored to model the coupling between the electrochemical and mechanical phenomena that coexist during battery charging and discharging cycles. At the macro-scale, diffusion-advection equations model the electro-chemistry of the whole cell, whereas the micro-scale models the multi-component porous electrode, the diffusion and intercalation of lithium in the active particles, and the swelling and fracturing of the latter. The scale transitions are formulated by tailoring the well-established first-order computational homogenization scheme for mechanical and thermal problems.

August 2, 2013 - Michela Taufer: The effectiveness of application-aware self-management for scientific discovery in volunteer computing systems

July 24, 2013 - Catalin Trenchea: Improving time-stepping numerics for weakly dissipative systems

In this talk I will address the stability and accuracy of the CNLF time-stepping scheme and propose a modification of the Robert-Asselin time filter for numerical models of weakly diffusive evolution systems. This is motivated by a vast number of applications, e.g., the meteorological equations and coupled systems with dominating skew-symmetric coupling (groundwater-surface water).

In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. After a brief review, I will suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense.

The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in many atmospheric models: ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models (successfully suppresses the spurious computational mode associated with the leapfrog time stepping scheme), it also weakly suppresses the physical mode, introduces non-physical damping, and reduces the accuracy.
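
For reference, the classical RA filter applied after each leapfrog step has the form (a standard statement, not taken from the talk)

    \bar{x}_n = x_n + \frac{\nu}{2} \big( \bar{x}_{n-1} - 2 x_n + x_{n+1} \big),

where \nu is a small filter parameter and the bar denotes filtered values; the displacement added to x_n is what damps the computational mode but also erodes the physical one. The RAW and mRA filters discussed below are modifications of this basic form.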

This presentation proposes a simple modification to the RA filter (mRA) [Y. Li, CT 2013].

The modification is analyzed and compared with the RAW filter (Williams 2009, 2011).

The mRA increases the numerical accuracy to O(Δt^4) amplitude error and at least O(Δt^2) phase-speed error for the physical mode. The mRA filter requires the same storage factors as RAW, and one more than the RA filter does. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy, while the phase accuracy remains second-order. The mRA and RAW filters can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense.

 

June 28, 2013 - Yuri Melnikov: A surprising connection between Green's functions and the infinite product representation of elementary functions

Some standard as well as innovative approaches will be reviewed for the construction of Green's functions for elliptic PDEs. Based on these, a surprising technique is proposed for obtaining infinite product representations of some trigonometric, hyperbolic, and special functions. The technique compares different alternative expressions of Green's functions constructed by different methods. This allows us not only to obtain the classical Euler formulas but also to come up with a number of new representations.
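
The prototypical identity of the kind referred to is Euler's infinite product for the sine function,

    \frac{\sin \pi x}{\pi x} = \prod_{n=1}^{\infty} \left( 1 - \frac{x^2}{n^2} \right),

which the approach recovers by equating two independently constructed representations of the same Green's function; new product representations follow in the same way from other boundary value problems.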

June 27, 2013 - Kimmy Mu: Performance, accuracy and power tradeoff for scientific processes using workflow in high performance computing

Power is becoming more important in high performance computing than ever before as we move toward exascale computing. A transition is needed from the old approach, which considers only performance and accuracy, to a new one that accounts for performance, accuracy, and power. In high performance computing, a workflow is composed of a large number of tasks, such as simulation, analysis, and visualization, yet there is little guidance to help users determine which task allocations and placements onto nodes and clusters are good for performance or power while meeting accuracy requirements. In this presentation, I will talk about power optimization for reconfigurable embedded systems, which dynamically choose kernels to run on hardware co-processors in response to dynamic application behavior at runtime. Because such systems share many commonalities with HPC, we plan to explore this method in high performance computing for dynamic workflows, addressing task placement and related decisions under performance, power, and accuracy constraints.

June 26, 2013 - Matthew Causley: A fast implicit Maxwell field solver for plasma simulations

June 25, 2013 - Jeff Haack: Conservative Spectral Method for Solving the Boltzmann Equation

We present a conservative spectral scheme for Boltzmann collision operators. This formulation is derived from the weak form of the Boltzmann equation, which can represent the collisional term as a weighted convolution in Fourier space. The weights contain all of the information of the collision mechanics and can be precomputed. I will present some results for isotropic (in angle) interactions, such as hard spheres and Maxwell molecules. We have recently extended the method to take into account anisotropic scattering mechanisms arising from potential interactions between particles, and we use this method to compute the Boltzmann equation with screened Coulomb potentials. In particular, we study the rate of convergence of the Fourier transform for the Boltzmann collision operator in the grazing collisions limit to the Fourier transform for the limiting Landau collision operator. We show that the decay rate to equilibrium depends on the parameters associated with the collision cross section, and specifically study the differences between the classical Rutherford scattering angular cross section, which has logarithmic error, and an artificial one with a linear error. I will also present recent work extending this method to multi-species gases and gases with internal degrees of freedom, which introduces new challenges for conservation and introduces inelastic collisions into the system.
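
Schematically, and in assumed notation, the weak-form derivation puts the collision operator in Fourier space as a weighted convolution of the transformed distribution with itself,

    \widehat{Q}(\zeta) = \int_{R^d} \widehat{G}(\xi, \zeta)\, \widehat{f}(\xi)\, \widehat{f}(\zeta - \xi)\, d\xi,

where the weight \widehat{G} encodes the collision mechanics (cross section and scattering kernel) and is precomputed once for a given interaction model.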

June 17, 2013 - Megan Cason: Analytic Utility Of Novel Threading Models In Distributed Graph Algorithms

Current analytic methods for judging distributed algorithms rely on communication abstractions that characterize performance assuming purely passive data movement and access. This assumption complicates the analysis of certain algorithms, such as graph analytics, which have behavior that is very dependent on data movement and modifying shared variables. This presentation will discuss an alternative model for analyzing theoretic scalability of distributed algorithms written with the possibility of active data movement and access. The mobile subjective model presented here confines all communication to 1) shared memory access and 2) executing thread state which can be relocated between processes, i.e., thread migration. Doing so enables a new type of scalability analysis, which calculates the number of thread relocations required, and whether that communication is balanced across all processes in the system. This analysis also includes a model for contended shared data accesses, which is used to identify serialization points in an algorithm. This presentation will show the analysis for a common distributed graph algorithm, and illustrate how this model could be applied to a real world distributed runtime software stack.

June 14, 2013 - Jeff Carver: Applying Software Engineering Principles to Computational Science

The increase in the importance of Computational Science software motivates the need to identify and understand which software engineering (SE) practices are appropriate. Because of the uniqueness of the computational science domain, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the computational science development environment. To identify these solutions, members of the SE community must interact with members of the computational science community. This presentation will discuss the findings from a series of case studies of CSE projects and the results of an ongoing workshop series. First, a series of case studies of computational science projects was conducted as part of the DARPA High Productivity Computing Systems (HPCS) project. The main goal of these studies was to understand how SE principles were and were not being applied in computational science, along with some of the reasons why. The studies resulted in nine lessons learned about computational science software that are important to consider moving forward. Second, the Software Engineering for Computational Science and Engineering workshop brings together software engineers and computational scientists. The outcomes of this workshop series provide interesting insight into potential future trends.

June 12, 2013 - Hans-Werner van Wyk: Multilevel Quadrature Methods

Stochastic Sampling methods are arguably the most direct and least intrusive means of incorporating parametric uncertainty into numerical simulations of partial differential equations with random inputs. However, to achieve an overall error that is within a desired tolerance, a large number of sample simulations may be required (to control the sampling error), each of which may need to be run at high levels of spatial fidelity (to control the spatial error). Multilevel methods aim to achieve the same accuracy as traditional sampling methods, but at a reduced computational cost, through the use of a hierarchy of spatial discretization models. Multilevel algorithms coordinate the number of samples needed at each discretization level by minimizing the computational cost, subject to a given error tolerance. They can be applied to a variety of sampling schemes, exploit nesting when available, can be implemented in parallel and can be used to inform adaptive spatial refinement strategies. We present an introduction to multilevel quadrature in the context of stochastic collocation methods, and demonstrate its effectiveness theoretically and by means of numerical examples.
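
The idea rests on the telescoping identity (written here, in assumed notation, for a quantity of interest Q computed on a hierarchy of spatial discretizations Q_0, ..., Q_L):

    E[Q_L] = E[Q_0] + \sum_{\ell=1}^{L} E[ Q_\ell - Q_{\ell-1} ],

where each term is estimated independently with its own number of samples N_\ell. Because the corrections Q_\ell - Q_{\ell-1} have small variance on fine levels, few expensive fine-level samples are needed, and the N_\ell are chosen to minimize total cost subject to the prescribed error tolerance.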

June 7, 2013 - Xuechen Zhang: Scibox: Cloud Facility for Sharing On-Line Data

Collaborative science demands global sharing of scientific data, but it cannot simply leverage universally accessible cloud-based infrastructures, like DropBox, as those offer limited interfaces and inadequate levels of access bandwidth. In this talk, I will present Scibox, a cloud facility for online sharing of scientific data. It uses standard cloud storage solutions but offers a usage model in which high-end codes can write/read data to/from the cloud via the same ADIOS APIs they already use for their I/O actions, thereby naturally coupling data generation with subsequent data analytics. Extending current ADIOS I/O methods, Scibox controls data upload/download volumes via Data Reduction (DR) functions specified by end users and applied at the data source, before data is moved, with further gains in efficiency obtained by combining DR functions to move exactly what is needed by current data consumers.

June 6, 2013 - Yuan Tian: Taming Scientific Big Data with Flexible Organizations for Exascale Computing

Fast-growing high-performance computing systems enable scientists to simulate scientific processes of great complexity and, consequently, often produce complex data that are growing exponentially in size. However, the growth within the computing infrastructure is significantly imbalanced: dramatically increasing computing power is accompanied by slowly improving storage systems. This discordant progress among computing power, storage, and data has led to a severe input/output (I/O) bottleneck that requires novel techniques to address big data challenges in the scientific domain.

This talk will identify the prevalent characteristics of scientific data and the storage system as a whole, and explore opportunities to drive I/O performance for petascale computing and prepare it for the exascale. To this end, a set of flexible data organization and management techniques is introduced and evaluated to address the aforementioned concerns. Four key techniques are designed to exploit the capability of the back-end storage system for processing and storing scientific big data with fast and scalable I/O performance: visualization space filling curve-based data reorganization, system-aware chunking, spatial and temporal aggregation, and in-node staging with compression. The experimental results demonstrated more than a 60x speedup for a mission-critical climate application during data post-processing.
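
One of the techniques listed above is space filling curve-based data reorganization. The sketch below is a generic illustration (not the speaker's implementation) of that idea: it lays the elements of a 2D block out along a Morton (Z-order) curve, which tends to keep spatially nearby values close together in the linearized stream written to storage.

    # Illustrative Morton (Z-order) reordering of a 2D array, a simple form of
    # space-filling-curve data reorganization used to improve I/O locality.
    import numpy as np

    def morton_key(i, j, bits=16):
        """Interleave the bits of (i, j) to get the Z-order index."""
        key = 0
        for b in range(bits):
            key |= ((i >> b) & 1) << (2 * b + 1)
            key |= ((j >> b) & 1) << (2 * b)
        return key

    def z_order_flatten(a):
        n, m = a.shape
        order = sorted(((morton_key(i, j), i, j) for i in range(n) for j in range(m)))
        return np.array([a[i, j] for _, i, j in order])

    a = np.arange(16).reshape(4, 4)
    print(z_order_flatten(a))   # elements laid out along the Z curve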

May 31, 2013 - Pablo Seleson: Multiscale Material Modeling with Peridynamics

Multiscale modeling has been recognized in recent years as an important research field to achieve feasible and accurate predictions of complex systems. Peridynamics, a nonlocal reformulation of continuum mechanics based on integral equations, is able to resolve microscale phenomena at the continuum level. As a nonlocal model, peridynamics possesses a length scale which can be controlled for multiscale modeling. For instance, classical elasticity has been presented as a limiting case of a peridynamic model. In this talk, I will introduce the peridynamics theory and show analytical and numerical connections of peridynamics to molecular dynamics and classical elasticity. I will also present multiscale methods to concurrently couple peridynamics and classical elasticity, demonstrating the capabilities of peridynamics towards multiscale material modeling.
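
To make the nonlocal, horizon-based structure of peridynamics concrete, here is a toy one-dimensional, linearized bond-based force evaluation; the micromodulus, horizon, grid, and displacement field are all arbitrary assumptions, and this is not the multiscale coupling machinery described in the talk.

    # Toy 1D bond-based peridynamics: the internal force density at each node is a
    # sum over all bonds within a finite horizon (the model's length scale).
    import numpy as np

    nx, dx = 101, 0.01            # nodes and spacing (arbitrary toy values)
    horizon = 3 * dx              # nonlocal interaction radius
    c = 1.0                       # micromodulus (assumed constant, linearized bonds)

    x = np.arange(nx) * dx
    u = 0.001 * x**2              # some prescribed displacement field

    force = np.zeros(nx)
    for i in range(nx):
        for j in range(nx):
            xi = x[j] - x[i]                                # bond in the reference configuration
            if i != j and abs(xi) <= horizon:
                stretch = (u[j] - u[i]) / abs(xi)           # linearized bond stretch
                force[i] += c * stretch * np.sign(xi) * dx  # bond force times nodal volume

    print(force[:5])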

Dr. Seleson is a Postdoctoral Fellow in the Institute for Computational Engineering and Sciences at The University of Texas at Austin. He obtained his Ph.D. in Computational Science from Florida State University in 2010. He holds an M.S. degree in Physics from the Hebrew University of Jerusalem (2006) and a double B.S. degree in Physics and Philosophy, also from the Hebrew University of Jerusalem (2002).

May 29, 2013 - Ryan McMahan: The Effects of System Fidelity for Virtual Reality Applications

Virtual reality (VR) has developed from Ivan Sutherland's inception of an "ultimate display" to a realized field of advanced technologies. Despite evidence supporting the use of VR for various benefits, the level of system fidelity required for such benefits is often unknown. Modern VR systems range from high-fidelity simulators that incorporate many technologies to lower-fidelity, desktop-based virtual environments. In order to identify the level of system fidelity required for certain beneficial uses, research has been conducted to better understand the effects of system fidelity on the user. In this talk, a series of experiments evaluating the effects of interaction fidelity and display fidelity will be presented. Future directions of system fidelity research will also be discussed.

Dr. Ryan P. McMahan is an Assistant Professor of Computer Science at the University of Texas at Dallas, where his research focuses on the effects of system fidelity for virtual reality (VR) applications. Using an immersive VR system comprising a wireless head-mounted display (HMD), a real-time motion tracking system, and Wii Remotes as 3D input devices, his research determines the effects of system fidelity by varying components such as stereoscopy, field of view, and degrees of freedom for interactions. Currently, he is using this methodology to investigate the effects of fidelity on learning for VR training applications. Dr. McMahan received his Ph.D. in Computer Science in 2011 from Virginia Tech, where he also received his B.S. and M.S. in Computer Science in 2004 and 2007, respectively.

May 28, 2013 - Adrian Sandu: Data Assimilation and the Adaptive Solution of Inverse Problems

The task of providing an optimal analysis of the state of the atmosphere requires the development of novel computational tools that facilitate an efficient integration of observational data into models. In this talk, we will introduce variational and statistical estimation approaches to data assimilation. We will discuss important computational aspects including the construction of efficient models for background errors, the construction and analysis of discrete adjoint models, new approaches to estimate the information content of observations, and hybrid variational-ensemble approaches to assimilation. We will also present some recent results on the solution of inverse problems using space and time adaptivity, and a priori and a posteriori error estimates for the optimal solution.

May 24, 2013 - Satoshi Matsuoka: The Futures of Tsubame Supercomputer and the Japanese HPCI Towards Exascale

HPCI is the Japanese High Performance Computing Infrastructure, which encompasses the national operations of major supercomputers, such as the K supercomputer and Tsubame2.0, much like XSEDE in the United States and PRACE in Europe. Recently it was announced that the Japanese Ministry of Education, Culture, Sports, Science and Technology intends to initiate a project towards an exascale supercomputer to be deployed around 2020. However, the workshop report that recommended the project also calls for a comprehensive infrastructure in which a flagship machine will be supplemented with leadership machines to complement the abilities of the flagship. Although it is still early, I will attempt to discuss the current status of Tsubame2.0's evolution to 2.5 and 3.0 in this context, as well as the activities in Japan to initiate an exascale effort, with collaborative elements with US Department of Energy partners in system software development.

May 17, 2013 - Jon Mietling and Tony McCrary: Bling3D: a new game development toolset from l33t Labs

Bling3D is a forthcoming game development toolset from l33t labs.

A fusion of Eclipse 4 with game development technologies, Bling allows both programmers and designers to create compelling interactive experiences from within one powerful tool.

In this talk, you will be introduced to some of Bling's exciting features.

Jon Mietling and Tony McCrary are representatives of l33t labs LLC, a technology startup from the Detroit, Michigan region.

May 10, 2013 - Xiao Chen: A Modular Uncertainty Quantification Framework for Multi-physics Systems

This talk presents a modular uncertainty quantification (UQ) methodology for multi-physics applications in which each physics module can be independently embedded with its internal UQ method (intrusive or non-intrusive). This methodology offers the advantage of "plug-and-play" flexibility (i.e., UQ enhancements to one module do not require updates to the other modules) without losing the "global" uncertainty propagation property. (This means that, by performing UQ in this modular manner, all inter-module uncertainty and sensitivity information is preserved.) In addition, using this methodology one can also track the evolution of global uncertainties and sensitivities at the grid point level, which may be useful for model improvement. We demonstrate the utility of such a framework for error management and Bayesian inference on a practical application involving a multi-species flow and reactive transport in randomly heterogeneous porous media.

May 2, 2013 - Kenley Pelzer: Quantum Biology: Elucidating Design Principles from Photosynthesis

Recent experiments suggest that quantum mechanical effects may play a role in the efficiency of photosynthetic light harvesting. However, much controversy exists about the interpretation of these experiments, in which light harvesting complexes are excited by a femtosecond laser pulse. The coherence in such laser pulses raises the important question of whether these quantum mechanical effects are significant in biological systems excited by incoherent light from the sun. In our work, we apply frequency-domain Green's function analysis to model a light-harvesting complex excited by incoherent light. By modeling incoherent excitation, we demonstrate that the evidence of long-lived quantum mechanical effects is not purely an artifact of peculiarities of the spectroscopy. These data provide a new perspective on the role of noisy biological environments in promoting or destroying quantum transport in photosynthesis.

April 23, 2013 - Kirk W. Cameron: Power-Performance Modeling, Analyses and Challenges

The power consumption of supercomputers ultimately limits their performance. The current challenge is not whether we can build an exaflop system by 2018, but whether we can do it in less than 20 megawatts. The SCAPE Laboratory at Virginia Tech has been studying the tradeoffs between performance and power for over a decade. We've developed an extensive tool chain for monitoring and managing power and performance in supercomputers. We will discuss our power-performance modeling efforts and the implications of our findings for exascale systems, as well as some research directions ripe for innovation.

April 23, 2013 - Jordan Deyton: Tor Bridge Distribution Powered by Threshold RSA

Since its inception, Tor has offered anonymity for internet users around the world. Tor now offers bridges to help users evade internet censorship, but the primary distribution schemes that provide bridges to users in need have come under attack. This talk explores how threshold RSA can help strengthen Tor's infrastructure while also enabling more powerful bridge distribution schemes. We implement a basic threshold RSA signature system for the bridge authority and a reputation-based social network design for bridge distribution. Experimental results are obtained showing the possibility of quick responses to requests from honest users while maintaining both the secrecy and the anonymity of registered clients and bridges.
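
As a purely didactic sketch of the threshold idea (a 2-of-2 additive split of the RSA private exponent with tiny, insecure parameters and no padding; not the reputation-based bridge distribution scheme or the actual bridge authority protocol described in the talk):

    # Toy 2-of-2 "threshold" RSA signature: the private exponent d is additively
    # split between two servers, and their partial signatures are multiplied together.
    import random

    p, q = 61, 53                  # toy primes (real keys use ~2048-bit moduli)
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17
    d = pow(e, -1, phi)            # private exponent

    d1 = random.randrange(phi)     # share held by server 1
    d2 = (d - d1) % phi            # share held by server 2

    m = 42                         # "message" (already hashed/encoded in real use)
    s1 = pow(m, d1, n)             # partial signature from server 1
    s2 = pow(m, d2, n)             # partial signature from server 2
    sig = (s1 * s2) % n            # combined signature

    assert sig == pow(m, d, n)      # matches the single-key signature
    assert pow(sig, e, n) == m % n  # verifies under the public key
    print("combined signature:", sig)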

April 19, 2013 - Maria Avramova and Kostadin Ivanov: OECD LWR UAM and PSBT/BFBT benchmarks and their relation to Advanced LWR Simulations

From 1987 to 1995, the Nuclear Power Engineering Corporation (NUPEC) in Japan performed a series of void measurement tests using full-size mock-ups for both BWRs and PWRs. Void fraction measurements and departure from nucleate boiling (DNB) tests were performed at NUPEC under steady-state and transient conditions. The workshop will provide an overview of the OECD/NEA/NRC PWR Subchannel and Bundle Tests (PSBT) and OECD/NEA/NRC BWR Full-size Fine-mesh Bundle Tests (BFBT) benchmarks based on the NUPEC data. The benchmarks were designed to provide a data set for evaluation of the abilities of existing subchannel, system, and computational fluid dynamics (CFD) thermal-hydraulics codes to predict void distribution and DNB in LWRs under steady-state and transient conditions. The first part of the seminar summarizes the PSBT and BFBT benchmark databases, specifications, and benchmark exercise definitions, presents a comparative analysis of the results obtained, and makes the case for how these benchmarks can be used for verification, validation, and uncertainty quantification of thermal-hydraulic tools developed for advanced LWR simulations.

The second part of the seminar will provide an overview of the OECD/NEA benchmark for LWR Uncertainty Analysis in Modeling (UAM), with emphasis on the exercises of Phase I and Phase II of the benchmark and a discussion of Phase III, which is directly related to coupled multi-physics advanced LWR simulations. A series of well-defined problems with complete sets of input specifications and reference experimental data will be introduced, with the objective of determining the uncertainty in LWR calculations at all stages of a coupled reactor physics/thermal hydraulics calculation. The full chain of uncertainty propagation will be discussed, starting from basic data and engineering uncertainties, across different scales (multi-scale) and physics phenomena (multi-physics), as well as how this propagation is tested on a number of benchmark exercises. The input, output, and assumptions for each exercise will be given, and the procedures to calculate the output and propagated uncertainties in each step will be described, supplemented by results from benchmark participants.

Bio of Dr. Maria Avramova
Dr. Maria Avramova is an Assistant Professor in the Mechanical and Nuclear Engineering Department at the Pennsylvania State University. She is currently the Director of the Reactor Dynamics and Fuel Management Group (RDFMG). Her expertise and experience are in the area of developing methods and computer codes for multi-dimensional reactor core analysis. Her background includes development, verification, and validation of thermal-hydraulics sub-channel, porous media, and CFD models and codes for reactor core design, transient, and safety computational analysis. She has led and coordinated the OECD/NRC BFBT and PSBT benchmarks and is currently coordinating Phase II of the OECD LWR UAM benchmark. Her latest research efforts have been focused on high-fidelity multi-physics simulations (involving coupling of reactor physics, thermal-hydraulics, and fuel performance models) as well as on uncertainty and sensitivity analysis of reactor design and safety calculations. Dr. Avramova has published over 15 refereed journal papers and over 40 refereed conference proceedings articles.

Bio of Dr. Kostadin Ivanov
Dr. Kostadin Ivanov is a Distinguished Professor in the Mechanical and Nuclear Engineering Department at the Pennsylvania State University. He is currently the Graduate Coordinator of the Nuclear Engineering Program. His research developments include computational methods, numerical algorithms and iterative techniques, nuclear fuel management and reloading optimization techniques, reactor kinetics and core dynamics methods, cross-section generation and modeling algorithms for multi-dimensional steady-state and transient reactor calculations, and coupling three-dimensional (3-D) kinetics models with thermal-hydraulic codes. He has also led the development of multi-dimensional neutronics, in-core fuel management, and coupled 3-D kinetics/thermal-hydraulic computer code benchmarks, multi-dimensional reactor transient and safety analysis methodologies, integrated analysis of safety-related parameters, system transient modeling of power plants, and in-core fuel management analyses.
Examples of such benchmarks are the OECD/NRC PWR MSLB benchmark, the OECD/NRC BWR TT benchmark, and the OECD/DOE/CEA VVER-1000 CT benchmark. He is currently chair and coordinator of the Scientific Board and Technical Program Committee of the OECD LWR UAM benchmark.

April 18, 2013 - Sparsh Mittal: MASTER: A Technique for Improving Energy Efficiency of Caches in Multicore Processors

The large power consumption of modern processors has been identified as the most severe constraint on scaling their performance. Further, in recent CMOS technology generations leakage energy has been increasing dramatically, and hence the leakage energy consumption of large last-level caches (LLCs) has become a significant source of processor power consumption.

This talk first highlights the need for power management in the LLCs of modern multi-core processors and then presents MASTER, a micro-architectural cache leakage energy saving technique using dynamic cache reconfiguration. MASTER uses dynamic profiling of LLCs to predict the energy consumption of running programs at multiple LLC sizes. Using these estimates, suitable cache quotas are allocated to different programs using a cache-coloring scheme, and the unused LLC space is turned off to save energy. The implementation overhead of MASTER is small; even for 4-core systems, its overhead is only 0.8% of the L2 cache size. Simulations have been performed using an out-of-order x86-64 simulator with 2-core and 4-core multi-programmed workloads from the SPEC2006 suite. Further, MASTER has been compared with two energy saving techniques, namely decay cache and way-adaptable cache. The results show that MASTER gives the highest saving in energy and does not harm performance or cause unfairness.

Finally, this talk briefly shows an extension of MASTER for multicore QoS systems. Simulation results confirm that a large amount of energy is saved while meeting the QoS requirements of most of the workloads.
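
As a rough sketch of the quota-allocation idea described above (with made-up per-program energy estimates and a brute-force search standing in for MASTER's actual reconfiguration algorithm):

    # Given profiled/predicted energy at different LLC allocations, pick the split of
    # cache ways across two co-running programs that minimizes predicted total energy;
    # any unallocated ways could then be power-gated. Purely illustrative numbers.
    from itertools import product

    TOTAL_WAYS = 16
    energy = {                                      # hypothetical energy-vs-ways curves
        "progA": lambda w: 5.0 + 40.0 / (w + 1),    # cache-hungry workload
        "progB": lambda w: 3.0 + 8.0 / (w + 1),     # mostly streaming workload
    }
    leak_per_way = 0.5                              # assumed leakage cost per powered way

    best = None
    for wa, wb in product(range(TOTAL_WAYS + 1), repeat=2):
        if wa + wb > TOTAL_WAYS:
            continue
        total = energy["progA"](wa) + energy["progB"](wb) + leak_per_way * (wa + wb)
        if best is None or total < best[0]:
            best = (total, wa, wb)

    print("predicted-best allocation (ways):", best[1:], "energy:", round(best[0], 2))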

April 17, 2013 - Okwan Kwon: Automatic Scaling of OpenMP Applications Beyond Shared Memory

We present the first fully automated compiler-runtime system that successfully translates and executes OpenMP shared-address-space programs on laboratory-size clusters, for the complete set of regular, repetitive applications in the NAS Parallel Benchmarks. We introduce a hybrid compiler-runtime translation scheme. This scheme features a novel runtime data flow analysis and compiler techniques for improving data affinity and reducing communication costs. We present and discuss the performance of our translated programs and compare them with the MPI, HPF, and UPC versions of the benchmarks. The results show that our translated programs achieve, on average, 75% of the performance of the hand-coded MPI programs.

April 17, 2013 - Michael S. Murillo: Molecular Dynamics Simulations of Charged Particle Transport in High Energy-Density Matter

High energy-density matter is now routinely produced at large laser facilities. Producing fusion energy at such facilities challenges our ability to model collisional plasma processes that transport energy among the plasma species and across spatial scales. While the most accurate computational method for describing collisional processes is molecular dynamics, there are numerous challenges associated with using molecular dynamics to model very hot plasmas. However, recent advances in high performance computing have allowed us to develop methods for simulating a wide variety of processes in hot, dense plasmas. I will review these developments and describe our recent results that involve simulating fast particle stopping in dense plasmas. Using the simulation results, implications for theoretical modeling of charged-particle stopping will be given.

April 12, 2013 - Vivek K. Pallipuram: Exploring Multiple Levels Of Performance Modeling For Heterogeneous Systems

One of the major challenges faced by the High-Performance Computing (HPC) community today is user-friendly and accurate heterogeneous performance modeling. Although performance prediction models exist to fine-tune applications, they are seldom easy to use and do not address multiple levels of design space abstraction. Our research aims to bridge the gap between reliable performance model selection and user-friendly analysis. We propose a straightforward and accurate multi-level performance modeling suite for multi-GPGPU systems that addresses multiple levels of design space abstraction. The multi-level performance modeling suite primarily targets synchronous iterative algorithms (SIAs) using our synchronous iterative GPGPU execution (SIGE) model and addresses two levels of design space abstraction: 1) low-level, where partial details of the implementation are present along with system specifications, and 2) high-level, where implementation details are minimal and only high-level system specifications are known. The low-level abstraction of the modeling suite employs statistical techniques for runtime prediction, whereas the high-level abstraction utilizes existing analytical and quantitative modeling tools to predict the application runtime. Our initial validation efforts for the low-level abstraction yield high runtime prediction accuracy, with less than a 10% error rate for several tested GPGPU cluster configurations and case studies. The development of high-level abstraction models is underway. The end goal of our research is to offer the scientific community a reliable and user-friendly performance prediction framework that allows them to optimally select a performance prediction strategy for the given design goals and system architecture characteristics.
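
In the spirit of the low-level (statistical) abstraction mentioned above, the following sketch fits a simple runtime model from a handful of profiled runs and uses it to predict a new configuration; the model form, the profile data, and the configuration are assumptions for illustration only.

    # Fit t ~ a*(N/P) + b*log2(P) + c (compute + sync overhead + fixed cost) by least
    # squares from made-up profiling data, then predict an unseen configuration.
    import numpy as np

    # columns: problem size N, GPU count P, measured runtime (seconds)
    runs = np.array([
        [1e6, 1, 12.1], [1e6, 2, 6.6], [2e6, 2, 12.8], [4e6, 4, 13.5], [4e6, 8, 7.9],
    ])
    N, P, t = runs[:, 0], runs[:, 1], runs[:, 2]

    A = np.column_stack([N / P, np.log2(P), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)

    predict = lambda n, p: coef @ [n / p, np.log2(p), 1.0]
    print("predicted runtime for N=8e6 on 16 GPUs:", round(predict(8e6, 16), 2), "s")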

April 11, 2013 - Jeff Young: Commodity Global Address Spaces - How Can We Scale Out Accelerator and Memory Performance for Tomorrow's Clusters?

Current Top 500 systems like Titan, Stampede, and Tianhe-1A have started to embrace the use of off-chip accelerators, such as GPUs and x86 coprocessors, to dramatically improve their overall performance and efficiency numbers. At the same time, these systems also make very specific assumptions about the availability of highly optimized interconnects and software stacks that are used to mitigate the effects of running large applications across multiple nodes and their accelerators. This talk focuses on the gap in networking between high-performance computing clusters and data centers and proposes that future clusters should be built around commodity-based networks and managed global address spaces to improve the performance of data movement between host memory and accelerator memory. This thesis is supported by previous research into converged commodity interconnects and ongoing research on the Oncilla managed GAS runtime to support aggregated memory for data warehousing applications. In addition, we will speculate on how commodity-based networks and memory management for clusters of accelerators might be affected by the advent of 3D stacking and fused CPU/GPU architectures.

April 9, 2013 - Cong Liu: Towards Efficient Real-Time Multicore Computing Systems

Current trends in multicore computing are towards building more powerful, intelligent, yet space- and power-efficient systems. A key requirement in correctly building such intelligent systems is to ensure real-time performance, i.e., "make the right move at the right time in a predictable manner." Current research on real-time multicore computing has been limited to simple systems for which complex application runtime behaviors are ignored; this limits the practical applicability of such research. In practice, complex but realistic application runtime behaviors often exist, such as I/O operations, data communications, parallel execution segments, critical sections etc. Such runtime behaviors are currently dealt with by over-provisioning systems, which is an economically wasteful practice. I will present predictable real-time multicore computing system design, analysis, and implementation methods that can efficiently support common types of application runtime behaviors. I will show that the proposed methods are able to avoid over-provisioning systems and to reduce the number of needed hardware components to the extent possible while providing timing correctness guarantees.

In the second part of the talk, I will present energy-efficient workload mapping techniques for heterogeneous multicore CPU/GPU systems. Through both algorithmic analysis and prototype system implementation, I will show that the proposed techniques are able to achieve better energy efficiency while guaranteeing response time performance.

April 9, 2013 - Frank Mueller: On Determining a Viable Path to Resilience at Exascale

Exascale computing is projected to feature billion-core parallelism. At such large processor counts, faults will become more commonplace. Current techniques to tolerate faults focus on reactive schemes for recovery and generally rely on a simple checkpoint/restart mechanism. Yet they have a number of shortcomings: (1) they do not scale and require complete job restarts; (2) projections indicate that the mean time between failures is approaching the overhead required for checkpointing; and (3) existing approaches are application-centric, which increases the burden on application programmers and reduces portability.

To address these problems, we discuss a number of techniques and their level of maturity (or lack thereof). These include (a) scalable network overlays, (b) on-the-fly process recovery, (c) proactive process-level fault tolerance, (d) redundant execution, (e) the effect of silent data corruptions (SDCs) on IEEE floating point arithmetic, and (f) resilience modeling. In combination, these methods aim to pave the path to exascale computing.

April 5, 2013 - Sarat Sreepathi: Optimus: A Parallel Metaheuristic Optimization Framework With Environmental Engineering Applications

Optimus (Optimization Methods for Universal Simulators) is a parallel optimization framework for coupling computational intelligence methods with a target scientific application. Optimus includes a parallel middleware component, PRIME (Parallel Reconfigurable Iterative Middleware Engine), for scalable deployment on emergent supercomputing architectures. PRIME provides a lightweight communication layer to facilitate periodic inter-optimizer data exchanges. A parallel search method, COMSO (Cooperative Multi-Swarm Optimization), was designed and tested on various high-dimensional mathematical benchmark problems. Additionally, this work presents a novel technique, TAPSO (Topology Aware Particle Swarm Optimization), for network-based optimization problems. Empirical studies demonstrate that TAPSO achieves better convergence than standard PSO for Water Distribution Systems (WDS) applications. A scalability analysis of Optimus was performed on the Cray XK6 supercomputer (Jaguar) at the Oak Ridge Leadership Computing Facility for the leak detection problem in WDS. For a weak scaling scenario, we achieved 84.82% of baseline performance at 200,000 cores relative to performance at 1,000 cores.
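
For readers unfamiliar with the baseline algorithm that COMSO and TAPSO build on, here is a minimal standard (global-best) particle swarm optimizer on a toy benchmark; the coefficients and test function are generic textbook choices, not the settings used in Optimus.

    # Minimal global-best PSO minimizing the sphere function.
    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):                        # simple high-dimensional test function
        return np.sum(x * x, axis=-1)

    dim, n_particles, iters = 10, 30, 200
    w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration coefficients

    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), sphere(x)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = sphere(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("best value found:", float(pbest_val.min()))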

March 20, 2013 - J.W. Banks: Stable Partitioned Solvers for Compressible Fluid-Structure Interaction Problems

In this talk, we discuss recent work concerning the development and analysis of stable, partitioned solvers for fluid-structure interaction (FSI) problems. In a partitioned approach, the solvers for each fluid or solid domain are isolated from each other and coupled only through the interface. This is in contrast to fully coupled monolithic schemes, where the entire system is advanced by a single unified solver, typically by an implicit method. Added-mass instabilities, common to partitioned schemes, are addressed through the use of a newly developed interface projection technique. The overall approach is based on imposing the exact solution to local fluid-solid Riemann problems directly in the numerical method. Stability of the FSI coupling is discussed using normal-mode stability theory, and the new scheme is shown to be stable for a wide range of material parameters. For the rigid body case, the approach is shown to be stable even for bodies of no mass or rotational inertia. This difficult limiting case exposes interesting subtleties concerning the notion of added mass in fluid-structure problems at the continuous level.

March 13, 2013 - Travis Thompson: Navier-Stokes equations to Describe the Motion of Fluid Substances

The Navier-Stokes equations describe the motion of fluid substances; the equations are widely utilized to model many physical phenomena such as weather patterns, ocean currents, turbulent fluid flow, and magneto-hydrodynamics. Despite their wide utilization, a comprehensive theoretical understanding remains an open question; the equations offer a venue for challenges at the forefront of both theoretical and computational knowledge. My work at Texas A&M has focused primarily on two topics: aspects of hyperbolic conservation laws, specifically mass conservation for incompressible Navier-Stokes, and computational investigation of an LES model based on a new eddy viscosity; both appeal to highly parallel scientific computing, albeit in differing ways.

With respect to hyperbolic conservation laws: on the computational side I have implemented a one-step artificial compression term in a numerical code which counteracts an entropy-viscosity regularization term. This is an innovative approach; canonical methods for interface tracking are two-step or adaptive procedures. In addition the implementation utilizes a splitting approach, originally designed for use in a highly-parallel momentum equation variant, as an approximation operator in the time-stepping scheme; this approach imbues the algorithm with additional parallelism. On the theoretical side a distinct approach towards the analysis of dispersion error, utilizing a commutator expression, has been investigated for particular finite element spaces; the approach offers a computational segue into investigating consistency error and moves away from the canonical, tedious, expansion-based methodology of analysis.

With respect to large eddy simulations (LES): computational investigations of an eddy-viscosity model based on the entropy viscosity of Guermond & Popov have been underway for the last six months; in collaboration with Dr. Larios, a post-doc here at Texas A&M, an analysis of the qualitative and statistical attributes of high Reynolds number, turbulent flow is being conducted. We will compare our results to the Smagorinsky-Lilly turbulence model and attempt to verify basic tenets of isotropic turbulence theory, namely the Kolmogorov -5/3 law and predictions regarding the uncorrelated nature of velocity structure functions.

March 1, 2013 - Bob Salko: Development, Improvement, and Validation of Reactor Thermal-Hydraulic Analysis Tools

As a result of the need for continual development, qualification, and application of computational tools relating to the modeling of nuclear systems, the Reactor Dynamics and Fuel Management Group (RDFMG) at the Pennsylvania State University has maintained an active involvement in this area. This presentation will highlight recent RDFMG work relating to thermal-hydraulic modeling tools. One such tool is the COolant Boiling in Rod Arrays - Two Fluids (COBRA-TF) computer code, capable of modeling the independent behavior of continuous liquid, vapor, and droplets using the sub-channel methodology. Work has been done to expand the modeling capabilities from the in-vessel region only, for which COBRA-TF was developed, to the coolant-line region by developing a dedicated coolant-line-analysis package that serves as an add-on to COBRA-TF. Additional COBRA-TF work includes development of a pre-processing tool for faster, more user-friendly creation of COBRA-TF input decks, implementation of post-processing capabilities for visualization of simulation results, and optimization of the source code for significant improvements in simulation speed and memory management. Of equal importance to these development activities is the validation of the resulting tools for their intended applications. The code's capability to capture rod-bundle thermal-hydraulic behavior during prototypical PWR operating conditions will be demonstrated through comparison of predicted and experimental results for the New Experimental Studies of Thermal-Hydraulics of Rod Bundles (NESTOR) tests. Due to the growing usage of Computational Fluid Dynamics (CFD) tools in this area, modeling results predicted by the STAR-CCM+ CFD tool will also be presented for these tests.

February 23, 2013 - Thomas L. Lewis: Finite Difference and Discontinuous Galerkin Numerical Methods for Fully Nonlinear Second Order PDEs with Applications to Stochastic Optimal Control

In this talk I will discuss a convergence framework for directly approximating the viscosity solutions of fully nonlinear second order PDE problems. The main focus will be the introduction of a set of sufficient conditions for constructing convergent finite difference (FD) methods. The conditions given are meant to be easier to realize and implement than those found in the current literature. The given FD methodology will then be shown to generalize to a class of discontinuous Galerkin (DG) methods. The proposed DG methods are high order and allow for increased flexibility when choosing a computational mesh. Numerical experiments will be presented to gauge the performance of the proposed DG methods. An overview of the PDE theory of viscosity solutions will also be given. The presented ideas are part of a larger project concerned with efficiently and accurately approximating the Hamilton-Jacobi-Bellman equation from stochastic optimal control.

February 22, 2013 - Charles K. Garrett: Numerical Integration of Matrix Riccati Differential Equations with Solution Singularities

A matrix Riccati differential equation (MRDE) is a quadratic ODE of the form

$X' = A_{21} + A_{22}X - XA_{11} - XA_{12}X$.

It is well known that MRDEs may have singularities in their solution. In this presentation, both the theory and practice of numerically integrating MRDEs past solution singularities will be analyzed. In particular, it will be shown how to create a black box numerical MRDE solver, which accurately solves an MRDE with or without singularities.
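
For concreteness, the sketch below integrates a small MRDE with an off-the-shelf ODE solver; the coefficient matrices are chosen so that the exact solution is X(t) = tanh(t) I, which makes the result easy to check. A plain solver like this is exactly what breaks down near a solution singularity, which is the situation the talk addresses.

    # Integrate X' = A21 + A22 X - X A11 - X A12 X for a case with a known solution.
    import numpy as np
    from scipy.integrate import solve_ivp

    n = 2
    A11, A22 = np.zeros((n, n)), np.zeros((n, n))
    A12, A21 = np.eye(n), np.eye(n)          # gives X' = I - X^2, so X(t) = tanh(t) I

    def rhs(t, x_flat):
        X = x_flat.reshape(n, n)
        dX = A21 + A22 @ X - X @ A11 - X @ A12 @ X
        return dX.ravel()

    sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(n * n), rtol=1e-8, atol=1e-10)
    print(sol.y[:, -1].reshape(n, n))        # should be close to tanh(1) * I ~ 0.7616 I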

February 21, 2013 - Giacomo Dimarco: Asymptotic Preserving Implicit-Explicit Runge-Kutta Methods For Non-Linear Kinetic Equations

In this talk, we will discuss Implicit-Explicit (IMEX) Runge-Kutta methods which are particularly adapted to stiff kinetic equations of Boltzmann type. We will consider both the case of easily invertible collision operators and the challenging case of Boltzmann collision operators. We give sufficient conditions for such methods to be asymptotic preserving and asymptotically accurate. Their monotonicity properties are also studied. In the case of the Boltzmann operator, the methods are based on the introduction of a penalization technique for the collision integral. This reformulation of the collision operator permits the construction of penalized IMEX schemes which work uniformly for a wide range of relaxation times, avoiding the expensive implicit resolution of the collision operator. Finally, we show some numerical results which confirm the theoretical analysis.
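
The following toy sketch mimics the structure (not the specifics) of such schemes: a first-order IMEX step that treats a mild term explicitly and a stiff BGK-like relaxation term implicitly, remaining stable for time steps far larger than the relaxation time. The equation, parameters, and first-order splitting are assumptions for illustration only.

    # First-order IMEX step for u' = f_explicit(u) + (1/eps) * (u_eq - u).
    eps, dt, nsteps = 1e-6, 0.05, 40           # stiff relaxation time, large time step
    u_eq = 1.0                                 # "local equilibrium"
    f_explicit = lambda u: -0.5 * u            # mild non-stiff dynamics

    u = 3.0
    for _ in range(nsteps):
        # u_new = u + dt*f_explicit(u) + (dt/eps)*(u_eq - u_new); solve for u_new:
        u = (u + dt * f_explicit(u) + (dt / eps) * u_eq) / (1.0 + dt / eps)

    print(u)   # stays stable and near the relaxed dynamics even though dt >> eps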

February 20, 2013 - Tom Berlijn: Effects of Disorder on the Electronic Structure of Functional Materials

Doping is one of the most powerful ways to tune the properties of functional materials such as thermoelectrics, photovoltaics and superconductors. Besides carriers and chemical pressure, the dopants insert disorder into the materials. In this talk I will present two case studies of doped Fe based superconductors: Fe vacancies in KxFeySe2 [1] and Ru substitutions in Ba(Fe1-xRux)2As2 [2]. With the use of a recently developed first principles method [3], non-trivial disorder effects are found that are not only interesting scientifically, but also have potential implications for materials technology. Open questions for further research will be discussed.

[1] TB, P. J. Hirschfeld, W. Ku, PRL 109 (2012)
[2] L. Wang, TB, C.-H. Lin, Y. Wang, P. J. Hirschfeld, W. Ku, PRL 110 (2013)
[3] TB, D. Volja, W. Ku, PRL 106 (2011)

February 19, 2013 - Joshua D. Carmichael: Seismic Monitoring of the Western Greenland Ice Sheet: Response to Early Lake Drainage

In 2006, the drainage of a supraglacial lake through hydrofracture on the Greenland Ice Sheet was directly observed for the first time. This event demonstrated that surface-to-bed hydrological connections can be established through 1 km of cold ice and thereby allow surficial forcing of a developed subglacial drainage system by surface meltwater. In a changing climate, supraglacial lakes on the Western Greenland Ice Sheet are expected to drain earlier each summer and form new lakes at higher elevations. The ice sheet response to these earlier drainages in the near future is of glaciological concern. We address the response of the Western Greenland Ice Sheet to an observed early lake drainage using a synthesis of seismic and GPS monitoring near an actively draining lake. This experiment demonstrates that (1) seismic activity precedes the drainage event by several days and is likely coincident with crack coalescence, (2) seismic multiplet locations are coincident with the uplift of the ice during drainage, and (3) a diurnal seismic response of the ice sheet follows after the ice surface settles to pre-drainage elevation a week later. These observations are consistent with a model in which the subglacial drainage system is likely distributed, highly pressurized, and of low hydraulic conductivity at drainage initiation. It also demonstrates that an early lake drainage likely reduces basal normal stress on order-week time scales by storing water subglacially. We conclude with recommendations for future long-range lake drainage detection.

February 18, 2013 - Mili Shah: Calculating a Symmetry Preserving Singular Value Decomposition

The symmetry preserving singular value decomposition (SPSVD) produces the best symmetric (low rank) approximation to a set of data. These symmetric approximations are characterized via an invariance under the action of a symmetry group on the set of data. The symmetry groups of interest consist of all the non-spherical symmetry groups in three dimensions. This set includes the rotational, reflectional, dihedral, and inversion symmetry groups. In order to calculate the best symmetric (low rank) approximation, the symmetry of the data set must be determined. Therefore, matrix representations for each of the non-spherical symmetry groups have been formulated. These new matrix representations lead directly to a novel reweighting iterative method to determine the symmetry of a given data set by solving a series of minimization problems. Once the symmetry of the data set is found, the best symmetric (low rank) approximation can be established by using the SPSVD. Applications of the SPSVD to protein dynamics problems as well as facial recognition will be presented.

February 14, 2013 - Zheng (Cynthia) Gu: Efficient and Robust Message Passing Schemes for Remote Direct Memory Access (RDMA)-Enabled Clusters

While significant effort has been made in improving Message Passing Interface (MPI) performance, existing work has mainly focused on eliminating software overhead in the library and delivering raw network performance to applications. Current MPI implementations such as MPICH2, MVAPICH2, and Open MPI still suffer from performance issues such as unnecessary synchronizations, communication progress problems, and lack of communication-computation overlap. The root cause of these problems is the mismatch between the communication protocols/algorithms and the communication scenarios. In my PhD research, I will develop efficient and robust message passing schemes for both point-to-point and collective communications for RDMA-enabled clusters. Unlike existing approaches for optimizing MPI performance, our approach will allow different communication protocols/algorithms for different communication scenarios. The idea is to use the most appropriate communication scheme for each communication so as to remove the mismatches, which will eliminate unnecessary synchronizations, improve communication progress, and maximize communication-computation overlap during a communication operation. This prospectus will describe the background of this research, present our preliminary research, and summarize the proposed future work.

February 8, 2013 - Taylor Patterson: Simulation of Complex Nonlinear Elastic Bodies Using Lattice Deformers

Lattice deformers are a popular option in computer graphics for modeling the behavior of elastic bodies as they avoid the need for conforming mesh generation, and their regular structure offers significant opportunities for performance optimizations. This talk will present work that expands the scope of current grid-based elastic deformers, adding support for a number of important simulation features. The approach to be described accommodates complex nonlinear, optionally anisotropic materials while using an economical one-point quadrature scheme. The formulation fully accommodates near-incompressibility by enforcing accurate nonlinear constraints, supports implicit integration for large time steps, and is not susceptible to locking or poor conditioning of the discrete equations. Additionally, this technique increases the solver accuracy by employing a novel high-order quadrature scheme on lattice cells overlapping with the embedded model boundary, which are treated at sub-cell precision. This accurate boundary treatment can be implemented at a minimal computational premium over the cost of a voxel-accurate discretization. Finally, this talk will present part of the expanding feature set of this approach that is currently under development.

February 6, 2013 - Makhan Virdi: Modeling High-resolution Soil Moisture to Estimate Recharge Timing and Experiences with Geospatial Analyses

Estimating the time of groundwater recharge after a rainfall event is poorly understood because of its dependence on nonlinear soil characteristics and variability in antecedent soil conditions. Movement of water in variably saturated soil can be described by Richards' equation, a nonlinear partial differential equation without a closed-form analytical solution, which is difficult to approximate. To develop a simple recharge model using a minimal number of soil parameters, high-resolution soil moisture data from a soil column in controlled laboratory conditions were analysed to understand wetting front propagation at a finer temporal scale. Findings from a series of simulations with an existing finite element model, obtained by varying soil properties and depth to water table, were used to propose a simple model that uses only the most significant representative soil properties and the antecedent soil matrix state. In other, separate geospatial analyses, satellite imagery was used to determine landslide risk cost and develop an algorithm for safest and shortest route planning in hilly areas susceptible to landslides; the effects of decadal climate extremes on lake-groundwater exchanges were studied; and the regional-scale effects of phosphate mining were studied using hydrological models and geospatial analysis of LiDAR-derived DEMs and watersheds.

February 5, 2013 - Roshan J. Vengazhiyil and C. F. Jeff Wu: Experimental Design, Model Calibration, and Uncertainty Quantification

We will start the talk with a newly developed space-filling design, called minimum energy design (MED). The key ideas involved in constructing the MED are the visualization of each design point as a charged particle inside a box and the minimization of the total potential energy of these particles. It is shown through theoretical arguments and simulations that, under regularity conditions and a proper choice of the charge function, the MED can asymptotically generate any arbitrary probability density function. This new design technique has important applications in Bayesian computation and uncertainty quantification. The second part of the talk will focus on model calibration. The commonly used Kennedy and O'Hagan (KO) approach treats the computer model as a black box, and therefore the statistically calibrated models lack physical interpretability. We propose a new framework that opens up the black box and introduces statistical models inside the computer model. This approach leads to simpler models that are physically more interpretable. Then, we will present some theoretical results concerning the convergence properties of calibration parameter estimation in the KO formulation of the model calibration problem. The KO calibration is shown to be asymptotically inconsistent. A new approach, called L2 distance calibration, is shown to be consistent and asymptotically efficient in estimating the calibration parameters.
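
To illustrate the charged-particle picture behind the MED (with the simplifying assumption of uniform charges, which merely spreads points out; the actual MED ties the charge function to the target density), here is a small numerical sketch:

    # Place k points in the unit square by minimizing the total pairwise potential
    # energy sum_{i<j} 1 / d(x_i, x_j).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(2)
    k = 12

    def energy(flat):
        pts = flat.reshape(k, 2)
        return np.sum(1.0 / pdist(pts))        # total potential energy

    x0 = rng.random(2 * k)                     # random start inside the unit square
    res = minimize(energy, x0, bounds=[(0.0, 1.0)] * (2 * k))
    print(res.x.reshape(k, 2))                 # an (approximate) minimum energy design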

February 4, 2013 - Li-Shi Luo: Kinetic Methods for CFD

Computational fluid dynamics (CFD) is based on direct discretizations of the Navier-Stokes equations. The traditional approach of CFD is now being challenged as new multi-scale and multi-physics problems have begun to emerge in many fields -- in nanoscale systems, the scale separation assumption does not hold; macroscopic theory is therefore inadequate, yet microscopic theory may be impractical because it requires computational capabilities far beyond our present reach. Methods based on mesoscopic theories, which connect the microscopic and macroscopic descriptions of the dynamics, provide a promising approach. Besides their connection to microscopic physics, kinetic methods also have certain numerical advantages due to the linearity of the advection term in the Boltzmann equation. Dr. Luo will discuss two mesoscopic methods: the lattice Boltzmann equation and the gas-kinetic scheme, their mathematical theory and their applications to simulate various complex flows. Examples include incompressible homogeneous isotropic turbulence, hypersonic flows, and micro-flows.
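
As a flavor of the collide-and-stream structure underlying lattice Boltzmann methods, here is a toy one-dimensional (D1Q3) BGK scheme for pure diffusion; it is only a didactic sketch, not the flow solvers discussed in the talk, and the lattice, relaxation time, and initial condition are arbitrary choices.

    # Minimal D1Q3 lattice Boltzmann (BGK) relaxation for diffusion of a density bump.
    import numpy as np

    nx, tau, nsteps = 200, 0.8, 500
    w = np.array([2/3, 1/6, 1/6])              # lattice weights
    c = np.array([0, 1, -1])                   # lattice velocities

    rho0 = np.exp(-0.01 * (np.arange(nx) - nx // 2) ** 2)    # initial density bump
    rho = rho0.copy()
    f = w[:, None] * rho[None, :]                            # start at equilibrium

    for _ in range(nsteps):
        feq = w[:, None] * rho[None, :]
        f += -(f - feq) / tau                                # BGK collision
        for i in range(3):
            f[i] = np.roll(f[i], c[i])                       # streaming (periodic)
        rho = f.sum(axis=0)

    print("mass conserved:", np.isclose(rho.sum(), rho0.sum()))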

January 23, 2013 - Tarek Ali El Moselhy: New Tools for Uncertainty Quantification and Data Assimilation in Complex Systems

In this talk, Dr. Tarek Ali El Moselhy will present new tools for forward and inverse uncertainty quantification (UQ) and data assimilation.

In the context of forward UQ, Dr. Moselhy will briefly summarize a new scalable algorithm particularly suited for very high-dimensional stochastic elliptic and parabolic PDEs. The algorithm relies on computing a compact separated representation of the stochastic field of interest. The separated representation is computed iteratively and adaptively via a greedy optimization algorithm. The algorithm has been successfully applied to problems of flow and transport in stochastic porous media, handling “real world” levels of spatial complexity and providing orders of magnitude reduction in computational time compared to state of the art methods.

In the context of inverse UQ, Dr. Moselhy will present a new algorithm for the Bayesian solution of inverse problems. The algorithm explores the posterior distribution by finding a transport map from a reference measure to the posterior measure, and therefore does not require any Markov chain Monte Carlo sampling. The map from the reference to the posterior is approximated using polynomial chaos expansion and is computed via stochastic optimization. Existence and uniqueness of the map are guaranteed by results from the optimal transport literature. The map approach is demonstrated on a variety of problems, ranging from inference of permeability fields in elliptic PDEs to benchmark high-dimensional spatial statistics problems such as inference in log-Gaussian Cox point processes.

In addition to its computational efficiency and parallelizability, advantages of the map approach include: providing clear convergence criteria and error measures, providing analytical expressions for posterior moments, evaluating at no additional computational cost the marginal likelihood/evidence (thus enabling model selection), the ability to generate independent uniformly-weighted posterior samples without additional model evaluations, and the ability to efficiently propagate posterior information to subsequent computational modules (thus enabling stochastic control).

In the context of data assimilation, Dr. Moselhy will present an optimal map algorithm for filtering of nonlinear chaotic dynamical systems. Such an algorithm is suited for a wide variety of applications including prediction of weather and climate. The main advantage of the algorithm is that it inherently avoids issues of sample impoverishment common to particle filters, since it explicitly represents the posterior as the push forward of a reference measure rather than with a set of samples.

December 13, 2012 - Russell Carden: Automating and Stabilizing the Discrete Empirical Interpolation Method for Nonlinear Model Reduction

The Discrete Empirical Interpolation Method (DEIM) is a technique for model reduction of nonlinear dynamical systems. It is based upon a modification to proper orthogonal decomposition, which is designed to reduce the computational complexity for evaluating the reduced order nonlinear term. The DEIM approach is based upon an interpolatory projection and only requires evaluation of a few selected components of the original nonlinear term. Thus, implementation of the reduced order nonlinear term requires a new code to be derived from the original code for evaluating the nonlinearity. Dr. Carden will describe a methodology for automatically deriving a code for the reduced order nonlinearity directly from the original nonlinear code. Although DEIM has been effective on some very difficult problems, it can under certain conditions introduce instabilities in the reduced model. Dr. Carden will present a problem that has proved helpful in developing a method for stabilizing DEIM reduced models.
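
The greedy index-selection step at the heart of DEIM is short enough to show directly; the sketch below is the textbook algorithm applied to a generic basis matrix (the snapshot data are random stand-ins, and none of this reflects Dr. Carden's automated code-generation or stabilization machinery).

    # DEIM interpolation-index selection from an orthonormal basis U of the
    # nonlinear-term snapshots.
    import numpy as np

    def deim_indices(U):
        """U: (n, m) basis columns; returns m interpolation row indices."""
        n, m = U.shape
        idx = [int(np.argmax(np.abs(U[:, 0])))]
        for l in range(1, m):
            c = np.linalg.solve(U[idx, :l], U[idx, l])  # match current basis at chosen rows
            r = U[:, l] - U[:, :l] @ c                  # residual of the next basis vector
            idx.append(int(np.argmax(np.abs(r))))
        return np.array(idx)

    rng = np.random.default_rng(3)
    snaps = np.sin(6 * rng.random((200, 20)))           # toy "nonlinear term" snapshots
    U, _, _ = np.linalg.svd(snaps, full_matrices=False)
    print(deim_indices(U[:, :5]))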

December 12, 2012 - Charlotte Kotas: Bringing Real-Time Array Signal Processing to the NVIDIA Tesla

Underwater acoustic detection of hostile targets at range requires increasingly computationally advanced algorithms as adversaries become quieter. This seminar will discuss the mathematics behind one such algorithm and some of the challenges associated with modifying it to work in a real-time networked environment. The algorithm was modified from a sequential MATLAB formulation to a parallel CUDA Fortran formulation designed to run on an NVIDIA Tesla C2050 processor. Speedups of greater than 50x were observed over comparable computational sections.

December 6, 2012 - Shuaiwen "Leon" Song: Power, Performance and Energy Models and Systems for Emergent Architectures

Massive parallelism combined with complex memory hierarchies and heterogeneity in high-performance computing (HPC) systems forms a barrier to efficient application and architecture design. The performance achievements of the past must continue over the next decade to address the needs of scientific simulations. However, building an exascale system by 2022 that uses less than 20 megawatts will require significant innovations in power and performance efficiency. Prior to this work, the fundamental relationships between power and performance were not well understood. Our analytical modeling approach allows users to quantify the relationship between power and performance at scale by enabling study of the effects of machine- and application-dependent characteristics on system energy efficiency. Our model helps users isolate root causes of energy or performance inefficiencies and develop strategies for scaling systems to maintain or improve efficiency. I will also show how this methodology can be extended and applied to model power and performance in heterogeneous GPU-based architectures.
 
Shuaiwen "Leon" Song is a PhD candidate in the Computer Science department of Virginia Tech. His primary research interests fall broadly within the area of High Performance Computing (HPC) with a focus on power and performance analysis and modeling for large scale homogeneous and heterogeneous parallel architectures and runtime systems. He is a recipient of the 2011 Paul E. Torgersen Award for Graduate Student Research Excellence and in 2011 was an Institute for Scientific Computing Research (ISCR) Scholar at Lawrence Livermore National Laboratory. His work has been published in conferences and journals including IPDPS, IEEE Cluster, PACT, MASCOTS, IEEE TPDS, and IJHPCA.

December 6, 2012 - Miroslav Stoyanov: Gradient Based Dimension Reduction Approach for Stochastic Partial Differential Equations

A dimension reduction approach is considered for uncertainty quantification, where we use gradient information to partition the uncertainty domain into “active” and “passive” subspaces, with the “passive” subspace characterized by near-zero variance of the quantity of interest. We present a way to project the model onto the low-dimensional “active” subspace and solve the resulting problem using conventional techniques. We derive rigorous error bounds for the projection algorithm and show convergence in the $L^1$ norm.
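
One common way to realize this kind of gradient-based splitting is through the eigendecomposition of the gradient outer-product matrix; the sketch below does exactly that for a toy quantity of interest whose last two inputs are inactive (the function, sample count, and threshold are assumptions for illustration, not the talk's projection algorithm or error bounds).

    # Estimate C = E[grad f grad f^T] from sampled gradients and split its
    # eigenvectors into "active" (large eigenvalue) and "passive" (near-zero) blocks.
    import numpy as np

    rng = np.random.default_rng(4)

    def grad_f(x):
        # toy quantity of interest f(x) = sin(x0 + 0.5*x1); x2 and x3 are inactive
        g = np.zeros_like(x)
        g[0] = np.cos(x[0] + 0.5 * x[1])
        g[1] = 0.5 * np.cos(x[0] + 0.5 * x[1])
        return g

    samples = rng.uniform(-1, 1, (500, 4))
    C = np.mean([np.outer(grad_f(x), grad_f(x)) for x in samples], axis=0)
    eigval, eigvec = np.linalg.eigh(C)                      # ascending eigenvalues

    active = eigvec[:, eigval > 1e-8 * eigval.max()]        # basis of the active subspace
    print("active subspace dimension:", active.shape[1])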

December 5, 2012 - Barbara Chapman: Enabling Exascale Programming:  The Intranode Challenge

As we continue to debate the best way to program emerging generations of leadership-class hardware, it is imperative that we do not ignore the more traditional paths.
Dr. Chapman's presentation considers some of the ways in which today's intranode programming models may help us migrate legacy application code.

December 5, 2012 - Andrew Christlieb: An Implicit Maxwell Solver Based on Method of Lines Transpose

Fast summation methods have been successfully used in a range of plasma applications. However, in the case of moving point charges, direct application of fast summation methods in the time domain requires the use of retarded potentials. In practice, this means that every time a point charge moves in a simulation, it leaves behind an image charge that becomes a source term for all time. Hence, at each time step the number of points in the simulation grows with the number of particles being simulated.

In this talk, Dr. Christlieb will present a new approach to Maxwell's equations based on the method of lines transpose. The method starts by expressing Maxwell's equations in second-order form, and then the time operator is discretized. The resulting implicit system is then solved using integral methods. This process is known as the method of lines transpose. This approach pushes the time history into a volume integral, which does not grow in complexity with time. To efficiently solve the boundary integral, Dr. Christlieb will explain the ADI method developed, which is combined with an $O(N)$ solver for the 1D boundary integrals and is competitive with explicit time stepping methods. Because the new method is implicit, this approach does not have a CFL restriction. Further, because the approach is based on an integral formulation, the new method easily encompasses complex geometry with no special modification. Dr. Christlieb will present preliminary results of this method applied to wave propagation and some basic Maxwell examples.

November 27, 2012 - Charles Jackson: Metrics for Climate Model Validation

A “valid” model is a model that has been tested for its intended purpose. In the Bayesian formulation, the “log-likelihood” is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues in formulating the log-likelihood is how one should account for biases, because not all biases affect predictions of quantities of interest. Dr. Jackson makes use of a 165-member ensemble of CAM3.1/slab ocean climate models with different parameter settings to think through the issues that are involved with predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular, Dr. Jackson uses multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover which fields and regions matter to the model's sensitivity. What is found is that the differences that matter can be a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. Dr. Jackson will discuss the implications of this result.

November 15, 2012 - Erich Foster: Finite Elements for the Quasi-Geostrophic Equations of the Ocean

Erich Foster will present a conforming finite element (FE) discretization of the pure stream function formulation of the quasi-geostrophic equations (QGE), a commonly used model for the large-scale wind-driven ocean circulation. The pure stream function form of the QGE is a fourth-order PDE and therefore requires a $C^1$ FE discretization to be conforming; thus the Argyris element, a $C^1$ FE with 21 degrees of freedom, was chosen. Optimal error estimates for this discretization will be presented. Although the QGE is a simplified model of the ocean, resolving all scales can still be computationally expensive, so numerical methods such as the two-level method are indispensable for time-sensitive projects. A two-level method, together with an optimal error estimate for the two-level method applied to the conforming FE discretization of the pure stream function form of the QGE, will be presented, and its computational efficiency will be demonstrated.
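To make the conformity requirement concrete, one common way of writing the weak form of the stationary pure stream function QGE is sketched below; the sign and nondimensionalization conventions are assumptions and may differ from those used in the talk. With $Re$ and $Ro$ the Reynolds and Rossby numbers, $F$ the wind forcing, and $J(a,b) = a_x b_y - a_y b_x$ the Jacobian, one seeks $\psi \in H^2_0(\Omega)$ such that

$$\frac{1}{Re}\,(\Delta\psi, \Delta\chi) + \big(J(\psi, \Delta\psi), \chi\big) - \frac{1}{Ro}\left(\frac{\partial\psi}{\partial x}, \chi\right) = \frac{1}{Ro}\,(F, \chi) \quad \text{for all } \chi \in H^2_0(\Omega).$$

The biharmonic term, integrated by parts twice into $(\Delta\psi, \Delta\chi)$, is what places the trial and test functions in $H^2$, and a conforming discretization of $H^2$ requires $C^1$ elements such as Argyris.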

October 25, 2012 - Shi Jin: Asymptotic-Preserving Schemes for Boltzmann Equation and Related Problems with Stiff Sources

Dr. Shi Jin will propose a general framework for designing asymptotic-preserving schemes for the Boltzmann kinetic equation and related equations. Numerically solving these equations is challenging due to the nonlinear stiff collision (source) terms induced by a small mean free path or relaxation time. Dr. Jin will propose to penalize the nonlinear collision term by a BGK-type relaxation term, which can be solved explicitly even when discretized implicitly in time. Moreover, the BGK-type relaxation operator helps to drive the density distribution toward the local Maxwellian, and thus naturally yields an asymptotic-preserving scheme in the Euler limit. The scheme so designed does not need any nonlinear iterative solver or the use of Wild sums. It is uniformly stable with respect to the (possibly small) Knudsen number, and it can capture the macroscopic fluid dynamic (Euler) limit even when the small scale determined by the Knudsen number is not numerically resolved. Dr. Jin will show how this idea can be applied to other collision operators, such as the Landau-Fokker-Planck operator, the Uehling-Uhlenbeck model, and the kinetic-fluid model of disperse multiphase flows.
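The penalization idea can be sketched with a deliberately simplified, space-homogeneous example. The code below is not Dr. Jin's; it only illustrates the implicit-explicit update obtained when the stiff term $Q(f)/\varepsilon$ is split as $[Q(f) - \lambda(M - f)]/\varepsilon + \lambda(M - f)/\varepsilon$ and the BGK penalty is treated implicitly. For brevity the "true" collision operator here is itself taken to be of BGK form, so the penalization is exact; in practice $Q$ would be the full Boltzmann operator evaluated explicitly. All parameters are illustrative assumptions.

    import numpy as np

    # Velocity grid and a non-equilibrium initial distribution (space homogeneous).
    v = np.linspace(-6.0, 6.0, 128)
    dv = v[1] - v[0]
    f = np.exp(-(v - 1.5) ** 2) + 0.5 * np.exp(-(v + 1.5) ** 2)

    def maxwellian(f):
        """Maxwellian with the same density, momentum, and temperature as f."""
        rho = np.sum(f) * dv
        u = np.sum(f * v) * dv / rho
        T = np.sum(f * (v - u) ** 2) * dv / rho
        return rho / np.sqrt(2.0 * np.pi * T) * np.exp(-(v - u) ** 2 / (2.0 * T))

    def Q(f):
        # Stand-in collision operator (BGK form); a real solver would evaluate
        # the full Boltzmann operator here, explicitly.
        return maxwellian(f) - f

    eps, lam, dt = 1e-6, 1.0, 0.1     # stiff regime: dt >> eps
    for n in range(50):
        M = maxwellian(f)             # moments are conserved, so M^{n+1} = M^n here
        # IMEX update: explicit in Q(f) - lam*(M - f), implicit in the BGK penalty.
        f = (f + (dt / eps) * (Q(f) - lam * (M - f)) + (dt / eps) * lam * M) / (1.0 + dt * lam / eps)

    print("distance to Maxwellian:", np.max(np.abs(f - maxwellian(f))))

The update remains stable and drives $f$ to the local Maxwellian even though the time step is five orders of magnitude larger than the relaxation time, which is the asymptotic-preserving property the framework is designed to deliver.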

October 24, 2012 - Shi Jin: Semiclassical Computation of High Frequency Waves in Heterogeneous Media

Dr. Shi Jin will introduce semiclassical Eulerian methods that are efficient in computing high frequency waves through heterogeneous media. The method is based on the classical Liouville equation in phase space, with Hamiltonians that are discontinuous due to barriers or material interfaces. Dr. Jin will provide physically relevant interface conditions consistent with the correct transmissions and reflections, and then build these interface conditions into the numerical fluxes. This method allows the resolution of high frequency waves without numerically resolving the small wavelengths, and it captures the correct transmissions and reflections at the interfaces. The method can also be extended to deal with diffraction and quantum barriers. Dr. Jin will also discuss an Eulerian Gaussian beam formulation, which can compute caustics more accurately.

October 09, 2012 - Christian Ringhofer: Charged Particle Transport in Narrow Geometries under Strong Confinement

Kinetic transport in narrow tubes and thin plates, involving scattering of particles off a background, is modeled by classical and quantum mechanical sub-band type macroscopic equations for the density of particles (ions). The result, on large time scales, is a diffusion equation with the projection of the (asymptotically conserved) energy tensor onto the confined directions as an additional free variable. Classical transport of ions through protein channels and quantum transport in thin films are discussed as examples of the application of this methodology.

October 05, 2012 - Amilcare Porporato: Stochastic Soil Moisture Dynamics: From Soil-Plant Biogeochemistry and Land-Atmosphere Interactions to Sustainable Use of Soil and Water

The soil-plant-atmosphere system is characterized by a large number of interacting processes with a high degree of unpredictability and nonlinearity. These elements of complexity, while making a full modeling effort extremely daunting, are also responsible for the emergence of characteristic behaviors. Dr. Porporato's group at Duke University models these processes by means of minimalist models that describe the main deterministic components of the system and represent the high-dimensional ones (hydroclimatic variability, and rainfall in particular) with suitable stochastic terms. The solution of the stochastic soil water balance allows a probabilistic description of several ecohydrological processes, including ecosystem response and plant productivity as well as soil organic matter and nutrient cycling dynamics. Dr. Porporato will also discuss how such an approach can be extended to include land-atmosphere feedbacks and their impact on convective precipitation. Dr. Porporato will conclude with a brief discussion of how these methods can be employed to address quantitatively the sustainable management of water and soil resources, including optimal irrigation and fertilization, phytoremediation, and soil salinization risk.
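A minimal version of a stochastic soil water balance can be written as $n Z_r\, ds/dt = R(t) - \rho(s)$, with rainfall $R(t)$ arriving as a marked Poisson process. The Monte Carlo sketch below is generic and not Dr. Porporato's model; the linear evapotranspiration loss and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative parameters (daily time step, relative soil moisture 0 <= s <= 1).
    nZr = 300.0        # active soil water storage, mm (porosity * rooting depth)
    lam = 0.2          # mean rainfall arrival rate, events per day (Poisson)
    alpha = 15.0       # mean rainfall depth per event, mm (exponential marks)
    ET_max = 4.5       # maximum daily evapotranspiration loss, mm
    dt, n_days = 1.0, 10_000

    s = 0.5
    trace = np.empty(n_days)
    for day in range(n_days):
        # Marked Poisson rainfall: random number of events, exponential depths.
        rain = rng.exponential(alpha, rng.poisson(lam * dt)).sum()
        # Simple linear loss: evapotranspiration proportional to soil moisture.
        loss = ET_max * s * dt
        s = s + (rain - loss) / nZr
        s = min(max(s, 0.0), 1.0)    # excess water is lost to runoff/drainage
        trace[day] = s

    # The long-run histogram of `trace` approximates the steady-state probability
    # density of soil moisture, the object analyzed in the stochastic framework.
    print("mean soil moisture:", trace.mean(), " std:", trace.std())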