2010

- FEBRUARY -

WHO: Prof. James A. Sethian, Department of Mathematics, University of California, Berkeley

WHAT: Advances in Advancing Interfaces:  Efficient Algorithms for Inkjet Plotters, Coating Rollers, Semiconductors, and Medical Scanners

WHEN: Friday, February 5, 2010 2:00 p.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Propagating interfaces occur in a wide variety of settings, and include ocean waves, burning flames, and material boundaries. Less obvious boundaries are equally important, and include iso-intensity contours in images, handwritten characters, and shapes against boundaries. In addition, some static problems can be recast as advancing fronts, including robotic navigation and finding shortest paths on contorted surfaces.

One way to frame moving interfaces is to recast them as solutions to fixed domain Eulerian partial differential equations, and this has led to a collection of PDE-based techniques, including level set methods, fast marching methods, and ordered upwind methods. These techniques easily accommodate merging boundaries and the delicate 3D physics of interface motion. In many settings, they have proven valuable.
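
For a flavor of how such schemes look in practice, here is a minimal first-order upwind level set step on a uniform grid; a hedged sketch in Python/NumPy, where the function name, grid, and parameters are illustrative and not the speaker's code:

    import numpy as np

    def level_set_step(phi, F, dx, dt):
        """One first-order Godunov upwind step of phi_t + F*|grad phi| = 0.
        phi is a 2D array of level set values; the interface is {phi == 0}.
        Real codes add reinitialization, narrow banding, and higher order."""
        # One-sided differences (periodic via roll; fine away from edges).
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx    # backward in x
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward in x
        dym = (phi - np.roll(phi, 1, axis=1)) / dx
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx
        if F > 0:   # front moves outward: pick the upwind one-sided slopes
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                           + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        else:       # front moves inward
            grad = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2
                           + np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
        return phi - dt * F * grad

    # Shrink a circle of radius 0.3 under unit inward normal speed.
    x = np.linspace(-1, 1, 101)
    X, Y = np.meshgrid(x, x)
    phi = np.sqrt(X**2 + Y**2) - 0.3
    dx = x[1] - x[0]
    for _ in range(50):
        phi = level_set_step(phi, F=-1.0, dx=dx, dt=0.5 * dx)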

In this talk, we will focus on industrial applications of these techniques. The goal is to give an overview of some of the computational complexities involved in optimizing inkjet plotters and microfluidics, simulating semiconductor manufacturing, and extracting anatomical structures through image segmentation in medical scanners.

Host:  Barney Maccabe


WHO: Yousef Saad - University of Minnesota - Department of Computer Science and Engineering

WHAT: Preconditioning Techniques for Highly Indefinite Systems

WHEN: Friday, February 5, 2010 10:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Many practical situations require the solution of highly indefinite linear systems of equations.  Among these are systems arising from the Helmholtz equation and the very irregularly structured systems obtained, for example, from circuit simulation.

This talk will discuss preconditioning techniques which emphasize robustness. One such technique is based on combining two-sided permutations with a multilevel approach.  The nonsymmetric permutation technique exploits a greedy strategy to put large entries of the matrix on the diagonal of the upper leading submatrix.  This leads to an effective incomplete factorization preconditioner for general nonsymmetric, irregularly structured, sparse linear systems.  The algorithm is implemented in a multilevel fashion and borrows from the Algebraic Recursive Multilevel Solver (ARMS) framework.  Preliminary parallel implementations using a Domain Decomposition framework will also be discussed, and preliminary illustrations with the Helmholtz equation and the Maxwell equations will be reported.
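
A toy illustration of the greedy permutation idea follows; names and the dense-matrix setting are illustrative, and the actual ARMS-style algorithm operates on sparse structures with thresholds and multilevel recursion:

    import numpy as np

    def greedy_diagonal_permutation(A):
        """Greedily pick row/column pairs so large entries land on the
        leading diagonal, a toy version of the nonsymmetric two-sided
        permutation strategy. Returns orderings p, q so that A[p][:, q]
        carries the largest available entries on its diagonal."""
        n = A.shape[0]
        free_rows, free_cols = set(range(n)), set(range(n))
        p, q = [], []
        # Visit candidate entries from largest magnitude down.
        order = np.argsort(-np.abs(A), axis=None)
        for k in order:
            i, j = divmod(int(k), n)
            if i in free_rows and j in free_cols:
                p.append(i); q.append(j)
                free_rows.remove(i); free_cols.remove(j)
                if not free_rows:
                    break
        return np.array(p), np.array(q)

    A = np.random.randn(6, 6)
    p, q = greedy_diagonal_permutation(A)
    B = A[p][:, q]   # permuted matrix with a large diagonal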

Host: Bobby Philip (philipb@ornl.gov)

NOTE:  If you would like to request a meeting with the visitor, please contact Tammy or Bobby.


2009

- MAY -

WHO: Jungho Lee - Courant Institute of Mathematical Sciences, Department of Mathematics, New York University

WHAT: A Hybrid Domain Decomposition Method Based on One-Level FETI and BDDC Algorithms and its Applications to Contact Problems

WHEN: Thursday, May 14, 2009 10:00 a.m.

WHERE: ORNL, Bldg. 5700, Room L202

A three-level domain decomposition is considered. Bodies in contact with each other are divided into subdomains, each of which is a union of elements.

An approach based on FETI (Finite Element Tearing and Interconnecting) algorithms, as pursued by the engineering community, does not lead to an algorithm that is scalable with respect to the number of subdomains in each body. Instead, we consider a new method which combines the one-level FETI method and the BDDC (Balanced Domain Decomposition with Constraints) method and is scalable with respect to the number of subdomains. This method is based on a saddle point formulation, which allows the use of inexact solvers.
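
To see why a saddle point formulation tolerates inexact solvers, consider the toy Uzawa iteration below, a sketch under simplifying assumptions (A symmetric positive definite, a fixed dual step tau small enough for convergence); this is not the FETI/BDDC method of the talk:

    import numpy as np

    def cg_inexact(A, b, x, steps=5):
        # A few conjugate gradient steps: a deliberately inexact solver.
        r = b - A @ x
        p = r.copy()
        rr = r @ r
        for _ in range(steps):
            if rr < 1e-30:            # already converged
                break
            Ap = A @ p
            alpha = rr / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            rr_new = r @ r
            p = r + (rr_new / rr) * p
            rr = rr_new
        return x

    def inexact_uzawa(A, B, f, g, tau, outer=100):
        # Uzawa iteration for [[A, B^T], [B, 0]] [u; lam] = [f; g].
        # Only an approximate primal solve is needed each sweep.
        u = np.zeros(A.shape[0])
        lam = np.zeros(B.shape[0])
        for _ in range(outer):
            u = cg_inexact(A, f - B.T @ lam, u)   # inexact primal solve
            lam = lam + tau * (B @ u - g)         # dual ascent update
        return u, lam

    A = 2.0 * np.eye(3)
    B = np.ones((1, 3))               # one constraint: u1 + u2 + u3 = 0
    u, lam = inexact_uzawa(A, B, f=np.array([1.0, 0.0, 0.0]),
                           g=np.zeros(1), tau=0.5)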

Host: John Turner, turnerja@ornl.gov


 

WHO: Beth A. Wingate, Los Alamos National Laboratory

WHAT: Separation of Time Scales at High Latitudes in Rotating and Stratified Flows

WHEN: Tuesday, May 12, 2009, 2:00 p.m.

WHERE: Building 5700, D307

The dynamics of the Arctic ocean can be characterized by a close proximity to the axis of rotation and weaker stratification than in other parts of the ocean. In this talk I show that ocean dynamics in regions like this can be described by new reduced equations whose character is very different from quasi-geostrophy. The theory, based on a slow/fast decomposition for the method of multiple scales, states that:

  1. The horizontal dynamics reduces to the 2D Navier-Stokes equations and has two conserved quantities, the horizontal kinetic energy and the vertical vorticity (see the formulas after this list). This implies that the Arctic could have numerous barotropic vortices.
  2. The flow is non hydrostatic in a special way. There is a component of the vertical velocity, but its dynamics is two-dimensional (vertically integrated) and forced by the vertical integral of the buoyancy. There is a conservation law for these dynamics that is an area integral over the vertical kinetic energy and the buoyancy (potential energy).
  3. A key part of this is that the ratio of the slow total energy to the total energy remains constant in the absence of dissipation, but the ratio of the slow potential enstrophy to the total potential enstrophy goes to 1. This suggests that the potential enstrophy is the more important quantity to 'get right' and that there may be some important consequences for the turbulence cascade in the Arctic.
  4. Some of these results are supported by the observations of Woodgate et al. (2001) who observed numerous barotropic (depth about 1000 meters) vortices in the Arctic.
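
For reference, the integral invariants of 2D incompressible flow referred to in item 1 can be written in the standard form (an editorial gloss in the inviscid limit, with horizontal velocity u_h and vertical vorticity omega; this is not taken verbatim from the talk):

    E = \tfrac{1}{2}\int_{\Omega} |\mathbf{u}_h|^2 \, dA ,
    \qquad
    Z = \tfrac{1}{2}\int_{\Omega} \omega^2 \, dA ,
    \qquad
    \frac{dE}{dt} = \frac{dZ}{dt} = 0 .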

************************

Beth Wingate graduated from the University of Michigan in 1996 where she studied climate science, scientific computing, and applied mathematics. She was an Advanced Studies Postdoc at the National Center for Atmospheric Research and has spent the rest of her professional career at LANL where she is a member of the COSIM project and the Center for Nonlinear Studies. Her work focuses on theory and modeling of high latitude dynamics and the development of numerical methods for ocean models.

Dr. Wingate is hosted by Kate Evans.

 

- APRIL -

WHO: Bill Michener - University of New Mexico

WHAT: DataONE: Enabling Data-Intensive Biological and Environmental Research through Cyberinfrastructure

WHEN: Friday, April 17, 2009, 10:00 a.m.

WHERE: Building 5100, Lecture Hall (Room 128), Joint Institute for Computational Sciences

Earth's environmental challenges are daunting. Human populations are increasing and shifting geographically; the demand for freshwater is outstripping supply in many regions; and regional and global climate change are affecting natural resources, disturbance regimes, economies, and quality of life. Addressing these problems requires that we change the ways that we do science; harness the enormous volume of existing data; develop new methods to combine, analyze, and visualize diverse data resources; create new, long-lasting cyberinfrastructure; and re-envision many of our longstanding institutions. DataONE (Observation Network for Earth) is a new virtual organization whose goal is to enable new science and knowledge creation through universal access to data about life on earth and the environment that sustains it. DataONE is designed to be the foundation of innovative environmental science through a distributed framework and sustainable cyberinfrastructure that meets the needs of science and society for open, persistent, robust, and secure access to well-described and easily discovered Earth observational data. Supported by the U.S. National Science Foundation, DataONE will ensure the preservation of and access to multi-scale, multi-discipline, and multi-national science data. DataONE is transdisciplinary: making biological data available from the genome to the ecosystem; making environmental data available from atmospheric, ecological, hydrological, and oceanographic sources; providing secure and long-term preservation and access; and engaging scientists, land managers, policy makers, students, educators, and the public through logical access and intuitive visualizations. Most importantly, DataONE will serve a broad range of science domains both directly and through interoperability with the DataONE distributed network.

Our Speaker: Bill Michener is Professor and Director of e-Science Initiatives for University Libraries at the University of New Mexico. He has a PhD in Biological Oceanography from the University of South Carolina and has published extensively in the ecological sciences and information sciences. During the past decade he has directed several large interdisciplinary research programs and cyberinfrastructure projects, including the NSF Biocomplexity Program, the Development Program for the NSF-funded Long-Term Ecological Research Network, and cyberinfrastructure projects that focus on developing information technologies for the biological, ecological, and environmental sciences.


Hosts:  Bob Cook, phone 574-7319 and John Cobb, phone 576-5439

 

- MARCH -

WHO: Dr. Andreas Dedner - University of Freiburg, Germany

WHAT: hp Adaptive Stabilization of the Discontinuous Galerkin Method for Evolution Equations

WHEN: 10 am, Tuesday, March 10, 2009

WHERE: Building 5700, Room L204

In this talk we present an hp-adaptive scheme in space and time for the discretization of systems of general convection-diffusion equations with source terms. We base our method on the higher order Discontinuous Galerkin method in space and explicit methods in time, using general parallel grid structures with h-adaptivity. Our focus is on the convection dominated case, so we discuss approaches for gradient limiting and p-adaptivity to stabilize the scheme in regions of strong gradients or discontinuities. The basis of the hp-adaptivity is an a posteriori error estimate for the semi-discrete method. By combining several "simple" spatial operators, it is possible to use our method to solve quite complex problems.
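
A schematic of the hp-decision logic driven by such an error estimate; a toy sketch only, where the element class, thresholds, and smoothness flag are illustrative and not Dr. Dedner's scheme:

    P_MAX = 4

    class Element:
        def __init__(self, error, smooth, p=2):
            self.error, self.smooth, self.p = error, smooth, p
            self.children = []

        def refine(self):
            # Split into two children, each inheriting half the indicator.
            self.children = [Element(self.error / 2, self.smooth, self.p),
                             Element(self.error / 2, self.smooth, self.p)]

    def hp_adapt(elements, tol):
        for e in [e for e in elements if e.error > tol]:
            if e.smooth:
                e.p = min(e.p + 1, P_MAX)  # p-enrichment where smooth
            else:
                e.refine()                 # h-refinement at discontinuities,
                e.p = max(e.p - 1, 1)      # with lowered order for stability

    mesh = [Element(0.3, smooth=True), Element(0.9, smooth=False)]
    hp_adapt(mesh, tol=0.5)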

For the implementation of the scheme we use the free software package DUNE (dune.mathematik.uni-freiburg.de). In this package, general interfaces for grid-based numerical methods are described, and a variety of grid structures and numerical schemes are implemented. The interface-based approach allows the coding of numerical schemes independent of the grid structure and the spatial dimension. Parallelization, load balancing, and grid adaptation are part of the general interface and can thus be easily used in the numerical scheme. Our approach uses modern software design in C++ to combine flexibility with high efficiency.

Dr. Dedner is hosted by Ralf Deiterding


WHO: Michael Wolf - University of Illinois at Urbana-Champaign (http://www.cs.uiuc.edu/homes/mmwolf)

WHAT: Optimizing Matrix-Vector Multiplication with Combinatorial Techniques

WHEN: Monday, March 9, 2009 10:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Abstract:

In this talk, I will describe my work on optimizing matrix-vector multiplication with combinatorial techniques. My research has focused on two different combinatorial scientific computing topics related to matrix-vector multiplication.

For the first topic, I address the optimization of serial matrix-vector multiplication for relatively small, dense matrices, which can be used in finite element assembly. Previous work showed that combinatorial optimization of matrix-vector multiplication can lead to faster evaluation of finite element stiffness matrices by removing redundant operations. Based on a graph model characterizing row relationships, a more efficient set of operations can be generated to perform matrix-vector multiplication. I improved this graph model by extending the set of binary row relationships and using hypergraphs to model more complicated row relationships, yielding significantly improved results over previous models.
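
The simplest instance of exploiting row relationships is reusing the dot product of exactly duplicated rows. A toy sketch follows; the talk's graph and hypergraph models capture far richer relations (shared partial sums, scaled rows, and so on), and the function name is illustrative:

    import numpy as np

    def matvec_dedup(A, x):
        """Matrix-vector product that skips redundant work by caching
        dot products of identical rows."""
        cache = {}                      # row bytes -> computed dot product
        y = np.empty(A.shape[0])
        for i, row in enumerate(A):
            key = row.tobytes()
            if key not in cache:
                cache[key] = row @ x    # compute once per distinct row
            y[i] = cache[key]
        return y

    # Finite-element-style matrices with repeated rows benefit most.
    A = np.array([[1., 2.], [1., 2.], [3., 4.]])
    print(matvec_dedup(A, np.array([1., 1.])))   # [3. 3. 7.]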

For the second topic, I address parallel matrix-vector multiplication for large sparse matrices. Parallel sparse matrix-vector multiplication is a particularly important numerical kernel in computational science. We have focused on optimizing the parallel performance of this operation by reducing the communication volume through smarter two-dimensional matrix partitioning. We have developed and implemented a recursive algorithm based on nested dissection to partition structurally symmetric matrices. In general, this method has proven to be the best available for partitioning structurally symmetric matrices (when considering both volume and partitioning time) and has shown great promise for information retrieval matrices.

Host: Ed D'Azevedo (dazevedoef@ornl.gov)

 


WHO: Christine Morin - l'Institut National de Recherche en Informatique et en Automatique

WHAT: XtreemOS Technology and the Emerging Cloud Computing Era

WHEN: 10 am, Thursday, March 5, 2009

WHERE: Building 5100, Auditorium

The XtreemOS distributed operating system is developed in the framework of the XtreemOS project, partially funded by the European Commission and involving eighteen academic and industrial partners (http://www.xtreemos.eu). The XtreemOS system provides for the Grid what a traditional operating system offers for a single computer: abstraction from the hardware and secure resource sharing between different users. It thus simplifies the work of users belonging to virtual organizations by giving them the illusion of using a traditional computer, and releasing them from dealing with the complex resource management issues of a typical Grid environment. When a user runs an application on XtreemOS, the operating system automatically finds all resources necessary for the execution, configures users' credentials on the selected resources, and starts the application. The XtreemOS operating system provides three major distributed services to users: application execution management (providing scalable resource discovery and job scheduling for distributed interactive applications), data management (accessing and storing data in XtreemFS, a POSIX-like file system spanning the Grid), and virtual organization management (building and operating dynamic virtual organizations).

In this talk, we will present the fundamental design principles of the XtreemOS distributed operating system for achieving scalability, transparency, interoperability, dependability and security. We will illustrate users' experience with the XtreemOS system on some basic usage scenarios. In the last part of the talk, we will discuss the positioning of XtreemOS technology with regard to the emerging cloud computing era. More specifically, we will describe a scenario where XtreemOS could help users take full advantage of clouds in a global environment including their own resources and cloud resources. We will also show how cloud service providers could use the XtreemOS system to manage their underlying infrastructure.

Christine Morin received her engineering degree from the Institut National des Sciences Appliquées (INSA), of Rennes (France), in 1987 and Master and PhD degrees in Computer Science from the University of Rennes I in 1987 and 1990, respectively. In March 1998, she received her Habilitation à Diriger des Recherches in Computer Science from the Université de Rennes 1.

Since 1991, she has held a researcher position at INRIA and has carried out her research activities at IRISA/INRIA-Rennes. Since January 2000, she has been a member of the INRIA PARIS project-team contributing to the programming of large scale parallel and distributed systems. From October 2000 to August 2002, she held a temporary assistant professor position at IFSIC (University of Rennes I). Since September 2002, she has held a senior researcher position at INRIA. Since 1999, she has led research activities on single system image operating systems for high performance computing in clusters, resulting in the Kerrighed cluster OS, now developed in open source (http://www.kerrighed.org). She is the scientific coordinator of the XtreemOS project, a 4-year European integrated project started in June 2006 (http://www.xtreemos.eu). She is a co-founder of the Kerlabs start-up, created in 2006 to exploit Kerrighed technology (http://www.kerlabs.com). Her research interests are in operating systems, distributed systems, fault tolerance, and cluster and grid computing. She is the author of more than 80 papers in refereed international journals and conferences. She is a member of ACM and IEEE.

Dr. Morin will be hosted by Geoffroy Vallee

- FEBRUARY -

WHO: Christopher G. Baker - Sandia National Laboratories - (http://www.scs.fsu.edu/cbaker)

WHAT: Riemannian Optimization and Applications to Numerical Linear Algebra

WHEN: Friday, February 27, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Abstract:

Many scientific and engineering problems can be interpreted as the optimization of a smooth function over a Riemannian manifold. Eigenvalue problems, tensor factorization, pose estimation, simultaneous diagonalization, and optimal reduced order modeling are just a few applications from diverse fields such as linear algebra, computer vision, signal processing and data mining. Current research activity in this area involves the transfer of theory and methods from Euclidean optimization to a Riemannian setting. Employed successfully, geometric information gleaned from the manifold representation enables a more efficient approach than traditional constrained Euclidean optimization.

In this talk, I will present recent work regarding several trust-region methods on Riemannian manifolds. These methods are discussed in the context of the family of "retraction-based" Riemannian optimization methods, a novel paradigm which significantly increases the algorithmic possibilities for Riemannian optimization. Emphasis will be placed on flexibility of the methods and the preservation of significant convergence theory. The usefulness of the methods will be demonstrated on important problems from large-scale numerical linear algebra, including large-scale eigenvalue and singular value problems.
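
The retraction idea is easiest to see in the simplest Riemannian setting: gradient descent on the unit sphere. The sketch below is a far simpler relative of the trust-region methods in the talk, with illustrative names and parameters:

    import numpy as np

    def sphere_opt(A, iters=500, step=0.1):
        """Minimize f(x) = x^T A x over ||x|| = 1 (the minimizer is the
        eigenvector of the smallest eigenvalue) by Riemannian gradient
        descent with the normalization retraction."""
        x = np.random.randn(A.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(iters):
            egrad = 2 * A @ x                  # Euclidean gradient
            rgrad = egrad - (x @ egrad) * x    # project to tangent space
            x = x - step * rgrad               # step in the tangent plane
            x /= np.linalg.norm(x)             # retraction: back to sphere
        return x

    A = np.diag([1.0, 3.0, 5.0])
    x = sphere_opt(A)    # converges (up to sign) to the first basis vector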

Host: Ed D'Azevedo (dazevedoef@ornl.gov)


WHO: Yi Sun - Courant Institute of Mathematical Sciences, New York University (http://www.cims.nyu.edu/~yisun/)

WHAT: Network Dynamics of Hodgkin-Huxley Neurons

WHEN: February 26, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5700, Room D307

Abstract:

The reliability and predictability of neuronal network dynamics is a central question in neuroscience. We present a numerical analysis of the dynamics of all-to-all pulse-coupled Hodgkin-Huxley (HH) neuronal networks. Since this is a non-smooth dynamical system, we propose a pseudo-Lyapunov exponent (PLE) that captures the long-time predictability of HH neuronal networks; the PLE captures the dynamical regimes of the network very well. Furthermore, we present an efficient library-based numerical method for simulating HH neuronal networks. Our pre-computed high resolution data library allows us to avoid resolving the spikes in detail and to use large numerical time steps for evolving the HH neuron equations. Using the library-based method, we can evolve the HH networks with time steps one order of magnitude larger than the typical time steps used for resolving the trajectories without the library, while achieving comparable resolution in statistical quantifications of the network activity. Moreover, the large time steps permitted by the library method can overcome the stability requirement of standard ODE methods for the original dynamics.

Host: Ed D'Azevedo (dazevedoef@ornl.gov)


WHO: Cory Hauck - Los Alamos National Laboratory (http://cnls-www.lanl.gov/External/people/Cory_Hauck.php)

WHAT: Model Reduction and Asymptotic Preserving Numerical Methods for Kinetic Transport Equations

WHEN: February 24, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Abstract:

Kinetic transport equations are used to describe the evolution of many-particle systems in a variety of physical applications. Because of the large phase space, simulation of these equations requires new approaches to model reduction and numerical discretization. In this talk, I will present a general overview of these topics and then present details of recent work in the context of linear transport equations.

Host: Ed D'Azevedo (dazevedoef@ornl.gov)


WHO: Jin Xu - Argonne National Laboratory (http://www.mcs.anl.gov/about/people_detail.php?id=690)

WHAT: Scientific Computing with High Order Methods

WHEN: February 23, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Abstract:

Scientific computing is becoming increasingly important in science and engineering, and efficient numerical methods are essential to its success. In this talk, research on developing efficient numerical methods, especially high order methods, to solve various partial differential equations will be presented. These methods, as well as scalable parallel models, have been applied to different fields, such as fluid dynamics, beam dynamics, and electromagnetics. High order methods have demonstrated their power and advantages over other methods. Both traditional and newly emerged high order methods will be discussed. Physical models, simulation results, and challenges in these fields will be presented and discussed. Developing more efficient numerical methods is of great value to academia, and applying them to current and new scientific areas holds great promise.

Host: Ed D'Azevedo (dazevedoef@ornl.gov)


WHO: Xinfeng Liu - University of California at Irvine (http://math.uci.edu/~xliu1/)

WHAT: Computational Studies for Turbulent Mixing and Cell Signaling

WHEN: February 20, 2009 - 11:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Abstract:

Many systems in engineering and biology involve moving interfaces or boundaries. The front tracking method is one of the most accurate and efficient computational approaches for studying such systems. A main challenge in developing front tracking algorithms is to capture the topological changes of the interface. In this talk I shall introduce an improved three-dimensional front tracking method and consider an application to turbulent mixing driven by the Rayleigh-Taylor instability, which shows excellent agreement with experiments. For the second part of the talk, I will present a computational analysis of cell signaling in biology and medicine. Scaffolds, a class of proteins, play many important roles in signal transduction. Through studying various scaffold models, I will show novel regulation induced by the spatial location of scaffolds as well as scaffold-induced switch-like responses. To compute the models efficiently, we introduce a new fast numerical algorithm incorporating adaptive mesh refinement for solving stiff systems with spatial dynamics.

Host: Ed D'Azevedo (dazevedoef@ornl.gov)


WHO: Yong Chen - Department of Computer Science, Illinois Institute of Technology (http://www.iit.edu/~chenyon1/)

WHAT: A Hybrid Data Prefetching Architecture for Data-Access Efficiency

WHEN: February 20, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5700, Room L202

Abstract:

Though computing capability continues to increase rapidly and the multi-core/many-core architecture has emerged as the norm for future high-performance processors, data-access technology still lags far behind, with a severe impact on overall system performance. In the meantime, a large variety of applications, including scientific simulations, visualization applications, and information retrieval, have made computing more data-centric than compute-centric. Preliminary studies on the scalability of computing systems and applications have identified that data-access delay, not processor speed, has become the performance bottleneck of computing and a dominant factor deciding the sustained performance of computing systems. This is especially true for high-end computing (HEC) and high-performance computing (HPC), where performance is paramount. The data-access performance bottleneck has been recognized as one of the most critical problems that the HEC/HPC community faces.

In this talk, a Hybrid Adaptive Prefetching (HAP) architecture will be introduced to bridge the gap between computing speed and data-access speed. The HAP architecture improves data-access performance in two stages: the cache-memory stage, by leveraging specialized hardware solutions, and the memory-disk stage, by exploiting innovative software solutions. A specialized Data-Access History Cache and feedback-controlled adaptive data prefetching are proposed to reduce latency at the cache-memory stage. Cooperative caching and prefetching, online heuristic prefetching, and pre-execution prefetching were studied to enhance access efficiency at the memory-disk stage. Extensive experimental testing has been conducted to validate the design and verify the performance gain, and the results have demonstrated significant performance improvement. The Hybrid Adaptive Prefetching architecture can benefit numerous applications, such as scientific simulation, data mining, information retrieval, geographical information systems, multimedia, and visualization, and it will have a broad impact on boosting data-access performance for high-end and high-performance computing.
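
A software toy of the history-based prefetching idea, for intuition only; the Data-Access History Cache and feedback control in the talk are hardware/middleware mechanisms, and the class and names below are illustrative:

    class StridePrefetcher:
        """Watch the access stream, detect a constant stride, and fetch
        ahead of demand."""
        def __init__(self, backing, depth=2):
            self.backing = backing    # block id -> data
            self.cache = {}
            self.last = None
            self.stride = None
            self.depth = depth        # how many blocks to fetch ahead

        def read(self, block):
            if block not in self.cache:                 # demand miss
                self.cache[block] = self.backing[block]
            if self.last is not None:
                stride = block - self.last
                if stride == self.stride and stride != 0:
                    # Stride confirmed twice: prefetch ahead.
                    for k in range(1, self.depth + 1):
                        nxt = block + k * stride
                        if nxt in self.backing:
                            self.cache.setdefault(nxt, self.backing[nxt])
                self.stride = stride
            self.last = block
            return self.cache[block]

    disk = {i: f"block-{i}" for i in range(64)}
    pf = StridePrefetcher(disk)
    for b in (0, 4, 8, 12):    # strided scan; 12, 16, 20 get prefetched
        pf.read(b)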

Short Bio of the Speaker: Yong Chen is a Ph.D. student and Research Assistant in the Computer Science Department at Illinois Institute of Technology. He received his B.E. degree in Computer Engineering in 2000 and M.S. degree in Computer Science in 2003, both from the University of Science and Technology of China. His research focuses on high-performance computing, parallel and distributed computing, and computer architecture in general, and on optimizing data-access performance, parallel I/O, and performance modeling and evaluation in particular.

Host: Jeff Vetter (vetter@ornl.gov)


WHO: George Biros - The Georgia Institute of Technology, Atlanta (http://www.cc.gatech.edu/~gbiros/)

WHAT: Parallel Algorithms for Boundary Value Problems

WHEN: February 19, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium

Abstract:

Boundary value problems are ubiquitous in computational science and engineering. I will discuss two classes of problems: multigrid for stencil-based non-uniform discretizations on bounded regular geometries, and integral equation solvers for exterior problems in complex geometries. I will discuss the basic algorithmic components of the proposed methodologies and give an overview of the challenges associated with scaling them to large numbers of cores. I will give details for a common component, an octree data structure; in particular, I will explain the construction, coarsening/refining, and balancing of octrees. I will present scalability results on up to 32,000 cores.
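
A minimal sequential point-region octree shows the construction step; a sketch only, since the parallel octrees in the talk add Morton ordering, 2:1 balancing, and distributed coarsening/refinement:

    import numpy as np

    class Octree:
        """Subdivide a cube into eight octants when it holds more than
        `cap` points."""
        def __init__(self, center, half, cap=8):
            self.center = np.asarray(center, float)
            self.half, self.cap = half, cap
            self.points, self.children = [], None

        def insert(self, p):
            if self.children is not None:
                self._child(p).insert(p)
            elif len(self.points) < self.cap:
                self.points.append(p)
            else:                              # split into 8 octants
                offsets = [np.array([(i >> 2 & 1) * 2 - 1,
                                     (i >> 1 & 1) * 2 - 1,
                                     (i & 1) * 2 - 1]) for i in range(8)]
                self.children = [Octree(self.center + self.half / 2 * o,
                                        self.half / 2, self.cap)
                                 for o in offsets]
                for q in self.points + [p]:    # push points down one level
                    self._child(q).insert(q)
                self.points = []

        def _child(self, p):
            i = (int(p[0] > self.center[0]) << 2 |
                 int(p[1] > self.center[1]) << 1 |
                 int(p[2] > self.center[2]))
            return self.children[i]

    tree = Octree(center=(0, 0, 0), half=1.0)
    for p in np.random.uniform(-1, 1, size=(100, 3)):
        tree.insert(p)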

Biography:

George Biros is an associate professor with joint appointment to the Georgia Tech College of Computing's Computational Science and Engineering division and the Wallace H. Coulter Department of Biomedical Engineering. Prior to joining Georgia Tech, he was an assistant professor in Mechanical Engineering and Applied Mechanics, Bioengineering, and Computer and Information Science at the University of Pennsylvania, and earned his master's and doctorate from Carnegie Mellon University. He joined Penn in 2003 after serving as a postdoctoral associate at the Courant Institute of Mathematical Sciences at New York University.

Host: Judith C. Hill (hilljc@ornl.gov)


WHO: Mike Kirby - Colorado State University (http://www.math.colostate.edu/~kirby/)

WHAT: Opportunities for Extreme Data Analysis

WHEN: February 18, 2009 - 11:00 a.m.

WHERE: ORNL, Bldg. 5600, Room C101

Abstract:

The mathematical and computing challenges associated with extreme data sets, e.g., petabytes to exabytes, are clearly on the horizon. New algorithms will be required to exploit issues dictated by mathematical theory and computer architecture. In this talk several potential algorithms will be considered and illustrated on modestly sized data sets. Conjectures concerning new directions will also be offered.

Host: Richard Graham (rlgraham@ornl.gov)


WHO: Bradley Settlemyer, Clemson University (http://www.parl.clemson.edu/~bradles/)

WHAT: A Study of Client-Based Caching for Parallel I/O

WHEN: Tuesday, February 17, 2009 - 11:00 a.m.

WHERE: ORNL, Bldg. 5700, Room L202

Abstract:

The trend in parallel computing toward clusters running thousands of cooperating processes per application has led to an I/O bottleneck that has only grown more severe as the CPU density of clusters has increased. Current parallel file systems are able to provide high-bandwidth access for large contiguous file accesses; however, applications performing small, unaligned file accesses continue to experience poor performance due to inefficient file system access. In this presentation we explore the use of client-side file caching middleware to improve parallel I/O performance. In particular, we explore fixed-size page caching and progressive page caching architectures that allow small, unaligned data regions to be combined into larger blocks that improve the performance of file system interactions. Our results indicate that a correctly configured file data cache using only a modest amount of memory can greatly improve I/O throughput for applications that have not traditionally been able to leverage high performance parallel file systems.
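
The fixed-size page caching idea in miniature; a hedged sketch where the class, the 4 KiB page size, and the flush policy are illustrative, not the architecture studied in the talk:

    PAGE = 4096

    class PageCache:
        """Merge small, unaligned writes into page-sized buffers and
        flush them as aligned blocks (whole pages, for simplicity)."""
        def __init__(self, fs_write):
            self.pages = {}            # page number -> bytearray(PAGE)
            self.fs_write = fs_write   # backend: (offset, bytes) -> None

        def write(self, offset, data):
            while data:
                pno, off = divmod(offset, PAGE)
                page = self.pages.setdefault(pno, bytearray(PAGE))
                n = min(PAGE - off, len(data))
                page[off:off + n] = data[:n]
                offset, data = offset + n, data[n:]

        def flush(self):
            for pno, page in sorted(self.pages.items()):
                self.fs_write(pno * PAGE, bytes(page))  # one aligned I/O
            self.pages.clear()

    ops = []
    cache = PageCache(lambda off, buf: ops.append((off, len(buf))))
    for i in range(100):           # 100 tiny unaligned writes...
        cache.write(i * 10, b"x" * 10)
    cache.flush()
    print(ops)                     # ...become one aligned write: [(0, 4096)]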

*****************************************

Host: Rich Graham, rlgraham@ornl.gov


WHO: Daniel Hagimont (http://hagimont.perso.enseeiht.fr/)

WHAT: Component-Based Autonomic Management for Legacy Software

WHEN: February 16, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5700, Room L204

Abstract:

Distributed software environments are increasingly complex and difficult to manage, as they integrate various legacy software packages with specific management interfaces. Moreover, the fact that management tasks are performed by humans leads to many configuration errors and low reactivity. This is particularly true in medium or large-scale distributed infrastructures. To address this issue, we explore the design and implementation of an autonomic management system. The main principle is to wrap legacy software pieces in components in order to administrate a software infrastructure as a component architecture. To help administrators define autonomic management policies, we introduce high-level formalisms for the specification of deployment and management policies. We describe the design and implementation of such a system and its evaluation with different use cases.
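
A toy sketch of the wrapping principle; the component interface, the repair policy, and the stand-in command are illustrative and not taken from the talk's system:

    import subprocess

    class LegacyComponent:
        """Uniform lifecycle/configuration facade over a legacy program,
        so a manager can treat heterogeneous software identically."""
        def __init__(self, name, command):
            self.name, self.command, self.proc = name, command, None
            self.attrs = {}

        def configure(self, **attrs):
            self.attrs.update(attrs)      # e.g. ports, paths, replicas

        def start(self):
            self.proc = subprocess.Popen(self.command)

        def stop(self):
            if self.proc:
                self.proc.terminate()

        def alive(self):
            return self.proc is not None and self.proc.poll() is None

    def repair_policy(components):
        # An autonomic self-repair policy: restart anything that died.
        for c in components:
            if not c.alive():
                c.start()

    web = LegacyComponent("web", ["sleep", "60"])  # stand-in legacy binary
    web.configure(port=8080)
    web.start()
    repair_policy([web])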

Bio: Daniel Hagimont is Professor at Polytechnic National Institute of Toulouse and a member of the IRIT laboratory, where he leads a group working on operating systems, distributed systems and middleware. He received a PhD from Polytechnic National Institute of Grenoble in 1993. After a Postdoc at the University of British Columbia, Vancouver, in 1994, he joined INRIA Grenoble in 1995. He took his Professor position in Toulouse in 2005.

Host: Richard Graham (rlgraham@ornl.gov)


WHO: Yulong Xing - New York University (http://www.cims.nyu.edu/~xing/)

WHAT: New Efficient Sparse Space-Time Algorithms in Numerical Weather Prediction

WHEN: February 12, 2009 - 10:00 a.m.

WHERE: ORNL, Bldg. 5100, Auditorium, Room 128

Abstract:

A major stumbling block in the prediction of weather is the accurate parameterization of moist convection on microscales. A recent multi-scale modeling approach, superparameterization (SP), has yielded promising results and provided a potential solution to this problem. SP is a large-scale modeling system with explicit representation of small-scale processes provided by a cloud-resolving model (CRM) embedded in each column of a large-scale model. We present new efficient sparse space-time algorithms which solve the small scale model in a reduced spatially periodic domain with a reduced time interval of integration. The new algorithms have been applied to a stringent two-dimensional test suite involving moist convection interacting with shear. The numerical results are compared with the CRM and original SP. It is shown that the new efficient algorithms for SP result in a gain of roughly a factor of 10 in efficiency, and the large scale variables such as horizontal velocity and specific humidity are captured in a statistically accurate way.

Host: Ed D'Azevedo (dazevedoef@ornl.gov)

 

- JANUARY -

WHO: Dr. Gilles Muller, Ecole des Mines de Nantes

WHAT: Documenting and Automating Collateral Evolutions in Linux Device Drivers

WHERE: 5100, Auditorium

WHEN: 2:30 pm, Tuesday, January 20, 2009

The internal libraries of Linux are evolving rapidly, to address new requirements and improve performance. These evolutions, however, entail a massive problem of collateral evolution in Linux device drivers: for every change that affects an API, all dependent drivers must be updated accordingly. Manually performing such collateral evolutions is time-consuming and unreliable, and has led to errors when modifications have not been done consistently.

In this talk, we present an automatic program transformation tool, Coccinelle, for documenting and automating device driver collateral evolutions. Because Linux programmers are accustomed to manipulating program modifications in terms of patch files, this tool uses a language based on the patch syntax to express transformations, extending patches to semantic patches. Coccinelle preserves the coding style of the original driver, as would a human programmer. We have evaluated our approach on 62 representative collateral evolutions that were previously performed manually in Linux 2.5 and 2.6. On a test suite of over 5800 relevant driver files, the semantic patches for these collateral evolutions update over 93% of the files completely. In the remaining cases, the user is typically alerted to a partial match against the driver code, identifying the files that must be considered manually. We have additionally identified over 150 driver files where the maintainer made an error in performing the collateral evolution, but Coccinelle transforms the code correctly. Finally, more than 130 patches derived from the use of Coccinelle have been accepted into the Linux kernel.
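
As a purely textual stand-in for one well-known collateral evolution (the usb_submit_urb API gaining an allocation-flag argument), consider the toy rewriter below. Coccinelle expresses such updates as semantic patches matched against the program structure, not regular expressions, so this sketch only conveys the shape of the task:

    import re

    # Call sites of the old one-argument form must gain a flag argument.
    OLD_CALL = re.compile(r"\busb_submit_urb\s*\(\s*([^)]+?)\s*\)")

    def evolve(source):
        # usb_submit_urb(urb)  ->  usb_submit_urb(urb, GFP_KERNEL)
        return OLD_CALL.sub(r"usb_submit_urb(\1, GFP_KERNEL)", source)

    driver = "status = usb_submit_urb(urb);"
    print(evolve(driver))   # status = usb_submit_urb(urb, GFP_KERNEL);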

************

Gilles Muller received the Ph.D. degree in 1988 from the University of Rennes I, and the Habilitation à Diriger des Recherches degree in 1997 from the University of Rennes I.

After having been a researcher at INRIA for 13 years, he is currently a Full Professor at the Ecole des Mines de Nantes. His research interests include the development of new methodologies based on the use of domain-specific languages for the structuring of operating systems. Gilles Muller has been a member of the IEEE since 1995 and the vice chair of the ACM/SIGOPS from July 2003 to July 2007.

Dr. Muller is hosted by Geoffroy Vallee


WHO: Tim P. Schulze, Asst. Professor - Department of Mathematics - University of Tennessee-Knoxville

WHAT: "Enhanced Kinetic Monte Carlo"

WHEN: Wednesday, January 14, 2009 - 10:00 a.m.

WHERE: JICS/ORCAS Building, Room 128 (Auditorium)

Abstract
Unlike traditional equilibrium/thermodynamic Monte Carlo, Kinetic Monte Carlo (KMC) seeks to model non-equilibrium processes and simulate their stochastic evolution in time.   The growth of defect-free epitaxial thin films is often studied by this technique.  In this talk, we present three enhanced KMC models that are more computationally demanding but greatly extend the range of physical systems that can be studied.  In the first model, we combine a KMC approach to front-tracking with a continuum model for heat transfer to study the growth of a dendrite into an under-cooled melt.  In the second model, we couple KMC simulations of film growth to a linear elastic model to simulate hetero-epitaxial growth. Finally, in the third model, we combine KMC with molecular dynamics to allow the study of grain-boundary evolution.
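
All three enhanced models build on the standard rejection-free KMC step, sketched here with illustrative rates and names:

    import math, random

    def kmc_step(rates):
        """One rejection-free KMC step: pick an event with probability
        proportional to its rate, then advance the clock by an
        exponentially distributed waiting time."""
        total = sum(rates)
        r = random.random() * total
        acc = 0.0
        for event, rate in enumerate(rates):
            acc += rate
            if r < acc:
                break
        dt = -math.log(random.random()) / total
        return event, dt

    # Example: three competing hop events with different barriers.
    rates = [1.0, 0.5, 0.1]
    t = 0.0
    for _ in range(5):
        event, dt = kmc_step(rates)
        t += dt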

Host:  Ed D'Azevedo (CSMD/Computational Mathematics Group, 576-7925)


WHO: Oscar Hernandez - University of Houston

WHAT: Compiler Support for Performance Tuning and Tools Integration

WHEN: Tuesday, January 13, 2009 - 2:00 p.m.

WHERE: ORNL, Bldg. 5700, Room L204

ABSTRACT:

At the University of Houston we have developed an environment, based upon robust, existing, open source software, for tuning applications written using MPI, OpenMP, or both. The goal of this effort, which integrates the OpenUH compiler and several popular performance tools, is to increase user productivity by providing an automated, scalable performance measurement and optimization system. In this talk I describe our environment, show how these complementary tools can work together, and illustrate the synergies made possible by exploiting their individual strengths and combined interactions. I also present a methodology for performance tuning that is enabled by this environment. One of the benefits of using compiler technology in this context is that it can direct the performance measurements (via instrumentation) to capture events at different levels of granularity and help assess their importance, which we have shown to significantly reduce measurement overheads on larger systems. The compiler plays a crucial role when attempting to understand the performance results: it can supply information on how a code was translated and provides a means to evaluate whether optimizations were applied successfully.
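
The effect of compiler-directed selective instrumentation can be mimicked in miniature: wrap only the routines deemed important, recording enter/exit timings. A toy sketch (the real environment inserts such probes at compile time; names here are illustrative):

    import time, functools

    def instrument(fn, events):
        """Wrap a function so each call appends (name, elapsed seconds)
        to an event log."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                events.append((fn.__name__, time.perf_counter() - t0))
        return wrapper

    events = []

    def hot_loop(n):
        return sum(i * i for i in range(n))

    hot_loop = instrument(hot_loop, events)   # compiler would insert this
    hot_loop(100000)
    print(events)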

The methodology combines two performance views of the application to find bottlenecks. The first is a high level view that focuses on OpenMP/MPI performance problems such as synchronization cost and load imbalance; the second is a low level view that focuses on hardware counter analysis, with derived metrics that assess the efficiency of the code against the cost model predicted by the compiler. In this talk, I demonstrate the workings of this methodology and tools environment by illustrating its use with selected NAS Parallel Benchmarks, and show how we solved performance bottlenecks in a cloud-resolving code from NASA and a fluid dynamics application.

*****************************************

Host: Rich Graham, rlgraham@ornl.gov


WHO: Professor Frank Mueller - North Carolina State University

WHAT: ScalaTrace: Scalable Compression and Timed Replay of Communication Traces

WHEN: January 7, 2009 - 11:00 a.m.

WHERE: ORNL, Bldg. 5700, Room O304
Characterizing the communication behavior of large-scale applications is a difficult and costly task due to code/system complexity and their long execution times. An alternative to running actual codes is to gather their communication traces and then replay them, which facilitates application tuning and future procurements. While past approaches lacked lossless scalable trace collection, we contribute an approach that provides orders of magnitude smaller, if not near constant-size, communication traces regardless of the number of nodes while preserving structural information. We introduce intra- and inter-node compression techniques of MPI events, we develop a scheme to preserve time and causality of communication events, and we present results of our implementation for BlueGene/L. Given this novel capability, we discuss its impact on communication tuning and beyond.
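
A toy version of the loop-compression idea, collapsing consecutive repeats of an event pattern; far simpler than ScalaTrace's intra- and inter-node compression, with illustrative names:

    def compress_trace(events):
        """Collapse consecutive repeats of a window into (pattern, count)
        pairs, preserving the structural (loop) information."""
        out, i, n = [], 0, len(events)
        while i < n:
            best = None
            for w in range(1, (n - i) // 2 + 1):   # candidate body sizes
                pat = events[i:i + w]
                reps = 1
                while events[i + reps * w:i + (reps + 1) * w] == pat:
                    reps += 1
                if reps > 1 and (best is None or reps * w > best[1] * best[2]):
                    best = (pat, reps, w)
            if best:
                pat, reps, w = best
                out.append((pat, reps))
                i += reps * w
            else:
                out.append(([events[i]], 1))
                i += 1
        return out

    trace = ["Send", "Recv", "Send", "Recv", "Send", "Recv", "Barrier"]
    print(compress_trace(trace))   # [(['Send', 'Recv'], 3), (['Barrier'], 1)]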

To the best of our knowledge, such a concise representation of MPI traces in a scalable manner, combined with time-preserving deterministic MPI call replay, is without precedent.

***

Frank Mueller (mueller@cs.ncsu.edu) is an Associate Professor in Computer Science and a member of the Centers for Efficient, Secure and Reliable Computing (CESR) and High Performance Simulations (CHiPS) at North Carolina State University. Previously, he held positions at Lawrence Livermore National Laboratory and Humboldt University Berlin, Germany. He received his Ph.D. from Florida State University in 1994.

He has published papers in the areas of embedded and real-time systems, compilers and parallel and distributed systems. He is a founding member of the ACM SIGBED board and the steering committee chair of the ACM SIGPLAN LCTES conference. He is a member of the ACM, ACM SIGPLAN, ACM SIGBED and the IEEE Computer Society. He is a recipient of an NSF Career Award, an IBM Faculty Award and a Fellowship from the Humboldt Foundation.