A strong point of NetSolve is that it gives users access to hardware platforms through their own programs by making calls through NetSolve to various software components. The result is location transparency. This creates a virtual library: software that a user can access remotely even though it does not actually exist on their machine. Library resources can therefore be managed centrally, so the most up-to-date version is always available and systems administrators no longer have to maintain software packages on many different machines.
The NetSolve system has three components: the client, which can be either a user program or a user interacting with one of the NetSolve interfaces; the NetSolve agent; and the pool of NetSolve resources. The entry point into the NetSolve system is the client sending a problem request to the agent. The agent analyzes this request and chooses a computational resource. The problem and its input data are then sent to the chosen NetSolve resource. The problem is solved by the appropriate scientific package on some hardware platform and the result is sent back to the client. This system can be deployed on the Internet or on a local intranet.
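The request flow described above can be pictured in a short sketch. The class and method names below are illustrative only, not NetSolve's actual API; the agent here simply picks the least-loaded resource that offers the requested problem.

```python
# Illustrative sketch of the NetSolve request flow.
# All names here are hypothetical, not NetSolve's actual API.

class Resource:
    """A computational server hosting scientific packages."""
    def __init__(self, name, packages, load):
        self.name, self.packages, self.load = name, packages, load

    def solve(self, problem, data):
        # In NetSolve, the appropriate scientific package runs here
        # and the result is sent back to the client.
        return f"result of {problem} on {self.name}"

class Agent:
    """Analyzes a problem request and chooses a computational resource."""
    def __init__(self, resources):
        self.resources = resources

    def choose(self, problem):
        candidates = [r for r in self.resources if problem in r.packages]
        # One plausible policy: least-loaded server offering the problem.
        return min(candidates, key=lambda r: r.load)

class Client:
    """User program: the entry point is a problem request to the agent."""
    def __init__(self, agent):
        self.agent = agent

    def submit(self, problem, data):
        resource = self.agent.choose(problem)   # agent selects a server
        return resource.solve(problem, data)    # problem + data sent there

agent = Agent([Resource("cypress", {"dgesv"}, load=0.7),
               Resource("maple",   {"dgesv"}, load=0.2)])
print(Client(agent).submit("dgesv", data=[[2.0, 1.0], [1.0, 3.0]]))
```

The same three roles apply whether the pool of resources sits on the Internet or on a local intranet; only the transport between them changes.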
Currently, NetSolve can be enabled on any Unix-based machine. Its mechanisms manage and exploit the full heterogeneity of the underlying network without exposing the user to the complexities and hassles of network programming. Traditionally, a user who wanted access to a given subroutine or function would write a call to it, passing the input and output arguments. With NetSolve, the user still calls a routine and passes the parameters, but the executable software can be anywhere on the net. The user simply calls NetSolve with the arguments, and NetSolve determines the most suitable computational device, sends the problem to that device for computation, retries on failure for fault tolerance, and returns the answers to the user's program.
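The retry-based fault tolerance mentioned above can be sketched as follows. This is a minimal illustration of the idea, not NetSolve's implementation: if the chosen server fails, the request falls through to the next candidate.

```python
# Minimal sketch of retry-based fault tolerance: if the chosen server
# fails, the request is resubmitted to the next-best candidate.
# Hypothetical names; not NetSolve's actual implementation.

class ServerFailure(Exception):
    pass

def solve_with_retry(problem, data, servers):
    """Try servers in preference order until one succeeds."""
    last_error = None
    for server in servers:                   # ranked list from the agent
        try:
            return server(problem, data)     # may raise ServerFailure
        except ServerFailure as err:
            last_error = err                 # fall through and retry
    raise RuntimeError("all servers failed") from last_error

def flaky(problem, data):
    raise ServerFailure("node went down")

def healthy(problem, data):
    return f"{problem} solved"

print(solve_with_retry("dgesv", [1.0, 2.0], [flaky, healthy]))
# falls back from the failed server to the healthy one
```

From the caller's point of view the failure is invisible: the answer comes back as if the first server had succeeded.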
Molecular dynamics, which models the interactions between the atoms in a chemical, biological or solid state system, is a cornerstone of applications ranging from the study of DNA-protein interactions to the design of new materials.
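At its core, an MD code repeatedly integrates Newton's equations of motion for the interacting atoms. A minimal time step using the standard velocity-Verlet integrator with Lennard-Jones pair forces (a generic textbook scheme, not the SOTON_PAR code) looks like:

```python
# Generic velocity-Verlet MD step with Lennard-Jones pair forces.
# A textbook sketch for a handful of atoms, not the SOTON_PAR code;
# production codes use neighbor lists and domain decomposition.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, naive O(N^2) loop."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            # Force from V = 4*eps*((sigma/d)^12 - (sigma/d)^6)
            f = 24 * eps * (2 * (sigma**2 / d2)**6 - (sigma**2 / d2)**3) / d2 * r
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet_step(pos, vel, dt, mass=1.0):
    """Advance positions and velocities by one MD time step."""
    f = lj_forces(pos)
    vel_half = vel + 0.5 * dt * f / mass     # half-kick
    pos_new = pos + dt * vel_half            # drift
    f_new = lj_forces(pos_new)
    vel_new = vel_half + 0.5 * dt * f_new / mass  # second half-kick
    return pos_new, vel_new

# Two atoms separated beyond the potential minimum attract each other.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet_step(pos, vel, dt=0.005)
```

The billion-particle runs described below perform essentially this step, but with the particles and force computation distributed across the nodes of a parallel machine.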
In a record-breaking demonstration run, D'Azevedo and Romine performed molecular dynamics simulations for systems of one billion particles, with each simulation step taking about 280 seconds. This result is a major step forward over recently reported simulations of 600 million particles on a 1,024-node Thinking Machines CM-5 and of 400 million particles on the 1,024-node Paragon supercomputer at Beaverton, performed by another team at ORNL.
The MD code, developed by Ed D'Azevedo and Charles Romine, was adapted from SOTON_PAR. Details of the implementation can be found in their ORNL Tech Report. The shared-memory emulation library DOLIB (Distributed Object Library), also developed by the same authors with support from the PICS (Partnership in Computational Sciences) program, simplifies parallel programming on distributed-memory multiprocessors such as the Intel MP Paragon. DOLIB uses the IPX message system developed by Ron Peierls and Bob Marr at Brookhaven National Laboratory, and IPX is in turn written on top of PVM.
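The shared-memory emulation that DOLIB provides can be pictured as global get/put operations on an array whose blocks live in different processors' local memories. The toy below simulates the index translation in a single process; the names are hypothetical and this is not DOLIB's actual interface.

```python
# Toy illustration of shared-memory emulation over distributed memory:
# a global array is partitioned into per-processor blocks, and get/put
# translate a global index into an (owner, local offset) pair.
# Hypothetical names; not DOLIB's actual interface.

class DistributedArray:
    def __init__(self, global_size, nprocs):
        self.block = (global_size + nprocs - 1) // nprocs  # block size
        # Each "processor" owns one local block (simulated in one process).
        self.local = [[0.0] * self.block for _ in range(nprocs)]

    def owner(self, i):
        """Map a global index to (owning processor, local offset)."""
        return i // self.block, i % self.block

    def get(self, i):
        # In a real system this would trigger a message to the owner
        # (DOLIB sends its messages via IPX, which runs on top of PVM).
        p, off = self.owner(i)
        return self.local[p][off]

    def put(self, i, value):
        p, off = self.owner(i)
        self.local[p][off] = value

arr = DistributedArray(global_size=100, nprocs=4)
arr.put(42, 3.14)      # global index 42 lands in "processor" 1's block
print(arr.get(42))
```

The appeal for the application programmer is that the MD code can index one logical array of a billion particles without hand-coding the message passing behind each access.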
The ability to run larger MD simulations opens up many new classes of problems that can be solved.
Al Geist, CS Group Leader
http://www.epm.ornl.gov/~geist/
firstname.lastname@example.org
(423) 574-3153
http://www.epm.ornl.gov/msr/msrcs.html
Oak Ridge National Laboratory (email@example.com)
Last modified: April 29, 1997