ACPI Avant Garde Project Argonne Meeting, August 21-23,
CCSM Software Engineering Working Group Meeting - Monday Morning
1. Coupler repository separate from the CCSM repositories. The rest of CSM
is in a common directory structure but has not been set up as a single CVS
checkout.
** An effort will be made to put all of CSM under a single version
control system. Tony Craig and Lawrence Buja will develop CSM level
2. Software issues on restricted checkout are not yet resolved. A pilot
project to look at BitKeeper will be started as a possible alternative.
** Drake will start it off with help from Rob Jacob, Eric Kluzek and
John Tannihill. Report back to the SEWG chair in one month.
3. Additional model output will be added to the repository for quick
4. The NASA CAN proposal was discussed.
5. December Freeze presented by Byron
CCSM needs 500 years of simulation to present at the June Workshop.
The software may not be frozen; rather it is the model (physics and
algorithms) that will be frozen. This does not mean that dynamical cores
will be thrown out of the repository or that development on dycores will
be dropped. It probably does mean that for the physical parameterizations.
Target performance is 5 years/day for CCSM. Target machines: SGI, IBM.
Resolutions: T42 eul, T63 sld, 2.5 L-R.
Tony Craig is the software coordinator for the December freeze.
6. Coupler status presented by Tom Bettge
cpl15 is the current concurrent coupler. The hope is to get a cpl15+ in;
this will likely contain N-to-1 style communication. Dealing with
bottlenecks from the ice component is a concern. Until this is improved,
improvements in the other components may have little impact on performance.
ACPI Avant Garde Project Meetings - Monday Afternoon and on...
Tasks Dave outlined:
Physics rework. A - finish split of linemsac to bring grmult into the
B - clean up physics lat and lon dependencies. Pass a lat and lon array
(indexed by nlon) where necessary. Watch for coszrs.
C - rework all the parameterizations to fit the template Byron
established. The driver updates the state (a utility call) and the
parameterization package is passed as physxx(state*, tendency).
D - check the status of information passed across tphysbc to tphysac. It
is a sensitive decision about which accumulated tendencies and updated
states are saved across the lsm and coupler calls.
Need to introduce user-defined types for phys_state and phys_tend. Also
a flag to the initializations to indicate which tendencies or states were
updated in the call.
Need to change the name plond to something indicating physics chunking.
Also make the physics plev independent of the dynamics plev in the
dimensioning.
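The state/tendency template under items C and D can be sketched as follows. This is a toy illustration in Python, not the actual template (the model itself is Fortran); every name except phys_state, phys_tend, and the physxx(state, tendency) convention is an assumption, and the chunk and level sizes are placeholders.

```python
from dataclasses import dataclass, field

PCOLS = 16  # hypothetical chunk width (the role plond played)
PVER = 26   # physics vertical levels, independent of the dynamics plev


@dataclass
class PhysState:
    """User-defined type holding the model state for one physics chunk."""
    t: list = field(
        default_factory=lambda: [[288.0] * PVER for _ in range(PCOLS)])


@dataclass
class PhysTend:
    """Accumulated tendencies, plus a flag recording what was updated."""
    dtdt: list = field(
        default_factory=lambda: [[0.0] * PVER for _ in range(PCOLS)])
    t_updated: bool = False  # which tendencies this call actually touched


def physxx(state: PhysState, tend: PhysTend) -> None:
    """A parameterization following the physxx(state, tendency) template:
    it reads the state and accumulates tendencies, never updating the
    state itself."""
    for i in range(PCOLS):
        for k in range(PVER):
            tend.dtdt[i][k] += 0.1  # stand-in for a real heating rate
    tend.t_updated = True


def update_state(state: PhysState, tend: PhysTend, dt: float) -> None:
    """Driver-side utility call that applies accumulated tendencies."""
    if tend.t_updated:
        for i in range(PCOLS):
            for k in range(PVER):
                state.t[i][k] += dt * tend.dtdt[i][k]
```

The point of the split is that the driver alone decides when the state advances (the utility call), so saving tendencies across the lsm and coupler calls becomes a driver decision rather than something buried in each parameterization.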
E - Unify the drivers of the physics and dynamics between the dycores.
Add decompositions for physics for the December freeze. This will allow
scaling to 32 MPI tasks, with 32x(??) for the physics. ?? is 4 on the
WHII, 16 on the NERSC NHII.
Instead of segments, we will refer to chunks for the physics.
Once the decompositions are added, we need the interface, utilities, etc.
to support transformations and data transposition. How these might
evolve was the topic of a long discussion. The existing frameworks,
GEMS and the GFDL FSM(??), were discussed, it being agreed that no one
was particularly familiar with what GFDL had done. The details of the
GEMS framework were more familiar. Byron agreed to provide a template
encapsulating what we decided for the interface between physics and
dycores. It consists of three modules: 1) a decomposition module, which
describes which locations (grid points) belong to which process
(NOTE: to each MPI process); 2) a router module, which builds and holds
the matrix of which processes need to send/receive to where; 3) an
interface module, which describes what needs to be communicated in the
router and what the local structure of the data is.
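The three-module split might look like the following serial toy sketch. This is purely illustrative (all function names and data shapes here are assumptions, not Byron's template, and real communication would use MPI rather than dictionary copies):

```python
def block_decomp(npoints: int, nprocs: int) -> dict:
    """Decomposition module: which grid points belong to which MPI
    process. Here, contiguous blocks of global indices."""
    per = (npoints + nprocs - 1) // nprocs
    return {p: list(range(p * per, min((p + 1) * per, npoints)))
            for p in range(nprocs)}


def build_router(src: dict, dst: dict) -> dict:
    """Router module: build and hold the matrix of which processes need
    to send/receive to where, given source and destination
    decompositions. Maps (sender, receiver) -> grid points to move."""
    owners = {}
    for p, pts in src.items():
        for g in pts:
            owners[g] = p
    router = {}
    for q, pts in dst.items():
        for g in pts:
            router.setdefault((owners[g], q), []).append(g)
    return router


def transpose(local_data: dict, router: dict) -> dict:
    """Interface module: what is communicated through the router and how
    the local data are laid out (here, one value per local grid point,
    keyed by global index)."""
    out = {q: {} for (_, q) in router}
    for (p, q), pts in router.items():
        for g in pts:
            out[q][g] = local_data[p][g]  # in MPI: a send from p to q
    return out
```

For example, redistributing 8 points from a 2-process physics layout to a 4-process dynamics layout builds the router once, then each transposition reuses it; separating the router from the interface is what lets the same send/receive matrix serve different fields.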