C3
Section: Cluster Command & Control (C3) Tool Suite (1)
DESCRIPTION
The Cluster Command and Control (C3) tools are a suite of cluster tools developed at Oak Ridge National Laboratory that are useful for both administration and application support. The suite includes tools for cluster-wide command execution, file distribution and gathering, process termination, remote shutdown and restart, and system image updates. A short description of each tool follows:
cexec - general utility that executes any standard command on all cluster nodes
cget - retrieves files or directories from all cluster nodes
ckill - terminates a user-specified process on all cluster nodes
cpush - distributes files or directories to all cluster nodes
cpushimage - updates the system image on all cluster nodes using an image captured by the SystemImager tool
crm - removes files or directories from all cluster nodes
cshutdown - shuts down or restarts all cluster nodes
The default method of execution for the tools is to run the command on all cluster nodes concurrently. However, a serial version of cexec is also provided that may be useful for deterministic execution and debugging. To invoke the serial version of cexec, type cexecs instead of cexec.
For more information on how to use each tool, see the man page for the specific tool.
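As a concrete illustration, the following are typical invocations (the file paths are hypothetical examples; see each tool's man page for the full option syntax):

```shell
# Run uptime on all cluster nodes concurrently
cexec uptime

# Run the same command serially, one node at a time (deterministic, easier to debug)
cexecs uptime

# Distribute a file to all nodes, fetch it back, then remove it from the nodes
cpush /tmp/testfile /tmp/testfile
cget /tmp/testfile /tmp/
crm /tmp/testfile
```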
ENVIRONMENT
C3_RSH
By default, the C3 tools use ssh to issue remote commands. If you would like them to use rsh instead, set the C3_RSH environment variable to rsh. For example, if you are using the bash shell:
export C3_RSH=rsh
Any program that behaves like rsh or ssh is acceptable.
C3_PATH
The default install path for C3 is /opt/c3-4. If you install C3 in an alternate location, this variable must point to that installation. For remote clusters, C3 must be installed in the same directory on each cluster. For example, if you installed C3 in your home directory you might use:
export C3_PATH=/home/sgrundy/c3-4
C3_CONF
C3's default configuration file is /etc/c3.conf. If you wish to use an alternate default configuration file, set this variable to point to it. For example, if you keep a special c3.conf file in your home directory you might use:
export C3_CONF=/home/sgrundy/.c3conf
C3_USER
By default, the C3 tools use your local username to access a remote cluster. If you wish to use a different default, set this variable to the desired username. For example, this changes the example user from sgrundy to mmanhunter:
export C3_USER=mmanhunter
FILES
/etc/c3.conf
This is the cluster configuration file that the tools use to access the cluster. A configuration file may also be specified on the command line; the format of both files is identical. See the c3.conf man page for details.
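For illustration only, a minimal configuration might look like the sketch below. The cluster name, host names, and node range here are hypothetical assumptions; consult the c3.conf man page for the authoritative syntax.

```
cluster mycluster {
    head:node0      # head node: external name, then internal name
    node[1-4]       # compute nodes node1 through node4
}
```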
SEE ALSO
cexec(1), c3(1), cget(1), ckill(1), cpush(1), cpushimage(4), crm(1), cshutdown(4), cname(1), cnum(1), clist(1), c3.conf(5), c3-scale(5)