Section: C3 User Manual (1)
Updated: 4.0


cexec(s) - A utility that executes a given command string on each node of a cluster.



Usage: cexec(s) [OPTIONS] [MACHINE DEFINITIONS] command  


cexec(s) is a utility that executes a given command string on each node of a cluster. It is intended as a general-purpose utility, in contrast to the more specialized commands in the C3 tool suite. cexec is the parallel version and cexecs is the serial version. The serial version is useful for debugging a cluster (the command runs to completion and its output is printed on each node before the next node is started); otherwise it is suggested that you use the parallel version.


--help -h
:display help message

--file -f <filename>
:alternate cluster configuration file; if one is not supplied then /etc/c3.conf will be used

--interactive -i
:interactive mode, ask once before executing

--all
:execute on all the nodes in all the clusters in the c3.conf file that are accessible. When --head is also specified, only the head nodes will participate. Note that this ignores the [MACHINE DEFINITIONS] block.

--head
:execute the command on the head node only; does not execute on the compute nodes

--pipe -p
:format the output in a pipe-friendly fashion: the cluster name, a space, the node name, and a colon are prepended to each line of output.
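The --pipe format lends itself to post-processing on the head node. Below is a sketch that filters simulated --pipe output with awk; the cluster and node names are hypothetical, and the printf merely stands in for a real `cexec --pipe` run:

```shell
# Simulated `cexec --pipe uptime` output: each line is prefixed with
# "<cluster> <node>:" as described above (all names are made up).
printf '%s\n' \
  'cluster1 node1: 10:00:01 up 3 days, load average: 0.10' \
  'cluster1 node2: 10:00:01 up 3 days, load average: 2.50' |
awk -F': ' '{ split($1, a, " "); print a[2] }'
# Prints the node name from each line: node1, node2
```

The same prefix can be used to sort, group, or diff output per node without losing track of where each line came from.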


There are several basic ways to call cexec:

1. To simply execute a command on the default cluster:

cexec mkdir temp

this creates a directory named temp in your home directory on each node of the default cluster

2. To execute a command on a subset of nodes on the default cluster

cexec :2-6 ls -l

this executes ls with the -l option on nodes 2, 3, 4, 5, 6

3. To execute commands on a list of clusters

cexec cluster1: cluster2: ls -l

this executes ls with the -l option on all the nodes in both clusters

4. Quote position is important

cexec "ps -aux | grep root"

will execute ps on each node, grep for root on that same node, and display the matching lines on the screen

cexec ps -aux | grep root

will execute ps on each node, return all of the output to the local machine, and then grep out the lines containing root before displaying them. This removes the per-node formatting; to use C3 in this way, the --pipe option should be used
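The difference matters most when the command itself contains a pipe. The following local sketch uses a shell function as a stand-in for a node, showing why a command such as wc -l gives different answers depending on quote position (this is plain shell behavior, not C3 itself):

```shell
# A stand-in for one node's `ps` output (three fake process lines):
node() { printf 'proc1\nproc2\nproc3\n'; }

# Quoted form, like `cexec "ps -aux | wc -l"`: the count happens on
# each node, so every node reports its own total.
node | wc -l    # node 1 reports 3
node | wc -l    # node 2 reports 3

# Unquoted form, like `cexec ps -aux | wc -l`: all output is merged
# on the local machine first, then counted once.
{ node; node; } | wc -l    # one combined total: 6
```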


See the C3 INSTALL file for installation instructions. Also see c3-range for help on using node ranges on the command line. If using the scalable setup, please see c3-scale.



C3_RSH
By default, the C3 tools will use ssh to issue the remote commands. If you would like to have them use rsh instead, set the C3_RSH environment variable to rsh.
For example, if you are using the bash shell you would do the following:

export C3_RSH=rsh

Any program that behaves like rsh or ssh is acceptable.


C3_PATH
The default install path for C3 is /opt/c3-4. If you install C3 in an alternate location, C3_PATH must point to that installation. For remote clusters, C3 must be installed in the same directory on each cluster.
For example, if you installed C3 in your home directory you might use the following:

export C3_PATH=/home/sgrundy/c3-4


C3_CONF
C3's default configuration file is /etc/c3.conf. If you wish to use an alternate default configuration file, set C3_CONF to point to that file.
For example, if you keep a special c3.conf file in your home directory you may use:

export C3_CONF=/home/sgrundy/.c3conf


C3_USER
By default, the C3 tools will use your local username to access a remote cluster. If you wish to use a different default, set C3_USER to that username.
For example, this will change the example user from sgrundy to mmanhunter:

export C3_USER=mmanhunter




This file is the cluster configuration file that contains the names of the nodes to which commands will be sent. A cluster configuration file may also be specified on the command line; the format of both files is identical.
See the c3.conf(5) man page for the format.
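As a rough sketch only (the c3.conf(5) man page is the authoritative reference), a minimal configuration file describing one cluster typically names the cluster, lists the head node first, and then gives the compute nodes, with ranges allowed:

```
cluster cluster1 {
	head1
	node[1-4]
}
```

The cluster name, head node, and node names above are hypothetical examples, not defaults.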


cexec(1), c3(1), cget(1), ckill(1), cpush(1), cpushimage(4), crm(1), cshutdown(4), cname(1), cnum(1), clist(1), c3.conf(5), c3-scale(5)





For user questions and information about releases, email/subscribe to: c3-users
To report bugs or problems, e-mail: c3-devel@ornl.gov

Computer Science and Math Division
Oak Ridge National Laboratory
Last Modified: Fri 03-05-2021