Section: C3 User Manual (1)
Updated: 4.0


ckill - a utility that runs 'kill' on each node of a cluster for a specified process name



Usage: ckill [OPTIONS] [MACHINE DEFINITIONS] process_name  


ckill executes 'kill' with the given signal on each cluster node. Unlike 'kill', ckill must use the process name (similar to killall), as the process ID will most likely differ from node to node. The root user may additionally specify a particular user as the process owner, enabling root to kill a specific user's process without affecting like-named processes owned by other users.


--help -h
:display help message

--file -f <filename>
:alternate cluster configuration file; if one is not supplied, /etc/c3.conf will be used

--interactive -i
:interactive mode, ask once before executing

--head
:execute command on head node, does not execute on compute nodes

--signal -s <signal>
:signal to send to the process

--user -u <user>
:alternate process owner (root only)

--all
:execute command on all nodes in all clusters that are accessible. When specifying --head, only the head nodes will participate. This ignores the [MACHINE DEFINITIONS] section.
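
The options can generally be combined with each other and with the machine definitions described below. As a rough sketch (the configuration file path, signal number, and process name httpd are made up for illustration):

ckill -f /home/sgrundy/.c3conf -s 15 httpd

This would send signal 15 to every process named httpd owned by the invoking user on the nodes listed in the alternate configuration file.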


There are several basic ways to call ckill:

1. To kill a process (no signal specified):

ckill a.out

This kills every process named a.out that the user owns.

2. To kill a process on a subset of nodes of the default cluster (with signal 9):

ckill -s 9 :2-6 daemon

This sends signal 9 to the process named daemon on nodes 2, 3, 4, 5, and 6.

3. To kill a process on a list of clusters with an alternate user:

ckill -u sgrundy cluster1: cluster2: a.out

This kills the processes named a.out owned by user sgrundy on all the nodes in both clusters (note: the -u option is only valid for root).

4. To kill all processes of a given name on a list of clusters:

ckill -u ALL cluster1: cluster2: a.out

This kills the process named a.out on all the nodes in both clusters regardless of owner; "ALL" is a reserved name meaning all users on a system.
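
The --head option described above follows the same syntax. As an illustrative sketch (the process name is only an example):

ckill --head -s 9 a.out

This would send signal 9 to every process named a.out on the head node only; the compute nodes are not touched.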


See the C3 INSTALL file for installation instructions. Also see c3-range for help with specifying node ranges on the command line. If you are using the scalable setup, please see c3-scale.



By default, the C3 tools will use ssh to issue the remote commands. If you would like them to use rsh instead, you must set the C3_RSH environment variable to rsh.
For example, if you are using the bash shell, you would do the following:

export C3_RSH=rsh
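
The export line above assumes a Bourne-style shell such as bash. If you use a csh-style shell (csh or tcsh), the equivalent is the standard setenv form; this is ordinary shell syntax, nothing specific to C3:

setenv C3_RSH rsh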

Any program that behaves like rsh or ssh is acceptable.


The default install path for C3 is /opt/c3-4. If you install C3 in an alternate location, the C3_PATH environment variable must point to that installation. For remote clusters, C3 must be installed in the same directory on each cluster.
For example, if you installed C3 in your home directory, you might use the following:

export C3_PATH=/home/sgrundy/c3-4


C3's default configuration file is /etc/c3.conf. If you wish to use an alternate default configuration file, set the C3_CONF environment variable to point to that file.
For example, if you keep a special c3.conf file in your home directory, you might use:

export C3_CONF=/home/sgrundy/.c3conf


By default, the C3 tools will use your local username to access a remote cluster. If you wish to use a different default, set the C3_USER environment variable to that username.
For example, this will change the example user from sgrundy to mmanhunter:

export C3_USER=mmanhunter




The file /etc/c3.conf is the cluster configuration file that contains the names of the nodes to which commands will be sent. A cluster configuration file may also be specified on the command line; the format of both files is identical.
See the c3.conf(5) man page for the format.
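
As a rough sketch only (the cluster name and node names below are made up, and the exact syntax should be checked against c3.conf(5)), a cluster definition in the configuration file generally looks like this:

cluster mycluster {
	head-internal:head-external    #head node (internal:external name)
	dead placeholder               #keeps command-line node indexing starting at 1
	node[1-4]                      #compute nodes
}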


cexec(1), c3(1), cget(1), ckill(1), cpush(1), cpushimage(4), crm(1), cshutdown(4), cname(1), cnum(1), clist(1), c3.conf(5), c3-scale(5)





For user questions and information about releases, email/subscribe to: c3-users
To report bugs or problems, e-mail:

Computer Science and Math Division
Oak Ridge National Laboratory
Last Modified: Wed 11-25-2015