CKILL

Section: C3 User Manual (1)
Updated: 4.0
 

NAME

ckill - a utility that runs 'kill' on each node of a cluster for a specified process name

 

SYNOPSIS

Usage: ckill [OPTIONS] [MACHINE DEFINITIONS] process_name  

DESCRIPTION

ckill executes 'kill' with the given signal on each cluster node. Unlike 'kill', ckill must use the process name (similar to killall), as the process ID will almost certainly differ from node to node. The root user may additionally specify a process owner, enabling root to kill a specific user's process without affecting like-named processes owned by other users.
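Conceptually, ckill fans the kill out over the configured nodes through the remote shell. The loop below is a simplified sketch of that idea, not the actual C3 implementation; the node names are hypothetical, and the per-node commands are only printed, never executed:

```shell
#!/bin/sh
# Simplified sketch of ckill's fan-out (NOT the real C3 code).
# The remote shell defaults to ssh, as C3 does (see C3_RSH below).
C3_RSH=${C3_RSH:-ssh}
NODES="node1 node2 node3"   # hypothetical node list, normally from c3.conf
SIGNAL=9
NAME=a.out

for node in $NODES; do
    # On each node, resolve PIDs by exact process name, then signal them.
    # Printed rather than executed so the sketch is safe to run anywhere.
    echo "$C3_RSH $node 'kill -$SIGNAL \$(pgrep -x $NAME)'"
done
```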

OPTIONS

--help -h
:display help message

--file -f <filename>
:alternate cluster configuration file; if one is not supplied, /etc/c3.conf will be used

-i
:interactive mode, ask once before executing

--head
:execute the command on the head node only; compute nodes are not affected

--signal -s <signal>
:signal to send process

--user -u <user>
:alternate process owner (root only)

--all
:execute the command on all nodes in all clusters that are accessible. When --head is also specified, only the head nodes participate. This option ignores the [MACHINE DEFINITIONS] arguments.
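The --signal option takes the standard kill(1) signal numbers; the number-to-name mapping can be checked locally with kill -l:

```shell
# Translate signal numbers to names with the standard kill -l form.
kill -l 9     # prints: KILL
kill -l 15    # prints: TERM
```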
 

GENERAL

There are several basic ways to call ckill:

1. To kill a process (no signal specified, so the default is used):

ckill a.out

This kills every process named a.out that the user owns.

2. To execute the command on a subset of nodes on the default cluster (with signal 9):

ckill -s 9 :2-6 daemon

This sends signal 9 to the process named daemon on nodes 2, 3, 4, 5, and 6.

3. To kill a process on a list of clusters with an alternate user:

ckill -u sgrundy cluster1: cluster2: a.out

This kills every process named a.out owned by user sgrundy on all the nodes in both clusters (note: the -u option is only valid for root).

4. To kill all processes of a given name on a list of clusters:

ckill -u ALL cluster1: cluster2: a.out

This kills the process named a.out, for every user, on all the nodes in both clusters; "ALL" is a reserved name meaning all users on a system.
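On a single node, this kind of kill-by-name matching is what pgrep/pkill provide. As a local illustration (the sleep process here is only a stand-in for a real workload):

```shell
# Start a throwaway process, then locate it by exact name, as ckill
# conceptually does on each node before signalling.
sleep 300 &
PID=$!
MATCH=$(pgrep -x sleep | grep -x "$PID")
echo "matched PID: $MATCH"
# An owner-filtered variant, analogous to 'ckill -u sgrundy a.out',
# would be: pkill -u sgrundy -x a.out   (shown as a comment only)
kill "$PID"     # clean up the throwaway process
```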
 

SETUP

See the C3 INSTALL file for installation instructions. Also see c3-range for help on node ranges on the command line. If using the scalable setup, please see c3-scale.

ENVIRONMENT

C3_RSH

By default, the C3 tools will use ssh to issue the remote commands. If you would like to have them use rsh instead, you must set the C3_RSH environment variable to rsh.
For example, if you are using the bash shell you would do the following:

export C3_RSH=rsh

Any program that behaves like rsh or ssh is acceptable.

C3_PATH

The default install path for C3 is /opt/c3-4. If you install C3 in an alternate location this variable must point to that installation. For remote clusters C3 must be installed in the same directory on each cluster.
For example, if you installed C3 in your home directory you might use the following:

export C3_PATH=/home/sgrundy/c3-4

C3_CONF

C3's default configuration file is /etc/c3.conf. If you wish to use an alternate default configuration file, set this variable to point to it.
For example, if you keep a special c3.conf file in your home directory you may use:

export C3_CONF=/home/sgrundy/.c3conf

C3_USER

By default, the C3 tools will use your local username to access a remote cluster. If you wish to use a different default, set this variable to that username.
For example, this will change the example user from sgrundy to mmanhunter:

export C3_USER=mmanhunter

 

FILES

/etc/c3.conf

This file is the default cluster configuration file; it contains the names of the nodes to which commands will be sent. An alternate cluster configuration file may also be specified on the command line; the format of both files is identical.
See the c3.conf(5) man page for the format.
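Purely for illustration, a minimal configuration describing one cluster might look like the sketch below; the node names are hypothetical and the authoritative syntax is in c3.conf(5):

```
cluster mycluster {
    head.example.org:head    # external:internal name of the head node
    node[1-4]                # range of compute nodes
}
```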
 

SEE ALSO

cexec(1), c3(1), cget(1), ckill(1), cpush(1), cpushimage(4), crm(1), cshutdown(4), cname(1), cnum(1), clist(1), c3.conf(5), c3-scale(5)


 
