CPUSH
Section: C3 User Manual (1)
Updated: 4.0
NAME
cpush - a utility to push files from the local machine to the nodes in your cluster
SYNOPSIS
Usage: cpush [OPTIONS] [MACHINE DEFINITIONS] source [target]
DESCRIPTION
cpush is a utility that uses rsync to send a file, or a set of files specified in a list, to the nodes in a cluster.
OPTIONS
- -h, --help
: display the help message
- -f, --file <filename>
: alternate cluster configuration file; if one is not supplied, /etc/c3.conf is used
- -l, --list <filename>
: list of files to push (see the FILES section for the format)
- -i
: interactive mode, ask once before executing
- --head
: execute the command on the head node only; does not execute on the compute nodes
- --nolocal
: the source file or directory lies on the head node of the remote cluster
- -b, --blind
: push the entire file (normally cpush uses rsync to push only the changes and then rebuild the file on the target node)
- --all
: execute the command on all nodes in all clusters that are accessible. When --head is also specified, only the head nodes will participate. This option ignores the [MACHINE DEFINITIONS] section.
GENERAL
There are many ways to call cpush; here are a few examples:
1. To move a single file:
cpush /home/filename /home/
2. To move a single file, renaming that file on the cluster nodes:
cpush /home/filename1 /home/filename2
3. To move a set of files listed in a file:
cpush --list=/home/filelist
See the FILES section for the format of the file list.
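A machine-definitions argument can restrict the push to a subset of nodes. A sketch assuming the default cluster from /etc/c3.conf (see c3-range for the full range syntax):

```
# Push only to nodes 0 through 2 of the default cluster.
cpush :0-2 /home/filename /home/
```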
SETUP
See the C3 INSTALL file for installation instructions. Also see c3-range for help with node ranges on the command line. If you are using the scalable setup, please see c3-scale.
ENVIRONMENT
C3_RSH
By default, the C3 tools use ssh to issue remote commands. If you would like them to use rsh instead, set the C3_RSH environment variable to rsh. Any program that behaves like rsh or ssh is acceptable.
For example, if you are using the bash shell you would do the following:
export C3_RSH=rsh
C3_PATH
The default install path for C3 is /opt/c3-4. If you install C3 in an alternate location, this variable must point to that installation. For remote clusters, C3 must be installed in the same directory on each cluster.
For example, if you installed C3 in your home directory you might use the following:
export C3_PATH=/home/sgrundy/c3-4
C3_CONF
C3's default configuration file is /etc/c3.conf. If you wish to use an alternate default configuration file, set this variable to point to that file.
For example, if you keep a special c3.conf file in your home directory you may use:
export C3_CONF=/home/sgrundy/.c3conf
C3_USER
By default, the C3 tools use your local username to access a remote cluster. If you wish to use a different default, set this variable to that username.
For example, this changes the example user from sgrundy to mmanhunter:
export C3_USER=mmanhunter
FILES
/etc/c3.conf
This is the cluster configuration file that contains the names of the nodes to which commands will be sent. A cluster configuration file may also be specified on the command line; the format of both files is identical.
See the c3.conf(5) man page for the format.
File list
The file list names a single file per line (relative and absolute paths are both acceptable), with the first column being the source and the second being the target. If no target is specified, it is assumed to be the same as the source.
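The format above can be sketched as follows; the paths and the /tmp/filelist location are illustrative only:

```shell
# Create an illustrative file list. Each line names a source path and,
# optionally, a target path; with no target, the source path is reused.
cat > /tmp/filelist <<'EOF'
/home/sgrundy/.bashrc
/etc/motd /etc/motd.cluster
EOF

# This list would then be pushed with: cpush --list=/tmp/filelist
wc -l < /tmp/filelist    # one line per file to push
```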
SEE ALSO
cexec(1), c3(1), cget(1), ckill(1), cpush(1), cpushimage(4), crm(1), cshutdown(4), cname(1), cnum(1), clist(1), c3.conf(5), c3-scale(5)