IBM Books

Command and Technical Reference, Volume 1

cshutdown

Purpose

cshutdown - Halts or reboots the entire system or any number of nodes in the system.

Syntax

cshutdown
[-G] [-P] [-N | -g] [-R] [-W seconds | AUTO] ...
[-X | -E] [-Y] [-F] [-h | -k | -m | -r [-C cstartup_options]]
[-s] [{-T minutes | -T hh:mm} [-M message_string]]
[-K number] target_nodes

Flags

-G
Allows the specification of nodes to include one or more nodes outside the current system partition. If ALL is specified with -G, all nodes in the SP are shut down. If ALL is specified without -G, all nodes in the current system partition are shut down. If -G is specified with a list of nodes, all listed nodes are shut down regardless of the system partition in which they reside (subject to the restrictions of the sequence file). If -G is not specified and some of the specified target nodes are outside of the current system partition or some of the specified target nodes depend on nodes outside of the current system partition, none of the specified nodes are shut down.

-P
Powers off the nodes after the shutdown command completes. This is the default action except when the -m option (single user mode) is chosen.

-N
Indicates that the target_nodes are specified as node numbers, not SP Ethernet administrative local area network (LAN) adapter (reliable) host names. The node numbers can be specified as ranges; for example, 3-7 indicates nodes 3, 4, 5, 6, and 7.

-g
Indicates that the target_nodes are specified as a named node group. If -G is supplied, a global node group is used. Otherwise, a partitioned-bound node group is used.

-R
Indicates that target_nodes is a file that contains host identifiers. If you also use the -N flag, the file contains node numbers; otherwise, the file contains node names, specified as SP Ethernet administrative LAN adapter (reliable) host names.

-W seconds | AUTO ...
Provides a timeout value for shutting down a leading node. In normal processing, cshutdown waits for a leading node to be completely halted before starting to shut down trailing nodes. If one or more leading nodes do not shut down, the cshutdown command waits indefinitely. The -W flag tells cshutdown to wait only the specified number of seconds after starting to halt a leading node; after that time, cshutdown starts the halt process for the trailing nodes. If you specify the value AUTO, the cshutdown command automatically generates a timeout value based on the node types in your system.

Notes:

  1. Be careful to use timeout values large enough to allow a node to complete shutdown processing. Your timeout value should be at least several minutes long; shorter values may be transparently modified to a higher value.

  2. If shutdown processing for a node does not complete within the timeout limit and cshutdown halts trailing nodes, the system may not function correctly.

If there are special subsystems, the same waiting procedure applies to subsystem sequencing in the subsystem phase.

-X
Tells cshutdown that the state of nontarget nodes should not affect the result of the command. Use the -X flag to force cshutdown to shut down the target nodes if nontarget nodes listed in /etc/cshutSeq are gating the shutdown.
Note:
If some critical nodes, but not the entire system, are forced to halt or reboot, the system may not function correctly.

-E
Terminates processing if any nodes are found that are powered on, but not running (host_responds in the System Data Repository (SDR) shows a value of 0 - node shows red for hostResponds in SP Perspectives). This includes nodes that may have been placed in maintenance (single-user) mode. Refer to the "Description" section for additional information.

If you specify -E, you cannot specify -X.

-Y
Tells cshutdown to ignore any error codes from the special subsystem interfaces. Without this flag, if a special subsystem interface exits with an error code, you receive a prompt allowing you to continue the operation, to quit, or to enter a subshell to investigate the error. On return from the subshell, you are prompted with the same choices.

-F
Tells the cshutdown command to start the shutdown immediately, without issuing warning messages to users.

-h
Halts the target nodes. This is the default, unless overridden by the -k, -m, or -r flags.

-k
Verifies the shutdown sequence file without shutting any node down. Special subsystems are not affected. There is no effect on a nonrunning target node. You can use cshutdown -kF ALL to test your /etc/cshutSeq file without actually shutting down any nodes and without sending messages to users.

-m
Handles the request like a halt, except that the last step, after syncing and unmounting file systems, is to bring the node to single user mode. There is no effect on a nonrunning target node.

-r
Handles the request as a reboot. It performs the same operations as -h. Then it restarts the target nodes with cstartup. It does not power on a target node that was powered off at the time the cshutdown command was issued (it differs from the cstartup command, which powers on all specified nodes).

-C cstartup_options
Tells cshutdown to pass the cstartup_options to cstartup when the cstartup command is invoked after the target_nodes are halted. This flag is valid only when the -r (reboot) option is also specified. Any blanks in cstartup_options must be escaped or quoted.

-s
Stops nonroot processes in the node order specified in /etc/cshutSeq. The default is to stop the nonroot processes in parallel.

-T time [-M message_string]
The -T flag specifies a time to start cshutdown, either as a number of minutes from now (-T minutes) or as a time of day in 24-hour format (-T hh:mm). If the -T flag is specified, you can use -M message_string to specify a message for users on the target nodes. Any blanks in message_string must be escaped or quoted.

-K number
Limits the number of concurrent processes created to rsh to the nodes. This is relevant on large systems. The default value is 64.
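The fan-out idea behind -K can be sketched with a stand-in for rsh. This is an illustration only, not part of cshutdown: xargs -P (a GNU/BSD extension) caps how many stand-in "remote shells" run at once, the way -K caps concurrent rsh processes.

```shell
# Illustrative sketch only (not part of PSSP): cap concurrent "remote
# shells" the way -K caps concurrent rsh processes. echo stands in for
# rsh, and -P 2 stands in for the default fan-out of 64.
nodes="node1 node2 node3 node4"
out=$(printf '%s\n' $nodes | xargs -P 2 -n 1 -I{} echo "halting {}")
count=$(printf '%s\n' "$out" | wc -l)
echo "issued $count halts"
```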

Operands

target_nodes
Designates the target nodes to be operated on. It is the operand of the command, and must be the last token on the command line. In the absence of the -R, -N, or -g flags, target_nodes are specified as reliable host names on the SP Ethernet administrative LAN adapter. Use ALL to designate the entire system. You must identify one or more target_nodes.

Description

Use this command to halt or reboot the entire system or any number of nodes in the system. The SP cshutdown command is analogous to the workstation shutdown command. Refer to the shutdown man page for a description of the shutdown command. The cshutdown command always powers off the nodes except while in Maintenance mode.

Note:
If you bring a node down to maintenance mode, you must ensure file system integrity before rebooting the node.

In this case, the cshutdown command, which runs from the control workstation, cannot rsh to the node to perform the node shutdown phase processing. This includes the synchronization of the file systems. Therefore, you should issue the sync command three times in succession from the node console before running the cshutdown command. This is especially important if any files were created while the node was in maintenance mode.

To determine which nodes may be affected, issue the spmon -d -G command and look for a combination of power on and host_responds no.

For an SP system with a switch, if the entire system is being shut down, issue the Equiesce command before issuing the cshutdown command. If only a portion of the system is being shut down, but the switch primary node and the switch primary backup node are among the nodes targeted, use the Eprimary command to select a new switch primary node, and then issue the Estart command before issuing the cshutdown command.

The cshutdown command has these advantages over using the shutdown command to shut down each node of an SP:

Shutdown processing has these phases:

  1. Notifying all users of the impending shutdown, executing the customized shutdown script (/etc/cshut.clean) if it exists on the target node, then terminating all nonroot processes on the target nodes. Nonroot processes are sent a SIGTERM followed, 30 seconds later, by a SIGKILL. This gives user processes that handle SIGTERM a chance to do whatever cleanup is necessary.
  2. Invoking any special subsystems, so they can perform any necessary shutdown activities. This phase follows the sequencing rules in /etc/subsysSeq. See PSSP: Administration Guide for the format of the /etc/subsysSeq file.
  3. Starting node phase shutdown. The node phase includes syncing and unmounting file systems and halting the nodes, following the sequencing rules in /etc/cshutSeq. See PSSP: Administration Guide for the format of the /etc/cshutSeq file.
  4. Rebooting the system, if requested by the -r flag.
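The TERM-then-KILL pattern of phase 1 can be sketched in shell. This is an illustration only, not cshutdown's implementation; the 2-second grace period here stands in for cshutdown's 30 seconds.

```shell
# Sketch of the phase-1 pattern: SIGTERM first, a grace period, then
# SIGKILL for anything still running. The child here ignores SIGTERM,
# so it survives into the SIGKILL step.
sh -c 'trap "" TERM; sleep 60' &
pid=$!
sleep 1                          # let the child install its trap
kill -TERM "$pid"                # polite request to terminate
sleep 2                          # grace period for cleanup handlers
if kill -0 "$pid" 2>/dev/null; then
    kill -KILL "$pid"            # still running: force it down
    result=killed
else
    result=terminated            # process honored SIGTERM
fi
echo "$result"
```

A process that handles SIGTERM would clean up and exit during the grace period, landing in the "terminated" branch instead.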

Results

A problem with some subsystems or nodes may prevent the cshutdown command from completing the shutdown. In this case, look in the file created: /var/adm/SPlogs/cs/cshut.MMDDhhmmss.pid

MMDDhhmmss
Time stamp.

pid
The process ID of the cshutdown command.

If a file with the same name already exists (from a previous year), the cshutdown command overwrites the existing file.
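The log-name format above can be reproduced with date. This is a hypothetical sketch of how such a name is formed; the $$ here is the sketch's own shell PID, standing in for the cshutdown process ID.

```shell
# Hypothetical sketch: build a log name of the form cshut.MMDDhhmmss.pid
# as described above.
stamp=$(date +%m%d%H%M%S)        # MMDDhhmmss; no year, hence the
                                 # possible overwrite a year later
logname="cshut.$stamp.$$"
echo "$logname"
```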

Files

The following files reside on the control workstation:

/etc/cshutSeq
Describes the sequence in which the nodes should be shut down. Nodes not listed in the file are shut down concurrently with listed nodes. If the file is empty, all nodes are shut down concurrently. If the file does not exist, cshutdown uses the output of seqfile as a temporary sequencing default.

/etc/subsysSeq
Describes groups of special subsystems that need to be invoked in the subsystem phase of cshutdown. Also shows the sequence of invocation. Subsystems are represented by their invocation commands. If this file does not exist or is empty, no subsystem invocation is performed.

/var/adm/SPlogs/cs/cshut.MMDDhhmmss.pid
Road map of cshutdown command progress.

The following file may reside on the target nodes:

/etc/cshut.clean
Name of the customized shutdown script that will be run before cshutdown terminates nonroot processes. This script is created by the user to stop nonroot processes gracefully before cshutdown terminates them.
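The ordering that /etc/cshutSeq expresses can be sketched with tsort. This is an illustration only, not how cshutdown itself is implemented: each "A > B" rule says B must be down before A, so the pairs are fed to tsort reversed to yield a shutdown order.

```shell
# Illustrative sketch (not cshutdown's implementation): derive a
# shutdown order from cshutSeq-style pair rules with tsort.
# "Group1 > Group2" means Group2 halts before Group1, so reverse it.
rules='Group1 > Group2
Group2 > Group3'
order=$(printf '%s\n' "$rules" |
    awk -F' > ' '{ print $2, $1 }' |   # trailing group halts first
    tsort | xargs)
echo "$order"
```

For the chain above the only valid order is Group3, then Group2, then Group1, matching the worked example later in this section.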

Environment Variables

PSSP 3.4 provides the ability to run commands using secure remote command and secure remote copy methods.

The following environment variables determine whether AIX rsh and rcp or the secure remote command and copy method is used. If no environment variables are set, the defaults are /bin/rsh and /bin/rcp.

You must be careful to keep these environment variables consistent. If you set any of these variables, set all three. The DSH_REMOTE_CMD and REMOTE_COPY_CMD executables should be kept consistent with the choice of the remote command method in RCMD_PGM.

For example, if you want to run cshutdown using a secure remote method, enter:

export RCMD_PGM=secrshell
export DSH_REMOTE_CMD=/bin/ssh
export REMOTE_COPY_CMD=/bin/scp
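The consistency rule above can be checked with a small script. This is a hypothetical sanity check, not a PSSP tool; the variable names are the documented ones, but the check logic is this sketch's own.

```shell
# Hypothetical consistency check, not part of PSSP: verify that
# DSH_REMOTE_CMD and REMOTE_COPY_CMD match the method named in
# RCMD_PGM. Defaults follow the documented /bin/rsh and /bin/rcp.
RCMD_PGM=${RCMD_PGM:-rsh}
DSH_REMOTE_CMD=${DSH_REMOTE_CMD:-/bin/rsh}
REMOTE_COPY_CMD=${REMOTE_COPY_CMD:-/bin/rcp}
case "$RCMD_PGM" in
    secrshell) want_cmd=ssh want_cp=scp ;;
    *)         want_cmd=rsh want_cp=rcp ;;
esac
status=consistent
[ "${DSH_REMOTE_CMD##*/}" = "$want_cmd" ] || status=inconsistent
[ "${REMOTE_COPY_CMD##*/}" = "$want_cp" ] || status=inconsistent
echo "$status"
```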

Security

The cshutdown command can only be issued on the control workstation. You must have root privilege and a valid Kerberos ticket to run this command, or be running with the secure remote commands enabled. Refer to the chapter on security in PSSP: Administration Guide.

You must also have:

Location

/usr/lpp/ssp/bin/cshutdown

Related Information

PSSP commands: cstartup, init, seqfile, shutdown

AIX commands: rsh

Examples

  1. For these examples, assume that /etc/cshutSeq contains the following lines:
    Group1 > Group2 > Group3
     
    Group1: A
     
    Group2: B
     
    Group3: C
    

    This defines 3 groups, Group1 through Group3, each containing a single node. The node names are A, B, and C. The sequence line Group1 > Group2 > Group3 means that Group3 (node C) is shut down first. When Group3 is down, Group2 (node B) is shut down. When Group2 is down, Group1 (node A) is shut down.

    Table 1 shows that the result of a cshutdown command depends on the flags specified on the command line, the initial state of each node, and the sequencing rules in /etc/cshutSeq. The shorthand notation Aup indicates that node A is up and running; Adn indicates that node A is down.

    Table 1. Examples of the cshutdown command
    The subscript up means the node is powered up and running; the subscript dn means the node is not running.

    Initial state    Command issued    Final state    Explanation
    Aup Bup Cup      cshutdown A B C   Adn Bdn Cdn    The command succeeds; the nodes are all down.
    Aup Bup Cdn      cshutdown B       Aup Bdn Cdn    The command succeeds because C is already not running.
    Aup Bup Cdn      cshutdown A       Unchanged      The command fails because B is still running.
    Aup Bup Cdn      cshutdown -X A    Adn Bup Cdn    The command succeeds because -X considers the sequencing of only the target nodes.

  2. To shut down all the nodes in the SP system regardless of system partitions and the sequence file, enter:
    cshutdown -GXY ALL
    
  3. To shut down nodes 1, 9, and 16-20 regardless of system partitions and subject to the restrictions of the sequence file, enter:
    cshutdown -G -N 1 9 16-20
    

    The command may be unsuccessful if any node in the list depends on any node that is not on the list and that node is not shut down.

  4. To shut down all the nodes in the current system partition, enter:
    cshutdown ALL
    

    The command may be unsuccessful if any node in the current system partition depends on nodes outside of the current system partition.

  5. To shut down nodes 1, 5, and 6 in the current system partition, enter:
    cshutdown -N 1 5 6
    

    The command may be unsuccessful if any node in the list is not in the current system partition or depends on nodes outside of the current system partition.

  6. Specify the -X flag to ignore the sequence file and force nodes 1, 5, and 6 to be shut down. The following command is successful even if node 5 is gated by a node that is not shut down or is outside the current system partition:
    cshutdown -X -N 1 5 6
    
  7. To do a fast shut down on node 5 without sending a warning message to the user, enter:
    cshutdown -F -N 5
    
  8. To verify the sequence file without shutting down any node, enter the -k flag as follows. If both the -k and -F flags are specified, the sequence file can be tested without actually shutting down any nodes and without issuing a warning message to the user.
    cshutdown -kF ALL
    
  9. Specify the -r flag to halt the target nodes and restart them with cstartup. If necessary, specify the -C flag to provide cstartup_options. For example, to halt and restart nodes 12-16 with a timeout value of 300 seconds for the purpose of starting a leading node, enter:
    cshutdown -rN -C'-W 300' 12-16
    
  10. To reboot all the nodes in the partition node group sleepy_nodes, enter:
    cshutdown -rg sleepy_nodes
    

CSS_test

Purpose

CSS_test - Verifies that the installation and configuration of the Communications Subsystem of the SP system completed successfully.

Syntax

CSS_test

Flags

None.

Operands

None.

Description

Use this command to verify that the Communications Subsystem component ssp.css of the SP system was correctly installed. CSS_test runs on the system partition set in SP_NAME.

A return code of 0 indicates that the test completed without an error, but unexpected results may be noted on standard output and in the companion log file /var/adm/SPlogs/CSS_test.log. A return code of 1 indicates that an error occurred.

You can use the System Management Interface Tool (SMIT) to run this command. To use SMIT, enter:

smit SP_verify

Files

/var/adm/SPlogs/CSS_test.log
Default log file

|Environment Variables

|PSSP 3.4 provides the ability to run commands using secure remote |command and secure remote copy methods.

|To determine whether you are using either AIX rsh or rcp |or the secure remote command and copy method, the following environment |variables are used. |If no environment variables are set, the defaults are |/bin/rsh and /bin/rcp.

|You must be careful to keep these environment variables consistent. |If setting the variables, all three should be set. The DSH_REMOTE_CMD |and REMOTE_COPY_CMD executables should be kept consistent with the choice of |the remote command method in RCMD_PGM: |

|For example, if you want to run CSS_test using a secure remote |method, enter:

|export RCMD_PGM=secrshell
|export DSH_REMOTE_CMD=/bin/ssh
|export REMOTE_COPY_CMD=/bin/scp

|Security

|When restricted root access (RRA) is enabled, this command can only be run |from the control workstation.

Location

/usr/lpp/ssp/bin/CSS_test

Related Information

Commands: st_verify, SDR_test, SYSMAN_test, spmon_ctest, spmon_itest

Examples

To verify the Communications Subsystem following installation, enter:

CSS_test

css.snap

Purpose

css.snap - Collects switch-related log and trace files from a node.

Syntax

css.snap [-a | -c | -n | -p | -s]

Flags

-a
This flag is valid only on PSSP 3.4 or later systems.

-c
Erases the contents of the adapter cache and prints the result (Default).

-n
Assumes the device driver or daemon has erased the contents of the cache.

-p
This flag is valid only on PSSP 3.4 or later systems.

-s
All information from memory on the adapter is collected regardless of whether this option is specified. The soft option for an adapter for the SP Switch2 is ignored.

Operands

None.

Description

css.snap is generally issued automatically from the fault_service daemon when switch-related errors occur and the data may be of use in debugging a problem. It can also be issued by the system administrator, usually under the direction of IBM level 2 or PE support. css.snap can be run on nodes with SP switch adapters or on the control workstation. It always collects logs local to the node from which it is run.

Files

/var/adm/SPlogs/css/css.snap.log
Specifies the trace file.

/var/adm/SPlogs/css/hostname.dateymdHMS.css.snap.tar.Z
Specifies the compressed tar file containing switch logs and debug information.

Location

/usr/lpp/ssp/css/css.snap

Examples

To collect data because Estart was unsuccessful on the switch primary node (c191n01), enter:

[c191n01]> /usr/lpp/ssp/css/css.snap

cstartup

Purpose

cstartup - Starts up the entire system or any number of nodes in the system.

Caution!

The cstartup command attempts to power on nodes that are powered off. This has safety implications if someone is working on the nodes. Proper precautions should be taken when using this command.

Syntax

cstartup
[-E] [-G] [-k] [-N | -R | -g] [-S] [-W seconds | AUTO] ...
 
[-X] [-Z] [-z] {target_nodes | [ALL]}

Flags

-E
Starts up all nodes concurrently. Ignores the /etc/cstartSeq file, if one exists.

-G
Allows the specification of nodes to include one or more nodes outside of the current system partition. If ALL is specified with -G, all nodes in the SP start up. If ALL is specified without -G, all nodes in the current system partition start up. If -G is specified with a list of nodes, all listed nodes start up regardless of the system partition in which they reside (subject to the restrictions of the sequence file). If -G is not specified and some of the specified target nodes are outside of the current system partition or some of the specified target nodes depend on nodes outside of the current system partition, none of the specified nodes are started up.

-g
Indicates that the target_nodes are specified as a named node group. If -G is supplied, a global node group is used. Otherwise, a partitioned-bound node group is used.

-k
Checks the sequence data file; does not start up any nodes. If circular sequencing is detected, cstartup issues warning messages. You can use cstartup -k ALL to test your /etc/cstartSeq file without starting or resetting any nodes.

-N
Indicates that the target_nodes are specified as node numbers, not SP Ethernet administrative local area network (LAN) adapter (reliable) host names. The node numbers can be specified as ranges; for example, 3-7 is interpreted as nodes 3, 4, 5, 6, and 7.

-R
Indicates that target_nodes is a file that contains the node identifiers.

-S
Tells cstartup to ignore existing sequencing violations, where some trailing target_nodes are already up and running. The target_nodes that are already up are left alone. The other target_nodes are started in sequence. This operation may cause the nodes involved not to interface properly with their dependent nodes. If you omit the -S flag and any target_node is already running before its leading node, cstartup encounters an error without modifying the state of the system.

-W seconds | AUTO ...
Provides a timeout value for starting up a leading node. In normal processing, cstartup waits for a leading node to be completely started before initiating the startup of trailing nodes. If one or more target_nodes does not come up, cstartup waits indefinitely. The -W flag tells cstartup to wait the specified amount of time after initiating the startup of a node; the command continues to start other nodes, preserving the sequence in /etc/cstartSeq. The value you specify as seconds is added to a 3 minute (180 second) default wait period. Your value is a minimum; internal processing may cause the actual wait time to be slightly longer. If you specify the value AUTO, the cstartup command automatically generates a timeout value based on the node types in your system.
Note:
Your system may still be usable if one or more nodes does not complete startup, because the sequencing rules are preserved.

-X
Starts up only the nodes listed on the command line even if there are nontarget nodes gating the system startup. If you do not specify the -X flag and there are sequence violations involving nontarget nodes, cstartup encounters an error without modifying the state of the system.
Note:
If some nodes but not the entire system are forced to start up this way, they may not function properly because of possible resource problems.

-Z
If a target_node is already running at the time the cstartup command is issued, this flag tells cstartup to reset the node. This operation is disruptive to any processes running on the node. If you omit the -Z flag and any target_node is already running, cstartup encounters an error without modifying the state of the system.

-z
If a target_node is already running at the time the cstartup command is issued, this flag tells cstartup to reset the node if the node is dependent on a node that is down when cstartup is issued, but leave the node alone if the node is to be started up ahead of any down node. This operation is disruptive to any processes running on the node being reset. This operation correctly resets the node-startup sequencing with minimum disruption to the system. If you omit the -z flag and any target_node is already running, cstartup encounters an error without modifying the state of the system.

Operands

target_nodes
Designates the target nodes to be operated on. It is the operand of the command, and must be the last token on the command line. In the absence of the -R, -N, or -g flags, target_nodes are specified as reliable host names on the SP Ethernet administrative LAN adapter. The string ALL can be used to designate all nodes in the SP system. You must identify one or more target_nodes.

Description

The cstartup command starts up the entire system or any number of nodes in the system. If a node is not powered on, startup means powering on the node. If the node is already powered on and not running, startup means resetting the node.

The /etc/cstartSeq file specifies the sequence in which the nodes are started up. See PSSP: Administration Guide for the format of the /etc/cstartSeq file.

You can use the -SXZ flags to violate the cstartup sequence intentionally. See Table 2 for examples of the effect of these flags.

Results

The /var/adm/SPlogs/cs/cstart.MMDDhhmmss.pid file contains the results of cstartup.

MMDDhhmmss
The time stamp.

pid
The process ID of the cstartup command.

If the command is unsuccessful, examine this file to see which steps were completed. If a file with the same name already exists (from a previous year), the cstartup command overwrites the existing file.

Files

The following files reside on the control workstation:

/etc/cstartSeq
Describes the sequence in which the nodes should be started. Nodes not listed in the file are started up concurrently with listed nodes. If the file is empty, all nodes are started up concurrently. If the file does not exist, cstartup uses the output of seqfile as a temporary sequencing default.

/var/adm/SPlogs/cs/cstart.MMDDhhmmss.pid
Road map of cstartup command progress.

Security

The cstartup command can only be issued on the control workstation. To run the command you must have one of the following:

Location

/usr/lpp/ssp/bin/cstartup

Related Information

PSSP commands: cshutdown, init, seqfile

Examples

  1. For these examples, assume that /etc/cstartSeq specifies the following startup sequence:
    Group1 > Group2 > Group3 > Group4 > Group5
     
    Group1: A
     
    Group2: B
     
    Group3: C
     
    Group4: D
     
    Group5: E
    

    This defines five groups, Group1 through Group5, each containing a single node. The node names are A, B, C, D, and E. The sequence line Group1 > Group2 > Group3 > Group4 > Group5 means that Group1 (node A) is started first. When Group1 is up, Group2 (node B) is started. When Group2 is up, then Group3 (node C) is started, and so on.

    Table 2 shows that the result of a cstartup command depends on the flags specified on the command line, the initial state of each node, and the sequencing rules in /etc/cstartSeq. The shorthand notation Aup indicates that A is powered up and running; Adn indicates that A is not running.

    Table 2. Examples of the cstartup Command
    The subscript up means the node is up; the subscript dn means the node is down.

    Initial state         Command issued         Final state           Explanation
    Adn Bdn Cdn Ddn Edn   cstartup A B C D E     Aup Bup Cup Dup Eup   The command succeeds; the nodes are all up.
    Aup Bup Cdn Ddn Edn   cstartup A B C D E     Aup Bup Cup Dup Eup   The command succeeds; C, D, and E are started up.
    Aup Bup Cdn Dup Edn   cstartup A B C D E     Unchanged             The command fails because D was already up before C.
    Aup Bup Cdn Dup Edn   cstartup -S A B C D E  Aup Bup Cup Dup Eup   The command succeeds because -S ignores sequencing violations.
    Aup Bup Cdn Dup Edn   cstartup -Z A B C D E  Aup Bup Cup Dup Eup   The command succeeds because -Z resets running nodes.
    Aup Bup Cdn Dup Edn   cstartup C E           Unchanged             The command fails because node D was already up before node C.
    Aup Bup Cdn Dup Edn   cstartup -S C E        Aup Bup Cup Dup Eup   The command succeeds because -S ignores sequencing violations.
    Aup Bup Cdn Dup Edn   cstartup -X C E        Aup Bup Cup Dup Eup   The command succeeds because -X considers the sequencing of only the target nodes.
    Aup Bup Cdn Dup Edn   cstartup -Z C E        Unchanged             The command fails because resetting C or E does not correct the sequence violation.
    Aup Bup Cdn Ddn Edn   cstartup C E           Unchanged             The command fails because D is gating E. Node C is not started either.
    Aup Bup Cdn Ddn Edn   cstartup -S C E        Unchanged             The command fails because D is gating E. Node C is not started either.
    Aup Bup Cdn Ddn Edn   cstartup -X C E        Aup Bup Cup Ddn Eup   The command succeeds and starts up only the explicit targets, C and E.
    Aup Bup Cdn Ddn Edn   cstartup -Z C E        Unchanged             The command fails because D is gating E. Node C is not started either.

  2. To start up all the nodes in the SP system regardless of system partitions and the sequence file, enter:
    cstartup -GXZ ALL
    
  3. To start up nodes 1, 9, and 16-20 regardless of system partitions and subject to the restrictions of the sequence file, enter:
    cstartup -G -N 1 9 16-20
    

    The command may be unsuccessful if any node in the list depends on any node that is not on the list and that node is not started up.

  4. To start up all the nodes in the current system partition, enter:
    cstartup ALL
    

    The command may be unsuccessful if any node in the current system partition depends on nodes outside of the current system partition.

  5. To start up nodes 1, 5, and 6 in the current system partition, enter:
    cstartup -N 1 5 6
    

    The command may be unsuccessful if any node in the list is not in the current system partition or depends on nodes outside of the current system partition.

  6. Specify the -X flag to ignore the sequence file and force nodes 1, 5, and 6 to be started up. The following command is successful even if node 5 is gated by a node that is not started up or is outside the current system partition:
    cstartup -X -N 1 5 6
    
  7. To verify the sequence file without actually starting up or resetting any nodes, enter the -k flag as follows:
    cstartup -k ALL
    
  8. To ignore the sequence file and start up all the target nodes concurrently, use the -E flag. For example, to start up all the nodes in the current system partition concurrently, enter:
    cstartup -E ALL
    
  9. To start up all nodes in the system node group sleepy_nodes, enter:
    cstartup -Gg sleepy_nodes
    

ctlhsd

Purpose

ctlhsd - Sets the operational parameters for the Hashed Shared Disk subsystem on a node.

Syntax

ctlhsd [-p parallel_level | -v hsd_name ... | -C | -V]

Flags

no option
Displays the current parallelism level, the number of reworked requests, and the number of requests that are not at a page boundary.

-p parallel_level
Sets the HSD device driver's parallelism level to the specified parallel_level value.

-v hsd_name ...
Resets the read and write statistics for the specified hashed shared disks.

-C
Resets the HSD device driver's counters for the number of reworked requests and the number of read/write requests that are not at a page boundary.

-V
Resets the read and write request statistics for all configured hashed shared disks.

Operands

None.

Description

Use this command to set the parallelism level and to reset the statistics of the Hashed Shared Disk subsystem's data striping device driver for the virtual shared disk. When specified with no arguments, it displays the current parallelism level, the number of reworked requests, and the number of requests that were not at a page boundary. When ctlhsd is used to reset the statistics of the device driver, a particular hashed shared disk, or all the configured hashed shared disks on the system, it does not suspend the underlying virtual shared disks. Therefore, the user should make sure that there are no I/O activities on the underlying virtual shared disks.

Use lshsd -s to display the statistics on the number of read and write requests at the underlying virtual shared disks in a hashed shared disk or all hashed shared disks. Use the -v or -V flag to reset these counters.

Security

You must be in the AIX bin group to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/ctlhsd

Related Information

Commands: cfghsd, lshsd, lsvsd, resumevsd, suspendvsd, ucfghsd

Examples

To display the current parallelism level and counter, enter:

ctlhsd

The system displays a message similar to the following:

The current parallelism level is 9.
The number of READ requests not at page boundary is 0.
The number of WRITE requests not at page boundary is 0.

ctlvsd

Purpose

ctlvsd - Sets the operational parameters for the IBM Virtual Shared Disk subsystem on a node.

Syntax

ctlvsd
[-c cache_size | -r node_number ... | -R | -p parallelism | -l on | off |
-k node_number ... | -t | -T | -v vsd_name ... |
-V | -C | -K | -M IP_max_message_size]

Flags

-c
Sets the cache size to the new value. Only increasing the cache size up to the maximum value is supported. The initial value of the cache size is the init_cache_buffer_count from the SDR Node object for the node.
Note:
IBM Virtual Shared Disk caching is no longer supported. This information will still be accepted for compatibility with previous releases, but the IBM Virtual Shared Disk device driver will ignore the information.

-r
Resets the outgoing and expected sequence numbers for the specified nodes on the node on which the command is run. Use this flag when another node has been rebooted or cast out, or when all virtual shared disks have been reconfigured on that node. The specified nodes are also cast in.
Note:
This option should be used only under direct guidance from IBM Service. It should never be used under normal circumstances.

-R
Resets the outgoing and expected sequence numbers for all nodes on the node on which the command is run. Use this flag after rebooting the node. All nodes in the IBM Virtual Shared Disk network will be cast in.
Note:
This option should be used only under direct guidance from IBM Service. It should never be used under normal circumstances.

-p
Sets the level of IBM Virtual Shared Disk parallelism to the number specified. The valid range is 1 to 9. The default is 9. A larger value can potentially give better response time to large requests. (Refer to PSSP: Managing Shared Disks for more information regarding tuning IBM Virtual Shared Disk performance.)

This value is the buf_cnt parameter on the uphysio call that the IBM Virtual Shared Disk IP device driver makes in the kernel. Use statvsd to display the current value on the node on which the command is run.
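Since levels outside 1 through 9 are invalid, a wrapper can validate the level before calling ctlvsd. A minimal sketch, where valid_parallelism is a hypothetical helper and the echoed command stands in for the real invocation:

```shell
# Check that a proposed parallelism level falls in the documented 1-9 range.
valid_parallelism() {
  level=$1
  [ "$level" -ge 1 ] && [ "$level" -le 9 ]
}

if valid_parallelism 4; then
  echo "ctlvsd -p 4"   # echoed only; run the command itself on a real node
fi
```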

-l on | off
Specify -l on to activate KLAPI. Specify -l off to deactivate KLAPI.

-k
Casts out the specified node numbers on the local node. The local node ignores requests from cast-out nodes. Use -r to cast nodes back in.

Notes:

  1. Before using this flag, refer to the "Restrictions" section that follows.

  2. This option should be used only under direct guidance from IBM Service. It should never be used under normal circumstances.

-t
Lists the current routing table and mbuf headers cached by the IBM Virtual Shared Disk driver.

-T
Clears or releases all cached routes.

-v vsd_name ...
Resets the read and write request statistics for the specified virtual shared disks.

-V
Resets the read and write request statistics for all configured virtual shared disks.

-C
Resets the IBM Virtual Shared Disk device driver counters displayed by the statvsd command. Exceptions are the outgoing and expected request sequence numbers among the client and server nodes.

-K
Casts out all nodes on the local node. Local requests are still honored.

Notes:

  1. Before using this flag, refer to the "Restrictions" section that follows.

  2. This option should be used only under direct guidance from IBM Service. It should never be used under normal circumstances.

-M
Sets the IBM Virtual Shared Disk max_IP_msg_size. This is the largest block of data the virtual shared disk sends over the network for an I/O request. This limit also affects the local virtual shared disk I/O block size. The value is in bytes, must be a multiple of 512, and must be between 512 and 65024. The default is 61440. All nodes should use the same value.

Operands

None.

Description

The ctlvsd command changes some parameters of the IBM Virtual Shared Disk subsystem. When called with no arguments, it displays the current and maximum cache buffer counts, the request block count, the pbuf count, the minimum and maximum buddy buffer sizes, and the overall size of the buddy buffer.

Sequence number information may or may not be displayed. In general, sequence numbers and the options that reset them are managed entirely within the IBM Virtual Shared Disk and IBM Recoverable Virtual Shared Disk subsystems.

Security

You must be in the AIX bin group to run this command.

Prerequisite Information

PSSP: Managing Shared Disks

Location

/usr/lpp/csd/bin/ctlvsd

Related Information

Commands: lsvsd, statvsd

Refer to PSSP: Managing Shared Disks for information on tuning IBM Virtual Shared Disk performance.

Examples

  1. To display the current parameters, enter:
    ctlvsd
    The system displays a message similar to the following:
    The current cache buffer count is 64.
    The maximum cache buffer count is 256.
    The minimum buddy buffer size is 4096.
    The maximum buddy buffer size is 65536.
    The total buddy buffer size is 4 max buffers, 262144 bytes.
  2. To display the current IP routing table, enter:
    ctlvsd -t
    The system displays the following information:
    Route cache information:
     
     destination  interface  ref  status  direct/gateway   min managed mbuf
         1          css0      2     Up        Direct             256
