Sun Cluster 3.1 Command Line Cheat Sheet


This page describes basic Sun Cluster 3.1 administration commands, including the following command line interface commands: scrgadm, scswitch, scconf, scstat, and scdpm.


Manage registration and unregistration of resource types, resource groups, and resources

Action:  -a (add), -c (change), -r (remove),
         -p[v[v]] (print; v = state of resources, vv = parameter values)
Object:  -g (resource group), -j (resource), -t (resource type),
         -L (LogicalHostname resource; used with -n netiflist),
         -S (SharedAddress resource; used with -n netiflist and -X auxnodelist)
Who:     -h (physical host list), -f (registration file path)
What:    -x (extension property=value), -y (standard property=value)
Hosts:   -l (logical hostname list)


Register a resource type named MyDatabase:

# scrgadm -a -t MyDatabase

Create a failover resource group named MyDatabaseRG:

# scrgadm -a -g MyDatabaseRG

Create a scalable resource group named MyWebServerRG:

# scrgadm -a -g MyWebServerRG \
          -y Maximum_primaries=integer \
          -y Desired_primaries=integer

Create a resource of a given type in a resource group:

# scrgadm -a -j resource-name -t resource-type-name -g RG-name

See the rg_properties(5) man page for a description of the resource group properties. See the r_properties(5) man page for a description of the resource properties.
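
The -L option from the table above registers a LogicalHostname network resource in an existing failover resource group. A minimal sketch, where mydb-lh (the logical hostname), the IPMP group sc_ipmp0, and the node names node1 and node2 are placeholders for your own values:

# scrgadm -a -L -g MyDatabaseRG -l mydb-lh -n sc_ipmp0@node1,sc_ipmp0@node2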


Update the Sun Cluster software configuration

Action:  -a (add), -c (change), -r (remove), -p[v[v]] (print)

Object and syntax:

-C (cluster option)      cluster=clustername
-A (transport adapter)   add:    trtype=type,name=name,node=node[,other-options]
                         change: name=name,node=node[,state=state][,other-options]
                         remove: name=name,node=node
-B (transport junction)  add:    type=type,name=name[,other-options]
                         change: name=name[,state=state][,other-options]
                         remove: name=name
-m (cable endpoint)      add:    endpoint=[node:]name[@port],endpoint=[node:]name[@port][,noenable]
                         change: endpoint=[node:]name[@port],state=state
                         remove: endpoint=[node:]name[@port]
-P (private hostname)    node=node[,privatehostname=hostalias]
-q (quorum)              add:    globaldev=devicename[,node=node,node=node[,...]]
                         change: node=node,{maintstate | reset}
                         change: globaldev=devicename,{maintstate | reset}
                         change: reset
                         change: installmode
                         remove: globaldev=devicename
-D (device group)        add:    type=type,name=name[,nodelist=node[:node]...][,preferenced={true | false}][,failback={enabled | disabled}][,other-options]
                         change: name=name[,nodelist=node[:node]...][,preferenced={true | false}][,failback={enabled | disabled}][,other-options]
                         remove: name=name[,nodelist=node[:node]...]
-T (authentication)      add:    node=nodename[,...][,authtype=authtype]
                         change: authtype=authtype
                         remove: {node=nodename[,...] | all}
-h (cluster nodes)       add:    node=nodename
                         remove: node=nodename or node=nodeID


Register a new disk group:

# scconf -a -D type=vxvm,name=new-disk-group,nodelist=node1:node2

Synchronize device group information after adding a volume:

# scconf -c -D name=diskgroup,sync
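
The change form of -D can also adjust the node preference and failback behaviour of an existing device group, per the syntax in the table above. A minimal sketch, using the same placeholder name diskgroup:

# scconf -c -D name=diskgroup,preferenced=true,failback=enabled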

Add a shared quorum device to the cluster:

# scconf -a -q globaldev=devicename

Clear “installmode”:

# scconf -c -q reset
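
A quorum device can also be placed in, and taken out of, maintenance state with the change form shown in the table. A sketch, assuming d12 is a placeholder for the DID device used as the quorum device:

# scconf -c -q globaldev=d12,maintstate
# scconf -c -q globaldev=d12,reset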

Configure a second set of cluster transport connections:

# scconf -a \
  -A trtype=transport,name=ifname1,node=nodename1 \
  -A trtype=transport,name=ifname2,node=nodename2 \
  -m endpoint=nodename1:ifname1,endpoint=nodename2:ifname2

Secure the cluster against other machines that might attempt to add themselves to the cluster:

# scconf -a -T node=.
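
The -p option from the table prints the current cluster configuration and is a quick way to verify the changes made above; adding v or vv increases the level of detail:

# scconf -p
# scconf -pvv | more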


Perform ownership/state changes of resource groups and disk device groups in Sun Cluster configurations

-z  bring online / switch the primary; used with -g (resource group) or -D (device group) and -h (target host); -h "" (no receiver) takes the resource group offline
-Z  bring everything online; used with -g (resource group); without -g, all managed resource groups are brought online
-F  take offline on all nodes; used with -g (resource group) or -D (device group)
-S  switch all device groups and resource groups off a node; used with -h (the host being evacuated); -K specifies the number of seconds to keep resource groups from switching back onto the node after it has been successfully evacuated (default 60 seconds, maximum 65535; available starting in Sun Cluster 3.1 Update 3)
-R  restart resource groups; used with -g (resource group) and -h (target host)
-m  put a device group into maintenance mode; used with -D (device group)
-u  move a resource group to the unmanaged state; used with -g (resource group)
-o  move a resource group to the managed state; used with -g (resource group)
-e  enable a resource; used with -j (resource); with -M, enables monitoring only
-n  disable a resource; used with -j (resource); with -M, disables monitoring only
-c  clear the STOP_FAILED flag; used with -j (resource), -f (flag name), and -h (target host)
-Q  quiesce a resource group (starting in Sun Cluster 3.1 Update 3); used with -g (resource group)


Switch over resource-grp-2 to be mastered by node1:

# scswitch -z -h node1 -g resource-grp-2

Switch over resource-grp-3, a resource group configured to have multiple primaries, to be mastered by node1, node2, node3:

# scswitch -z -h node1,node2,node3 -g resource-grp-3

Switch all managed resource groups online on their most preferred node or nodes:

# scswitch -z

Quiesce resource-grp-2. This stops the resource group from bouncing continuously from one node to another after the failure of a START or STOP method:

# scswitch -Q -g resource-grp-2

Switch over all resource groups and disk device groups from node1 to a new set of primaries:

# scswitch -S -h node1

Restart some resource groups on specified nodes:

node1# scswitch -R -h node1,node2 -g resource-grp-1,resource-grp-2

Disable some resources:

# scswitch -n -j resource-1,resource-2

Enable a resource:

# scswitch -e -j resource-1
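
Adding -M to -n or -e acts on the fault monitor only, leaving the resource itself enabled. A minimal sketch, using the same placeholder resource name, that first disables and then re-enables monitoring of resource-1:

# scswitch -n -M -j resource-1
# scswitch -e -M -j resource-1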

Take resource groups to the unmanaged state:

# scswitch -u -g resource-grp-1,resource-grp-2

Take resource groups to the managed state:

# scswitch -o -g resource-grp-1,resource-grp-2

Switch over device-group-1 to be mastered by node2:

# scswitch -z -h node2 -D device-group-1

Put device-group-1 into maintenance mode:

# scswitch -m -D device-group-1
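
The -F action from the table takes a device group (or a resource group, with -g) offline on all nodes at once. A sketch using the same placeholder device group name:

# scswitch -F -D device-group-1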

Move all resource groups and disk device groups persistently off a node:

# scswitch -S -h iloveuamaya -K 120

The -K option is useful when resource groups would otherwise switch back automatically because strong negative affinities have been configured (with RG_affinities).
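
When a STOP method fails, the resource is left with the STOP_FAILED flag set, and the flag must be cleared before the resource group can be managed normally again. A minimal sketch, where the node and resource names are placeholders:

# scswitch -c -h node1,node2 -j resource-1 -f STOP_FAILED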


Monitor the status of Sun Cluster

-D       shows status for all device groups
-g       shows status for all resource groups
-h host  shows status of all components related to the specified host
-i       shows status for all IPMP groups and public network adapters
-n       shows status for all nodes
-p       shows status for all components in the cluster; -v[v] gives verbose output
-q       shows status for all device quorums and node quorums
-W       shows status for the cluster transport paths


Show status of all resource groups followed by the status of all components related to node1:

# scstat -g -h node1


Show the status of all resource groups only:

# scstat -g

Show the status of all components related to node1:

# scstat -h node1
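
Check the status of the cluster transport paths:

# scstat -W

Check the quorum vote counts and quorum device status:

# scstat -q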


Disk-path monitoring administration command (available starting in Sun Cluster 3.1 update 3)

-m node:disk-path       monitor the new disk path specified by node:disk-path; "all" is the default for either part
-u node:disk-path       unmonitor a disk path; the daemon on each node stops monitoring the specified path; "all" is the default
-p [-F] node:disk-path  print the current status of the specified disk path from all nodes attached to the storage; "all" is the default; -F prints only faulty disk paths
-f filename             read the list of disk paths to monitor or unmonitor from the specified file


Force the daemon to monitor all disk paths in the cluster infrastructure:

# scdpm -m all

Monitor a new path on all nodes where path is valid:

# scdpm -m /dev/did/dsk/d3

Monitor new paths on just node1:

# scdpm -m node1:d4 -m node1:d5

Print all disk paths in the cluster and their status:

# scdpm -p all:all

Print all failed disk paths:

# scdpm -p -F all

Print the status of all disk paths from node1:

# scdpm -p node1:all
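
To stop monitoring a path again, -u takes the same node:disk-path argument. For example, to unmonitor d4 on node1 (the same placeholder names used above):

# scdpm -u node1:d4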
