Managing and maintaining an RHCS cluster is complex and tedious work. To maintain an RHCS cluster well, you must be familiar with its basic operating principles. For cluster management, RHCS provides two methods: the Luci graphical interface and the command line. Here we focus on managing an RHCS cluster from the command line.
Start the RHCS cluster
The core processes of an RHCS cluster are cman and rgmanager. To start the cluster, start cman first and then rgmanager, as follows.
Start the cluster service on the host web1:
[root@web1 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configs... done
   Starting ccsd... done
   Starting cman... done
   Starting qdiskd... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
After cman has started successfully on all nodes, start the rgmanager service. The specific operation is as follows:
[root@web1 ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
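The startup order above (cman on every node first, only then rgmanager) can be scripted. The following is a minimal sketch, not a definitive tool: the node names are the ones used in this article, and `remote` is a hypothetical helper that here only prints the command — in a real cluster you would replace its body with something like `ssh "$1" "$2"`.

```shell
#!/bin/sh
# Node names of the example cluster (assumption: taken from this article).
NODES="web1 web2 Mysql1 Mysql2"

# Hypothetical remote-execution helper; replace the echo with e.g.
#   ssh "$1" "$2"
# on a real cluster.
remote() {
    echo "[$1] $2"
}

# Phase 1: start cman on every node first, so the cluster can form quorum.
for n in $NODES; do
    remote "$n" "service cman start"
done

# Phase 2: only after cman is up everywhere, start rgmanager.
for n in $NODES; do
    remote "$n" "service rgmanager start"
done
```

Run as a dry run, the script prints the commands in the required order instead of executing them.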
Shut down the RHCS cluster
Shutting down the RHCS cluster is the reverse of starting it: stop the rgmanager service on each node first, then stop cman.
Sometimes stopping the cman service fails. In that case, check whether the locally mounted shared-storage GFS2 file system has been unmounted, and whether the rgmanager services on all other nodes have been stopped normally.
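The shutdown sequence, including a check for the most common failure cause just mentioned, can be sketched as follows. This is an illustrative script, not part of RHCS: the `gfs2_mounted` helper and the optional mounts-file parameter are additions for testability, and `RUN` defaults to `echo` so the script dry-runs outside a real cluster.

```shell
#!/bin/sh
# Shutdown is the reverse of startup: stop rgmanager first, then cman.
# RUN defaults to "echo" (dry run); set RUN="" on a real node to execute.
RUN="${RUN:-echo}"

# Succeeds (exit 0) when the mounts table given as $1 lists a gfs2
# file system; defaults to the live table in /proc/mounts.
gfs2_mounted() {
    awk '$3 == "gfs2" { found = 1 } END { exit !found }' "${1:-/proc/mounts}"
}

$RUN service rgmanager stop
if gfs2_mounted; then
    echo "warning: GFS2 still mounted; stopping cman is likely to fail" >&2
else
    $RUN service cman stop
fi
```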
Manage application services
After the cluster system starts, application services are started automatically by default. If an application service does not start automatically, it needs to be started manually. The command for managing application services is clusvcadm, with which you can start, stop, restart and relocate the application services in the cluster.
Start an application
You can start an application service on a node in the following way, for example, starting webserver:
[root@web1 ~]# clusvcadm -e webserver -m web1
Member web1 trying to enable service:webserver...Success
service:webserver is now running on web1
Stop an application
You can stop an application service on a node in the following way, taking mysqlserver as an example:
[root@web1 ~]# clusvcadm -s mysqlserver -m mysql1
Member mysql1 stopping service:mysqlserver...Success
Restart an application
You can restart an application service on a node in the following way, taking webserver as an example:
[root@web2 ~]# clusvcadm -R webserver -m web1
Member web1 trying to restart service:webserver...Success
This command was executed on the web2 node, yet it restarted webserver on the web1 node. As you can see, the clusvcadm command can be run on any node in the cluster.
Switch an application
You can switch an application service between nodes in the following way, for example, relocating the service on node web1 to node web2:
[root@web1 ~]# clusvcadm -r webserver -m web2
Trying to relocate service:webserver to web2...Success
service:webserver is now running on web2
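Because clusvcadm works from any node, relocation is easy to script, for example to drain all services off a node before maintenance. The sketch below is an assumption-laden illustration: the service list and target node are hypothetical names from this article, and `CLUSVCADM` defaults to `echo clusvcadm` so the loop dry-runs outside a real cluster.

```shell
#!/bin/sh
# Relocate every listed service to a target node before maintenance.
# SERVICES and TARGET are illustrative; CLUSVCADM defaults to a dry run.
CLUSVCADM="${CLUSVCADM:-echo clusvcadm}"
SERVICES="webserver mysqlserver"
TARGET="web2"

for svc in $SERVICES; do
    # -r relocates a running service to the member named with -m.
    $CLUSVCADM -r "$svc" -m "$TARGET"
done
```

On a live cluster you would set `CLUSVCADM=clusvcadm` and run the script from any member node.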
Monitor RHCS cluster status
Monitoring RHCS helps you understand the health of each node in the cluster and find and solve problems in time. RHCS provides a wealth of status commands; here we mainly introduce the usage of cman_tool, clustat and ccs_tool.
The cman_tool command
cman_tool has many parameters, but they are simple to use. Here are two examples:
[root@web1 ~]# cman_tool nodes -a
Node  Sts   Inc   Joined               Name
   0   M      0   2010-08-23 01:24:00  /dev/sdb7
   1   M   2492   2010-08-23 01:22:43  web2
       Addresses: 192.168.12.240
   2   M   2492   2010-08-23 01:22:43  Mysql1
       Addresses: 192.168.12.231
   3   M   2492   2010-08-23 01:22:43  Mysql2
       Addresses: 192.168.12.232
   4   M   2488   2010-08-23 01:22:43  web1
       Addresses: 192.168.12.230
This command shows the node names, their corresponding IP addresses and the time each node joined the cluster.
If you want to learn more about cluster nodes, you can use the following command:
[root@web1 ~]# cman_tool status
Version: 6.2.0
Config version: 35                    # version number of the cluster configuration file
Cluster name: mycluster
Cluster Id: 56756
Cluster Member: Yes
Cluster Generation: 2764
Membership state: Cluster-Member
Nodes: 4                              # number of cluster nodes
Expected votes: 6                     # expected number of votes
Quorum device votes: 2                # vote value of the quorum disk
Total votes: 6                        # total number of votes in the cluster
Quorum: 4                             # quorum of the cluster; if the vote count falls below this value, the cluster stops serving
Active subsystems: 9
Flags: Dirty
Ports Bound: 0 177
Node name: web1
Node ID: 4                            # ID of this node in the cluster
Multicast addresses: 126.96.36.199    # cluster multicast address
Node addresses: 192.168.12.230        # IP address of this node
The clustat command
The clustat command is very simple to use. For detailed usage, you can get help with "clustat -h". Here are just a few examples.
[root@web1 ~]# clustat -i 3
Cluster Status for mycluster @ Mon Aug 23 18:54:15 2010
Member Status: Quorate

 Member Name          ID   Status
 ------ ----          ---- ------
 web2                  1   Online, rgmanager
 Mysql1                2   Online, rgmanager
 Mysql2                3   Online, rgmanager
 web1                  4   Online, Local, rgmanager
 /dev/sdb7             0   Online, Quorum Disk

 Service Name          Owner (Last)   State
 ------- ----          ------------   -----
 service:mysqlserver   Mysql1         started
 service:webserver     web1           started
The meaning of output content is as follows:
The "-i" option of clustat displays the running status of each node and service in the cluster in real time; "-i 3" refreshes the cluster status every three seconds.
In this output, every node is in the "Online" state, indicating that each node is running normally; if a node leaves the cluster, its state changes to "Offline". Likewise, the cluster's two services are in the "started" state, running on the mysql1 and web1 nodes respectively.
In addition, the "ID" column shows how names map to node numbers: web2 is node 1 in this cluster and web1 is node 4. Knowing this ordering is helpful when interpreting cluster logs.
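The clustat output above lends itself to simple monitoring. The following sketch defines a small helper, not part of RHCS, that prints any member not reported "Online"; the parsing assumes the column layout shown above (member lines carry a numeric ID in the second column), and the file argument exists only so the helper can be tested offline — on a live node you would pipe `clustat` into it.

```shell
#!/bin/sh
# Print cluster members whose status line lacks "Online".
# Reads clustat output from the file given as $1, or stdin, so on a
# live node you can run:  clustat | offline_members
offline_members() {
    awk '
        # Member lines have a numeric node ID in column 2; service and
        # header lines do not, so they are skipped automatically.
        $2 ~ /^[0-9]+$/ && $0 !~ /Online/ { print $1 }
    ' "${1:--}"
}
```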
The ccs_tool command
ccs_tool is mainly used to manage the cluster configuration file cluster.conf. With ccs_tool you can add or delete nodes, add or delete fence devices, and update the cluster configuration file across the cluster.
Here are some examples of using ccs_tool.
After modifying the configuration file on one node, you can run the "ccs_tool update" command to propagate it to all nodes, for example:
[root@web1 cluster]# ccs_tool update /etc/cluster/cluster.conf
Proposed updated config file does not have greater version number.
  Current config_version :: 35
  Proposed config_version:: 35
Failed to update config file.
ccs_tool decides whether to update based on the "config_version" value in cluster.conf. Therefore, after modifying cluster.conf, you must increase its config_version value so that the configuration file is actually distributed when ccs_tool update is executed.
[root@web1 cluster]# ccs_tool update /etc/cluster/cluster.conf
Config file updated from version 35 to 36
Update complete.
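The version bump itself can be automated: read the current config_version attribute, increment it, and rewrite the file before calling ccs_tool update. The following is a sed-based sketch, assuming the attribute appears as config_version="N" on the cluster tag as in a standard cluster.conf; the helper name is our own, and taking the file path as an argument lets you try it on a copy first.

```shell
#!/bin/sh
# Increment the config_version attribute of a cluster.conf file in
# place, so a subsequent "ccs_tool update" is accepted, and print the
# new version number.
bump_config_version() {
    conf="$1"
    # Extract the current numeric version from config_version="N".
    cur=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$conf")
    new=$((cur + 1))
    sed -i "s/config_version=\"$cur\"/config_version=\"$new\"/" "$conf"
    echo "$new"
}
```

Typical use on a node would then be: `bump_config_version /etc/cluster/cluster.conf && ccs_tool update /etc/cluster/cluster.conf`.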