{"id":277627,"date":"2016-04-04T18:00:04","date_gmt":"2016-04-04T14:00:04","guid":{"rendered":"http:\/\/savepearlharbor.com\/?p=277627"},"modified":"-0001-11-30T00:00:00","modified_gmt":"-0001-11-29T21:00:00","slug":"","status":"publish","type":"post","link":"https:\/\/savepearlharbor.com\/?p=277627","title":{"rendered":"Active\/Passive PostgreSQL Cluster \u0441 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u0435\u043c Pacemaker, Corosync"},"content":{"rendered":"<p>       <img decoding=\"async\" src=\"https:\/\/habrastorage.org\/getpro\/habr\/post_images\/cb2\/776\/a29\/cb2776a2971117d6517936e78f696e38.jpg\" alt=\"image\"\/><\/p>\n<p>  <b><\/p>\n<h5>\u041e\u043f\u0438\u0441\u0430\u043d\u0438\u0435<\/h5>\n<p><\/b><br \/>  \u0412 \u0434\u0430\u043d\u043d\u043e\u0439 \u0441\u0442\u0430\u0442\u044c\u0435 \u0440\u0430\u0441\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0435\u0442\u0441\u044f \u043f\u0440\u0438\u043c\u0435\u0440 \u043d\u0430\u0441\u0442\u0440\u043e\u0439\u043a\u0438 Active\/Passive \u043a\u043b\u0430\u0441\u0442\u0435\u0440\u0430 \u0434\u043b\u044f PostgreSQL \u0441 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u0435\u043c Pacemaker, Corosync. \u0412 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435 \u0434\u0438\u0441\u043a\u043e\u0432\u043e\u0439 \u043f\u043e\u0434\u0441\u0438\u0441\u0442\u0435\u043c\u044b \u0440\u0430\u0441\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0435\u0442\u0441\u044f \u0434\u0438\u0441\u043a \u043e\u0442 \u0441\u0438\u0441\u0442\u0435\u043c\u044b \u0445\u0440\u0430\u043d\u0435\u043d\u0438\u044f \u0434\u0430\u043d\u043d\u044b\u0445 (CSV). 
The resulting setup is similar to a Microsoft Windows Failover Cluster.<\/p>\n<p>  Technical details:<br \/>  <i>Operating system \u2014 CentOS 7.1<br \/>  pacemaker package version \u2014 1.1.13-10<br \/>  pcs package version \u2014 0.9.143<br \/>  Servers (2) \u2014 bare-metal machines, 2*12 CPU \/ 94 GB memory<br \/>  CSV (Cluster Shared Volume) \u2014 a mid-range Hitachi array, RAID 1+0<\/i><\/p>\n<h5>Preparing the cluster nodes<\/h5>\n<p>  Edit \/etc\/hosts on both hosts so that each node can resolve the other by its short name, for example:  <\/p>\n<pre><code class=\"bash\">[root@node1 ~]# cat \/etc\/hosts\n127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1 localhost localhost.localdomain localhost6 localhost6.localdomain6\n10.1.66.23 node1.local.lan node1\n10.1.66.24 node2.local.lan node2 <\/code><\/pre>\n<p>  Also set up key-based SSH access between the servers and copy the keys to both hosts.<\/p>\n<p>  After that, make sure both servers can reach each other by short name:  <\/p>\n<pre><code class=\"bash\">[root@node1 ~]# ping node2\nPING node2.local.lan (10.1.66.24) 56(84) bytes of data.\n64 bytes from node2.local.lan (10.1.66.24): icmp_seq=1 ttl=64 time=0.204 ms\n64 bytes from node2.local.lan (10.1.66.24): icmp_seq=2 ttl=64 time=0.221 ms\n64 bytes from node2.local.lan (10.1.66.24): icmp_seq=3 ttl=64 time=0.202 ms\n64 bytes from node2.local.lan (10.1.66.24): icmp_seq=4 ttl=64 time=0.207 ms\n\n[root@node2 ~]# ping node1\nPING node1.local.lan (10.1.66.23) 56(84) bytes of data.\n
64 bytes from node1.local.lan (10.1.66.23): icmp_seq=1 ttl=64 time=0.202 ms\n64 bytes from node1.local.lan (10.1.66.23): icmp_seq=2 ttl=64 time=0.218 ms\n64 bytes from node1.local.lan (10.1.66.23): icmp_seq=3 ttl=64 time=0.186 ms\n64 bytes from node1.local.lan (10.1.66.23): icmp_seq=4 ttl=64 time=0.193 ms <\/code><\/pre>\n<p>  Installing the packages needed to build the cluster<br \/>  Install the required packages on both hosts:  <\/p>\n<pre><code class=\"bash\">yum install -y pacemaker pcs psmisc policycoreutils-python <\/code><\/pre>\n<p>  Then start and enable the pcsd service:  <\/p>\n<pre><code class=\"bash\">systemctl start pcsd.service\nsystemctl enable pcsd.service <\/code><\/pre>\n<p>  Managing the cluster requires a dedicated user (hacluster, created automatically by the pacemaker packages); set its password on both hosts:  <\/p>\n<pre><code class=\"bash\">passwd hacluster\nChanging password for user hacluster.
\nNew password: \nRetype new password: \npasswd: all authentication tokens updated successfully. <\/code><\/pre>\n<p>  To verify authentication, run the following from the first node:  <\/p>\n<pre><code class=\"bash\">[root@node1 ~]# pcs cluster auth node1 node2\nUsername: hacluster\nPassword:\nnode1: Authorized\nnode2: Authorized <\/code><\/pre>\n<p>  Next, create and start the cluster, relax two defaults that suit this two-node setup (no fencing devices are configured here, and losing quorum must not stop the surviving node), and check the result:  <\/p>\n<pre><code class=\"bash\">pcs cluster setup --name cluster01 node1 node2\npcs cluster start --all\n\npcs property set stonith-enabled=false\npcs property set no-quorum-policy=ignore\n\npcs status <\/code><\/pre>\n<p>  The cluster status output should look roughly like this:<\/p>\n<pre><code class=\"bash\">[root@node1 ~]# pcs status\nCluster name: cluster01\nWARNING: no stonith devices and stonith-enabled is not false\nLast updated: Tue Mar 16 10:11:29 2016\nLast change: Tue Mar 16 10:12:47 2016\nStack: corosync\nCurrent DC: node2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum\n2 Nodes configured\n0 Resources configured\n\nOnline: [ node1 node2 ]\n\nFull list of resources:\n\nPCSD Status:\n  node1: Online\n  node2: Online\n\nDaemon Status:\n  corosync: active\/disabled\n  pacemaker: active\/disabled\n  pcsd: 
active\/enabled <\/code><\/pre>\n<p>  Now let's move on to configuring resources in the cluster.<\/p>\n<h5>Configuring the CSV<\/h5>\n<p>  Log in to the first host and set up LVM:  <\/p>\n<pre><code class=\"bash\">pvcreate \/dev\/sdb\nvgcreate shared_vg \/dev\/sdb\nlvcreate -l 100%FREE -n ha_lv shared_vg\nmkfs.ext4 \/dev\/shared_vg\/ha_lv <\/code><\/pre>\n<p>  The disk is ready. Now we need to make sure the shared volume group is not auto-activated by LVM at the host level, so that only the cluster activates it. This is done by editing \/etc\/lvm\/lvm.conf (the activation section) on both hosts:<\/p>\n<pre><code class=\"bash\">activation {\n.....\n
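# volume_list restricts which VGs this host may activate on its own: keep the\n# local root VG plus this node's hostname tag, and leave shared_vg out so that\n# only the cluster activates it (on node2 the tag entry would be &quot;@node2&quot;).\n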
#volume_list = [ &quot;vg1&quot;, &quot;vg2\/lvol1&quot;, &quot;@tag1&quot;, &quot;@*&quot; ]\nvolume_list = [ &quot;centos&quot;, &quot;@node1&quot; ] <\/code><\/pre>\n<p>  Rebuild the initramfs so the new LVM policy takes effect early in boot, and reboot the nodes:  <\/p>\n<pre><code class=\"bash\">dracut -H -f \/boot\/initramfs-$(uname -r).img $(uname -r)\nshutdown -r now <\/code><\/pre>\n<h5>Adding resources to the cluster<\/h5>\n<p>  Now create a resource group in the cluster: the disk with its filesystem, the IP address, and the PostgreSQL service itself.<\/p>\n<pre><code class=\"bash\">pcs resource create virtual_ip IPaddr2 ip=10.1.66.25 cidr_netmask=24 --group PGCLUSTER\npcs resource create DATA ocf:heartbeat:LVM volgrpname=shared_vg exclusive=true --group PGCLUSTER\npcs resource create DATA_FS Filesystem device=&quot;\/dev\/shared_vg\/ha_lv&quot; directory=&quot;\/data&quot; fstype=&quot;ext4&quot; force_unmount=&quot;true&quot; fast_stop=&quot;1&quot; --group PGCLUSTER\npcs resource create pgsql pgsql pgctl=&quot;\/usr\/pgsql-9.4\/bin\/pg_ctl&quot; psql=&quot;\/usr\/pgsql-9.4\/bin\/psql&quot; pgdata=&quot;\/data&quot; pgport=&quot;5432&quot; pgdba=&quot;postgres&quot; node_list=&quot;node1 node2&quot; op start timeout=&quot;60s&quot; interval=&quot;0s&quot; on-fail=&quot;restart&quot; op monitor timeout=&quot;60s&quot; interval=&quot;4s&quot; on-fail=&quot;restart&quot; op promote timeout=&quot;60s&quot; interval=&quot;0s&quot; on-fail=&quot;restart&quot; 
op demote timeout=&quot;60s&quot; interval=&quot;0s&quot; on-fail=&quot;stop&quot; op stop timeout=&quot;60s&quot; interval=&quot;0s&quot; on-fail=&quot;block&quot; op notify timeout=&quot;60s&quot; interval=&quot;0s&quot; --group PGCLUSTER <\/code><\/pre>\n<p>  Note that all of the resources are in the same group.<br \/>  Also remember to adjust the cluster-wide resource defaults:  <\/p>\n<pre><code class=\"bash\">pcs resource defaults failure-timeout=60s migration-threshold=1 <\/code><\/pre>\n<p>  In the end, you should see something like this:<\/p>\n<pre><code class=\"bash\">[root@node1 ~]# pcs status\nCluster name: cluster_web\nLast updated: Mon Apr 4 14:23:34 2016\nLast change: Thu Mar 31 12:51:03 2016 by root via cibadmin on node2\nStack: corosync\nCurrent DC: node2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum\n2 nodes and 4 resources configured\n\nOnline: [ node1 node2 ]\n\nFull list of resources:\n\nResource Group: PGCLUSTER\n    DATA (ocf::heartbeat:LVM): Started node2\n    DATA_FS (ocf::heartbeat:Filesystem): Started node2\n    virtual_ip (ocf::heartbeat:IPaddr2): Started node2\n    pgsql (ocf::heartbeat:pgsql): Started node2\n\nPCSD Status:\n  node1: Online\n  node2: Online\n\nDaemon Status:\n  corosync: active\/disabled\n  pacemaker: active\/disabled\n  pcsd: active\/enabled <\/code><\/pre>\n<p>  Check the status of the PostgreSQL service on the host where the resource group is running:<\/p>\n<pre><code class=\"bash\">[root@node2 ~]# ps -ef | grep postgres\npostgres  4183     1  0 Mar31 ?        00:00:51 \/usr\/pgsql-9.4\/bin\/postgres -D \/data -c config_file=\/data\/postgresql.conf\npostgres  4204  4183  0 Mar31 ?        00:00:00 postgres: logger process\npostgres  4206  4183  0 Mar31 ?        00:00:00 postgres: checkpointer process\npostgres  4207  4183  0 Mar31 ?        00:00:02 postgres: writer process\npostgres  4208  4183  0 Mar31 ?        00:00:02 postgres: wal writer process\npostgres  4209  4183  0 Mar31 ?        00:00:09 postgres: autovacuum launcher process\npostgres  4210  4183  0 Mar31 ?        00:00:36 postgres: stats collector process\nroot     16926 30749  0 16:41 pts\/0    00:00:00 grep --color=auto postgres <\/code><\/pre>\n<h5>Testing failover<\/h5>\n<p>  Simulate a service failure on node2 and watch what happens:  <\/p>\n<pre><code class=\"bash\">[root@node2 ~]# pcs resource debug-stop pgsql\nOperation stop for pgsql (ocf:heartbeat:pgsql) returned 0\n &gt;  stderr: ERROR: waiting for server to shut down....Terminated\n &gt;  stderr: INFO: PostgreSQL is down <\/code><\/pre>\n<p>  Check the status on node1:<\/p>\n<pre><code class=\"bash\">[root@node1 ~]# pcs status\nCluster name: cluster_web\nLast updated: Mon Apr 
4 16:51:59 2016\nLast change: Thu Mar 31 12:51:03 2016 by root via cibadmin on node2\nStack: corosync\nCurrent DC: node2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum\n2 nodes and 4 resources configured\n\nOnline: [ node1 node2 ]\n\nFull list of resources:\n\nResource Group: PGCLUSTER\n    DATA       (ocf::heartbeat:LVM):   Started node1\n    DATA_FS    (ocf::heartbeat:Filesystem):    Started node1\n    virtual_ip (ocf::heartbeat:IPaddr2):       Started node1\n    pgsql      (ocf::heartbeat:pgsql): Started node1\n\nFailed Actions:\n* pgsql_monitor_4000 on node2 'not running' (7): call=48, status=complete, exitreason='none',\n    last-rc-change='Mon Apr  4 16:51:11 2016', queued=0ms, exec=0ms\n\nPCSD Status:\n  node1: Online\n  node2: Online\n\nDaemon Status:\n  corosync: active\/disabled\n  pacemaker: active\/disabled\n  pcsd: active\/enabled <\/code><\/pre>\n<p>  As you can see, the service is now running happily on node1.<\/p>\n<p>  To do: set up ordering and colocation dependencies between the resources inside the group\u2026<\/p>\n<p>  References:<br \/>  <a href=\"http:\/\/clusterlabs.org\/\">clusterlabs.org<\/a>               <\/p>\n<div class=\"clear\"><\/div>\n<p> Link to the original article: <a href=\"https:\/\/habrahabr.ru\/post\/280872\/\"> https:\/\/habrahabr.ru\/post\/280872\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>       <img decoding=\"async\" 
src=\"https:\/\/habrastorage.org\/getpro\/habr\/post_images\/cb2\/776\/a29\/cb2776a2971117d6517936e78f696e38.jpg\" alt=\"image\"\/><\/p>\n<p>  <b><\/p>\n<h5>\u041e\u043f\u0438\u0441\u0430\u043d\u0438\u0435<\/h5>\n<p><\/b><br \/>  \u0412 \u0434\u0430\u043d\u043d\u043e\u0439 \u0441\u0442\u0430\u0442\u044c\u0435 \u0440\u0430\u0441\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0435\u0442\u0441\u044f \u043f\u0440\u0438\u043c\u0435\u0440 \u043d\u0430\u0441\u0442\u0440\u043e\u0439\u043a\u0438 Active\/Passive \u043a\u043b\u0430\u0441\u0442\u0435\u0440\u0430 \u0434\u043b\u044f PostgreSQL \u0441 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u0435\u043c Pacemaker, Corosync. \u0412 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435 \u0434\u0438\u0441\u043a\u043e\u0432\u043e\u0439 \u043f\u043e\u0434\u0441\u0438\u0441\u0442\u0435\u043c\u044b \u0440\u0430\u0441\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0435\u0442\u0441\u044f \u0434\u0438\u0441\u043a \u043e\u0442 \u0441\u0438\u0441\u0442\u0435\u043c\u044b \u0445\u0440\u0430\u043d\u0435\u043d\u0438\u044f \u0434\u0430\u043d\u043d\u044b\u0445 (CSV). 
\u0420\u0435\u0448\u0435\u043d\u0438\u0435 \u043d\u0430\u043f\u043e\u043c\u0438\u043d\u0430\u0435\u0442 Windows Failover Cluster \u043e\u0442 Microsoft.<\/p>\n<p>  \u0422\u0435\u0445\u043d\u0438\u0447\u0435\u0441\u043a\u0438\u0435 \u043f\u043e\u0434\u0440\u043e\u0431\u043d\u043e\u0441\u0442\u0438:<br \/>  <i>\u0412\u0435\u0440\u0441\u0438\u044f \u043e\u043f\u0435\u0440\u0430\u0446\u0438\u043e\u043d\u043d\u043e\u0439 \u0441\u0438\u0441\u0442\u0435\u043c\u044b \u2014 CentOS 7.1<br \/>  \u0412\u0435\u0440\u0441\u0438\u044f \u043f\u0430\u043a\u0435\u0442\u0430 pacemaker \u2014 1.1.13-10<br \/>  \u0412\u0435\u0440\u0441\u0438\u044f \u043f\u0430\u043a\u0435\u0442\u0430 pcs \u2014 0.9.143<br \/>  \u0412 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435 \u0441\u0435\u0440\u0432\u0435\u0440\u043e\u0432(2\u0448\u0442) \u2014 \u0436\u0435\u043b\u0435\u0437\u043d\u044b\u0435 \u0441\u0435\u0440\u0432\u0435\u0440\u0430 2*12 CPU\/ 94GB memory<br \/>  \u0412 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435 CSV(Cluster Shared Volume) \u2014 \u043c\u0430\u0441\u0441\u0438\u0432 \u043a\u043b\u0430\u0441\u0441\u0430 Mid-Range Hitachi RAID 1+0<\/i><\/p>\n<p>  <b><\/p>\n<h5>\u041f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043a\u0430 \u0443\u0437\u043b\u043e\u0432 \u043a\u043b\u0430\u0441\u0442\u0435\u0440\u0430<\/h5>\n<p><\/b>  
<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[],"tags":[],"class_list":["post-277627","post","type-post","status-publish","format-standard","hentry"],"_links":{"self":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts\/277627","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=277627"}],"version-history":[{"count":0,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts\/277627\/revisions"}],"wp:attachment":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=277627"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=277627"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=277627"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}