Mailing List Archive

VIP on Active/Active cluster
Hello,

I wonder if someone can enlighten me on how to handle the following cluster
scenario:

2 Nodes Cluster (Active/Active)
1 Cluster managed VIP - RoundRobin ?
SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"

My main question is, can one VIP serve 2 nodes?

Thanks in advance.
Re: VIP on Active/Active cluster [ In reply to ]
----- Original Message -----
> From: "Paul Damken" <zen.suite@gmail.com>
> To: pacemaker@oss.clusterlabs.org
> Sent: Wednesday, May 9, 2012 3:10:03 PM
> Subject: [Pacemaker] VIP on Active/Active cluster
>
>
> Hello,
>
>
> I wonder if someone can enlighten me on how to handle the following
> cluster scenario:
>
>
> 2 Nodes Cluster (Active/Active)
> 1 Cluster managed VIP - RoundRobin ?
> SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
>
>
> My main question is, can one VIP serve 2 nodes?
>

crm ra info ocf:heartbeat:IPaddr2

specifically, the clusterip_hash parameter

Example:
primitive p_ip_vip ocf:heartbeat:IPaddr2 \
params ip="192.168.0.254" nic="eth0" cidr_netmask="22" broadcast="192.168.3.255" clusterip_hash="sourceip-sourceport" iflabel="VIP" \
operations $id="p_ip_vip-operations" \
op start interval="0" timeout="20" \
op stop interval="0" timeout="20" \
op monitor interval="10" timeout="20" start-delay="0"

clone cl_vip p_ip_vip
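
Once the clone is running, the IPaddr2 agent sets up an iptables CLUSTERIP rule on each node; something like the following (illustrative commands, label name assumed from the example above) lets you verify it:

# the CLUSTERIP target should appear on the INPUT chain of both nodes
iptables -L INPUT -n | grep CLUSTERIP
# the VIP itself shows up under the iflabel
ip addr show label "eth0:VIP*"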

HTH

Jake


>
> Thanks in advance.

Re: VIP on Active/Active cluster [ In reply to ]
What application is running on the nodes?

Sent from my iPad

On May 9, 2012, at 3:10 PM, Paul Damken <zen.suite@gmail.com> wrote:

> Hello,
>
> I wonder if someone can enlighten me on how to handle the following cluster scenario:
>
> 2 Nodes Cluster (Active/Active)
> 1 Cluster managed VIP - RoundRobin ?
> SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
>
> My main question is, can one VIP serve 2 nodes?
>
> Thanks in advance.

Re: VIP on Active/Active cluster [ In reply to ]
> Hello,
>
> I wonder if someone can enlighten me on how to handle the following cluster
> scenario:
>
> 2 Nodes Cluster (Active/Active)
> 1 Cluster managed VIP - RoundRobin ?
> SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
>
> My main question is, can one VIP serve 2 nodes?
>
> Thanks in advance.

Yes. But I would use the "localnode" feature of the Linux Virtual Server (LVS).
LVS is a real load balancer that offers more features than the clustered IP
address of the normal cluster.
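
For illustration (not from the original mail), a minimal ipvsadm sketch of the localnode idea; the VIP, real-server addresses, and port are placeholders:

# round-robin virtual service on the VIP
ipvsadm -A -t 192.168.1.100:80 -s rr
# "localnode": the director's own address as a real server, so packets are delivered locally
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.58 -g
# the peer node as a direct-routing (gatewaying) real server
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.59 -g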

--
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0163) 172 50 98
Re: VIP on Active/Active cluster [ In reply to ]
> Hello,
> >
> > I wonder if someone can enlighten me on how to handle the following cluster
> > scenario:
> >
> > 2 Nodes Cluster (Active/Active)
> > 1 Cluster managed VIP - RoundRobin ?
> > SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
> >
> > My main question is, can one VIP serve 2 nodes?
> >
> > Thanks in advance.
>
> Yes. But I would use the "localnode" feature of the Linux Virtual Server (LVS).
> LVS is a real load balancer that offers more features than the clustered IP
> address of the normal cluster.
>
> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0163) 172 50 98
>

How could I do that?
I tried setting up the VIP plus a clone of it, but once the resource is
cloned and started on both nodes, it is no longer reachable.

crm(live)configure# show
node havc1
node havc2
primitive failover-ip1 ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.20" cidr_netmask="24" broadcast="192.168.1.255" nic="eth0" clusterip_hash="sourceip-sourceport" \
    op monitor interval="20s"
clone ip1-clone failover-ip1 \
    meta globally-unique="true" clone-max="2" clone-node-max="2" target-role="Started"
property $id="cib-bootstrap-options" \
dc-version="1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
last-lrm-refresh="1336841278"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"

-------------------------------------------------------------------------

Chain INPUT (policy ACCEPT)
target prot opt source destination
CLUSTERIP all -- anywhere 192.168.1.20 CLUSTERIP hashmode=sourceip clustermac=81:30:6E:B7:6D:AF total_nodes=2 local_node=1 hash_init=0

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination


Any idea what is wrong, or what causes this to be unreachable once the
IPaddr2 RA starts on both nodes?
Re: VIP on Active/Active cluster [ In reply to ]
clone-node-max="2" should only be one.
How about the output from crm_mon -fr1
And ip a s on each node?
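
For what it's worth, a minimal sketch of the corrected clone, reusing the resource names from your config (illustrative, untested here):

clone ip1-clone failover-ip1 \
    meta globally-unique="true" clone-max="2" clone-node-max="1"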

Jake

----- Reply message -----
From: "Paul Damken" <zen.suite@gmail.com>
To: <pacemaker@oss.clusterlabs.org>
Subject: [Pacemaker] VIP on Active/Active cluster
Date: Sat, May 12, 2012 2:49 pm


[quoted message snipped; see Paul's post above for the full text]
Re: VIP on Active/Active cluster [ In reply to ]
Jake Smith <jsmith@...> writes:

>
> clone-node-max="2" should only be one. How about the output from
> crm_mon -fr1 and ip a s on each node?
>
> Jake
>
> ----- Reply message -----
> From: "Paul Damken" <zen.suite <at> gmail.com>
> To: <pacemaker <at> oss.clusterlabs.org>
> Subject: [Pacemaker] VIP on Active/Active cluster
> Date: Sat, May 12, 2012 2:49 pm

Jake, thanks. Here is the whole info. Same behavior: the VIP is neither
pingable nor reachable.

Do you think that a shared VIP should work on SLES 11 SP1 HAE?
I cannot get this VIP to work.

Resources:

primitive ip_vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.100" nic="bond0" cidr_netmask="22" broadcast="192.168.1.255" clusterip_hash="sourceip-sourceport" iflabel="VIP1" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="10" timeout="20" start-delay="0"

clone cl_vip ip_vip \
    meta interleave="true" globally-unique="true" clone-max="2" clone-node-max="1" target-role="Started" is-managed="true"

crm_mon:

============
Last updated: Mon May 14 08:27:50 2012
Stack: openais
Current DC: hanode1 - partition with quorum
Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
2 Nodes configured, 2 expected votes
37 Resources configured.
============

Online: [ hanode2 hanode1 ]

Full list of resources:

cluster_mon (ocf::pacemaker:ClusterMon): Started hanode1
Clone Set: HASI [HASI_grp]
Started: [ hanode2 hanode1 ]
hanode1-stonith (stonith:external/ipmi-operator): Started hanode2
hanode2-stonith (stonith:external/ipmi-operator): Started hanode1
vghanode1 (ocf::heartbeat:LVM): Started hanode1
vghanode2 (ocf::heartbeat:LVM): Started hanode2
Clone Set: ora [ora_grp]
Started: [ hanode2 hanode1 ]
Clone Set: cl_vip [ip_vip] (unique)
ip_vip:0 (ocf::heartbeat:IPaddr2): Started hanode2
ip_vip:1 (ocf::heartbeat:IPaddr2): Started hanode1



hanode1:~ # ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.58/22 brd 192.168.1.255 scope global bond0
    inet 192.168.1.100/22 brd 192.168.1.255 scope global secondary bond0:VIP1
    inet6 fe80::9e8e:99ff:fe24:72a0/64 scope link
       valid_lft forever preferred_lft forever

-----------------------


Re: VIP on Active/Active cluster [ In reply to ]
Cloning IPaddr2 resources uses the iptables CLUSTERIP target. It's probably a good idea to start looking at it with tcpdump: see whether either box gets the ICMP echo-request packet (from a ping), and determine whether it doesn't respond properly, doesn't get it at all, or something else.
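
For example, something like this on each node while pinging the VIP from a client (illustrative commands, using the interface and address from the posts below):

# watch for ICMP echo-requests addressed to the VIP
tcpdump -ni bond0 icmp and host 192.168.1.100
# the CLUSTERIP rule keeps packet counters too
iptables -L INPUT -nv | grep CLUSTERIP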

I'd say it's more of an iptables/networking issue than a Pacemaker problem now. That said, you didn't detail why you want a shared VIP in the first place, or what the application is, so it may cause more problems than it's worth (e.g. if your app is running but broken on one box, the VIP will still route users to it).



On May 14, 2012, at 9:45 AM, Paul Damken wrote:

> Jake Smith <jsmith@...> writes:
>
>>
>> clone-node-max="2" should only be one. How about the output from
>> crm_mon -fr1 and ip a s on each node?
>>
>> Jake
>>
>> ----- Reply message -----
>> From: "Paul Damken" <zen.suite <at> gmail.com>
>> To: <pacemaker <at> oss.clusterlabs.org>
>> Subject: [Pacemaker] VIP on Active/Active cluster
>> Date: Sat, May 12, 2012 2:49 pm
>
> Jake, thanks. Here is the whole info. Same behavior: the VIP is neither
> pingable nor reachable.
>
> Do you think that a shared VIP should work on SLES 11 SP1 HAE?
> I cannot get this VIP to work.
>
> [configuration, crm_mon, and "ip a s" output snipped; unchanged from the message quoted above]
Re: VIP on Active/Active cluster [ In reply to ]
----- Original Message -----
> From: "Paul Damken" <zen.suite@gmail.com>
> To: pacemaker@clusterlabs.org
> Sent: Monday, May 14, 2012 9:45:30 AM
> Subject: Re: [Pacemaker] VIP on Active/Active cluster
>
> Jake Smith <jsmith@...> writes:
>
> >
> > clone-node-max="2" should only be one. How about the output from
> > crm_mon -fr1 and ip a s on each node?
> >
> > Jake
> >
> > ----- Reply message -----
> > From: "Paul Damken" <zen.suite <at> gmail.com>
> > To: <pacemaker <at> oss.clusterlabs.org>
> > Subject: [Pacemaker] VIP on Active/Active cluster
> > Date: Sat, May 12, 2012 2:49 pm
> >
>
> Jake, thanks. Here is the whole info. Same behavior: the VIP is neither
> pingable nor reachable.
>
> Do you think that a shared VIP should work on SLES 11 SP1 HAE?
> I cannot get this VIP to work.

I use Ubuntu so I can't say 100% but I would expect so... I use it successfully in my cluster so I know it *can* work in general.

Your cidr_netmask looks odd to me given the broadcast address... should it be 24 or 23, not 22?
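
(For reference: 192.168.1.100/22 lies in 192.168.0.0/22, whose broadcast is 192.168.3.255. A broadcast of 192.168.1.255 instead matches 192.168.1.0/24, or 192.168.0.0/23, either of which contains the VIP.)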

>
> Resources:
>
> primitive ip_vip ocf:heartbeat:IPaddr2 \
>     params ip="192.168.1.100" nic="bond0" cidr_netmask="22" broadcast="192.168.1.255" clusterip_hash="sourceip-sourceport" iflabel="VIP1" \
>     op start interval="0" timeout="20" \
>     op stop interval="0" timeout="20" \
>     op monitor interval="10" timeout="20" start-delay="0"
>
> clone cl_vip ip_vip \
>     meta interleave="true" globally-unique="true" clone-max="2" clone-node-max="1" target-role="Started" is-managed="true"

You don't really need any of these parameters... just "clone cl_vip ip_vip" and nothing else (sketched below). globally-unique could be part of the problem too.

interleave defaults to false if not defined, and you pretty surely want it false
globally-unique defaults to false and should not be true for your use case
clone-max defaults to the number of nodes in the cluster, so with 2 nodes you get 2 clones
clone-node-max defaults to 1
target-role and is-managed were auto-generated by earlier cluster actions and are fine as-is or removed
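
In crm syntax, the stripped-down clone would be just (a sketch of the suggestion above):

clone cl_vip ip_vip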


>
> crm_mon:
>
> ============
> Last updated: Mon May 14 08:27:50 2012
> Stack: openais
> Current DC: hanode1 - partition with quorum
> Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
> 2 Nodes configured, 2 expected votes
> 37 Resources configured.
> ============
>
> Online: [ hanode2 hanode1 ]
>
> Full list of resources:
>
> cluster_mon (ocf::pacemaker:ClusterMon): Started hanode1
> Clone Set: HASI [HASI_grp]
> Started: [ hanode2 hanode1 ]
> hanode1-stonith (stonith:external/ipmi-operator): Started hanode2
> hanode2-stonith (stonith:external/ipmi-operator): Started hanode1
> vghanode1 (ocf::heartbeat:LVM): Started hanode1
> vghanode2 (ocf::heartbeat:LVM): Started hanode2
> Clone Set: ora [ora_grp]
> Started: [ hanode2 hanode1 ]
> Clone Set: cl_vip [ip_vip] (unique)
> ip_vip:0 (ocf::heartbeat:IPaddr2): Started hanode2
> ip_vip:1 (ocf::heartbeat:IPaddr2): Started hanode1
>

It should not be (unique), as I stated above.

>
>
> hanode1:~ # ip a s
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
> inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> inet 192.168.1.58/22 brd 192.168.1.255 scope global bond0
> inet 192.168.1.100/22 brd 192.168.1.255 scope global secondary bond0:VIP1
> inet6 fe80::9e8e:99ff:fe24:72a0/64 scope link
> valid_lft forever preferred_lft forever
>

I would try the changes above to the clone and (possibly) the netmasks.

Then if it's still not pingable I would stop any firewall on the servers temporarily and test just to rule the firewall out.
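
For example (illustrative; rcSuSEfirewall2 being SLES 11's firewall init script):

# temporarily stop the firewall on both nodes, then retest
rcSuSEfirewall2 stop
ping -c 3 192.168.1.100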

If that doesn't work, how about the output from "crm_mon -fr1" and "crm configure show", and "ip a s" from each node?

HTH

Jake

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org