Mailing List Archive

DRBD primary/primary for KVM - what is the best option?
I'm just about to embark on my first adventure with DRBD, and I'm
interested in some advice on the most reliable approach.

I'm going to have two servers configured as a DRBD cluster, with
either a gigabit ethernet or infiniband link between them. This
cluster will host a number of virtual machine images, using KVM,
likely using Convirt as a management tool. I will have one or more
machines in addition to the two main machines, which will be used to
host images during maintenance or in the event of a machine failure.

My question is how best to export file system images to the various
machines. My first thought was to use a primary/primary setup and
export LVM volumes via multipath iSCSI. The advantage here is that the
load on the network and file system is spread evenly, the
configuration allows for failover from just about any conceivable
failure, and direct LVM access minimizes the file system overhead. The
configuration seems to be fairly complex, but I've tackled similar
complexity in the past without concern.
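
For concreteness, a minimal sketch of what exporting one DRBD-backed LV
over iSCSI with IET might look like (the target name, device and layout
below are just illustrative assumptions on my part):

  # /etc/ietd.conf excerpt on the node currently serving the LUN
  # (hypothetical names; /dev/drbd0 sits on top of an LV)
  Target iqn.2010-05.com.example:vmstore0
      Lun 0 Path=/dev/drbd0,Type=blockio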

However, in my research, I've come across articles like this:
http://87.230.77.133/blog/?p=6. This is causing me to second-guess
whether this is the best approach to take. I'm struggling to find
consensus on the web as to the best path forward.
Obviously, I have other options at my disposal, including:

1. Add OCFS2 to the mix, and use file-based images instead of LVM volumes.
2. Abandon iSCSI for NFS.
3. Abandon multipath in favor of a simple heartbeat failover.
4. Switch to a primary/secondary configuration.

Does anyone have opinions on the most appropriate path to take?

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
Michael, I have a similar setup; however, I am going with a simpler
configuration.

I have two Xen hypervisor machines, each with a 1TB volume. I use LVM
on top of the 1TB volume to carve out LVs for each VM hard drive.

Then I have DRBD replicating each LV. So, currently I have 14 DRBD
devices, and I add a new DRBD resource whenever I create a new VM.

This allows each VM to migrate from one hypervisor to the other
independently. All the DRBD resources are set up for dual primary;
this is needed to support Xen live migration.
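
As an illustration only, a per-VM resource in drbd.conf looks roughly
like this (the hostnames, devices, addresses and sync rate below are
placeholders, not my actual config):

  resource vm-web01 {
    protocol C;
    net {
      allow-two-primaries;     # dual primary, required for live migration
    }
    syncer { rate 40M; }
    on xen1 {
      device    /dev/drbd14;
      disk      /dev/vg0/vm-web01;
      address   10.0.0.1:7814;
      meta-disk internal;
    }
    on xen2 {
      device    /dev/drbd14;
      disk      /dev/vg0/vm-web01;
      address   10.0.0.2:7814;
      meta-disk internal;
    }
  }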

I let Heartbeat manage the VMs and I use the drbd: storage type for the
Xen VMs, so Xen can handle the DRBD resources. This gives me failover
for all the VMs, as well as manual live migration. Currently I run
half the VMs on each hypervisor (7/7) to spread the load; of course,
Heartbeat will boot up the VMs on the remaining hypervisor if one of
the systems fails. When I perform maintenance, I can put a Heartbeat
node into standby and the VMs live migrate.
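
For reference, the drbd: storage type means the relevant bit in a domU
config is just the disk line, and the block-drbd helper script takes
care of the DRBD role changes around start/stop and migration (names
below are made up):

  # /etc/xen/vm-web01.cfg (excerpt)
  disk = [ 'drbd:vm-web01,xvda,w' ]

  # manual live migration of that VM to the other hypervisor
  xm migrate --live vm-web01 xen2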

This has been a very stable configuration for me.

Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
Michael, sorry I did not speak about KVM; I have never used it. My
experience is with Xen, so I can only assume you can do something
similar with KVM. My point was that having a dedicated DRBD resource
for each VM (as opposed to replicating the entire volume) gives you
granular control over each virtual disk, allowing you to move a VM
from one machine to the other without requiring a primary/primary
setup and a cluster file system. This is of course at the expense of
having many LVs and DRBD resources, but I have not run into any issues
with my setup so far.

Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
Ben,

This is an interesting approach. I can see how recovering from split
brain would be relatively easy in this situation, as each volume
would, in effect, be operating in a primary/secondary role, since the
virtual image would only be running in a single location.

The only reason I've brought iSCSI into the mix is to have the option
of including a third node in the cluster, used to offload processing
if one of the primary nodes is down. This, in theory, would allow the
cluster to run at higher utilization.

I guess there are three ways I could bring the third node into play:

1. Use direct disk access for normal production in a manner similar to
your setup. If I need to use the third node, I could manually start
iSCSI on the DRBD devices, and cold migrate some of the VMs to the third
node. This way, I'd get good performance in normal production, but
still could bring the third node into play when needed, with the loss
of live migration.

2. Use a similar approach to yours, only with the addition of iSCSI.
Each LUN could be associated with the same compute node where the VM
image normally runs, with failover to the opposite node. This would
allow the iSCSI traffic to remain local on the machine in most cases,
and only use the network in the event the image was migrated to
another node. This would keep live migration between any pair of nodes a viable
option. If I can figure out how to hot migrate individual iSCSI
targets, it would be even more flexible.

I suppose I'll have to do some benchmarks to see what performance
penalty a local and/or remote iSCSI service brings to the VM versus
direct block device access (see the sketch after the third option
below).

3. The third option would be an all iSCSI setup similar to what I
mentioned in my original email, with the DRBD cluster acting as a
load-balanced, multipath SAN.
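
For those benchmarks, something simple like fio run against the raw
device should be enough to show the difference between the iSCSI and
direct paths (the device path is a placeholder, and note that a
read/write test like this destroys whatever is on the device):

  # random 4k read/write mix, direct I/O, 60 seconds
  fio --name=vmdisk --filename=/dev/xvdb --direct=1 --rw=randrw \
      --bs=4k --ioengine=libaio --iodepth=32 --runtime=60 --group_reporting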

Thanks,

Mike

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
Thanks all for your input. Based on what was posted here, plus some
private discussions, and some reflection, I think I've come to my
senses, and really only have two viable options:

1. iSCSI with a conventional primary/secondary setup and failover.
Take the disk performance hit in exchange for flexible live migration.
This should be perfect for some of my not-very-I/O-intensive tasks. I
might still play with multipath for network redundancy and link
throughput, but it will be multipath to one host at a time (see the
sketch after option 2 below).

2. The multiple-DRBD-resource approach that Ben Timby mentioned. This
should be better for I/O-bound applications, while avoiding many of
the split-brain headaches caused by a single-volume primary/primary
setup. The only downside is that live migration will only be possible
between two machines.
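
For option 1, the multipath part would just be logging in to the same
target over both NICs of the active node and letting dm-multipath
aggregate the paths. Roughly, with open-iscsi (the IPs and IQN are
placeholders):

  # discover and log in over both portals of the active node
  iscsiadm -m discovery -t sendtargets -p 192.168.10.1
  iscsiadm -m discovery -t sendtargets -p 192.168.11.1
  iscsiadm -m node -T iqn.2010-05.com.example:vmstore0 --login
  # the aggregated device then shows up under /dev/mapper
  multipath -ll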

For my initial cluster I'm building now, I'm going to go with the
former method. However, the latter method will probably be ideal for
phase two of my HA project, which will tackle a rather large database
we have.

Mike


--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
What also might be a nice idea (though theoretical, I don't know if this is
feasible) is to have a primary/secondary setup for iSCSI and have Heartbeat
take care of individual targets on individual DRBD resources. One set could
run on a separate IP address on one node and the other set could run on the
other node on a different IP address. This would divide the load nicely.
However, while failing over, the Heartbeat resource agent would have to
instruct IETD (if this is the iSCSI software you are using) to first bind to
the other IP address and then dynamically load the new targets.
I'm not sure whether this is possible at all, or whether it would break
existing iSCSI sessions, but it would be a nice thing to have ...
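
Just thinking out loud, with the Pacemaker-style agents on top of
Heartbeat (if the iSCSITarget/iSCSILogicalUnit resource agents are
available in your resource-agents version), one of the two sets might
look roughly like the sketch below; the second set would mirror it on
the other node. All names, addresses and parameters here are invented
and untested:

  # crm configure (sketch only)
  primitive p_drbd_set1 ocf:linbit:drbd params drbd_resource="set1"
  ms ms_drbd_set1 p_drbd_set1 meta master-max="1" clone-max="2" notify="true"
  primitive p_ip_set1 ocf:heartbeat:IPaddr2 params ip="192.168.10.101" cidr_netmask="24"
  primitive p_target_set1 ocf:heartbeat:iSCSITarget \
      params implementation="iet" iqn="iqn.2010-05.com.example:set1"
  primitive p_lun_set1 ocf:heartbeat:iSCSILogicalUnit \
      params target_iqn="iqn.2010-05.com.example:set1" lun="0" path="/dev/drbd1"
  group g_set1 p_ip_set1 p_target_set1 p_lun_set1
  colocation c_set1 inf: g_set1 ms_drbd_set1:Master
  order o_set1 inf: ms_drbd_set1:promote g_set1:start
  location l_set1 g_set1 100: node-a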


Bart
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]

It is. I'm planning to showcase this in one of our upcoming webinars.

Cheers,
Florian
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]

Excellent, any timeframe on this? As it happens I'm dealing with a setup now
that could definitely benefit from this.

B.
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
I'd be quite interested as well, obviously. So this is what we would
end up with:

Host A is primary for drbd volume 1, and secondary for drbd volume 2.
It acts as an iSCSI target for whatever's on volume 1.

Host B is primary for volume 2, and secondary for volume 1. It acts as
a target for whatever's on volume 2.

If either node fails, the opposite node takes over the secondary
volume, and exports its fallen comrade's iSCSI targets.

This idea could possibly be extended with Ben's approach of one DRBD
volume per iSCSI target. (Except that it would be in a
primary/secondary role, instead of primary/primary.) This would make
the process of rebalancing the load between the two nodes fairly
trivial.
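
With one DRBD resource and one target per VM, rebalancing could then be
as simple as moving the corresponding resource group (this assumes each
target lives in its own Heartbeat/Pacemaker group; the names are made up):

  # move this target's group from host A to host B
  crm resource migrate g_target3 hostB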

Mike

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]

Agreed, but what might be less trivial is to convince a running IETD target
to have the config for the "other" targets merged into its existing targets
and at the same time bind to the new secondary IP address, preferably without
breaking running operation. All of this should be taken care of by Heartbeat.

I'm going to try to dive into the challenge and report back to the list,
unless the webinar happens fairly soon.


B.
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
I've done about zero research into this, but perhaps you could run two
separate daemon instances, one listening on each IP.

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
It seems possible to start two distinct instances of the target
daemon, and force them onto specific ports or IP addresses:

http://manpages.ubuntu.com/manpages/intrepid/man8/ietd.8.html

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]

Not possible:

http://sourceforge.net/mailarchive/message.php?msg_id=02dd01c8263f%244496ae60%245dd810d1%40e3demo


Rgds,

B.
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
ietadm is the answer.

These might help:

http://old.nabble.com/IET-on-DRBD-howto--td20567810.html
http://www.gossamer-threads.com/lists/linuxha/users/45280
http://www.markround.com/archives/50-Building-a-redundant-iSCSI-and-NFS-cluster-with-Debian-Part-4.html
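
To illustrate what those pages describe: on takeover, the surviving
node uses ietadm to create the target and attach the LUN on the
already-running daemon, roughly as below (the IQN, tid and device path
are made up):

  # create a new target on the running ietd
  ietadm --op new --tid=2 --params Name=iqn.2010-05.com.example:vm-db01
  # attach the DRBD device as LUN 0
  ietadm --op new --tid=2 --lun=0 --params Path=/dev/drbd2,Type=blockio
  # and on the node giving the target up
  ietadm --op delete --tid=2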

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]

Building an HA cluster with IETD and DRBD is not really challenging; it has
been done numerous times. The challenge would be an active/passive setup in
which each node is active for some LUNs and passive for others, especially
at failover.

I don't quite get the suggestion in the first link, having an active-active
setup with both nodes serving everything. But I guess it would not distribute
the load between the two nodes, which is what my first idea would do.


Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
The interesting bit on the first link was the statement:

"If active-active isn't possible, maybe there is distance involved and it's
doing asynchronous replication, then you will need to implement something
like heartbeat to add the volume using ietadm to the running iSCSI target
once drbd B becomes primary..."

The quote is a little short on implementation details, but based on it, and
some snippets from the other links, ietadm is the tool to dynamically add or
remove volumes without disturbing the remainder of the live volumes.

The only challenge I see is that any changes made this way are dynamic, and
would not survive a reboot or a daemon restart. So, somehow, upon a restart,
the ietd daemon needs a method to reliably determine which volumes it should
or should not be serving, or be told what to serve by Heartbeat or the state
of the DRBD volume.
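
A crude way to let the DRBD state drive this, rather than a static
ietd.conf, would be something along these lines in the startup/failover
script (the resource and target names are invented, and this is untested):

  # only export the target if we are Primary for the backing resource
  if [ "$(drbdadm role vm-db01 | cut -d/ -f1)" = "Primary" ]; then
      ietadm --op new --tid=2 --params Name=iqn.2010-05.com.example:vm-db01
      ietadm --op new --tid=2 --lun=0 --params Path=/dev/drbd2,Type=blockio
  fi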

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]

A bit off topic, but instead of using IETD as the target daemon, using
SCST will provide better performance. Also, for those looking into using
a DRBD/Heartbeat/iSCSI solution to host Microsoft Hyper-V clusters, SCST
offers SCSI-3-compliant Persistent Reservations (PR). IETD, to my
knowledge, does not support PR.

--
Ryan Manikowski

ryan@devision.us | 716.771.2282

Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]

Heartbeat normally takes care of both things: failing over and starting all
resources at boot. I guess we need a Heartbeat resource agent that can use
ietadm. I really wonder what the webinar is about.
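
Something like the skeleton below is what I have in mind: a small
resource script per target that Heartbeat can start/stop/status, calling
ietadm underneath. Everything here (names, tid, device, path) is made up
and untested:

  #!/bin/sh
  # /etc/ha.d/resource.d/ietd-target-vm-db01  (sketch only)
  TID=2
  IQN=iqn.2010-05.com.example:vm-db01
  DEV=/dev/drbd2
  case "$1" in
    start)
      ietadm --op new --tid=$TID --params Name=$IQN
      ietadm --op new --tid=$TID --lun=0 --params Path=$DEV,Type=blockio
      ;;
    stop)
      ietadm --op delete --tid=$TID
      ;;
    status)
      grep -q "tid:$TID " /proc/net/iet/volume && echo running || echo stopped
      ;;
  esac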

B.


Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
On Tuesday 11 May 2010 19:28:46 Ryan Manikowski wrote:
> On 5/11/2010 1:12 PM, Michael Iverson wrote:
> > The interesting bit on the first link was the statement:
> >
> > "If active-active isn't possible, maybe there is distance involved and
> > it's doing asynchronous replication, then you will need to implement
> > something like heartbeat to add the volume using ietadm to the running
> > iSCSI target once drbd B becomes primary..."
> >
> > The quote is a little short on implementation details, but based on
> > it, and some snippets
> > from the other links, it is the tool to dynamically add or remove
> > volumes without messing
> > with the remainder of the live volumes.
> >
> > The only challenge I see is that any changes that are made are
> > dynamic, and would not
> > survive a reboot or a daemon restart. So, somehow, upon a restart, the
> > ietd daemon needs
> > a method to reliably determine which volumes it should or should not
> > be serving, or be
> > told what to serve by heartbeat or the state of the drbd volume.
>
> A bit off topic, but instead of using IETD as the target daemon, using
> SCST will provide better performance. Also, for those looking into using
> a drbd/heartbeat/iscsi solution to host Microsoft Hyper-V clusters, SCST
> offers SCSI-3 compliant Persistent Reservations. IETD, to my knowledge,
> does not support PR.
>

True, but the integration with Heartbeat and SCST is poorly documented.
Recently I found only one reference to a howto.
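
(For what it's worth, the SCST side by itself looks manageable. Newer SCST
releases take an /etc/scst.conf roughly like the sketch below, applied with
'scstadmin -config /etc/scst.conf'; the device name and IQN here are invented,
and the older 1.x releases use a different format entirely. It's the Heartbeat
glue that remains underdocumented.)

    # sketch of a newer-style /etc/scst.conf exporting one DRBD device
    HANDLER vdisk_blockio {
        DEVICE vm_disk1 {
            filename /dev/drbd1
        }
    }
    TARGET_DRIVER iscsi {
        enabled 1
        TARGET iqn.2010-05.com.example:vm-disk1 {
            enabled 1
            LUN 0 vm_disk1
        }
    }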

B.
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
On 5/11/2010 2:47 PM, Bart Coninckx wrote:
> On Tuesday 11 May 2010 19:28:46 Ryan Manikowski wrote:
>
>> On 5/11/2010 1:12 PM, Michael Iverson wrote:
>>
>>> The interesting bit on the first link was the statement:
>>>
>>> "If active-active isn't possible, maybe there is distance involved and
>>> it's doing asynchronous replication, then you will need to implement
>>> something like heartbeat to add the volume using ietadm to the running
>>> iSCSI target once drbd B becomes primary..."
>>>
>>> The quote is a little short on implementation details, but based on
>>> it, and some snippets
>>> from the other links, it is the tool to dynamically add or remove
>>> volumes without messing
>>> with the remainder of the live volumes.
>>>
>>> The only challenge I see is that any changes that are made are
>>> dynamic, and would not
>>> survive a reboot or a daemon restart. So, somehow, upon a restart, the
>>> ietd daemon needs
>>> a method to reliably determine which volumes it should or should not
>>> be serving, or be
>>> told what to serve by heartbeat or the state of the drbd volume.
>>>
>> A bit off topic, but instead of using IETD as the target daemon, using
>> SCST will provide better performance. Also, for those looking into using
>> a drbd/heartbeat/iscsi solution to host Microsoft Hyper-V clusters, SCST
>> offers SCSI-3 compliant Persistent Reservations. IETD, to my knowledge,
>> does not support PR.
>>
>>
> True, but the integration with Heartbeat and SCST is poorly documented.
> Recently I found only one reference to a howto.
>
> B.
>

I have documentation written for CentOS 5.x. Someone let me know where
to post it. HowtoForge, perhaps?


--
Ryan Manikowski

ryan@devision.us | 716.771.2282

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
On 5/11/2010 2:45 PM, Bart Coninckx wrote:

<snip>

> Heartbeat normally takes care of both things: failing over and starting all
> resources at boot. I guess we need a Heartbeat resource agent that can use
> ietadm. I really wonder what the webinar is about.
>
> B.
>
>
>

No resource agent needed. Heartbeat automagically sources /etc/init.d
for daemon scripts. Just place inline as you would any other resource in
haresources and it will apply 'start/stop' according to status.
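
For example, with placeholder hostname, IP and resource names (and whatever
your IET init script happens to be called), a haresources line such as

    node1 IPaddr::192.168.1.100/24/eth0 drbddisk::r0 iscsi-target

brings up the service IP, promotes the DRBD resource and runs
/etc/init.d/iscsi-target start on whichever node currently holds the
resources.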

--
Ryan Manikowski

ryan@devision.us | 716.771.2282

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
On Tuesday 11 May 2010 20:54:14 Ryan Manikowski wrote:
> On 5/11/2010 2:47 PM, Bart Coninckx wrote:
> > On Tuesday 11 May 2010 19:28:46 Ryan Manikowski wrote:
> >> On 5/11/2010 1:12 PM, Michael Iverson wrote:
> >>> The interesting bit on the first link was the statement:
> >>>
> >>> "If active-active isn't possible, maybe there is distance involved and
> >>> it's doing asynchronous replication, then you will need to implement
> >>> something like heartbeat to add the volume using ietadm to the running
> >>> iSCSI target once drbd B becomes primary..."
> >>>
> >>> The quote is a little short on implementation details, but based on
> >>> it, and some snippets
> >>> from the other links, it is the tool to dynamically add or remove
> >>> volumes without messing
> >>> with the remainder of the live volumes.
> >>>
> >>> The only challenge I see is that any changes that are made are
> >>> dynamic, and would not
> >>> survive a reboot or a daemon restart. So, somehow, upon a restart, the
> >>> ietd daemon needs
> >>> a method to reliably determine which volumes it should or should not
> >>> be serving, or be
> >>> told what to serve by heartbeat or the state of the drbd volume.
> >>
> >> A bit off topic, but instead of using IETD as the target daemon, using
> >> SCST will provide better performance. Also, for those looking into using
> >> a drbd/heartbeat/iscsi solution to host Microsoft Hyper-V clusters, SCST
> >> offers SCSI-3 compliant Persistent Reservations. IETD, to my knowledge,
> >> does not support PR.
> >
> > True, but the integration with Heartbeat and SCST is poorly documented.
> > Recently I found only one reference to a howto.
> >
> > B.
>
> I have documentation written for CentOS 5.x. Someone let me know where
> to post it. HowtoForge, perhaps?
>

Definitely.

B.
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
On 05/11/2010 08:56 PM, Ryan Manikowski wrote:
>> Heartbeat normally takes care of both things: failing over and starting all
>> resources at boot. I guess we need a Heartbeat resource agent that can use
>> ietadm. I really wonder what the webinar is about.
>
> No resource agent needed. Heartbeat automagically sources /etc/init.d
> for daemon scripts. Just place inline as you would any other resource in
> haresources and it will apply 'start/stop' according to status.

Sorry to put it this bluntly, but that's bogus. Firstly, it's not quite
that simple, specifically for active/active setups. Secondly, no-one
should be using haresources setup anymore, we have Pacemaker for a reason.

This might be insightful:
http://www.linux-ha.org/doc/re-ra-iSCSITarget.html
http://www.linux-ha.org/doc/re-ra-iSCSILogicalUnit.html
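
Roughly, a crm configuration using those two agents could look like this (only
a sketch; the IQN, device path, IP and resource names below are invented, and
in a real setup the group also needs colocation/order constraints against the
DRBD Master role):

    primitive p_iscsi_target ocf:heartbeat:iSCSITarget \
        params iqn="iqn.2010-05.com.example:storage" tid="1"
    primitive p_iscsi_lun0 ocf:heartbeat:iSCSILogicalUnit \
        params target_iqn="iqn.2010-05.com.example:storage" lun="0" \
               path="/dev/drbd0"
    primitive p_ip ocf:heartbeat:IPaddr2 params ip="192.168.1.100"
    group g_iscsi p_iscsi_target p_iscsi_lun0 p_ip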

Cheers,
Florian
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
On 5/11/2010 4:01 PM, Florian Haas wrote:
> On 05/11/2010 08:56 PM, Ryan Manikowski wrote:
>
>>> Heartbeat normally takes care of both things: failing over and starting all
>>> resources at boot. I guess we need a Heartbeat resource agent that can use
>>> ietadm. I really wonder what the webinar is about.
>>>
>> No resource agent needed. Heartbeat automagically sources /etc/init.d
>> for daemon scripts. Just place inline as you would any other resource in
>> haresources and it will apply 'start/stop' according to status.
>>
> Sorry to put it this bluntly, but that's bogus. Firstly, it's not quite
> that simple, specifically for active/active setups. Secondly, no-one
> should be using haresources setup anymore, we have Pacemaker for a reason.
>
> This might be insightful:
> http://www.linux-ha.org/doc/re-ra-iSCSITarget.html
> http://www.linux-ha.org/doc/re-ra-iSCSILogicalUnit.html
>
> Cheers,
> Florian
>
>
>
>
> _______________________________________________
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>

I would have gotten around to configuring Pacemaker, but Heartbeat is still
available to handle resource failover. I appreciate that the community has
gone on to build a 'better solution', but Heartbeat has always worked for me
in my configurations.

The older documentation (for the version of Heartbeat that ships with
CentOS/RHEL) for LSB resource agents:

http://www.linux-ha.org/LSBResourceAgent

Yes, I realize the information is kept for historical purposes because
Linux-HA has moved on to newer code, but it still works.



--
Ryan Manikowski

ryan@devision.us | 716.771.2282
Re: DRBD primary/primary for KVM - what is the best option? [ In reply to ]
On Tuesday 11 May 2010 22:01:59 Florian Haas wrote:
> On 05/11/2010 08:56 PM, Ryan Manikowski wrote:
> >> Heartbeat normally takes care of both things: failing over and starting
> >> all resources at boot. I guess we need a Heartbeat resource agent that
> >> can use ietadm. I really wonder what the webinar is about.
> >
> > No resource agent needed. Heartbeat automagically sources /etc/init.d
> > for daemon scripts. Just place inline as you would any other resource in
> > haresources and it will apply 'start/stop' according to status.
>
> Sorry to put it this bluntly, but that's bogus. Firstly, it's not quite
> that simple, specifically for active/active setups. Secondly, no-one
> should be using haresources setup anymore, we have Pacemaker for a reason.
>
> This might be insightful:
> http://www.linux-ha.org/doc/re-ra-iSCSITarget.html
> http://www.linux-ha.org/doc/re-ra-iSCSILogicalUnit.html
>
> Cheers,
> Florian
>

Florian, thanks for the links. I suppose in an active/passive setup the first
RA can be used to start a target on each individual node, and the second RA to
assign the LUNs alternately to each target?
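
Roughly what I have in mind, as a pure sketch (all resource and node names
invented, and assuming the iSCSITarget/iSCSILogicalUnit/IPaddr2 primitives and
the DRBD master/slave resources ms_drbd1/ms_drbd2 are defined elsewhere):

    # one target/LUN/IP group per DRBD resource
    group g_tgt1 p_iscsi_target1 p_lun1 p_ip1
    group g_tgt2 p_iscsi_target2 p_lun2 p_ip2
    # each group runs where its DRBD resource is Master, after promotion
    colocation c_tgt1 inf: g_tgt1 ms_drbd1:Master
    colocation c_tgt2 inf: g_tgt2 ms_drbd2:Master
    order o_tgt1 inf: ms_drbd1:promote g_tgt1:start
    order o_tgt2 inf: ms_drbd2:promote g_tgt2:start
    # prefer one group per node in normal operation; both fail over to the survivor
    location l_tgt1 g_tgt1 100: node-a
    location l_tgt2 g_tgt2 100: node-b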

Cheers,

B.
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
