Mailing List Archive

Infiniband card support and help
Hi,

I'm looking for a 10Gbit backend for DRBD storage replication. I'm
planning to set up an InfiniBand solution connected back to back, meaning
both nodes will be connected directly, without a switch.

I wonder: if I bought two of these MHES14-XTC cards and a cable, would I
be able to build such a setup?

Link to the cards:
http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=19&menu_section=41

Another question: I intend to use this with the InfiniBand SDP support
added to DRBD in 8.3.3, and I found this in the card's specs.

"In addition, the card includes internal Subnet Management Agent (SMA)
and General Service Agents, eliminating the requirement for an external
management agent CPU."

Does this mean I don't need to run OpenSM on either node? Will I just need
to install the two cards, connect them with a cable, and set up IPoIB to
start replicating at 10Gbit?

Thanks very much,

--
Igor Neves <igor.neves@3gnt.net>
3GNTW - Tecnologias de Informação, Lda

SIP: igor@3gnt.net
MSN: igor@3gnt.net
JID: igor@3gnt.net
PSTN: 00351 252377120


_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Re: Infiniband card support and help
Igor,

I'm basically doing the same thing, only with MHEA28-XTC cards. I wouldn't
think you'll have any problems creating a similar setup with the MHES cards.

I've not attempted to use InfiniBand SDP, just IPoIB. I am running OpenSM on
one of the nodes. I'm getting throughput numbers like this:

cirrus:~$ netperf -H stratus-ib
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to stratus-ib.focus1.com (172.16.24.1) port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    7861.61
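
(For a back-to-back link like this, the bring-up is roughly the following.
This is only a sketch, assuming the IPoIB interface shows up as ib0 and
using placeholder 10.0.0.x addresses; a subnet manager is still required
even without a switch, which is why opensm runs on one of the nodes.)

  # On ONE node only: run a subnet manager
  opensm -B                       # daemon mode; or use the distro's opensm/opensmd init script

  # On both nodes: check that the port comes up
  ibstat                          # look for "State: Active", "Physical state: LinkUp"

  # On both nodes: address the IPoIB interface (placeholder addresses)
  ip addr add 10.0.0.1/24 dev ib0     # 10.0.0.2/24 on the peer
  ip link set ib0 up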

A couple of things to watch out for:

1. Upgrade the firmware on the cards to the latest and greatest version. I
saw about a 25% increase in throughput as a result. The firmware updater was
a pain to compile, but that was mostly due to Ubuntu's fairly rigid default
compiler flags.

2. Run the cards in connected mode, rather than datagram mode, and set the
MTU to the maximum value of 65520. My DRBD performance benchmarks show that
this is the best setup; a rough sketch of both items follows below.
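
(On a typical Linux/OFED box, both items look roughly like this; the PCI
address, firmware image name and ib0 device name are placeholders, not
taken from this thread.)

  # 1. Query / burn firmware with mstflint
  mstflint -d 04:00.0 query                    # shows current FW version and PSID
  mstflint -d 04:00.0 -i fw-update.bin burn    # image file name is a placeholder

  # 2. IPoIB connected mode and the 65520-byte MTU
  echo connected > /sys/class/net/ib0/mode
  ip link set ib0 mtu 65520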

The replication rate on my setup is completely limited by the bandwidth of
my disk subsystem, which is about 200 MB/s for writes. I can share some
performance comparisons between this and bonded gigabit Ethernet, if you
would like. However, I won't be able to provide them until tomorrow, as it is
a holiday in the US today, and I don't have ready access to the data.


--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
Re: Infiniband card support and help
Hi,



On 05/31/2010 12:45 PM, Michael Iverson wrote:
> Igor,
>
> I'm basically doing the same thing, only with MHEA28-XTC cards. I
> wouldn't think you'll have any problems creating a similar setup with
> the MHES cards.
>
> I've not attempted to use InfiniBand SDP, just IPoIB. I am running
> OpenSM on one of the nodes. I'm getting throughput numbers like this:
>

It would be very nice if you could test your setup with DRBD's InfiniBand
SDP support; you probably won't need to re-sync anything.

> cirrus:~$ netperf -H stratus-ib
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to stratus-ib.focus1.com (172.16.24.1) port 0 AF_INET : demo
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.00    7861.61

Nice.

>
> A couple of things to watch out for:
>
> 1. Upgrade the firmware on the cards to the latest and greatest
> version. I saw about a 25% increase in throughput as a result. The
> firmware updater was a pain to compile, but that was mostly due to
> Ubuntu's fairly rigid default compiler flags.

Will watch that!

> 2. Run the cards in connected mode, rather than datagram mode, and put
> the MTU at the max value of 65520. My performance benchmarks of drbd
> show that this is the best setup.

If I use the InfiniBand SDP support in DRBD, should I still care about the MTU?

> The replication rate on my setup is completely limited by the
> bandwidth of my disk subsystem, which is about 200 MB/s for writes. I
> can share some performance comparisons between this and bonded gigabit
> ethernet, if you would like. However, I won't be able to provide it
> until tomorrow, as it is a holiday in the US today, and I don't have
> ready access to the data.

We have a couple of setups with I/O performance greater than 500MB/s, so we
really need 10Gbit trunks.

Thanks for the help, but I don't need performance results from gigabit
setups; we have a couple, and we know the problems! :) Anyway, if you want
to paste them here, I guess no one will complain.

Thanks, once again.

--
Igor Neves <igor.neves@3gnt.net>
3GNTW - Tecnologias de Informação, Lda

SIP: igor@3gnt.net
MSN: igor@3gnt.net
JID: igor@3gnt.net
PSTN: 00351 252377120
Re: Infiniband card support and help
Hi Igor,





SDP performance can be measured with the qperf utility included with the OFED drivers/utilities. I was testing with single-port 3rd-gen Mellanox DDR (20/16Gbit) cards (the same series as the ones you link) on Opterons two years ago and was getting 11Gbit/sec. The newer 4th-gen ConnectX Mellanox cards will achieve 13-14Gbit/sec in DDR, so if you are buying new, get ConnectX.
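
(For example, running qperf directly between the two nodes; the hostname
below is a placeholder, and the sdp tests need OFED's SDP support loaded:)

  # On one node, start the qperf server:
  qperf

  # From the other node, run bandwidth/latency tests across the IB link:
  qperf node-a-ib tcp_bw tcp_lat      # TCP over IPoIB
  qperf node-a-ib sdp_bw sdp_lat      # SDP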



For comparison, in datagram mode I was getting 3.2Gbit/sec, and in connected mode with a 32KB MTU I was getting 7Gbit/sec. This was two years ago, so newer versions of OFED may well have improved performance.





Hope this helps,





Rob



Re: Infiniband card support and help
Hi,

On 01-06-2010 08:16, Robert Dunkley wrote:
>
> Hi Igor,
>
> SDP performance can be measured with the qperf utility included with
> the OFED drivers/utilities. I was testing with single-port 3rd-gen
> Mellanox DDR (20/16Gbit) cards (the same series as the ones you link) on
> Opterons two years ago and was getting 11Gbit/sec. The newer 4th-gen
> ConnectX Mellanox cards will achieve 13-14Gbit/sec in DDR, so if you
> are buying new, get ConnectX.
>

I understand what you're saying, but ConnectX is much more expensive than
the one I mentioned, and the bandwidth provided by the Mellanox
MHES14-XTC will be more than enough for our setups.

> For comparison, in datagram mode I was getting 3.2Gbit/sec, and in
> connected mode with a 32KB MTU I was getting 7Gbit/sec. This was two
> years ago, so newer versions of OFED may well have improved performance.
>

I will have to try it myself; I don't know the technology. I'm just trying
to make sure I won't buy something that doesn't work out and end up with
the cards stuck in the closet.

Thanks very much for your help.

> Hope this helps,
>
> Rob

--
Igor Neves <igor.neves@3gnt.net>
3GNTW - Tecnologias de Informação, Lda

SIP: igor@3gnt.net
MSN: igor@3gnt.net
JID: igor@3gnt.net
PSTN: 00351 252377120