On Mon, Apr 30, 2012 at 1:11 PM, James Harper
<firstname.lastname@example.org> wrote:
> I'm considering configurations for a pair of new servers - a 2 node Xen cluster with shared storage.
> It looks like I can build a HP server with direct connected 10GbE or IB for approximately the same price. Given the choice, what is the preference these days? The link will be dedicated to DRBD so other communications over the link are unimportant.
> And are the HP ib cards well supported under Linux? Anecdotal reports appreciated!
We serve customers that use both, and in general recent distributions
support both OFED (for IB) and 10 GbE quite well. If your main pain
point is latency, you'll want to go with IB; if it's throughput,
you're essentially free to pick and choose -- although of course _not_
having to install any of the OFED libraries may be a plus for 10 GbE.
Switch cost is usually not much of a factor in the decision, as most
people wire their DRBD clusters back-to-back, but if you're planning
a switched topology you'll have to factor that in.
Both IB and 10 GbE require a fair amount of kernel and DRBD tuning
before DRBD can actually max them out. Don't expect your distro's
standard set of sysctls and a default DRBD config to magically make
everything go a million times faster.
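For reference, the tuning usually spans both kernel TCP buffers and
DRBD's own buffering. A rough sketch of the kinds of knobs involved --
option names as in DRBD 8.3/8.4, values purely illustrative, not
recommendations for your hardware:

```
# /etc/sysctl.d/ fragment -- bump TCP buffer ceilings for the fast
# replication link (right sizes depend on your NICs and RTT)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# drbd.conf resource fragment (8.3-style sections; check your version)
resource r0 {
  net {
    sndbuf-size    512k;   # bigger send buffer for the fat pipe
    max-buffers    8000;   # more receive-side buffers in flight
    max-epoch-size 8000;   # allow larger write bursts per epoch
  }
  syncer {
    al-extents 3389;       # larger activity log, fewer metadata updates
  }
}
```

Benchmark before and after each change; these options interact with
your storage stack as much as with the network.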
Generally speaking, also don't expect too much of a performance boost
when using SDP (Sockets Direct Protocol) over IB. We've found that the
performance effect in comparison to IPoIB is negligible or even
negative, but that's fine -- chances are you'll max out your
underlying storage hardware with IPoIB anyhow. :) SDP is also
currently suffering from a module refcount issue that is fixed in git
but as yet unreleased, so that's a bit of an SDP show-stopper too...
but as pointed out, IPoIB does do the trick nicely.
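If you do want to experiment with SDP anyway, DRBD selects it via the
address family in the resource configuration (this assumes a DRBD
build with SDP support; hostnames, addresses and the port below are
placeholders):

```
resource r0 {
  on nodea {
    # plain TCP over IPoIB: default (ipv4) address family
    # address 10.0.0.1:7788;

    # SDP over the same IB link: only the address family changes
    address sdp 10.0.0.1:7788;
  }
}
```

Switching back to IPoIB is just a matter of dropping the `sdp`
keyword, so it's an easy A/B test on a back-to-back link.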
Hope this helps.
Need help with High Availability? http://www.hastexo.com/now
drbd-user mailing list