Mailing List Archive

Jumbo Packets with netapp FAS3050
Hello All,

We have been using our NetApp primarily for NAS storage, but I am
experimenting with iSCSI now for some specific applications. I
managed to create a bit of a network issue recently when I tried
enabling jumbo packets on my NetApp and my test server. The iSCSI
connection kept resyncing and was not stable. I think this was
because I didn't create a separate VLAN for the devices using jumbo
packets. Upon further research, I found mention that some
devices/OSes calculate checksums differently, which can affect the
maximum packet size. Does anyone have any recommendations for a
packet size for both the NetApp and my Linux servers? Is 10240 a
safe bet (the max size supported by my switch), or should I use
something like 9000 or 8192? Should I use a different packet size if
I am connecting a target to a Windows box? I would ideally like to
make a jumbo-packets VLAN and just set all the devices on that VLAN
to a larger MTU, but I don't want to create a network traffic
problem in the process.
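[Editor's note: one way to sanity-check a jumbo MTU end to end is a don't-fragment ping sized to the MTU minus the IP and ICMP headers. This is a sketch; the target address is a placeholder, and the privileged command is shown commented out.]

```shell
# Largest ping payload for a 9000-byte MTU: subtract the 20-byte
# IPv4 header and the 8-byte ICMP header.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"    # prints 8972

# On a real host (placeholder address), a don't-fragment ping of
# this size should succeed only if jumbo frames work end to end:
#   ping -M do -s "$payload" 192.168.100.10
```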

Brent Ellis
Systems Analyst/Consultant
CAS Computing Services Group
Boston University
interi@bu.edu
RE: Jumbo Packets with netapp FAS3050 [ In reply to ]
We've always put our MTU size at 9000, because we're primarily database-oriented. However, I have no idea if Linux can support the larger jumbo frames. 9000 definitely works though, and lets 8k database or filesystem blocks fit within one frame, so there's no fragmentation.
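[Editor's note: Matt's 8k point can be checked with quick arithmetic. A sketch, assuming standard 20-byte IP and 20-byte TCP headers with no options.]

```shell
ip_hdr=20; tcp_hdr=20
for mtu in 1500 9000; do
  payload=$((mtu - ip_hdr - tcp_hdr))
  # frames needed to carry one 8 KiB (8192-byte) block, rounded up
  frames=$(( (8192 + payload - 1) / payload ))
  echo "MTU $mtu: $payload bytes of payload, $frames frame(s) per 8k block"
done
# prints:
# MTU 1500: 1460 bytes of payload, 6 frame(s) per 8k block
# MTU 9000: 8960 bytes of payload, 1 frame(s) per 8k block
```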

Thanks,
Matt

--
Matthew Zito
Chief Scientist
GridApp Systems
P: 646-452-4090
mzito@gridapp.com
http://www.gridapp.com



-----Original Message-----
From: owner-toasters@mathworks.com on behalf of Brent Ellis
Sent: Tue 2/26/2008 9:28 AM
To: toasters@mathworks.com
Subject: Jumbo Packets with netapp FAS3050

RE: Jumbo Packets with netapp FAS3050 [ In reply to ]
Even though the switch may support it, some switches still have problems
with jumbo packets and can choke on the heavy traffic. I don't believe
that the filer has an issue with an MTU of 9000, but as Matt pointed
out, I'd check the info on your Linux distro as well.



________________________________

From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Matthew Zito
Sent: Tuesday, February 26, 2008 11:32 AM
To: Brent Ellis; toasters@mathworks.com
Subject: RE: Jumbo Packets with netapp FAS3050





RE: Jumbo Packets with netapp FAS3050 [ In reply to ]
We have MTU set at 9000, and Linux will definitely support it if the NIC drivers are decent. We use MTU=9000 for our dedicated backup network to back up Linux boxes. You should stop everything until you have a dedicated VLAN for iSCSI, as performance testing will hurt whatever public network you are on :)

I wouldn't use the 10xxx-byte value, since most clients won't support that. You should keep all MTU settings the same - i.e. keep the filer at 9000 and all iSCSI-attached clients at 9000. It optimizes the network traffic.
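[Editor's note: on Linux, the dedicated jumbo-frame VLAN described above might look like the following sketch. The interface name, VLAN ID, and address are hypothetical; the commands use iproute2 and require root, so they are shown commented out.]

```shell
# Raise the MTU on the physical NIC, then create a tagged VLAN for
# iSCSI traffic that carries the same jumbo MTU (placeholder names):
#   ip link set dev eth1 mtu 9000
#   ip link add link eth1 name eth1.100 type vlan id 100
#   ip link set dev eth1.100 mtu 9000 up
#   ip addr add 192.168.100.2/24 dev eth1.100
```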

HTH,

- Hadrian
________________________________________
From: owner-toasters@mathworks.com [owner-toasters@mathworks.com] On Behalf Of Glenn Walker [ggwalker@mindspring.com]
Sent: Tuesday, February 26, 2008 6:24 PM
To: Matthew Zito; Brent Ellis; toasters@mathworks.com
Subject: RE: Jumbo Packets with netapp FAS3050

Re: Jumbo Packets with netapp FAS3050 [ In reply to ]
On Wed, Feb 27, 2008 at 8:40 AM, Hadrian Baron <Hadrian.Baron@vegas.com> wrote:

Quick question to all the networking gurus on this list. Is there a
performance impact if the filer and the clients have different MTU
sizes defined? Also, is there a way to check this? Our filers all have
the jumbo frame MTU set to 9000, but for some reason our Solaris admin
chose to set them on his side to 9194. ifstat on the filer interfaces
doesn't show any errors/discards or retransmits.
Example:

-- interface e4c (72 days, 8 hours, 38 minutes, 9 seconds) --

RECEIVE
Frames/second: 2285 | Bytes/second: 2805k | Errors/minute: 0
Discards/minute: 0 | Total frames: 2946m | Total bytes: 5787g
Total errors: 0 | Total discards: 31 | Multi/broadcast: 0
No buffers: 14 | Non-primary u/c: 0 | Tag drop: 0
Vlan tag drop: 0 | Vlan untag drop: 0 | CRC errors: 0
Runt frames: 0 | Fragment: 0 | Long frames: 0
Jabber: 0 | Alignment errors: 0 | Bus overruns: 17
Queue overflows: 0 | Xon: 0 | Xoff: 0
Jumbo: 734m | Reset: 0 | Reset1: 0
Reset2: 0
TRANSMIT
Frames/second: 936 | Bytes/second: 5467k | Errors/minute: 0
Discards/minute: 0 | Total frames: 1187m | Total bytes: 7680g
Total errors: 0 | Total discards: 0 | Multi/broadcast: 228k
Queue overflows: 0 | No buffers: 0 | Max collisions: 0
Single collision: 0 | Multi collisions: 0 | Late collisions: 0
Timeout: 0 | Xon: 0 | Xoff: 0
Jumbo: 919m
LINK_INFO
Current state: up | Up to downs: 9 | Auto: on
Speed: 1000m | Duplex: full | Flowcontrol: none

TIA
-G
Re: Jumbo Packets with netapp FAS3050 [ In reply to ]
On 3/5/08 9:05 PM, "Sto Rage©" <netbacker@gmail.com> wrote:


Yes, there's definitely a performance impact. Essentially, if one side's
MTU is 9194 and the other's is 9000, you've got 194 bytes that will have
to be split off into another packet. Your filer will send a response
saying, in effect, "MTU size is too big, please break it into chunks."
I'm not sure if this happens once per session or once per packet, but if
it's per packet, that's a lot of extra overhead. Even if it's not, those
extra bytes need to go into a new packet, so you get the overhead of
splitting the data into chunks and then the overhead of reassembling it
on the destination.

Essentially... yes. :) You want them all to match.
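[Editor's note: a rough sketch of what the mismatch costs, assuming IPv4 fragmentation of a 9194-byte datagram crossing a 9000-MTU hop. Fragment data must align to 8-byte boundaries; in practice TCP's MSS negotiation often avoids this for TCP traffic, so this is the worst case for a non-DF datagram.]

```shell
ip_hdr=20
dgram=9194        # the larger (Solaris-side) datagram size
mtu=9000          # the filer-side MTU
data=$((dgram - ip_hdr))                 # 9174 bytes of payload
frag1=$(( (mtu - ip_hdr) / 8 * 8 ))      # first fragment's data, 8-byte aligned
frag2=$(( data - frag1 ))                # leftover carried in a second fragment
echo "fragment 1: $frag1 bytes, fragment 2: $frag2 bytes"
# prints: fragment 1: 8976 bytes, fragment 2: 198 bytes
```

So every oversized datagram turns into two packets, which is the splitting and reassembly overhead described above.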


--
Nicholas Bernstein
http://nicholasbernstein.com
Re: Jumbo Packets with netapp FAS3050 [ In reply to ]
Thanks for the reply. Now my next question is: does NetApp support an
MTU size of 9194? It looks easier for me to change the NetApp end
(just 1 cluster) instead of asking the Solaris admin to change 18+
hosts.
All the documents I have seen from NetApp seem to suggest only 9000 as
the default, whereas all the Sun documents suggest 9194 as their
default.
Thanks once again
-G


On Thu, Mar 6, 2008 at 11:26 PM, Nicholas Bernstein
<nick@nicholasbernstein.com> wrote:
Re: Jumbo Packets with netapp FAS3050 [ In reply to ]
I'm not sure, but if you run tcpdump* on the Solaris box and connect to
the NetApp with both set to 9194, and you don't see any ICMP type 3,
code 4 traffic, you're probably OK. You should also be able to see the
actual packet lengths in Ethereal, IIRC.

-Nick


* output it to a file and open it in ethereal/wireshark.
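[Editor's note: the capture can be narrowed to just the "fragmentation needed" messages; in the ICMP header, byte 0 is the type and byte 1 is the code. The interface and file names below are placeholders, and the command needs root, so it is shown commented out.]

```shell
# Capture only ICMP "destination unreachable, fragmentation needed"
# (type 3, code 4) and save it for Wireshark/Ethereal:
#   tcpdump -i eth1 -w mtu-check.pcap 'icmp[0] = 3 and icmp[1] = 4'
```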

On Fri, 2008-03-07 at 14:04 -0800, Sto Rage© wrote:
--
Nicholas Bernstein
Technologist, Technical Consultant, Instructor
nick@nicholasbernstein.com