Mailing List Archive

running SAN on NetApp
Folks,

Does anyone have experience with running production workloads on FC-based LUNs on NetApp? I'm curious how the performance of hosting virtual machines (including Exchange and database environments) compares to more traditional block-based SANs (EMC, 3PAR, Hitachi, etc.), since what I've read is that NetApp's LUN feature still sits on top of WAFL.

We have some native FAS NetApps, along with many N-series rebranded NetApps, but all run in 7-Mode and use NFS connections.

Also, how do you all implement data tiering in your NetApp environments? We are currently using an IBM SAN (Storwize V7000), which has tiering capability. We'd consider moving some SAN workloads to NetApp if we could get comparable SAN performance and also address the tiering capability.

Thanks,

Eric Peng | Enterprise Storage Systems Administrator
Esri | 380 New York St. | Redlands, CA 92373 | USA
T 909 793 2853 x3567 | M 909 367 1691
epeng@esri.com | esri.com
Re: running SAN on NetApp
Yes, I use this extensively for Exchange at my current job and used it for
VMware at my old job. It works fine. Performance will depend on the
hardware, but it generally stacks up well against other options.

Tiering is a different concept on NetApp than it is with the other vendors
you mentioned. On NetApp, you put SSD either in an aggregate or on the
controller. In the aggregate, it's called Flash Pool; on the controller,
Flash Cache. They have different sets of advantages and disadvantages
compared to each other, but you can't tier between different types of hard
drives like you can with those other boxes.

Re: running SAN on NetApp
Hi Eric,

I generally stay away from using a NetApp for SAN, but not for any
particular reason other than it's first and foremost a really, really good
NAS machine, and SAN is more complicated and a bit less efficient in terms
of storage utilization.

When I've used it, though, performance has been fine, and pretty much on
par with anything else, given that there are of course differences in
hardware and workloads.

As far as tiering goes, NetApp doesn't tier the way IBM does with Easy
Tier, or the way 3PAR does with Adaptive Optimization. On NetApp, you can
add SSD to an aggregate (called Flash Pool) to act as a large cache, and
ONTAP can use this very well to increase performance. However, you can't
tier automatically between SAS and nearline drives, and it doesn't actually
*move* blocks up and down to SSD -- it just uses the SSD as an advanced
cache.
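
To make the distinction concrete, here is a toy sketch (my own
illustration, not ONTAP's or Easy Tier's actual logic) of caching versus
tiering:

    class CachedStore:
        """Flash Pool style: the SSD holds copies; the HDD stays authoritative."""
        def __init__(self):
            self.hdd = {}        # block_id -> data (authoritative copy)
            self.ssd_cache = {}  # block_id -> data (disposable copy)

        def read(self, block_id):
            if block_id in self.ssd_cache:      # cache hit
                return self.ssd_cache[block_id]
            data = self.hdd[block_id]           # cache miss: read from HDD
            self.ssd_cache[block_id] = data     # keep a *copy* on SSD
            return data

    class TieredStore:
        """Easy Tier style: hot blocks are relocated to the faster media."""
        def __init__(self):
            self.hdd = {}
            self.ssd = {}

        def promote(self, block_id):
            self.ssd[block_id] = self.hdd.pop(block_id)  # move, don't copy

        def read(self, block_id):
            if block_id in self.ssd:
                return self.ssd[block_id]
            return self.hdd[block_id]

In the cached model the SSD copy can be thrown away at any time without
losing data; in the tiered model the block now lives only on the faster
media.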

Using a Flash Pool with nearline drives can work very well, though, again
depending on the workload, and it can save money over buying SAS drives.

-Adam

Re: running SAN on NetApp
Eric> Does anyone have experience with running production workloads on
Eric> FC-based LUNs on NetApp? Am curious to know how performance of
Eric> hosting virtual machines (including Exchange, database
Eric> environments) compares to more traditional block-based SANs
Eric> (EMC, 3Par, Hitachi, etc), since what I’ve read is that NetApp’s
Eric> LUNs feature still sits on top of WAFL?

I've run Oracle DBs on NetApp LUNs in 7-Mode. We ended up with
performance issues, which were solved by adding a new dedicated shelf
for that particularly crappily set up Oracle DB and application.

We now run a four-node cDOT cluster hosting a lot of VMware ESX
datastores, but using NFS. We do use some iSCSI LUNs for the MS SQL
stuff that requires it, and we also run Oracle on NFS.

Eric> We have some native FAS NetApps, along with many N-series
Eric> rebranded NetApps, but all are run in 7-Mode and using NFS
Eric> connections.

This works well; I like NFS, and NetApps run it well. The 32xx series
is a bit low-powered and severely expansion-limited in my experience.
The new 80xx series is better, but still limited at times.

Eric> Also, how do you all implement data tiering in your NetApp
Eric> environments? We are currently using IBM SAN (Storwize/V7000)
Eric> and this has tiering capability. We’d consider moving some SAN
Eric> workloads to NetApp if we could get as good SAN performance and
Eric> also address the tiering capability.

What do you mean exactly by tiering? Are you moving LUNs between
types of disks with different performance characteristics? I find
that moving LUNs/volumes is doable, but it takes a lot of time and a
lot of space, with a noticeable impact on performance, since you need
to copy all the blocks from one aggregate to another using SnapMirror
before you can cut over.

I can't say I've moved many LUNs; it's usually been simpler to just
spin up a new LUN, mirror it on the host, and then detach the original
LUN and destroy it when done.

But it all really depends on your current workloads on the FAS filers
and whether you can add the FC cards needed for serving the LUNs.

But overall, I dislike LUNs because you pay a big overhead penalty,
lose a lot of disk space, and give up the flexibility of NFS/CIFS
volumes, which can be grown and shrunk dynamically in a much simpler
and quicker way.

LUNs are just a pain, unless you plan them out ahead of time from the
host side with block management in place to help grow/shrink/move
stuff around.

John

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
RE: running SAN on NetApp
Hi all, Jeff from NetApp here. I feel compelled to chime in. Forgive any typos, I'm at 37,000 feet over Murmansk and there's a little turbulence.

First, I'll try to get to the point:


1) Protocol is less important than ever these days. Part of the reason for the change is that in the past it was hard to deal with the delays caused by physical drive heads moving around. Lots of solutions were offered, ranging from using just 10% of each drive to minimize head movement, to huge frames with massive numbers of disks and dedicated ASICs to tie it all together. These days you can get an astonishing amount of horsepower from a few Xeons, and combined with flash all those old problems go away. I spend 50% of my time with the largest database projects on earth, and for the most part I don't care what protocol is used. It's really a business decision. NFS is easier to manage, but if someone already has an FC infrastructure investment they might as well keep using it.

2) 19 out of 20 projects I see are hitting performance limits related to the bandwidth available. Customer A might have a couple of 8Gb FC HBAs running at line speed, while customer B has a couple of 10Gb NICs running at line speed. That 20th project tends to actually require SAN, but we're talking about databases that are pushing many GB/sec of bandwidth, such as lighting up ten 16Gb FC ports simultaneously (see the back-of-the-envelope numbers sketched after point 4 below). When you get that kind of demand, FC is currently faster and easier to manage than NFS.

3) Personally, I recommend FlashPool for tiering. I know there are other options out there, but I think they offer a false sense of security. There are a lot of variables that affect storage performance and lots of IO types, and it's easy to damage performance by making incorrect assumptions about precisely which IO types are important. FlashPool does a really good job of figuring out which media should host which IO and placing the data accordingly. For those fringe situations, FlashPool is highly tunable, but almost nobody needs to depart from the defaults. For more common situations, such as archiving old VMs or LUNs, you can nondisruptively move the LUNs from SSD to SAS to SATA if you wish, but almost everyone these days goes directly to all-flash. I still run into an occasional customer looking for really huge amounts of space, like 500TB, where all-flash isn't quite deemed affordable, and they seem to go with SAS+SSD in a FlashPool configuration.

4) When virtualization is used, I recommend caution with datastores. If you're looking for the best possible performance, keep the IO path simple. Don't re-virtualize a perfectly good LUN inside another container like a VMDK. The performance impact is probably minimal, but it isn't zero. Few customers would notice the overhead, but if you're really worried then consider an RDM, or iSCSI LUNs owned directly by the guest itself. Let the hypervisor just manage the raw protocol. It also simplifies management. For example, if you use RDM/iSCSI you can move data between physical and virtual resources easily, and you can identify the source of a given LUN without having to look at the hypervisor management console.
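
To put rough numbers on the bandwidth point in 2), here is a back-of-the-envelope sketch in Python. The per-port figures are my own assumptions based on commonly quoted nominal data rates (roughly 800 MB/s usable per 8Gb FC port, roughly 1600 MB/s per 16Gb FC port, and 10GbE taken at its raw 1250 MB/s before TCP/NFS overhead):

    # Back-of-the-envelope aggregate bandwidth from nominal per-port data rates.
    FC_8G_MBPS = 800          # ~usable MB/s per 8Gb FC port (assumed)
    FC_16G_MBPS = 1600        # ~usable MB/s per 16Gb FC port (assumed)
    ETH_10G_MBPS = 10000 / 8  # raw MB/s for a 10GbE NIC, before protocol overhead

    def aggregate_gb_per_sec(ports, per_port_mbps):
        """Total nominal bandwidth in GB/s for a number of ports at line rate."""
        return ports * per_port_mbps / 1000

    print("2 x 8Gb FC   : ~%.1f GB/s" % aggregate_gb_per_sec(2, FC_8G_MBPS))    # ~1.6
    print("2 x 10GbE    : ~%.1f GB/s" % aggregate_gb_per_sec(2, ETH_10G_MBPS))  # ~2.5
    print("10 x 16Gb FC : ~%.1f GB/s" % aggregate_gb_per_sec(10, FC_16G_MBPS))  # ~16.0

That last line is why the 20th project is a different animal: ten 16Gb ports is an order of magnitude beyond what a couple of host ports can deliver.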

Strictly speaking, I would agree that a NetApp LUN is "on top of WAFL," but unless you're using raw hard drives that's true of any storage array. Anybody's SAN implementation is running on top of some kind of virtualization layer that distributes the blocks across multiple drives. The main difference between ONTAP and the so-called "traditional" block-based arrays is that the OS controlling block placement is more complicated.

I know the competition loves to say that ONTAP is "simulating" SAN, but that's just nonsense. The protocol handling is pretty much the same sort of thing whether you're talking about NAS or SAN. You have a protocol, and it processes inbound requests based on things like file offsets, sizes of reads, sizes of writes, etc. Servicing a request like "please read 1MB from an offset of 256K" isn't all that different whether you're dealing with FC, iSCSI, CIFS, or NFS. SAN protocols are really more like subsets of file protocols. The hard part of SAN is mostly things like the QA process for ensuring that a particular HBA with a particular firmware works well under a badly faulted SAN without corruption. That's where the effort lies.
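
As a conceptual sketch of that point (my illustration, not how ONTAP is implemented): from the requester's side, a positional read looks the same whether the target is a file on an NFS mount or the block device behind an FC/iSCSI LUN. The paths below are made up:

    import os

    def read_range(path, offset, length):
        """Issue a positional read of `length` bytes starting at `offset`."""
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.pread(fd, length, offset)  # same call for a file or a block device
        finally:
            os.close(fd)

    # Hypothetical targets -- the request shape is identical either way:
    # read_range("/mnt/nfs_datastore/vm-flat.vmdk", 256 * 1024, 1024 * 1024)  # NAS file
    # read_range("/dev/mapper/mpatha",              256 * 1024, 1024 * 1024)  # SAN LUN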

Sometimes, that extra stuff in ONTAP brings benefits. That's how you get things like snapshots, SnapMirror, FlexClones, SnapRestore, and so forth. Not everybody needs that, which is why we have products like the E-Series, which are mostly geared for environments that don't need all the ONTAP goodies. I used to work for an unspecified large database company somewhere near Palo Alto; where the snapshot stuff was needed, we invariably bought ONTAP, and when it wasn't needed I almost always went for the IBM DS3000 series arrays, which eventually became E-Series. Neither one was better or worse than the other; it depended on what you wanted to do with it.

I won't lie - you don't get something for nothing. Speaking as someone who influenced, specified, or signed off on about $10M in total storage spending, the only time I really noticed a difference was with random vs sequential IO. ONTAP has more sophistication in dealing with heavy random IO and delivered generally better results, while the simplicity of SANtricity (E-Series) allowed better raw bandwidth. Both do all types of IO well, but there has to be some kind of tradeoff, or all arrays would have 100% identical characteristics, and we all know they don't.

We have all sorts of documents that demonstrate what the various protocols can do on ONTAP and E-Series, but if you need specific proof with your workloads you can talk to your NetApp rep and arrange a POC. We have an extensive customer POC team that does this sort of thing every day in their lab, but sometimes it can be as easy as just getting a temporary iSCSI license so you can see what iSCSI on a 10Gb NIC can do.

Re: running SAN on NetApp
RDMs on VMware should be avoided. This is out of scope of the initial
question, but it's something I felt should be said. Other hypervisors are
OK with them, but on VMware it's better to use a datastore.

Re: running SAN on NetApp
This is an oldie but a goodie. There are some slight differences between the protocols. I ran Exchange 2003 clusters (2-node) on iSCSI for ~7,500 users back in the day (multiple clusters). The key is to follow best practices such as the Host Utilities and SnapDrive/SnapManager (we used SME). Here is the reference:

http://www.netapp.com/us/media/tr-3916.pdf


RE: running SAN on NetApp
Thanks for all of the responses (Basil, Adam, John, Jeff).

So it sounds like, by and large, you've all had good experiences using NetApp to host "SAN"-type workloads, whether over NFS or iSCSI/FC (and across LUNs). On the SAN side of our house, we've grown accustomed to IBM's Easy Tier, but personally I will say that I still like all the bells and whistles in ONTAP (snaps, SnapVault, FlexClone). Given the suggestions around FlashPool, I think that's worth exploring, potentially with some of our VM environments. We will certainly reach out to our NetApp contact to investigate/POC further.

Cheers,
Eric

