Mailing List Archive

cDOT 8.2.x, grow aggregates or new ones?
Guys,

I've got a four-node 8060 cluster running cDOT 8.2.x to which we just
added a bunch of disks, spread evenly across our two HA pairs: 2 x
DS4246 shelves of 2TB SATA disks, and 4 x DS2246 shelves of 600GB SAS disks.

I suspect the smart thing to do is just to grow my existing
aggregates, which lets me grow existing volumes, etc., without too much
trouble. And of course I would add disks in complete RAID groups.

Or do people think I should just create brand new aggregates and move
as needed? Do I really need to worry about reallocation issues when I
do these adds?
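For reference, the grow-in-place path is basically one command per aggregate, with an optional reallocation scan afterwards. A rough sketch only — the aggregate, vserver, and volume names below are placeholders, and the syntax is from memory, so check it against your 8.2.x man pages before running anything:

```
::> storage aggregate add-disks -aggregate n1sas1 -diskcount 22
::> volume reallocation start -vserver vs1 -path /vol/myvol
```

Adding complete RAID groups at a time, as suggested above, keeps parity overhead predictable; reallocation then spreads existing data across the new disks so they don't become a write hot spot.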

My current setup is:

Aggregate  Size     Available  Used%  State   #Vols  Node  RAID Status
n1sas1 20.67TB 7.37TB 64% online 28 n1 raid_dp,
n1sata1 56.75TB 12.15TB 79% online 79 n1 raid_dp,
n1sata2 56.75TB 7.30TB 87% online 99 n1 raid_dp,
n2sas1 26.92TB 6.19TB 77% online 20 n2 raid_dp,
n2sas2 26.92TB 9.51TB 65% online 29 n2 raid_dp,
n2sas3 20.19TB 9.54TB 53% online 6 n2 raid_dp,
n2ssd1 13.09TB 10.84TB 17% online 7 n2 raid_dp,
n3sas1 26.92TB 7.24TB 73% online 31 n3 raid_dp,
n3sas2 26.92TB 8.13TB 70% online 37 n3 raid_dp,
n3sas3 20.19TB 2.97TB 85% online 50 n3 raid_dp,
n4sata1 56.75TB 9.73TB 83% online 69 n4 raid_dp,
n4sata2 53.84TB 9.46TB 82% online 33 n4 raid_dp,
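As a quick spot-check for anyone reading the numbers: the Used% figures above are (Size − Available) / Size, which a few rows from the table confirm (sketched in Python; figures in TB):

```python
# Spot-check that used% == (size - available) / size for a few
# aggregates from the listing above (figures in TB).
rows = {
    "n1sas1":  (20.67, 7.37, 64),
    "n1sata1": (56.75, 12.15, 79),
    "n2ssd1":  (13.09, 10.84, 17),
    "n4sata2": (53.84, 9.46, 82),
}
for name, (size_tb, avail_tb, used_pct) in rows.items():
    computed = round((size_tb - avail_tb) / size_tb * 100)
    assert computed == used_pct, name
    print(f"{name}: {computed}% used")
```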

As you can see, we screwed up on n1/n2 by putting both SAS and SATA
aggregates on the same head. Oops. We fixed that for the second
pair when we grew the cluster.

Just looking for a confirmation of my thinking.

Thanks,
John
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
Re: cDOT 8.2.x, grow aggregates or new ones?
John,

I generally favor fewer, larger aggregates, to keep capacity fragmentation down. One exception to this personal rule: if I have a bunch of DS4243s that I'll look to retire in the near(er) future via hot-shelf removal, sooner than the newer IOM6 or IOM12 shelves, I try not to mix the DS4243s and the newer shelves in the same aggregate.

I noticed you have several smallish aggregates of the same disk type on each of your nodes. I assume they hold different size/rpm disks? Or did you want to segregate applications onto different aggregates?

> On Sep 7, 2016, at 5:52 PM, John Stoffel <john@stoffel.org> wrote:


Re: cDOT 8.2.x, grow aggregates or new ones?
Lately I don't recommend creating aggregates 'as big as possible' any more.
I would create aggregates between 50-100TB and then move volumes online between them as needed.

This way you can relocate or phase out aggregates more easily than if you have a few 'extra large' aggregates.
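The online move described here is a single command in cDOT. A sketch with placeholder vserver, volume, and aggregate names:

```
::> volume move start -vserver vs1 -volume myvol -destination-aggregate n1sas2
```

The move is non-disruptive to clients, and you can watch its progress with `volume move show`.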


Kind regards,

Wouter Vervloesem<https://be.linkedin.com/pub/wouter-vervloesem/5/a63/a41>
Storage Consultant

Neoria NV<http://www.neoria.be/>
Prins Boudewijnlaan 41 - 2650 Edegem
T +32 3 451 23 82 | M +32 496 52 93 61

On Sep 8, 2016, at 05:25, Francis Kim <fkim@BERKCOM.com> wrote:


