Mailing List Archive

Backup issues with OCFS2
Hi,

First of all: I posted a similar thread on the OCFS2 mailing list, but
I didn't receive much response. This list seems to be busier, so maybe
I'll have more luck over here...

I'm having trouble backing up an OCFS2 file system. I'm using rsync, and I
find it way, way slower than rsyncing a 'traditional' file system.

The OCFS2 filesystem lives on a dual-primary DRBD setup. DRBD runs on
hardware RAID6 with dedicated bonded gigabit NICs, and I get a 160 Mb/s
syncer speed. Read and write speeds on the file system are OK.

Some figures:

My OCFS2 filesystem is 3.7 TB in size, of which 200 GB is used, and holds
about 1.5 million files in 95 directories. About 3,000 new files are added
each day; few files are changed.

Rsyncing this filesystem (directly to the rsync daemon, so no ssh shell
overhead) over a gigabit connection takes 70 minutes:

Number of files: 1495981
Number of files transferred: 2944
Total file size: 201701039047 bytes
Total transferred file size: 613318155 bytes
Literal data: 613292255 bytes
Matched data: 25900 bytes
File list size: 24705311
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 118692
Total bytes received: 638195567

sent 118692 bytes  received 638195567 bytes  154163.57 bytes/sec
total size is 201701039047  speedup is 315.99
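For reference, a daemon-mode invocation looks roughly like this (host,
module name and source path here are placeholders, not my actual values):

rsync -a --stats /mnt/ocfs2/ backupserver::backups/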

To compare this, I have a similar system (the old, non-HA system doing
the exact same thing) with an ext3 filesystem. That one holds 6.5
million files and 500 GB, with about 10,000 new files a day. A backup with
rsync over ssh on a 100 Mbit line takes 400 seconds.

Has anybody encountered similar problems, and does anybody maybe have some
tips or insights for me?

Kind regards,

Dirk




Re: Backup issues with OCFS2
Dirk,

Isn't there an error in your numbers?
You say that with your non-clustered system you transfer 500 GB in 400
seconds, which means more than a gigabyte per second, and I can't figure out
how you can achieve that over a 100 Mb link that can't run faster than
roughly 10 MB/s. Or did I miss something?

Regards,

Pascal.


Re: Backup issues with OCFS2
Hi,

On Wed, 04 Apr 2012 12:15:39 +0200, Dirk Bonenkamp - ProActive
<dirk@proactive.nl> wrote:
> I'm having trouble backing up an OCFS2 file system. I'm using rsync, and I
> find it way, way slower than rsyncing a 'traditional' file system.

With clustered file systems you have additional slowdown from the locking,
so it is expected.
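If you want to see where the time actually goes, something like this
(purely illustrative, paths made up) will show how much of the run is
spent in lstat() and getdents() rather than in actual data transfer:

mkdir -p /tmp/empty
strace -c -f rsync -a --dry-run /mnt/ocfs2/ /tmp/empty/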

>
> Has anybody encountered similar problems, and does anybody maybe have some
> tips or insights for me?
>

If you are running DRBD on top of LVM, you can make use of snapshots and
mount the snapshot with local locking. Here is an example from my backup
script:

# take a copy-on-write snapshot of the live volume
lvcreate -s -L 100G -n LVMsnapshot /dev/vg0/lv0
# give the clone a new UUID and label so it is not mistaken for the origin
tunefs.ocfs2 -y -L LVMsnapshot --cloned-volume /dev/vg0/LVMsnapshot
# mount read-only with local locking, bypassing the cluster-wide DLM
mount -o ro,localflocks /dev/vg0/LVMsnapshot /mnt/LVMsnapshot/
rsync -a --delete /mnt/LVMsnapshot/ ${BACKUP_LOCATION}
umount /mnt/LVMsnapshot
lvremove -f /dev/vg0/LVMsnapshot

Re: Backup issues with OCFS2
On Wed, 04 Apr 2012 14:12:02 +0200, Pascal BERTON <pascal.berton3@free.fr>
wrote:
> Dirk,
>
> Isn't there an error in your numbers?
> You say that with your non-clustered system you transfer 500 GB in 400
> seconds, which means more than a gigabyte per second, and I can't figure out
> how you can achieve that over a 100 Mb link that can't run faster than
> roughly 10 MB/s. Or did I miss something?
>

yes, _rsync_ does not transfer all 500 GB of data

Re: Backup issues with OCFS2
Ah, OK!


On 04/04/2012 14:19, Kaloyan Kovachev wrote:
> yes, _rsync_ does not transfer all 500 GB of data
Re: Backup issues with OCFS2
On Wed, 04 Apr 2012 14:12:02 +0200
Pascal BERTON <pascal.berton3@free.fr> wrote:

> Dirk,
>
> Isn't there an error in your numbers?
> You say that with your non-clustered system you transfer 500 GB in 400
> seconds, which means more than a gigabyte per second, and I can't figure out
> how you can achieve that over a 100 Mb link that can't run faster than
> roughly 10 MB/s. Or did I miss something?

Pascal,

For your information, rsync doesn't transfer everything each time; it uses a
delta-transfer algorithm to reduce the amount of data transferred!
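You can see this directly in Dirk's stats: the total size is 201701039047
bytes, but only 118692 + 638195567 = 638314259 bytes actually crossed the
wire, which is exactly where the reported speedup of
201701039047 / 638314259 = 315.99 comes from.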


Re: Backup issues with OCFS2
Hi,

Thank you for your reply!

On 04/04/2012 14:16, Kaloyan Kovachev wrote:
> With clustered file systems you have additional slowdown from the locking,
> so it is expected.
I do expect some kind of slowdown, but this is a lot... I also experience
this slowdown with only one node online, which I'd guess should make the
locking a lot easier / faster for the DLM? I'm not sure how the internals
of the DLM work.

> If you are running DRBD on top of LVM, you can make use of snapshots and
> mount the snapshot with local locking.
Unfortunately, I don't have my DRBD on top of LVM at this time. I might
try it if there are no other options.

Cheers,

Dirk
Re: Backup issues with OCFS2
Thanks for the tip, guys; I'm used to unison, not rsync...
So, no idea, then... :-)

Re: Backup issues with OCFS2
Hi,

I've changed my setup, I now have:

Disk -> DRBD -> LVM -> OCFS2

I have clvm and lvm2 running (both as clones) and things seem to work fine.
(After some fiddling; I'm new to LVM. I needed cLVM to get things
working smoothly.)

I can create a snapshot volume, but I fail to successfully change the
UUID / volume label:

root@scan1:~# lvcreate -s -L 50G -n SS /dev/r0/scans
Logical volume "SS" created
root@scan1:~# tunefs.ocfs2 --cloned-volume -L SS /dev/r0/SS
Updating the UUID and label on cloned volume "/dev/r0/SS".
DANGER: THIS WILL MODIFY THE UUID WITHOUT ACCESSING THE CLUSTER
SOFTWARE. YOU MUST BE ABSOLUTELY SURE THAT NO OTHER NODE IS USING THIS
FILESYSTEM BEFORE MODIFYING ITS UUID.
Update the UUID and label? y
tunefs.ocfs2: OCFS2 inode is not a directory while opening device
"/dev/r0/SS"

Anyone familiar with this setup / error message?

Thank you in advance,

Dirk

On 04/04/2012 14:16, Kaloyan Kovachev wrote:
> If you are running DRBD on top of LVM, you can make use of snapshots and
> mount the snapshot with local locking.
Re: Backup issues with OCFS2
Please ignore my last post.

I recreated my VG with the --clustered option, and now I get:

Clustered snapshots are not yet supported.

I had read reports of this, but I was under the impression that support
had been added in a more recent release.

I think I'll have to change my setup to:

Disk -> LVM -> DRBD -> OCFS2 (as in Kaloyan's post: DRBD on top of LVM,
not the other way around... :-))
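In the restacked layout, the DRBD resource would simply use a logical
volume as its backing disk. A minimal sketch of what that resource file
could look like (hostnames, addresses and LV names are made up; the net
option matches a dual-primary OCFS2 setup):

resource r0 {
  protocol C;
  net {
    allow-two-primaries;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/drbd_backing;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/drbd_backing;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}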

Cheers,

Dirk



Re: Backup issues with OCFS2
Hi All,

I'm still having issues with backing up this file system. I've followed
the advice and used LVM under my DRBD device (Disk -> LVM -> DRBD -> OCFS2).

I can create a snapshot (I have to run fsck.ocfs2 on the snapshot after
cloning the volume every time). I mount the snapshot read-only with local
file locks.
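Concretely, that extra step is just (volume name as in Kaloyan's example,
not my actual device):

fsck.ocfs2 -y /dev/vg0/LVMsnapshot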

The performance of this snapshot volume is even worse than the original
volume... Performance (on the original volume and the snapshot volume)
seems to degrade as the number of files in a directory rises. When I say
performance, I mean operations where every file needs to be checked, like
rsync or 'find . -mtime -1 -print'. Performance for my application is
great (writing a couple of thousand files a day and reading a couple of
hundred thousand a day); dd tests give me 200 MB/s writes and 600 MB/s
reads.

Am I missing something here, or will this setup just never work for my
backups...?

Kind regards,

Dirk

PS: I understand that this is more an OCFS2 issue than a DRBD thing...


Re: Backup issues with OCFS2
On Fri, Apr 13, 2012 at 08:09:06AM +0200, Dirk Bonenkamp - ProActive wrote:
> The performance of this snapshot volume is even worse than the original
> volume... Performance (on the original volume and the snapshot volume)
> seems to degrade as the number of files in a directory rises. When I say
> performance, I mean operations where every file needs to be checked, like
> rsync or 'find . -mtime -1 -print'. Performance for my application is
> great (writing a couple of thousand files a day and reading a couple of
> hundred thousand a day); dd tests give me 200 MB/s writes and 600 MB/s
> reads.
>
> Am I missing something here, or will this setup just never work for my
> backups...?

stat() can be a costly syscall,
even more so on cluster file systems.

I hope you have already mounted with -o noatime?

Even readdir (respectively getdents) is typically more expensive
on cluster file systems. Keeping the number of files per directory
small-ish (whatever that may be for your context) may help;
introducing hierarchical "hashing" subdirectories can help with doing so.
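A minimal sketch of such a hashing scheme (file name and base path are
made up):

# place each file under two levels of subdirectories derived from its name
f=scan_000123.tif
h=$(printf '%s' "$f" | md5sum)
mkdir -p "/data/${h:0:2}/${h:2:2}"
mv "$f" "/data/${h:0:2}/${h:2:2}/"

With two hex characters per level this spreads the files over up to
65536 leaf directories.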

And I'm not even speaking of stat()ing cache-cold,
while some other random IO plus streaming IO happens....
(adding more RAM helps for this one, as does tuning
vm.vfs_cache_pressure and maybe swappiness)
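As an illustration of that tuning (the values here are only starting
points, not recommendations):

sysctl vm.vfs_cache_pressure=50   # reclaim dentry/inode caches less eagerly
sysctl vm.swappiness=10           # prefer reclaiming page cache over swapping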

Nothing of this has anything to do with DRBD,
or with streaming IO "performance".

All of this should have been among the first hits
when searching for "OCFS2 slow" ...

Why do you think you need/want a cluster file system again?

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
Re: Backup issues with OCFS2
On 17/04/2012 22:48, Lars Ellenberg wrote:
> stat() can be a costly syscall,
> even more so on cluster file systems.
>
> I hope you have already mounted with -o noatime?
Yes, I have, from the start.
> Even readdir (respectively getdents) is typically more expensive
> on cluster file systems. Keeping the number of files per directory
> small-ish (whatever that may be for your context) may help;
> introducing hierarchical "hashing" subdirectories can help with doing so.
Keeping the number of files per directory small-ish is not really an option
here... few directories with tons of files, due to the nature of the
application this system runs.
>
> And I'm not even speaking of stat()ing cache-cold,
> while some other random IO plus streaming IO happens....
> (adding more RAM helps for this one, as does tuning
> vm.vfs_cache_pressure and maybe swappiness)
>
> Nothing of this has anything to do with DRBD,
> or with streaming IO "performance".
>
> All of this should have been among the first hits
> when searching for "OCFS2 slow" ...
Well, my application runs (very) fine on OCFS2. Backups do not (in my
situation), as I found out. Searching for rsync + OCFS2 gives you a lot
of hits about OCFS2 as a replacement for rsync, but surprisingly few about
'my' issue.
> Why do you think you need/want a cluster file system again?
To run two active nodes, which works fine. But since backups are
essential, I went back to the drawing board. I now have a test setup
with active/passive DRBD and ext4. The master node is the NFS server, and
the slave mounts the NFS share backed by the ext4 filesystem. This works
just fine; I did quite a bit of testing over the past few days and haven't
been able to 'wreck' it. And backups are a breeze again. So far, so good...
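A rough sketch of that active/passive arrangement (hostnames, network and
paths are placeholders):

# /etc/exports on the active (DRBD primary) node:
/data  192.168.1.0/24(rw,sync,no_subtree_check)

# on the passive node:
mount -t nfs primary:/data /data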

Thanks for your input.

Dirk
