Mailing List Archive

running out of inodes problem
Hello Toasters,

Just wanted to do a quick check on what the standard practice is
when running out of inodes on a volume.

I have several flex volumes in one aggregate.
One of the volumes ran out of inodes at only 80% full.

df -i will show the number of inodes used and the number free.

This is a 100G volume with 3458831 inodes.
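
On the console it looks roughly like this ("vol1" stands in for the
real volume name, and the numbers are illustrative):

    toaster> df -i /vol/vol1
    Filesystem               iused      ifree  %iused  Mounted on
    /vol/vol1/             3458794         37    100%  /vol/vol1/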

According to now.netapp.com, there are two solutions:

increase inodes with the 'maxfiles' command,
or add more disk space to the volume.
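
In command form that would be something like this (assuming a volume
named vol1; the numbers are illustrative):

    toaster> maxfiles vol1 4000000      (raise the inode limit)
    toaster> vol size vol1 +20g         (or grow the volume)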

Has anybody had experience with this and which way did you go?
RE: running out of inodes problem [ In reply to ]
It depends on why you are running out of inodes. If your dataset
uses lots of little files, then increasing the disk space probably
won't help much because you'll end up having a lot of space sitting
idle.
If there are just a few places in the data that have lots of inodes, but
it's the exception not the rule, then adding space will probably do the
trick.

The only caveat with adding inodes is to add them as you need them.
Don't massively over-add inodes as you'll increase some structures in
the filesystem that could slow down your performance unnecessarily.
Also keep in mind that once you increase the inodes in a volume, they
cannot be decreased.
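
A measured bump might look like this (volume name and numbers are
hypothetical, and the exact output wording varies by ONTAP release):

    toaster> maxfiles vol1
    Volume vol1: maximum number of files is currently 3458831 (3458794 used).
    toaster> maxfiles vol1 3800000

i.e. add 10-15% of headroom rather than, say, tripling the limit outright.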

Just some thoughts on the topic.


-- Adam Fox
adamfox@netapp.com


-----Original Message-----
From: Magnus Swenson [mailto:magnuss@cadence.com]
Sent: Tuesday, May 22, 2007 10:38 AM
To: toasters@mathworks.com
Subject: running out of inodes problem

Hello Toasters,

Just wanted to do a quick check on what the standard practice is when
running out of inodes on a volume.

I have several flex volumes in one aggregate.
One of the volumes ran out of inodes at only 80% full.

df -i will show the number of inodes used and the number free.

This is a 100G volume with 3458831 inodes.

According to now.netapp.com, there are two solutions:

increase inodes with the 'maxfiles' command, or add more disk space to
the volume.

Has anybody had experience with this and which way did you go?
Re: running out of inodes problem [ In reply to ]
This has happened quite a few times to me. Coming from an EDA
environment as well, it's not uncommon for relatively small volumes to
have huge numbers of files (20 million files on a 450GB volume).

maxfiles is what I use, typically adding a million files at a time. I'm
not exactly sure what algorithm NetApp uses to add inodes as you
increase volume size, so I just take the more direct route. Plus I
don't want to just throw space at engineers who will consume it "just
because." Remember, after adding inodes, you can't decrease the number,
and they consume space from the volume.
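
For a bump like that it's just the one command, e.g. (volume name and
count hypothetical):

    toaster> maxfiles eda_vol 21000000

with a df -i afterwards to confirm the new ceiling.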

--
/* wes hardin */
UNIX System Admin
Dallas Semiconductor/Maxim Integrated Products


Magnus Swenson wrote:
> Hello Toasters,
>
> Just wanted to do a quick check on what the standard practice is
> when running out of inodes on a volume.
>
> I have several flex volumes in one aggregate.
> One of the volumes ran out of inodes at only 80% full.
>
> df -i will show the number of inodes used and the number free.
>
> This is a 100G volume with 3458831 inodes.
>
> According to now.netapp.com, there are two solutions:
>
> increase inodes with the 'maxfiles' command,
> or add more disk space to the volume.
>
> Has anybody had experience with this and which way did you go?
Re: running out of inodes problem [ In reply to ]
Just add more with maxfiles, and ask about NetApp's plan to adopt
dynamic inode allocation, which there may not be, but one can hope.
:) We have a data set that constantly runs out of inodes; we just
keep a close eye on it and add more inodes when needed. We've not
had an issue with mysterious loss of performance when adding inodes
using maxfiles.
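
The "close eye" doesn't have to be fancy. Something along these lines
would do (a sketch only; it assumes rsh access to a filer named
"toaster", a volume named vol1, and that %iused is the fourth column
of df -i output):

    #!/bin/sh
    # Warn when inode usage on vol1 reaches 90%.
    rsh toaster df -i /vol/vol1 | awk 'NR == 2 {
        pct = $4; sub(/%/, "", pct)
        if (pct + 0 >= 90) print "vol1 inode usage at " pct "%"
    }'

Run it from cron and mail yourself the output.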

hope that helps,

-Blake

On 5/22/07, Fox, Adam <Adam.Fox@netapp.com> wrote:
> It depends on why you are running out of inodes. If your dataset
> uses lots of little files, then increasing the disk space probably
> won't help much because you'll end up having a lot of space sitting
> idle.
> If there are just a few places in the data that have lots of inodes, but
> it's the exception not the rule, then adding space will probably do the
> trick.
>
> The only caveat with adding inodes is to add them as you need them.
> Don't massively over-add inodes as you'll increase some structures in
> the filesystem that could slow down your performance unnecessarily.
> Also keep in mind that once you increase the inodes in a volume, they
> cannot be decreased.
>
> Just some thoughts on the topic.
>
>
> -- Adam Fox
> adamfox@netapp.com
>
>
> -----Original Message-----
> From: Magnus Swenson [mailto:magnuss@cadence.com]
> Sent: Tuesday, May 22, 2007 10:38 AM
> To: toasters@mathworks.com
> Subject: running out of inodes problem
>
> Hello Toasters,
>
> Just wanted to do a quick check on what the standard practice is when
> running out of inodes on a volume.
>
> I have several flex volumes in one aggregate.
> One of the volumes ran out of inodes at only 80% full.
>
> df -i will show the number of inodes used and the number free.
>
> This is a 100G volume with 3458831 inodes.
>
> According to now.netapp.com, there are two solutions:
>
> increase inodes with the 'maxfiles' command, or add more disk space to
> the volume.
>
> Has anybody had experience with this and which way did you go?
>
Re: running out of inodes problem [ In reply to ]
> This has happened quite a few times to me. Coming from an EDA
> environment as well, it's not uncommon for relatively small volumes to
> have huge numbers of files (20 million files on a 450GB volume).
>
> maxfiles is what I use, typically adding a million files at a time. I'm
> not exactly sure what algorithm NetApp uses to add inodes as you
> increase volume size, so I just take the more direct route. Plus I
> don't want to just throw space at engineers who will consume it "just
> because." Remember, after adding inodes, you can't decrease the number,
> and they consume space from the volume.

Looks like by default you get 1 inode for every filesystem data
block (4K block size). This would be plenty if each file consumed
at least one 4K block. But files 64 bytes or smaller are stored
entirely in the inode and therefore do not consume any data blocks.
Rather than allocate an entire data block for so little data,
WAFL places the data in the inode where the pointers to the file's
data blocks are ordinarily stored.

So if you have a lot of files that are 64 bytes or smaller, then you
need inodes for them but no data blocks, so increase maxfiles. (Often
symbolic links are short enough to fit in the inode.) You may
still want to grow the volume a little to provide room for the
new inodes. The inode table is stored in an invisible "meta file"
within the volume. The root of the entire volume is the inode for the
inode file. The location of everything else in the volume is stored
in the inode file.
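
You can sometimes see the effect from an NFS client (illustrative
only; whether the server reports zero blocks for inline data may
depend on your client and ONTAP release):

    $ ln -s /short/target mylink
    $ stat -c 'size=%s blocks=%b' mylink
    size=13 blocks=0        (the 13-byte target fits inside the inode)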


Steve Losen scl@virginia.edu phone: 434-924-0640

University of Virginia ITC Unix Support
Re: running out of inodes problem [ In reply to ]
scl@sasha.acc.virginia.edu (Stephen C. Losen) writes:

> Looks like by default you get 1 inode for every filesystem data
> block (4K block size).

That was the old scheme, where the default was 1 inode / 4KB and
the minimum allowed was 1 inode / 32KB (roughly). These days the
default (and minimum) for a flexible volume is the old minimum.
That's probably why people run out of inodes more often than
they used to.
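
For Magnus's 100G volume that checks out, roughly:

    100 GB / 32 KB per inode = 107374182400 / 32768 = 3276800 inodes

which is in the same ballpark as the 3458831 he reported.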

Adam.Fox@netapp.com (Adam Fox) writes:

> The only caveat with adding inodes is to add them as you need them.
> Don't massively over-add inodes as you'll increase some structures in the
> filesystem that could slow down your performance unnecessarily.

I think the word that should be emphasized there is "massively".
It's no more sensible to have your inode metafile always nearly full
than to have your aggregates/traditional-volumes in that state.
There are overheads that increase if you do.

There is a hidden 5% reserve in the inode metafile (i.e. it is really
20/19 times the maxfiles value, as you can see by looking at the
inode numbers actually used) which is meant to stop inode allocation
going exponential on you (like the 10% space reserve in an aggregate).
That doesn't mean that operating at the extreme limit allowed is ideal.
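
(For the 100G example above: 3276800 * 20/19 is approximately 3449263
inode slots actually allocated in the metafile.)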

There was a major change to the inode allocation algorithm (sometime
early in ONTAP 6.x, I think) which substantially improved the performance
with the inode metafile nearly full. (It had some deleterious effects
in other contexts, though, as I might get around to posting about one
of these years.) But there's no point in stressing it unnecessarily.

--
Chris Thompson
Email: cet1@cam.ac.uk