So, perhaps you’ve seen this message, but when you check the space usage, it’s fine. That usually means your file system has run out of inodes rather than data blocks. If you’re using your drive for backups, and those backups consume a lot of inodes, then you’ll exhaust the disk very quickly.
Inodes are the metadata structures stored near the beginning of the disk, which point to each file’s data blocks. They also contain extra information like file permissions, owner, group, timestamps, etc.
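You can see both kinds of usage for yourself: df has one flag for block usage and one for inode usage, and stat shows the metadata a single inode holds (example.txt here is just a scratch file):

```shell
# Block usage vs. inode usage for the root file system:
df -h /     # data-block usage ("Use%")
df -i /     # inode usage ("IUse%") -- this is what runs out in our case

# Every file gets an inode; stat prints the metadata stored in it
# (inode number, permissions, owner, group, timestamps):
touch example.txt
stat example.txt
```

When "No space left on device" appears while df -h looks fine, df -i is the first place to check.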
Well, let’s create a file system that has more inodes, so that this doesn’t happen as quickly.
For my backups, I use rsync to do incremental, differential, and full backups in a single pass. It does this via hard links, which end up consuming inodes without much actual block usage. I believe this happens because directories cannot be hard-linked, so every snapshot’s directory tree uses up fresh inodes each time it is created. So, we need a lot more inodes if we’re going to do that. By default, mkfs.ext4 uses 4096-byte blocks and allocates one inode for every 16384 bytes of disk. When you shrink the block size, you get more blocks, and you need an inversely proportional decrease in the bytes-per-inode ratio to keep an inode available for each one. But this can also reduce throughput and increase fragmentation, so I don’t recommend it for standard file systems. I do this only for my backup drive.
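The rsync mechanism behind this style of backup is --link-dest: files unchanged since the previous snapshot become hard links into it instead of fresh copies. A minimal sketch — the /backup paths and snapshot names are made up for illustration:

```shell
# Hypothetical layout: one snapshot directory per day.
PREV=/backup/2024-01-01
DEST=/backup/2024-01-02

# Unchanged files become hard links into $PREV (no new data blocks,
# no new inodes); changed files are copied. Every directory in the
# tree, however, is created fresh and costs an inode each time.
rsync -a --link-dest="$PREV" /home/ "$DEST"/
```

Each snapshot looks like a full backup when you browse it, but only changed files occupy new blocks — which is exactly why the inode count, not the block count, is what you run out of.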
mkfs.ext4 -b 2048 -i 2048 /dev/3GBackup2/safe
Basically, after reading this, you should be creating file systems based on your needs. If you are running software that creates huge numbers of small files, then you’re going to need smaller data blocks and more inodes. If you are running software that creates very large files, then you’re wasting space by having lots of inodes, so you should use much larger -b and -i options to mkfs.
Let’s say you have a file system with 8192 data blocks, and those data blocks are 1024 bytes each. That’s a file system of 8 MB. mke2fs’s defaults for a file system this small (1024-byte blocks, one inode per 4096 bytes) give you 2048 inodes — one for every four data blocks. But you just might want to store one small file (<= 1024 bytes) in each block, and every file needs its own inode before you can access it at all; for that you’d need 8192 inodes at the beginning of the disk, i.e. -i 1024. Some of the disk is also implicitly eaten up by the inode table itself, so you never really get the full 8 MB. Now let’s look at some scenarios. Running mkfs.ext4 with no options on a device this size will create the 2048-inode version.
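You can build exactly this file system inside a plain file — no root access or spare partition needed — and confirm the inode count with dumpe2fs. This assumes e2fsprogs is installed; test.img is just a scratch file:

```shell
# 8192 blocks x 1024 bytes = 8 MB image
dd if=/dev/zero of=test.img bs=1024 count=8192

# 1024-byte blocks, one inode per 4096 bytes of image -> 2048 inodes
mkfs.ext4 -F -b 1024 -i 4096 test.img

# Confirm what mkfs actually allocated
dumpe2fs -h test.img | grep -i 'inode count'
```

Rerun with -i 1024 and the reported inode count quadruples — handy for experimenting before you commit a real drive to a layout.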
Scenario 1: Lots of small files
If you create a bunch of files no bigger than the bytes-per-inode ratio, you’ll exhaust your inodes at roughly the same time as your data blocks.
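A quick way to watch this happen (coreutils only; the tiny/ directory name is arbitrary) — every file costs one inode even when it holds zero bytes, so the IUsed column of df -i climbs once per file:

```shell
before=$(df -iP . | awk 'NR==2 {print $3}')   # IUsed on this file system

mkdir tiny
for i in $(seq 1 100); do touch "tiny/$i"; done

after=$(df -iP . | awk 'NR==2 {print $3}')
echo "inodes consumed: $((after - before))"   # ~101 on an idle fs: 100 files + 1 dir
```

The files consume essentially no blocks, yet on a file system sized for large files this loop would hit "No space left on device" long before df -h showed any pressure.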
Scenario 2: Large files
You can also create a single large file and use only one inode. This wastes space in the inode section at the beginning of the disk; that isn’t really a big deal, but you should try to optimize your settings based on your needs. The command below creates a single file (with no count, dd runs until the disk is full), using only one extra inode. If you use dumpe2fs to dump the free-inode count before and after, you can see the difference.
dd if=/dev/zero of=blah.bin
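You can verify the one-inode claim without filling the disk by giving dd a bounded count and comparing df -i before and after:

```shell
before=$(df -iP . | awk 'NR==2 {print $3}')

dd if=/dev/zero of=blah.bin bs=1M count=50   # one 50 MB file

after=$(df -iP . | awk 'NR==2 {print $3}')
echo "inodes consumed: $((after - before))"  # 1 on an otherwise idle fs,
                                             # regardless of the file's size
```

Fifty megabytes of data blocks, one inode — the mirror image of the small-files scenario.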
Scenario 3: Hard links
Hard links use virtually no space. From what I can tell, I was able to create 64999 links to one file before hitting ext4’s per-file limit of 65000 links (the original name counts as the first one).
dd if=/dev/zero of="0.bin" bs=2048 count=1; count=1; while ln "0.bin" "$count.bin"; do ((count++)); done; ls -1 | sort -n
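You can watch the link count itself with stat’s %h format, and confirm that every name points at the same inode with %i. A small-scale version of the loop above:

```shell
echo data > orig.bin
for i in 1 2 3 4 5; do ln orig.bin "link$i.bin"; done

# Link count and inode number: the count is 6 (original name + 5 links),
# and every name reports the same inode.
stat -c '%h %i' orig.bin link3.bin
```

Only the inode itself and the directory entries exist — delete five of the six names and the data is still there, untouched, under the last one.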