Linux – e2fsck on large filesystem fails with "Error: Memory Allocation Failed"

fsck · linux · memory · partition

I'm trying to run e2fsck on a large RAID array that is 2 TB in size and uses GPT partitioning (because of the size).

There is only 1GB of ram installed on the system and I do not have the ability to add more at this time.

The problem is that shortly after starting the fsck on the device, I get an error that says:

Error storing directory block information (inode=115343515, block=0, num=108120142): Memory allocation failed
e2fsck: aborted

After a bit of online searching I ran across this post:

Running out of memory running fsck on large filesystems

After reading the man page for e2fsck.conf, I followed its advice and created an /etc/e2fsck.conf file that looks like:

[scratch_files]
directory = /var/cache/e2fsck

and tried the fsck again, after making sure to create the /var/cache/e2fsck directory. Watching the available memory, CPU usage, and the size of /var/cache/e2fsck while it ran, I can say it definitely helped a lot, but the run still eventually failed with the same error. Basically it slowed the memory consumption but did not eliminate it altogether.
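For anyone who wants to reproduce the monitoring step, a simple loop like this (a sketch; the scratch path is the one configured in e2fsck.conf above, and the loop count is arbitrary) shows free memory and scratch-directory growth while e2fsck runs in another terminal:

```shell
# Poll free memory and the scratch directory size a few times while
# e2fsck runs elsewhere.  Path matches the e2fsck.conf shown above.
for i in 1 2 3; do
    grep -E 'MemFree|MemAvailable' /proc/meminfo
    du -sh /var/cache/e2fsck 2>/dev/null || echo "(scratch dir missing or unreadable)"
    sleep 1
done
```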

So I tried playing with the additional options for the e2fsck.conf file, using:

dirinfo = false 

or

dirinfo = true

and

icount = false

or

icount = true

Neither of these seemed to have much effect, as the same error happens after a short while.
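For reference, the dirinfo and icount settings belong in the same [scratch_files] section as the directory setting, so the combined /etc/e2fsck.conf I was experimenting with looked like this (the false/true values are just the combinations I tried):

```
[scratch_files]
directory = /var/cache/e2fsck
dirinfo = false
icount = false
```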

Am I missing something? I'm OK with the fsck taking a long time, but I need it to actually complete instead of erroring out.

Best Answer

If you can, add some swap space to the system. The fsck will take an insanely long time, but it will eventually complete. Next time, chop your filesystem into smaller pieces.
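A minimal sketch of adding swap via a swap file (the path and size here are illustrative; for a fsck of a 2 TB filesystem you would likely want several GiB). Note that mkswap can format a file you own, but activating it with swapon requires root:

```shell
# Create a swap file; 512 MiB here is just a demo size.
dd if=/dev/zero of=/var/tmp/e2fsck-swap bs=1M count=512 status=none
chmod 600 /var/tmp/e2fsck-swap   # swap files must not be world-readable
mkswap /var/tmp/e2fsck-swap
# Activating the swap file requires root:
#   swapon /var/tmp/e2fsck-swap
```

Once the swap is active, rerun the fsck; the scratch_files spillover plus the extra swap should let it finish, just very slowly.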