From: http://linux.die.net/man/8/fsck.ext3
"Note that in general it is not safe to run e2fsck
on mounted filesystems. The only exception is if the -n
option is specified, and -c
, -l
, or -L
options are not specified. However, even if it is safe to do so, the results printed by e2fsck
are not valid if the filesystem is mounted. If e2fsck
asks whether or not you should check a filesystem which is mounted, the only correct answer is ''no''. Only experts who really know what they are doing should consider answering this question in any other way. "
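So if all you want is a read-only look at a mounted filesystem, the only invocation the man page considers safe is the -n form without the bad-block options. A minimal sketch (the device name is only a placeholder):

# Read-only check: opens the filesystem read-only and answers "no" to every question.
# Do not combine with -c, -l or -L; /dev/sda1 is an example, use your own device.
e2fsck -n /dev/sda1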
The answer is found in the Linux source, specifically /usr/src/linux/mm/shmem.c, starting around line 70 on my system (Gentoo 2.6.31-ish):
/*
* The maximum size of a shmem/tmpfs file is limited by the maximum size of
* its triple-indirect swap vector - see illustration at shmem_swp_entry().
*
* With 4kB page size, maximum file size is just over 2TB on a 32-bit kernel,
* but one eighth of that on a 64-bit kernel. With 8kB page size, maximum
* file size is just over 4TB on a 64-bit kernel, but 16TB on a 32-bit kernel,
* MAX_LFS_FILESIZE being then more restrictive than swap vector layout.
One-eighth of 2 TB is exactly 256 GB, which is the limit the comment gives for a 64-bit kernel with a 4 kB page size. Larger sizes are possible with a 32-bit kernel, as you discovered with your 32-bit FC6 test system.
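To confirm which case applies to your machine, checking the page size makes the arithmetic concrete (the command below is just one way to do it):

# Report the kernel page size in bytes; 4096 means the 4 kB case described above.
getconf PAGE_SIZE
# On a 64-bit kernel with 4 kB pages, the limit quoted above works out to
# roughly 2 TB / 8 = 256 GB per shmem/tmpfs file.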
It appears that changing the page size may be related to enabling HugeTLB filesystem support in the kernel. However, I don't know enough about the guts of the kernel to say how or why, what steps you would need to take to take advantage of it, or what other implications it might have. To enable it, run make menuconfig, navigate to File systems, then Pseudo filesystems; the option in question is HugeTLB file system support (a rough usage sketch follows the help text below). The online help for it says:
CONFIG_HUGETLBFS:
hugetlbfs is a filesystem backing for HugeTLB pages, based on
ramfs. For architectures that support it, say Y here and read
<file:Documentation/vm/hugetlbpage.txt> for details.
If unsure, say N.
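If you do enable it, the kernel documentation referenced above describes using hugetlbfs roughly like this; whether that actually changes the shmem/tmpfs limit is exactly the part I'm unsure about, and the page count and mount point below are only placeholders:

# Reserve some huge pages, then mount hugetlbfs so files there are backed by them.
echo 512 > /proc/sys/vm/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs none /mnt/huge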
It might be worth running this by StackOverflow, too. I hope this helps.
Best Answer
Usually (e.g. ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time, so no mount option can work around it. Some filesystems like xfs make the ratio of space used by inodes a tunable, so it can be increased at any time. Modern file systems like ZFS or btrfs have no hardcoded limit on the number of files a file system can store; inodes (or their equivalent) are created on demand.
Edit: narrowing the answer to the updated question.
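To see the inode limit and current usage of an existing mount (this works for any of the filesystems above, tmpfs included), df can report it; /dev/shm below is simply the usual tmpfs mount point on many distributions:

# Show inode totals, usage and free counts for every mounted filesystem.
df -i
# Or for a single tmpfs mount point (adjust the path to your setup):
df -i /dev/shm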
With tmpfs, the default number of inodes is computed to be large enough for most realistic use cases. The only situation where this setting wouldn't be optimal is when a large number of empty files are created on tmpfs. If you are in that case, the best practice is to adjust the nr_inodes parameter to a value large enough for all the files to fit, but not to use 0 (= unlimited). The tmpfs documentation states that unlimited inodes shouldn't be the default because of the risk of memory exhaustion by non-root users. However, it is unclear how this could happen, given that tmpfs RAM usage is by default limited to 50% of the RAM. Many people will be more concerned with changing the default amount of memory to an amount that matches what their application demands.
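As a sketch of that practice, tmpfs accepts both limits as mount options; the mount point and values below are only illustrative and should be sized to your workload:

# Cap both memory and inode count explicitly instead of using nr_inodes=0.
mkdir -p /mnt/scratch
mount -t tmpfs -o size=2g,nr_inodes=200k tmpfs /mnt/scratch
# Confirm the limits took effect.
df -h /mnt/scratch
df -i /mnt/scratch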