I'm using the following to count the number of files in a directory, and its subdirectories:
find . -type f | wc -l
But I have half a million files in there, and the count takes a long time.
Is there a faster way to get the count that doesn't involve piping a huge amount of text to something that counts lines? It seems like an inefficient way to do things.
Best Answer
If you have this on a dedicated file-system, or if the number of other files on it stays roughly constant, you may be able to get a rough count of the number of files by looking at the number of inodes allocated on the file-system via "df -i".
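A sketch of what that looks like (the exact numbers and column layout depend on your machine and df version; the awk extraction assumes GNU coreutils df):

```shell
# Show inode usage for the file-system containing the current directory.
# The IUsed column is a rough upper bound on the file count.
df -i .

# Pull out just the used-inode count. -P keeps the output on one line
# even when the device name is long; IUsed is column 3.
df -Pi . | awk 'NR==2 {print $3}'
```

Since df only asks the file-system for its summary counters, this is effectively instant no matter how many files exist.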
On my test box, for example, I have 75,885 inodes allocated. However, these inodes are not just files: directories, symlinks, and other entry types each consume an inode as well.
NOTE: Not all file-systems maintain inode counts the same way. ext2/3/4 will all work, however btrfs always reports 0.
If you have to differentiate files from directories, you're going to have to walk the file-system and "stat" each entry to see whether it's a file, directory, symlink, and so on. The biggest cost here is not piping all the text to "wc", but seeking around among all the inodes and directory entries to gather that data in the first place.
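If you're going to pay for the walk anyway, you can at least get the full per-type breakdown in a single pass. This relies on GNU find's "-printf" (not available in every find implementation); "%y" prints each entry's type letter: f = regular file, d = directory, l = symlink, and so on:

```shell
# One walk over the tree, tallying every entry type at once:
find . -printf '%y\n' | sort | uniq -c
```

The line-counting itself is trivial; a second invocation with a different "-type" filter would cost you a whole second walk, which is the part worth avoiding.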
Other than the inode table as shown by "df -i", there really is no database of how many files there are under a given directory. However, if this information is important to you, you could create and maintain such a database by having your programs increment a number when they create a file in this directory and decrement it when deleted. If you don't control the programs that create them, this isn't an option.
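A minimal sketch of that bookkeeping, assuming you can route file creation and deletion through wrapper functions (the counter-file path and helper names here are made up for illustration):

```shell
COUNT=.filecount   # hypothetical counter file kept alongside the data

# One expensive full walk to seed the counter; everything after is O(1).
init_count() { c=$(find . -type f | wc -l); echo "$c" > "$COUNT"; }

# Wrappers your programs would call instead of touching files directly.
# NOTE: this read-modify-write is NOT safe with concurrent writers;
# serialize updates (e.g. with flock) if more than one process creates files.
add_file()  { touch "$1" && echo $(( $(cat "$COUNT") + 1 )) > "$COUNT"; }
del_file()  { rm "$1"    && echo $(( $(cat "$COUNT") - 1 )) > "$COUNT"; }

get_count() { cat "$COUNT"; }   # instant, no walk
```

The trade-off is the usual one for any maintained index: reads become cheap, but every writer has to cooperate, and the counter drifts the moment something bypasses the wrappers.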