Linux – Performance impact of running different filesystems on a single Linux server

ext4, filesystems, linux, performance, xfs

The book "HBase: The definitive guide" states that

Installing different filesystems on a single server is not recommended. This can have adverse effects on performance as the kernel may have to split buffer caches to support the different filesystems. It has been reported that, for certain operating systems, this can have a devastating performance impact.

Does this really apply to Linux? I have never seen the buffer cache grow larger than 300 MB, and most modern servers have gigabytes of RAM, so splitting the buffer cache between different filesystems should not be an issue. Am I missing something else?
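(For context: buffer and page cache usage can be read straight from the standard kernel counters; nothing here is specific to my setup and the output obviously varies per machine.)

    # "Buffers" is the block-device buffer cache, "Cached" the page cache
    grep -E '^(Buffers|Cached):' /proc/meminfo

    # summarised view of the same counters
    free -m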

Best Answer

Splitting the buffer cache is detrimental, but the effect is minimal; I'd guess it's so small that it is basically impossible to measure.

Keep in mind, too, that cached data is not shared between different mount points.

While different filesystems use separate allocation caches, it's not as if that memory is allocated just to sit there and look pretty. Here is data from slabtop for a system running three different filesystems (XFS, ext4, btrfs):

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME 
 42882  42460  99%    0.70K   1866       23     29856K shmem_inode_cache
 14483  13872  95%    0.90K    855       17     13680K ext4_inode_cache
  4096   4096 100%    0.02K     16      256        64K jbd2_revoke_table_s
  2826   1136  40%    0.94K    167       17      2672K xfs_inode
  1664   1664 100%    0.03K     13      128        52K jbd2_revoke_record_
  1333    886  66%    1.01K     43       31      1376K btrfs_inode_cache
(many other objects)
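If you want to pull the same numbers on your own box, a one-shot slabtop run sorted by cache size, or a grep over /proc/slabinfo for the filesystem caches, is enough (reading /proc/slabinfo needs root on most distributions):

    # single snapshot of the biggest slab caches, sorted by cache size
    sudo slabtop -o -s c | head -20

    # just the per-filesystem caches shown above
    sudo grep -E 'ext4_inode|xfs_inode|btrfs_inode' /proc/slabinfo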

As you can see, every really sizeable cache has a utilisation level of over 90%. As such, if you're using multiple filesystems in parallel, the cost is roughly equivalent to losing 5% of system memory, and less than that if the computer is not a dedicated file server.
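If you want to put a rough number on that for a particular machine, you can sum the filesystem-specific slab caches from /proc/slabinfo and compare them against MemTotal. This is only a back-of-the-envelope sketch: it approximates each cache as object count times object size, and the cache names are the ones from the slabtop output above (including jbd2, ext4's journalling layer), which may differ on other kernels.

    # rough share of RAM held by ext4/XFS/btrfs slab caches
    sudo awk '
        /ext4|xfs|btrfs|jbd2/ {
            kb += $3 * $4 / 1024          # num_objs * objsize (bytes) -> KiB
        }
        END {
            "grep MemTotal /proc/meminfo" | getline mem
            split(mem, m)
            printf "filesystem slab caches: %d KiB (%.2f%% of RAM)\n", kb, 100 * kb / m[2]
        }' /proc/slabinfo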