I traced this issue back to a discussion of a commit to the XFS source tree from December 2010. The patch was introduced in kernel 2.6.38 (and later backported into some popular Linux distribution kernels).
The observed fluctuations in disk usage are a result of a new feature: XFS Dynamic Speculative EOF Preallocation.
This is a move to reduce file fragmentation during streaming writes by speculatively allocating space as file sizes increase. The amount of space preallocated per file is dynamic and is primarily a function of the free space available on the filesystem (to preclude running out of space entirely).
It follows this schedule:

freespace       max prealloc size
>5%             full extent (8GB)
4-5%            2GB   (8GB >> 2)
3-4%            1GB   (8GB >> 3)
2-3%            512MB (8GB >> 4)
1-2%            256MB (8GB >> 5)
<1%             128MB (8GB >> 6)
This is an interesting addition to the filesystem as it may help with some of the massively fragmented files I deal with.
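The effect is easy to observe by comparing a file's apparent size with the space actually allocated to it (the path below is a placeholder, not from the original report):

# dd if=/dev/zero of=/data/testfile bs=1M count=100
# ls -lh /data/testfile
# du -h /data/testfile

While the speculative preallocation is still held, du reports more than the 100M apparent size that ls shows.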
The additional space can be reclaimed temporarily by freeing the pagecache, dentries and inodes with:
sync; echo 3 > /proc/sys/vm/drop_caches
The feature can be disabled entirely by specifying a fixed allocsize value at mount time (the default for XFS is allocsize=64k).
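For example (device and mount point here are placeholders):

# mount -o allocsize=64k /dev/sdb1 /data

or persistently via /etc/fstab:

/dev/sdb1  /data  xfs  defaults,allocsize=64k  0 0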
The impact of this change will probably be felt by monitoring/thresholding systems (which is how I caught it), but it has also affected database systems and could cause unpredictable or undesired results for thin-provisioned virtual machines and storage arrays (they'll use more space than you expect).
All in all, it caught me off guard because there was no clear announcement of the filesystem change at the distribution level, or even on the XFS mailing list, which I monitor.
Edit:
Performance on XFS volumes with this feature is drastically improved. I'm seeing consistent < 1% fragmentation on volumes that previously displayed up to 50% fragmentation. Write performance is up globally!
Stats from the same dataset, comparing legacy XFS to the version in EL6.3.
Old:
# xfs_db -r -c frag /dev/cciss/c0d0p9
actual 1874760, ideal 1256876, fragmentation factor 32.96%
New:
# xfs_db -r -c frag /dev/sdb1
actual 1201423, ideal 1190967, fragmentation factor 0.87%
rsync is a program designed to act as both a client and a server: the server side reads and the client side writes. Imagine that, instead of a single computer, you had two computers communicating over a network; the design is much clearer if you think of it that way.
Then there is the controller. Since IO operations come with a certain amount of risk, an IO problem shouldn't cause total blocking or a crash. So rsync forks a process for each connection while the controller sits in the background.
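As a rough sketch of that split (the module name and paths are placeholders):

# rsync --daemon --config=/etc/rsyncd.conf
# rsync -av rsync://server/module/ /local/dir/

The first command starts the daemon, which forks a child for each incoming connection; the second runs a client that pulls from it and writes locally.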
The very, very bottom of the kernel documentation on the blkio controller includes this note:
Practically, this means that write operations will appear in blkio.throttle.io_service_bytes only if they bypass kernel buffering.
The tool fio can illustrate this very easily: direct, unbuffered writes should be reported in blkio.throttle.io_service_bytes, whereas with the opposite (buffered) options nothing is reported there, because the writes pass through the kernel buffer cache and are scheduled for writeback later.
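For example (a minimal sketch; the cgroup name testgroup and the file path are assumptions, and cgexec from libcgroup is just one way to launch a process inside a blkio cgroup):

# cgexec -g blkio:testgroup fio --name=directwrite --rw=write --bs=4k --size=64m --direct=1 --filename=/tmp/fio.test
# cat /sys/fs/cgroup/blkio/testgroup/blkio.throttle.io_service_bytes

Re-running the same job with --direct=0 --buffered=1 leaves blkio.throttle.io_service_bytes unchanged, since those writes land in the page cache first.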
Additionally, this thread with a Red Hat engineer who works on cgroups reiterates the point that once a write has been passed to the kernel's write cache, "Due to this extra layer of cache, we lose the context information by the time IO reaches the device," and so no blkio accounting can occur.