I am working with large files, and writing directly to disk is slow. Because the file is large, I cannot load it into a TMemoryStream.
TFileStream is not buffered, so I want to know if there is a custom library that offers buffered streams, or whether I should rely only on the buffering offered by the OS. Is the OS buffering reliable? I mean, if the cache is full, an old file (mine) might be flushed from the cache to make room for a new one.
My file is in the GB range. It contains millions of records. Unfortunately, the records are not of fixed size, so I have to do millions of small reads (between 4 and 500 bytes each). The reading (and the writing) is sequential; I don't jump around in the file, which I think is ideal for buffering.
In the end, I have to write such a file back to disk (again, millions of small writes).
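For context, the access pattern is essentially the following. The length-prefixed record layout is invented purely for illustration, but the shape of the loop is the point: two tiny reads per record means millions of small calls, which is exactly what makes a plain, unbuffered TFileStream slow.

```pascal
uses
  System.SysUtils, System.Classes;

procedure ReadAllRecords(const FileName: string);
var
  Stream: TFileStream;
  RecLen: Word;   // hypothetical length prefix; records are 4..500 bytes
  Data: TBytes;
begin
  Stream := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    while Stream.Position < Stream.Size do
    begin
      // Two tiny reads per record -> millions of ReadFile() calls.
      Stream.ReadBuffer(RecLen, SizeOf(RecLen));
      SetLength(Data, RecLen);
      Stream.ReadBuffer(Data[0], RecLen);
      // ... process Data ...
    end;
  finally
    Stream.Free;
  end;
end;
```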
A word of praise for David Heffernan!
David provided a GREAT piece of code that provides buffered disk access.
PEOPLE YOU HAVE TO HAVE HIS BufferedFileStream! It is gold. And don't forget to upvote.
Thanks David.
Speed tests:
Input file: 317 MB (.SFF)
Delphi stream: 9.84 sec
David's stream: 2.05 sec
______________________________________
More tests:
Input file: input2_700MB.txt
Lines: 19 million
Compiler optimization: ON
I/O check: ON
FastMM: release mode
**HDD**
Reading: **linear** (ReadLine) (note: multiply the times by 10)
We see a clear performance drop at 8 KB. Recommended: 16 or 32 KB.
Time: 618 ms Cache size: 64KB.
Time: 622 ms Cache size: 128KB.
Time: 622 ms Cache size: 24KB.
Time: 622 ms Cache size: 32KB.
Time: 622 ms Cache size: 64KB.
Time: 624 ms Cache size: 256KB.
Time: 625 ms Cache size: 18KB.
Time: 626 ms Cache size: 26KB.
Time: 626 ms Cache size: 1024KB.
Time: 626 ms Cache size: 16KB.
Time: 628 ms Cache size: 42KB.
Time: 644 ms Cache size: 8KB. <--- no difference until 8K
Time: 664 ms Cache size: 4KB.
Time: 705 ms Cache size: 2KB.
Time: 791 ms Cache size: 1KB.
Time: 795 ms Cache size: 1KB.
**SSD**
We see a small improvement as we move toward larger buffers. Recommended: 16 or 32 KB.
Time: 610 ms Cache size: 128KB.
Time: 611 ms Cache size: 256KB.
Time: 614 ms Cache size: 32KB.
Time: 623 ms Cache size: 16KB.
Time: 625 ms Cache size: 66KB.
Time: 639 ms Cache size: 8KB. <--- definitely not good with 8K
Time: 660 ms Cache size: 4KB.
______
Reading: **Random** (ReadInteger) (100000 reads)
SSD
Time: 064 ms. Cache size: 1KB. Count: 100000. RAM: 13.27 MB <-- probably the best buffer size for ReadInteger would be 4 bytes!
Time: 067 ms. Cache size: 2KB. Count: 100000. RAM: 13.27 MB
Time: 080 ms. Cache size: 4KB. Count: 100000. RAM: 13.27 MB
Time: 098 ms. Cache size: 8KB. Count: 100000. RAM: 13.27 MB
Time: 140 ms. Cache size: 16KB. Count: 100000. RAM: 13.27 MB
Time: 213 ms. Cache size: 32KB. Count: 100000. RAM: 13.27 MB
Time: 360 ms. Cache size: 64KB. Count: 100000. RAM: 13.27 MB
Conclusion: don't use buffered streams for "random" (non-sequential) reading.
Update 2021:
When reading character by character, the new System.Classes.TBufferedFileStream seems to be 70% faster.
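For reference, a minimal usage sketch of the stock System.Classes.TBufferedFileStream (available since Delphi 10.1 Berlin); the procedure name is my own, and the 32 KB buffer size simply matches the sweet spot found in the tests above:

```pascal
uses
  System.SysUtils, System.Classes;

procedure ReadWithBufferedStream(const FileName: string);
var
  Stream: TBufferedFileStream;
  Value: Integer;
begin
  // The third parameter is the internal buffer size (default 32 KB).
  Stream := TBufferedFileStream.Create(FileName,
    fmOpenRead or fmShareDenyWrite, 32 * 1024);
  try
    while Stream.Position < Stream.Size do
      // Each small read is served from the internal buffer,
      // not from a separate ReadFile() call.
      Stream.ReadBuffer(Value, SizeOf(Value));
  finally
    Stream.Free;
  end;
end;
```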
Best Answer
Windows file caching is very effective, especially if you are using Vista or later. TFileStream is a loose wrapper around the Windows ReadFile() and WriteFile() API functions, and for many use cases the only thing faster is a memory-mapped file.

However, there is one common scenario where TFileStream becomes a performance bottleneck: reading or writing small amounts of data with each call to the stream's read or write methods. For example, if you read an array of integers one item at a time, you incur significant overhead by reading 4 bytes per call to ReadFile().

Again, memory-mapped files are an excellent way to solve this bottleneck, but the other commonly used approach is to read a much larger buffer, say many kilobytes, and then resolve future reads of the stream from this in-memory cache rather than through further calls to ReadFile(). This approach only really works for sequential access.

From the use pattern described in your updated question, I think you may find the following classes would improve performance for you:
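To illustrate the technique the answer describes (serve small reads from a large in-memory buffer, refilling it with one big disk read when it runs dry), here is a simplified sketch. The class and member names are invented for illustration; this is not David's actual code, which handles seeking, writing, and the full TStream interface.

```pascal
uses
  System.SysUtils, System.Classes, System.Math;

type
  // Hypothetical illustration of the caching technique described above.
  TSimpleCachedReadStream = class
  private
    FSource: TFileStream;
    FCache: TBytes;
    FCacheSize: Integer;   // bytes currently held in the cache
    FCachePos: Integer;    // read position within the cache
  public
    constructor Create(const FileName: string; CacheCapacity: Integer);
    destructor Destroy; override;
    function Read(var Buffer; Count: Integer): Integer;
  end;

constructor TSimpleCachedReadStream.Create(const FileName: string;
  CacheCapacity: Integer);
begin
  inherited Create;
  FSource := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  SetLength(FCache, CacheCapacity);
end;

destructor TSimpleCachedReadStream.Destroy;
begin
  FSource.Free;
  inherited;
end;

function TSimpleCachedReadStream.Read(var Buffer; Count: Integer): Integer;
var
  Dest: PByte;
  Chunk: Integer;
begin
  Result := 0;
  Dest := @Buffer;
  while Count > 0 do
  begin
    if FCachePos = FCacheSize then
    begin
      // Cache exhausted: refill it with one large read from disk.
      FCacheSize := FSource.Read(FCache[0], Length(FCache));
      FCachePos := 0;
      if FCacheSize = 0 then
        Exit; // end of file
    end;
    // Copy as much as possible out of the in-memory cache.
    Chunk := Min(Count, FCacheSize - FCachePos);
    Move(FCache[FCachePos], Dest^, Chunk);
    Inc(FCachePos, Chunk);
    Inc(Dest, Chunk);
    Dec(Count, Chunk);
    Inc(Result, Chunk);
  end;
end;
```

The key point is that millions of 4-byte reads become a handful of multi-kilobyte ReadFile() calls, with everything else served by a Move() from memory, which is why it only pays off for sequential access.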