Nginx – optimize nginx for large file downloading

Tags: lighttpd, nginx, optimization

Hey, I'm wondering what general options I should look into for optimizing an nginx server for large file downloads (typically 100 MB to 6 GB). I just migrated from lighttpd, and I'm noticing that during downloads, speeds fluctuate a lot very quickly. I'm familiar with fluctuating speeds, but not at this rate; lighttpd didn't fluctuate nearly as much. I was wondering if there were some general things I should look into, being new to nginx. Should I up the worker pool count, etc.?

I was going through the wiki page for the HttpCoreModule and came across the directio option:

The directive enables use of the O_DIRECT flag (FreeBSD, Linux), the F_NOCACHE flag (Mac OS X), or the directio() function (Solaris) for reading files larger than the specified size. The directive disables use of sendfile for such requests. It may be useful for big files.
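
From the description it sounds like a one-liner with a size threshold, something like this (the 4m value is just my guess, not a tested setting):

    # read files larger than 4 MB with O_DIRECT, bypassing the page cache;
    # sendfile is disabled automatically for those requests
    directio 4m;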

Would that be an option to try out? Thanks guys, I appreciate the help.

I know my question may be pretty broad, but like I said, being new to nginx I'm wondering what kind of options I can look at to optimize the server for file downloads. I know a variety of things play a part, but I also know lighttpd didn't fluctuate as much on the exact same server.

Thanks!

Best Answer

How much RAM do you have? What kind of workload does your server have? Does it serve only big files, or does it serve smaller files and/or proxy requests as well?

DirectIO is useful when the set of actively served files is larger than RAM: they won't fit in the cache, so caching them is pointless, and it's better to read them directly from disk and leave the cache for something else.
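
If that matches your workload, a minimal sketch would be to scope it to the location serving the big files (the /downloads/ path and the 4m threshold are placeholders; size the threshold against your RAM and file sizes):

    location /downloads/ {
        # files above the threshold skip the page cache, leaving RAM
        # free to cache smaller, hotter content
        directio 4m;

        # sendfile can stay on for small files; nginx disables it on its own
        # for any file above the directio threshold
        sendfile on;
    }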

As for the fluctuations: they are probably caused by nginx workers blocking on disk operations (by default reads are synchronous). Try increasing the number of workers, or try asynchronous I/O (aio on). But be careful: overly aggressive async I/O, or a large number of workers, can drive the seek rate up sharply, so overall throughput might drop dramatically.
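
A rough sketch of both knobs together (the numbers are illustrative, not recommendations; aio requires an nginx built with file-AIO support, and on Linux it only takes effect for files above the directio threshold):

    worker_processes 8;             # extra workers help while some are blocked on disk

    http {
        server {
            location /downloads/ {
                aio      on;        # asynchronous reads instead of blocking the worker
                directio 4m;        # on Linux, aio applies only to files above this size
            }
        }
    }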