I'm trying to use pg_dump on my localhost to dump a 4GB PostgreSQL database on a remote machine. My localhost reports 15GB of space available. I'm piping the output to gzip. However, after about 15 minutes of processing, pg_dump aborts, stating "pg_dump: [tar archiver] could not write to output file: No space left on device". I continually monitor the amount of free disk space on my machine, and it always remains in the ~10GB range. Why is pg_dump failing prematurely due to low disk space, even though there's still plenty of space?
My command looks like:
pg_dump -c --host=${HOST} --username=${DATABASEUSER} --blobs --format=t ${DATABASE} | gzip -c > /tmp/db-backup.tar.gz
Best Answer
I suggest changing the dump format to custom (-Fc, --format=custom) and avoiding the tar format altogether. As far as I know, the tar format has no advantages over custom (both work with pg_restore), and it likely explains your error: to build the tar stream, pg_dump first stages each table's data in a temporary file (under /tmp or $TMPDIR), so a large dump can exhaust that filesystem even when the disk holding your output file has plenty of room.
from http://postgresql.1045698.n5.nabble.com/Out-of-space-making-backup-td1904089.html
If your installation comes from a package, you probably have zlib compression support for the custom format out of the box. You can control the compression level with the -Z option, from 0 (no compression) to 9 (the default is 6).
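A custom-format version of your command might look like this (a sketch using the same shell variables as the question; the output path and explicit -Z 6 are illustrative):

```shell
# Custom format compresses internally with zlib, so no external gzip pipe
# is needed; -Z 6 is the default level, shown here only for illustration.
pg_dump --host="${HOST}" --username="${DATABASEUSER}" --blobs \
        --format=custom -Z 6 --file=/tmp/db-backup.dump "${DATABASE}"
```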
BTW, check your -c option: according to http://www.postgresql.org/docs/9.0/static/app-pgdump.html it is only meaningful for the plain-text format. For the archive formats, you specify --clean when you call pg_restore instead.
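So if you still want the DROP statements that -c used to add, with an archive format you would request them at restore time (a sketch; the dump filename is illustrative):

```shell
# --clean makes pg_restore emit DROP commands before recreating objects.
pg_restore --clean --dbname="${DATABASE}" /tmp/db-backup.dump
```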
BTW2, for convenience you can use the standard PostgreSQL environment variables such as PGHOST, PGUSER, PGDATABASE, and PGPORT instead of the corresponding command-line options.
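For example, the connection details could be moved into the environment (a sketch reusing the question's shell variables):

```shell
# libpq-aware tools such as pg_dump pick these up automatically.
export PGHOST="${HOST}"
export PGUSER="${DATABASEUSER}"
export PGDATABASE="${DATABASE}"

# No --host/--username/database arguments needed now.
pg_dump --blobs --format=custom --file=/tmp/db-backup.dump
```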