Linux – s3cmd fails too many times

Tags: amazon-s3, backup, linux

It used to be my favorite backup transport agent but now I frequently get this result from s3cmd on the very same Ubuntu server/network:

root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      36864 of 2711541519     0% in    1s    20.95 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      36864 of 2711541519     0% in    1s    23.96 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.01)
WARNING: Waiting 6 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      28672 of 2711541519     0% in    1s    18.71 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.05)
WARNING: Waiting 9 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      28672 of 2711541519     0% in    1s    18.86 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.25)
WARNING: Waiting 12 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      28672 of 2711541519     0% in    1s    15.79 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      12288 of 2711541519     0% in    2s     4.78 kB/s  failed
ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.

This happens even with files as small as 100 MB, so I don't think it's a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1).

I'd appreciate suggestions for a fix, or for a lightweight alternative to s3cmd.

Best Answer

There are a few common problems that result in s3cmd returning the error you mention:

  • A non-existent bucket (e.g. a mistyped bucket name, or a bucket that hasn't yet been provisioned)
  • Trailing spaces on your authentication values (key/id)
  • An inaccurate system clock. You can use Wireshark (over an http, not https, connection) to see how your system clock lines up with S3's clock; they should match to within a few seconds. Consider using NTP to keep your clock in sync if this is an issue.
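To check the clock-skew case without reaching for Wireshark, you can compare S3's Date response header against your local clock. A rough sketch, assuming GNU date (for `-d`) and curl; the fixed timestamps are just for illustration:

```shell
#!/bin/sh
# Seconds of skew between two RFC 1123 timestamps (positive = $2 is later).
# Requires GNU date for the -d option.
skew() {
    echo $(( $(date -d "$2" +%s) - $(date -d "$1" +%s) ))
}

# Example with fixed timestamps: the local clock is 40 seconds behind.
skew "Sat, 04 Feb 2012 12:00:00 GMT" "Sat, 04 Feb 2012 12:00:40 GMT"   # prints 40

# Against live S3 (uncomment): diff the server's Date header against `date -u`.
# s3_date=$(curl -sI http://s3.amazonaws.com/ | tr -d '\r' |
#           awk -F': ' 'tolower($1)=="date" {print $2}')
# echo "skew: $(skew "$(date -u '+%a, %d %b %Y %H:%M:%S GMT')" "$s3_date") seconds"
```

If the skew is more than a few seconds, sync the clock with NTP (e.g. `ntpdate pool.ntp.org`) and retry the upload.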

Alternatives to s3cmd:

  • s3cp - a Java-based script that offers good functionality for transferring files to S3, with more verbose error messages than s3cmd
  • aws - a Perl-based script, written by Tim Kay, that provides easy access to most AWS functions (including S3) and is quite popular.

If you wish to write your own script, you can use the Python Boto library, which has functions for performing most AWS operations and has many examples available online. There is a project which exposes some of the boto functions on the command line, although only a small set of functions is currently available.
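If you do roll your own (or just want to understand the failures above), note that every S3 request is signed over the Date header, which is why a skewed clock, or stray whitespace in your secret key, makes S3 reject requests. A minimal sketch of the signature scheme S3 used at the time (AWS signature version 2: base64 of an HMAC-SHA1 over a canonical string); the secret and resource below are placeholders, and `openssl` must be on the PATH:

```shell
#!/bin/sh
# AWS signature v2: base64(HMAC-SHA1(secret, VERB\nMD5\nTYPE\nDATE\nRESOURCE)).
# Content-MD5 and Content-Type are left empty here for simplicity.
sign_request() {
    secret=$1; verb=$2; date=$3; resource=$4
    printf '%s\n\n\n%s\n%s' "$verb" "$date" "$resource" |
        openssl dgst -sha1 -hmac "$secret" -binary | base64
}

# The Date header is inside the signed string: if your clock is off by more
# than S3's tolerance, the server computes a different signature and rejects
# the request.
sign_request "EXAMPLE_SECRET" "PUT" "Sat, 04 Feb 2012 12:00:00 GMT" "/mybucket/bkup.tgz"
```

The same property explains the trailing-whitespace bullet above: a secret key with an extra space produces a completely different HMAC, so the server's signature never matches yours.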