This worked for me (Amazon Linux AMI v2012.03; us-east-1):
Download and extract the latest s3-curl.zip:
wget http://s3.amazonaws.com/doc/s3-example-code/s3-curl.zip
unzip s3-curl.zip
Make the script executable:
cd s3-curl
chmod +x s3curl.pl
Create a credentials file (.s3curl, which s3curl.pl reads from your home directory):
%awsSecretAccessKeys = (
    # personal account
    personal => {
        id  => 'XXXXXXXXXXXXXXXXXXXX',
        key => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
    },
);
Restrict the credential file's permissions:
chmod 600 .s3curl
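Equivalently, you can create and lock down the file in one step. A sketch, assuming the file lives in your home directory (which is where s3curl.pl looks for it); the id and key values are placeholders as above:

cat > ~/.s3curl <<'EOF'
%awsSecretAccessKeys = (
    # personal account
    personal => {
        id  => 'XXXXXXXXXXXXXXXXXXXX',
        key => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
    },
);
EOF
chmod 600 ~/.s3curl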
Create POST request body (multidelete.xml):
<?xml version="1.0" encoding="UTF-8"?>
<Delete>
  <Object>
    <Key>file1.txt</Key>
  </Object>
  <Object>
    <Key>file2.txt</Key>
  </Object>
</Delete>
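If you have many keys to delete, the body can be generated from a plain list instead of written by hand. A sketch, assuming a hypothetical keys.txt with one object key per line (keys containing XML special characters would need escaping):

{
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<Delete>'
  while IFS= read -r key; do
    echo "  <Object><Key>${key}</Key></Object>"
  done < keys.txt
  echo '</Delete>'
} > multidelete.xml

Note that a single multi-object delete request accepts at most 1000 keys.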
Calculate the base64-encoded MD5 sum of the POST body:
cat multidelete.xml | openssl dgst -md5 -binary | base64
cD8Q8KTug5P8Hj7oyOW8iQ==
Construct the request, enabling verbose output for curl (I have embedded the digest command from the previous step inline, rather than the MD5 sum itself):
./s3curl.pl --id=personal --post multidelete.xml --contentMd5 `cat multidelete.xml | openssl dgst -md5 -binary | base64` -- -v http://s3.amazonaws.com/BUCKET?delete
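If you would rather not embed the backtick command, a small variation with the same effect captures the digest in a shell variable first:

MD5=$(openssl dgst -md5 -binary multidelete.xml | base64)
./s3curl.pl --id=personal --post multidelete.xml --contentMd5 "$MD5" -- -v http://s3.amazonaws.com/BUCKET?delete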
Notes:
Trying to use PUT (instead of POST) results in a 405 Method Not Allowed:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>MethodNotAllowed</Code><Message>The specified method is not allowed against this resource.</Message><ResourceType>MULTI_OBJECT_DELETE</ResourceType><Method>PUT</Method><RequestId>XXXXXXXXXXXXXXXX</RequestId><HostId>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</HostId></Error>
Trying to use POST without the Content-MD5 header results in a 400 Bad Request:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidRequest</Code><Message>Missing required header for this request: Content-MD5</Message><RequestId>XXXXXXXXXXXXXXXX</RequestId><HostId>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</HostId></Error>
Using a hexadecimal MD5 sum results in a 400 Bad Request:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidDigest</Code><Message>The Content-MD5 you specified was invalid.</Message><RequestId>XXXXXXXXXXXXXXXX</RequestId><Content-MD5>703f10f0a4ee8393fc1e3ee8c8e5bc89</Content-MD5><HostId>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</HostId></Error>
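The hex-versus-base64 distinction is easy to see side by side; Content-MD5 must be the base64 encoding of the raw 16-byte digest, not the hex string:

# hex digest: this is what S3 rejected above
openssl dgst -md5 multidelete.xml
# MD5(multidelete.xml)= 703f10f0a4ee8393fc1e3ee8c8e5bc89

# base64 of the raw digest: this is what S3 accepts
openssl dgst -md5 -binary multidelete.xml | base64
# cD8Q8KTug5P8Hj7oyOW8iQ==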
A valid request/response looks like:
./s3curl.pl --id=personal --post multidelete.xml --contentMd5 `cat multidelete.xml | openssl dgst -md5 -binary | base64` -- -v http://s3.amazonaws.com/BUCKET?delete
* About to connect() to s3.amazonaws.com port 80 (#0)
* Trying 72.21.194.31... connected
* Connected to s3.amazonaws.com (72.21.194.31) port 80 (#0)
> POST /BUCKET?delete HTTP/1.1
> User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.12.9.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
> Host: s3.amazonaws.com
> Accept: */*
> Date: Thu, 05 Apr 2012 01:50:53 +0000
> Authorization: AWS XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX=
> Content-MD5: cD8Q8KTug5P8Hj7oyOW8iQ==
> Content-Length: 172
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< x-amz-id-2: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
< x-amz-request-id: XXXXXXXXXXXXXXXX
< Date: Thu, 05 Apr 2012 01:50:55 GMT
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
* Connection #0 to host s3.amazonaws.com left intact
* Closing connection #0
<DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Deleted><Key>file1.txt</Key></Deleted><Deleted><Key>file2.txt</Key></Deleted></DeleteResult>
Using <Quiet>true</Quiet> results in the following body returned from a successful deletion:
<DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"></DeleteResult>
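For reference, <Quiet>true</Quiet> goes inside the <Delete> element of the request body:

<?xml version="1.0" encoding="UTF-8"?>
<Delete>
  <Quiet>true</Quiet>
  <Object>
    <Key>file1.txt</Key>
  </Object>
  <Object>
    <Key>file2.txt</Key>
  </Object>
</Delete>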
The only way I could replicate your error message was by using an invalid secret key in my credentials file. However, in that case the bucket listing did not work either, unlike in your situation.
That's why you have a production and a staging environment. The staging environment is supposed to be a "free-for-all", and the production environment is for when your code is all nice and dandy.
If you want to allow more granularity, you should consider source control for your SaltStack code/YAML (Git, Mercurial, etc.). Every user could have their own branch, and when push comes to shove, you merge everything into a 'staging' branch and then eventually deploy to 'production'; a sketch of that flow follows below.
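A minimal sketch of that branching flow with Git (the branch and commit names here are purely illustrative):

# each user works on their own branch
git checkout -b alice-changes
git commit -am "tune webserver state"

# merge everyone's work into staging for integration testing
git checkout staging
git merge alice-changes

# once staging looks good, promote it to production
git checkout production
git merge staging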
The behavior that you are seeing with S3 is the default behavior of "eventual consistency": basically, you get a copy of the files as of their last update, and you won't see anything new until the other user's change has finally propagated.
This is what the docs say on the AWS site about S3 data consistency:
Q: What data consistency model does Amazon S3 employ? Amazon S3 buckets in the US West (Oregon), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney) and South America (Sao Paulo) Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES. Amazon S3 buckets in the US Standard Region provide eventual consistency.
And you can read about 'eventual consistency' here:
http://en.wikipedia.org/wiki/Eventual_consistency
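To make "eventual" concrete, here is a sketch of waiting for an object to become visible, reusing the s3curl setup from the first answer (BUCKET and the key are placeholders):

# poll until S3 serves the object; curl's -f makes it exit non-zero on a 404
until ./s3curl.pl --id=personal -- -s -f -o /dev/null http://s3.amazonaws.com/BUCKET/file1.txt; do
  echo "object not visible yet; retrying in 5s"
  sleep 5
done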