I have a Linode (Ubuntu 16.04) from which I am trying to copy files to an AWS S3 bucket via a cron job calling a script, but the log outputs:
```
upload failed: ../path/file.ext to s3://bucket/prefix/file.ext Unable to locate credentials
```
The script `tar`s a directory, and then uploads that tar to my S3 bucket.
- The script works start to finish if I call it directly via `sudo`.
- As a root cron job, the `tar` works, but the `aws` upload doesn't (with the error noted above).
- As a [user] cron job, the `tar` fails (intentionally; permissions related), but the `aws` upload succeeds.
- When I installed the AWS CLI, I forget exactly how it was worded, but I chose to have it installed for all users.
Things I've Tried
- Having my script call `aws` directly at `/usr/local/bin/aws`.
- Adding `/usr/local/bin/aws` to the `PATH` in crontab, and also in my script.
- Adding `AWS_CONFIG_FILE="/home/[user]/.aws/config"` in crontab, and also in my script.
- Re-running `aws configure` as root.
- Following this tip, and comparing the cron and interactive environments. My env.cron `PATH` includes everything listed in my env.interactive `PATH`, plus a few more entries, even some duplicates – is that bad?
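The env-comparison tip can be sketched as follows (file paths are just examples; the crontab line is shown as a comment because it belongs in `crontab -e`, not in the script):

```shell
#!/bin/sh
# From an interactive shell: capture the environment, sorted for diffing.
env | sort > /tmp/env.interactive

# Temporary crontab entry to capture cron's view of the environment:
#   * * * * * env | sort > /tmp/env.cron

# After the cron job has fired, compare the two environments.
if [ -f /tmp/env.cron ]; then
    diff /tmp/env.interactive /tmp/env.cron
fi
```

Differences in `HOME`, `PATH`, and any `AWS_*` variables are the usual suspects when a command works interactively but fails under cron.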
There are many more statements in my env.interactive (1,810 lines) than in my env.cron (36 lines). It must be something in my environment differences, right? I've searched my env.interactive for any instance of `aws`, but there is none, even though that environment works just fine. Any tips on other specific items to look for in there?
Any ideas and help are appreciated! Thanks!
Best Answer
If you want to run a specific command as another user with `sudo`, and have its configuration read from that user's home directory rather than yours, then you have to run it with `sudo -H -u user ...` so that `sudo` updates the `HOME` variable to the called user automatically. With `HOME` set correctly, valid locations for the AWS config and credentials files (`~/.aws/config` and `~/.aws/credentials`) are derived automatically.
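Applied to the root crontab, that would look something like this (the schedule and script name are placeholders; `-H` is the flag that makes `sudo` reset `HOME` to the target user's home directory):

```shell
# Root crontab entry: run the script as [user] with HOME=/home/[user],
# so the AWS CLI resolves ~/.aws/config and ~/.aws/credentials correctly.
0 2 * * * sudo -H -u [user] /home/[user]/backup.sh
```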