This tool is correct: a domain is only allowed to have one SPF record, i.e. a single TXT record starting with v=spf1.
There is no way for you to fix this yourself; you need to contact Amazon to fix their records.
These need to be merged (and similarly for the v=spf2 records):
amazonses.com. 900 IN TXT "v=spf1 ip4:199.255.192.0/22 ip4:199.127.232.0/22 ~all"
amazonses.com. 900 IN TXT "v=spf1 ip4:199.255.192.0/22 ip4:199.127.232.0/22 54.240.0.0/18 ~all"
Note that the 54.240.0.0/18 part is also wrong; it should be ip4:54.240.0.0/18.
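Merged, and with the missing ip4: prefix added, the single record would look something like this:

amazonses.com. 900 IN TXT "v=spf1 ip4:199.255.192.0/22 ip4:199.127.232.0/22 ip4:54.240.0.0/18 ~all"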
You can of course remove your include:amazonses.com and add the IP ranges manually.
But if those ranges change, it will fail again.
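If you do inline the ranges, you could at least watch for changes. A rough sketch with dnspython (my own example, not part of the fix; the inlined set below just mirrors what the records above contain today):

    import dns.resolver  # pip install dnspython

    # The ip4: ranges you copied into your own SPF record (assumed to match today's records).
    INLINED = {"ip4:199.255.192.0/22", "ip4:199.127.232.0/22", "ip4:54.240.0.0/18"}

    # Fetch every TXT record for amazonses.com and look at the SPF one(s).
    for rdata in dns.resolver.resolve("amazonses.com", "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            published = {m for m in txt.split() if m.startswith("ip4:")}
            missing = published - INLINED
            if missing:
                print("Amazon's SPF ranges changed, add these:", missing)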
> I have read about the versioning feature for S3 buckets, but I cannot seem to find if recovery is possible for files with no modification history. See the AWS docs here on versioning:
I've just tried this: yes, you can restore the original version. When you delete the file, S3 adds a delete marker, and you can restore the version before that, i.e. the single, original revision.
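As a rough boto3 sketch of that restore (bucket and key names are placeholders): list the object's versions, find the delete marker, and delete the marker itself, which makes the original version current again.

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "path/to/file"  # placeholders

    # On a versioned bucket, a delete only puts a delete marker on top of the object.
    versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
    for marker in versions.get("DeleteMarkers", []):
        if marker["Key"] == key and marker["IsLatest"]:
            # Removing the delete marker makes the previous (original) version current again.
            s3.delete_object(Bucket=bucket, Key=key, VersionId=marker["VersionId"])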
> Then, we thought we may just backup the S3 files to Glacier using object lifecycle management:
>
> But, it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately it seems it is an object attribute that is changed, but anyway...).
Glacier is really meant for long-term storage that is accessed very infrequently. It can also get very expensive to retrieve a large portion of your data in one go, as it's not designed for point-in-time restoration of lots of data (percentage-wise).
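For context, a lifecycle rule really is a transition of the object itself, not a copy. A minimal boto3 sketch of such a rule (the bucket name and the 30-day threshold are made up):

    import boto3

    s3 = boto3.client("s3")

    # Objects matching the rule are moved to the GLACIER storage class after 30 days --
    # there is no second copy left behind in standard S3 storage.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-glacier",
                    "Filter": {"Prefix": ""},  # whole bucket
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )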
> Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then using something like duplicity (http://duplicity.nongnu.org/) we would synchronize the backup bucket every night.
Don't do this: by default you can only have 100 buckets per account, so at one new bucket per month you would create 36 in 3 years, using up roughly a third of your bucket allowance with just backups.
> So, I guess there are a couple questions here. First, does S3 versioning allow recovery of files that were never modified?
Yes
> Is there some way to "copy" files from S3 to Glacier that I have missed?
Not that I know of.
Best Answer
S3 Select is focused on retrieving data from a single object in S3 using SQL (a minimal boto3 sketch follows after the references).
Redshift Spectrum enables querying S3 data directly from your Amazon Redshift cluster.
Athena is a serverless, interactive query service for analyzing data in S3 with standard SQL; it integrates well with AWS Glue, which covers the extract, transform and load (ETL) side.
References: Athena, Spectrum and S3 Select
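To make the S3 Select part concrete, here is a minimal boto3 sketch (the bucket, key and CSV columns are assumptions, not from the answer above):

    import boto3

    s3 = boto3.client("s3")

    # Run SQL against a single CSV object in S3; only the matching rows are returned.
    resp = s3.select_object_content(
        Bucket="my-bucket",        # placeholder
        Key="data/records.csv",    # placeholder
        ExpressionType="SQL",
        Expression="SELECT s.name, s.amount FROM s3object s WHERE CAST(s.amount AS FLOAT) > 100",
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )

    # The response is an event stream; Records events carry the result payload.
    for event in resp["Payload"]:
        if "Records" in event:
            print(event["Records"]["Payload"].decode())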