I was able to create snapshots of my m1.small caches, but the backup button is disabled when I select my t2.medium. Is there a way for me to enable backup of a t2.medium Redis ElastiCache?
How to enable backup (snapshot) on Redis ElastiCache t2.medium
amazon-elasticache amazon-web-services redis snapshot
Related Solutions
Remember that you can resize snapshots on the fly, e.g. with lvextend, so you can give them a sensible initial size and then grow them whenever they get too full.
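A manual grow is a one-liner; a sketch using the VG/LV names from the test further down (adapt to your own volumes):
# lvextend -L +500M /dev/base_vg/TEST_LV-SNAP
# lvs /dev/base_vg/TEST_LV-SNAP
The first command adds another 500 MiB of copy-on-write space to the snapshot; the second shows the new size and fill level in the Data% column.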
This can even be done automatically, using dmeventd and setting this in lvm.conf:
# 'snapshot_autoextend_threshold' and 'snapshot_autoextend_percent' define
# how to handle automatic snapshot extension. The former defines when the
# snapshot should be extended: when its space usage exceeds this many
# percent. The latter defines how much extra space should be allocated for
# the snapshot, in percent of its current size.
#
# For example, if you set snapshot_autoextend_threshold to 70 and
# snapshot_autoextend_percent to 20, whenever a snapshot exceeds 70% usage,
# it will be extended by another 20%. For a 1G snapshot, using up 700M will
# trigger a resize to 1.2G. When the usage exceeds 840M, the snapshot will
# be extended to 1.44G, and so on.
#
# Setting snapshot_autoextend_threshold to 100 disables automatic
# extensions. The minimum value is 50 (A setting below 50 will be treated
# as 50).
snapshot_autoextend_threshold = 50
snapshot_autoextend_percent = 50
Autoextending does not work instantly; it takes a few seconds for dmeventd to react. A 50% fill threshold and 50% growth are pretty aggressive, but for testing with a very small snapshot (and thus rapidly filling it with data) they are needed.
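If autoextension never seems to trigger, first check that dmeventd is actually monitoring the snapshot; a quick sketch, again using the LV names from the test below:
# lvs -o +seg_monitor /dev/base_vg/TEST_LV-SNAP
# lvchange --monitor y /dev/base_vg/TEST_LV-SNAP
The first command appends a monitoring-status column to the lvs output; the second switches monitoring on explicitly if it is off.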
# lvcreate -n TEST_LV -L 1G /dev/base_vg
Logical volume "TEST_LV" created
# mke2fs -t ext4 /dev/base_vg/TEST_LV
mke2fs 1.42.5 (29-Jul-2012)
[...]
Writing superblocks and filesystem accounting information: done
# mount /dev/base_vg/TEST_LV /mnt
Change the owner of the mount point so there's no need to be root to write files:
# cd /mnt
# chown USER .
#
$ for i in 1 2 3 4 5 6 7 8 9 10 11 12 ; do
dd if=/dev/urandom bs=1024k count=10 > /mnt/File$i
done
$
# lvcreate -n TEST_LV-SNAP -s /dev/base_vg/TEST_LV -L 25M
Rounding up size to full physical extent 28.00 MiB
Logical volume "TEST_LV-SNAP" created
Now watch the snapshot's fill level (grep -v Origin strips the repeated header line, and uniq collapses output that hasn't changed):
# lvs /dev/base_vg/TEST_LV-SNAP; \
  while true; do
      lvs /dev/base_vg/TEST_LV-SNAP | grep -v Origin
      sleep 1
  done | uniq
While this is running, start in a second terminal:
$ for i in 1 2 3 4 5 6 7 8 9 10 11 12 ; do
dd if=/dev/urandom bs=1024k count=10 > /mnt/File$i
sleep 10
done
The sleep in the write loop is needed to let dmeventd catch up; IIRC it only checks every 10 seconds.
Back to our output:
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
TEST_LV-SNAP base_vg swi-a-s- 28.00m TEST_LV 0.04
TEST_LV-SNAP base_vg swi-a-s- 28.00m TEST_LV 0.04
TEST_LV-SNAP base_vg swi-a-s- 28.00m TEST_LV 35.90
TEST_LV-SNAP base_vg swi-a-s- 28.00m TEST_LV 36.01
TEST_LV-SNAP base_vg swi-a-s- 28.00m TEST_LV 71.86
TEST_LV-SNAP base_vg swi-a-s- 44.00m TEST_LV 45.82
TEST_LV-SNAP base_vg swi-a-s- 44.00m TEST_LV 68.63
TEST_LV-SNAP base_vg swi-a-s- 68.00m TEST_LV 44.46
TEST_LV-SNAP base_vg swi-a-s- 68.00m TEST_LV 59.22
TEST_LV-SNAP base_vg swi-a-s- 104.00m TEST_LV 38.75
TEST_LV-SNAP base_vg swi-a-s- 104.00m TEST_LV 48.40
TEST_LV-SNAP base_vg swi-a-s- 104.00m TEST_LV 48.43
TEST_LV-SNAP base_vg swi-a-s- 156.00m TEST_LV 38.74
TEST_LV-SNAP base_vg swi-a-s- 156.00m TEST_LV 45.17
TEST_LV-SNAP base_vg swi-a-s- 156.00m TEST_LV 45.19
TEST_LV-SNAP base_vg swi-a-s- 156.00m TEST_LV 51.63
TEST_LV-SNAP base_vg swi-a-s- 156.00m TEST_LV 51.65
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 34.14
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 38.39
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 38.40
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 42.66
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 42.67
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 46.92
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 46.94
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 51.19
TEST_LV-SNAP base_vg swi-a-s- 236.00m TEST_LV 51.20
TEST_LV-SNAP base_vg swi-a-s- 356.00m TEST_LV 33.94
Watch it grow ...
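Two closing notes: if a snapshot does hit 100% despite autoextension, it becomes invalid and its contents are lost, so leave more headroom in production than in this test. And cleaning up afterwards is just (sketch):
# umount /mnt
# lvremove /dev/base_vg/TEST_LV-SNAP
# lvremove /dev/base_vg/TEST_LV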
Did you create a Cache Subnet Group in your custom VPC?
You need to create a cache subnet group in your VPC (inside the ElastiCache console) first; after that, your VPC/subnet will appear when you create nodes.
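The same thing from the AWS CLI is a single call; a sketch where the group name and subnet IDs are placeholders:
$ aws elasticache create-cache-subnet-group \
    --cache-subnet-group-name my-cache-subnets \
    --cache-subnet-group-description "ElastiCache subnets in my VPC" \
    --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210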
Related Topics
- SSH – Connect remotely to ElastiCache – Redis
- How to troubleshoot redis high CPU usage? And how to limit Redis CPU usage
- How to secure Redis cluster on AWS elasticache
- AWS Redis Encryption in-transit + TLS EC2 Connection
- Terraform AWS – How to Terraform ElastiCache Redis Cluster Provisioning Properly
- Any way to restore a dump.rdb redis backup file remotely
Best Answer
See the backup constraints in the ElastiCache User Guide (API Version 2014-09-30): https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html#backups-constraints
Per those constraints, backup and restore are not supported on cache.t1.micro or cache.t2.* nodes for Redis (cluster mode disabled), but are supported on all node types for Redis (cluster mode enabled). You can therefore get around this limitation by creating your Redis cluster with 'Cluster Mode' enabled.
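A minimal AWS CLI sketch of that workaround; all names are placeholders, and the parameter group must be a cluster-mode-enabled one (e.g. default.redis3.2.cluster.on for Redis 3.2):
$ aws elasticache create-replication-group \
    --replication-group-id my-redis \
    --replication-group-description "t2.medium with cluster mode enabled" \
    --engine redis \
    --cache-node-type cache.t2.medium \
    --cache-parameter-group-name default.redis3.2.cluster.on \
    --num-node-groups 1 \
    --replicas-per-node-group 1 \
    --cache-subnet-group-name my-cache-subnets \
    --snapshot-retention-limit 5
Setting --snapshot-retention-limit above zero turns on automatic daily backups; once the group is available, manual snapshots work as well:
$ aws elasticache create-snapshot \
    --replication-group-id my-redis \
    --snapshot-name my-redis-backup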