The "Flush Magento Cache" button flushes only the cache records matching the application's tags, calling clean() on the cache backend with the Zend_Cache::CLEANING_MODE_MATCHING_ANY_TAG mode.
The "Flush Cache Storage" button flushes the entire cache storage (where the backend supports it), calling clean() on the cache backend with the Zend_Cache::CLEANING_MODE_ALL mode.
Cm_Cache_Backend_Redis differentiates between the two modes and handles both correctly.
What happens in Redis when the "Cache Storage" is flushed:
1380734058.807909 [0 127.0.0.1:61926] "flushdb"
What happens in Redis when the "Magento Cache" is flushed looks something like this:
1380733999.123304 [0 127.0.0.1:61889] "sunion" "zc:ti:541_MAGE"
1380733999.127239 [0 127.0.0.1:61889] "multi"
1380733999.127294 [0 127.0.0.1:61889] "del" "zc:k:541_APP_E4D52B98688947405EDE639E947EE03D" "zc:k:541_CORE_CACHE_OPTIONS" ... etc ...
1380733999.127493 [0 127.0.0.1:61889] "del" "zc:ti:541_MAGE"
1380733999.127523 [0 127.0.0.1:61889] "srem" "zc:tags" "541_MAGE"
1380733999.127547 [0 127.0.0.1:61889] "exec"
1380733999.128596 [0 127.0.0.1:61889] "sunion" "zc:ti:541_CONFIG"
1380733999.131160 [0 127.0.0.1:61889] "multi"
1380733999.131192 [0 127.0.0.1:61889] "del" "zc:k:541_CONFIG_GLOBAL_ADMIN" "zc:k:541_ENTERPRISE_LOGGING_CONFIG" ... etc ...
1380733999.131360 [0 127.0.0.1:61889] "del" "zc:ti:541_CONFIG"
1380733999.131379 [0 127.0.0.1:61889] "srem" "zc:tags" "541_CONFIG"
1380733999.131397 [0 127.0.0.1:61889] "exec"
You'll notice that in the first case Redis processes a single command, whereas in the second, each tag's index set is used to delete all associated cache records. Based on what I'm seeing here (and in the code), the '541_MAGE' and '541_CONFIG' tags are flushed in separate calls to the cache backend, with the CONFIG flush immediately following the MAGE flush.
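The tag-based clean above can be modeled with plain dicts and sets standing in for the Redis structures. This is an illustrative sketch of the logic only, not the backend's actual code (the real backend pipelines these commands inside MULTI/EXEC):

```python
# Illustrative model of tag-based cleaning as performed by
# Cm_Cache_Backend_Redis. Plain Python containers stand in for:
#   keys      -> the zc:k:* cache records
#   tag_index -> the zc:ti:* sets (tag -> member cache keys)
#   all_tags  -> the zc:tags set of all known tags

def clean_matching_any_tag(keys, tag_index, all_tags, tags):
    """Delete every cache record associated with any of `tags`."""
    for tag in tags:
        members = tag_index.get(tag, set())  # SUNION zc:ti:<tag>
        for k in members:                    # DEL zc:k:<key> ...
            keys.pop(k, None)
        tag_index.pop(tag, None)             # DEL zc:ti:<tag>
        all_tags.discard(tag)                # SREM zc:tags <tag>
```

Cleaning with the MAGE tag (which every Magento record carries) removes everything, which is why a "Flush Magento Cache" ends up deleting the whole tag index a tag at a time rather than issuing one FLUSHDB.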
There are two things going on here: 1) division of Redis functionality across instances, and 2) failover of Redis through Sentinel. My team uses load balancers specifically to support item 2.
Here’s how we did this for a production 1.12 cluster in mid-2013:
Multiple Redis Instances
Edit local.xml (example config) to point <redis_session />, <cache />, and <full_page_cache /> at three different Redis instances. My team chose to run sessions on port 6382 (32 GB limit), the backend cache on port 6383 (48 GB limit), and the full page cache on port 6384 (12 GB limit).
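A hedged sketch of what that local.xml layout might look like; hosts, option names, and structure follow the Cm_Cache_Backend_Redis and Cm_RedisSession READMEs, so verify against your module versions:

```xml
<config>
  <global>
    <redis_session>                <!-- sessions: port 6382 -->
      <host>127.0.0.1</host>
      <port>6382</port>
      <db>0</db>
    </redis_session>
    <cache>                        <!-- backend cache: port 6383 -->
      <backend>Cm_Cache_Backend_Redis</backend>
      <backend_options>
        <server>127.0.0.1</server>
        <port>6383</port>
        <database>0</database>
      </backend_options>
    </cache>
    <full_page_cache>              <!-- FPC: port 6384 -->
      <backend>Cm_Cache_Backend_Redis</backend>
      <backend_options>
        <server>127.0.0.1</server>
        <port>6384</port>
        <database>0</database>
      </backend_options>
    </full_page_cache>
  </global>
</config>
```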
This architecture lets us fine-tune memory limits and RDB configurations for each cache type, and because Redis is single-threaded, it also lets us scale Redis to higher aggregate throughput. Provisioning this Ubuntu 12.04 LTS server required duplicating the /etc/redis/*.conf configuration files and the /etc/init.d/redis* init scripts, then calling update-rc.d $name defaults, so that each Redis instance has its own log files, can be signaled independently, and is started on system boot.
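A per-instance redis.conf fragment along these lines would enforce the memory limits described above. The filename, paths, and eviction policy here are assumptions; in particular, be careful with eviction policies that can remove the backend's tag metadata:

```
# /etc/redis/redis-cache.conf (assumed filename) - backend cache, port 6383
port 6383
pidfile /var/run/redis-cache.pid
logfile /var/log/redis/redis-cache.log
dbfilename cache.rdb
maxmemory 48gb
maxmemory-policy volatile-lru   # assumption; an allkeys-* policy can evict tag sets
save 900 1                      # tune RDB snapshotting per cache type
```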
Load Balancing
The production cluster has one primary server (cache01) and one failover server (cache02) with identical system specifications and Redis configurations (three instances per server; see above).
My team had two highly available NetScaler appliances, on which each Redis instance was defined as a service. We defined vservers for each instance on each host, e.g. production_cache_6382_primary. These vservers use virtual interfaces with allocated private IP addresses, which local.xml points to. The request path looks like this:
- The Magento application sends a request through the PHP Redis client
- The PHP Redis client connects to the IP address and port configured in local.xml
- The request is routed through LACP-bonded switches
- The switches point to an HA pair of NetScalers
- The primary NetScaler points to production_cache_6382_primary
- The request reaches the configured primary Redis instance
Redis Sentinel
My team has configured Redis Sentinel to use host cache02 as a secondary failover for host cache01. The NetScaler vserver production_cache_6382_primary is set to use the Redis service on port 6382 of host cache02 as its failover. Since the NetScaler TCP uptime check runs every six seconds, Sentinel has a generous window in which to promote the secondary service to the primary role. This environment allows Magento’s local.xml to hardcode static IP addresses for the Redis instances while still letting our systems automatically detect and switch which cache server is in service as primary. Here’s how this process works:
- The NetScaler performs a TCP monitoring operation against the Redis service on port 6382 of host cache01 and finds it up
- All client requests pointing to the production_cache_6382_primary vserver are sent to that host
- The Redis instance on port 6382 of host cache01 (the primary) is stopped
- Client connections to production_cache_6382_primary start experiencing connection and request failures
- Redis Sentinel independently detects the outage of instance 6382 on host cache01 and promotes the secondary instance on cache02 to primary
- Six seconds after step 1, the NetScaler performs another TCP monitoring operation against the Redis service on port 6382 of host cache01 and finds it down
- The NetScaler performs its failover routine and starts sending client requests to the 6382 service on host cache02 instead of cache01
- Client connections and requests, still pointing at the same vserver IP address, start to see successful reads and writes again
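A minimal Sentinel configuration for the 6382 instance pair might look like the fragment below. The monitor name, IP address, and timings are assumptions, and the quorum of 1 assumes a single Sentinel process:

```
# sentinel.conf sketch (name, IP, and timings are assumptions)
sentinel monitor cache-6382 10.0.0.11 6382 1
sentinel down-after-milliseconds cache-6382 5000
sentinel failover-timeout cache-6382 60000
sentinel parallel-syncs cache-6382 1
```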
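The NetScaler-side decision in the steps above boils down to a TCP probe plus a primary/failover selection. This is an illustrative model only (the function names are mine, not NetScaler's):

```python
import socket

def tcp_probe(host, port, timeout=1.0):
    """TCP monitor: succeed iff a connection can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_backend(primary, failover, probe=tcp_probe):
    """Route traffic to the primary (host, port) while its probe
    succeeds; otherwise fail over to the secondary."""
    host, port = primary
    return primary if probe(host, port) else failover
```

Because the probe only checks TCP reachability, the six-second monitor interval is what gives Sentinel time to finish the promotion before the load balancer redirects traffic.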
Best Answer
Redis is supported in Magento 1.13 out of the box; the bundled backend is a direct port of Colin's CE-compatible module.
The configuration below is adapted from Colin's GitHub repository for Cm_Cache_Backend_Redis, edited for the class names in Enterprise 1.13. This is how you would configure it:
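A hedged sketch of the local.xml cache section follows. The Mage_Cache_Backend_Redis class name is my assumption for the 1.13 core port (the standalone module uses Cm_Cache_Backend_Redis), and the option names come from the module README; verify against your release:

```xml
<global>
  <cache>
    <!-- Assumed class name for the core port in EE 1.13 -->
    <backend>Mage_Cache_Backend_Redis</backend>
    <backend_options>
      <server>127.0.0.1</server>
      <port>6379</port>
      <database>0</database>
      <password></password>
      <force_standalone>0</force_standalone>
      <connect_retries>1</connect_retries>
      <automatic_cleaning_factor>0</automatic_cleaning_factor>
      <compress_data>1</compress_data>
      <compress_tags>1</compress_tags>
      <compress_threshold>20480</compress_threshold>
      <compression_lib>gzip</compression_lib>
    </backend_options>
  </cache>
</global>
```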
An example of Redis session storage would be:
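This sketch follows the Cm_RedisSession README; option names and defaults are taken from that module and should be checked against the session module shipped with your 1.13 release:

```xml
<global>
  <session_save>db</session_save>
  <redis_session>
    <host>127.0.0.1</host>
    <port>6379</port>
    <password></password>
    <timeout>2.5</timeout>
    <db>0</db>
    <compression_threshold>2048</compression_threshold>
    <compression_lib>gzip</compression_lib>
    <log_level>1</log_level>
    <max_concurrency>6</max_concurrency>
    <break_after_frontend>5</break_after_frontend>
    <break_after_adminhtml>30</break_after_adminhtml>
  </redis_session>
</global>
```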
Source: https://github.com/colinmollenhour/Cm_Cache_Backend_Redis
Source: http://www.magentocommerce.com/knowledge-base/entry/ee113-later-release-notes#ee113-11300-highlights