Ehcache is not honoring maxElementsInMemory

Tags: ehcache, memory

I have a fairly simple cache configuration:

 <cache name="MyCache"
    maxElementsInMemory="200000"
    eternal="false"
    timeToIdleSeconds="43200" 
    timeToLiveSeconds="43200"
    overflowToDisk="false"
    diskPersistent="false"
    memoryStoreEvictionPolicy="LRU" 
    />

I create my cache in the following way:

private Ehcache myCache = 
  CacheManager.getInstance().getEhcache("MyCache");

I use my cache like this:

public MyResponse processRequest(MyRequest request) {
    Element element = myCache.get(request);
    if (element != null) {
        return (MyResponse)element.getValue();
    } else {
        MyResponse response = remoteService.process(request); 
        myCache.put(new Element(request, response));
        return response;
    }
}

Every 10,000 calls to the processRequest() method, I log stats about my cache like this:

logger.debug("Cache name: " + myCache.getName());
logger.debug("Max elements in memory: " + myCache.getMaxElementsInMemory());
logger.debug("Memory store size: " + myCache.getMemoryStoreSize());
logger.debug("Hit count: " + myCache.getHitCount());
logger.debug("Miss count: " + myCache.getMissCountNotFound());
logger.debug("Miss count (because expired): " + myCache.getMissCountExpired());

..I see a good number of hits, which tells me that it's working.

..However, what I'm seeing is that after a couple of hours, getMemoryStoreSize() starts to exceed getMaxElementsInMemory(). It keeps growing, and eventually it renders the JVM unstable because the GC ends up running Full GCs nonstop to reclaim memory (and I have a pretty large heap cap set). When I profiled the heap, most of the space was held by the LRU store's SpoolingLinkedHashMap.

I do have a lot of requests hitting this cache, and my theory is that ehcache's LRU eviction is perhaps not keeping up when the store is full. I tried the LFU policy as well, and it also let the memory store grow past maxElementsInMemory.
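For what it's worth, a standalone loop along these lines should show whether the memory store stays bounded under load (just a sketch: it assumes the same "MyCache" configuration is on the classpath and uses plain String keys instead of my real request/response objects, but it reads the same stats as above):

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;

public class CacheFillCheck {
    public static void main(String[] args) {
        // Loads ehcache.xml from the classpath and looks up the cache above
        Ehcache cache = CacheManager.getInstance().getEhcache("MyCache");
        for (int i = 0; i < 1000000; i++) {
            cache.put(new Element("key-" + i, "value-" + i));
            if (i % 10000 == 0) {
                // If LRU eviction keeps up, this should stay at or below
                // maxElementsInMemory (200000)
                System.out.println("puts=" + i + " memoryStoreSize="
                        + cache.getMemoryStoreSize());
            }
        }
        CacheManager.getInstance().shutdown();
    }
}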

I then started looking at the ehcache code to see if I could prove my theory (inside LruMemoryStore$SpoolingLinkedHashMap):

private boolean removeLeastRecentlyUsedElement(Element element) throws CacheException {
    //check for expiry and remove before going to the trouble of spooling it
    if (element.isExpired()) {
        notifyExpiry(element);
        return true;
    }

    if (isFull()) {
        evict(element);
        return true;
    } else {
        return false;
    }
}

..this looks OK to me, so I then looked at the evict() method:

protected final void evict(Element element) throws CacheException {
    boolean spooled = false;
    if (cache.isOverflowToDisk()) {
        if (!element.isSerializable()) {
            if (LOG.isDebugEnabled()) {
                LOG.debug(new StringBuffer("Object with key ").append(element.getObjectKey())
                        .append(" is not Serializable and cannot be overflowed to disk"));
            }
        } else {
            spoolToDisk(element);
            spooled = true;
        }
    }

    if (!spooled) {
        cache.getCacheEventNotificationService().notifyElementEvicted(element, false);
    }
}

..this looks like it doesn't actually evict anything itself (despite the name) but rather relies on the caller to do the removal. So I looked at the implementation of the put() method, and I don't see it calling this eviction path anywhere. I'm clearly missing something here and would appreciate some help on this.
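Side note, in case it helps frame my confusion: since SpoolingLinkedHashMap appears to extend java.util.LinkedHashMap, I'm guessing the eviction check is driven by LinkedHashMap's removeEldestEntry() hook, which the JDK calls from its own put path rather than from ehcache's put(). A minimal sketch of that general JDK pattern, not ehcache's actual code:

import java.util.LinkedHashMap;
import java.util.Map;

// Generic LRU-bounded map using the JDK hook; for illustration only.
public class BoundedLruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedLruMap(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to drop the eldest entry,
        // which keeps the map at or below maxEntries after each put
        return size() > maxEntries;
    }
}

If that's the mechanism, the map's own put() would be what triggers the eviction check, which would explain why I don't see an explicit call in ehcache's put().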

Thanks!

Best Answer

Your configuration looks fine to me. The only thing you need to fix is the key you use for caching.

Do not put the complete request object in as your cache key. Use some unique value from your request object instead. For example:

MyResponse response = remoteService.process(request); 
myCache.put(new Element(request.getCustomerID(), response));
return response;

This should work for you. The reason your caching is not working is that each incoming request is a brand-new object, so the lookup never finds the cached response, and every call keeps adding a new entry to the cache.
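If you do want to keep using the request object itself as the key, it would also need equals() and hashCode() implementations so that two logically identical requests map to the same cache entry. A rough sketch (customerID is just an illustrative field; use whatever uniquely identifies a request in your domain):

public class MyRequest {
    private final String customerID;

    public MyRequest(String customerID) {
        this.customerID = customerID;
    }

    public String getCustomerID() {
        return customerID;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MyRequest)) return false;
        return customerID.equals(((MyRequest) o).customerID);
    }

    @Override
    public int hashCode() {
        return customerID.hashCode();
    }
}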
