I've got New Relic monitoring our Magento 1.9 website. It is indicating that, intermittently, the Cm_RedisSession_Model_Session::read operation takes 10-20 seconds to complete. I've turned Redis logging up to level 7 and correlated the delays I'm seeing with the log, but that isn't indicating any problems. The problem only seems to affect requests to cms/index/noroute.
This is our redis config:
Do you have any 'save' lines in your Redis config?
/etc/redis/redis.conf may have lines like:
save 900 1
save 300 10
save 60 10000
If you're just using it for cache and session data, you may not care about persisting it to disk, and even though these writes should be non-blocking, you may be seeing a performance penalty from them.
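If the instance really only holds cache and session data you can afford to lose, you can switch RDB snapshotting off entirely. A minimal redis.conf fragment (assuming you accept that sessions won't survive a Redis restart):

```
# Replace the default "save 900 1" etc. lines with this to
# disable RDB snapshots entirely; nothing is persisted to disk
save ""
```
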
Also, do you have any stats about memory usage and swap usage on the site? In the event that Redis is using too much RAM and has been swapped out to disk then when it is later swapped back in there will be an appreciable delay.
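As a rough sketch of how to check this (the commands assume a typical Linux host):

```shell
# System-wide memory and swap overview, in megabytes
free -m

# Swap totals straight from the kernel
grep -E 'SwapTotal|SwapFree' /proc/meminfo
```

On the Redis side, `redis-cli info memory` reports Redis's own memory footprint, and `grep VmSwap /proc/$(pidof redis-server)/status` shows how much of the redis-server process is currently swapped out (process name assumes the stock Debian/Ubuntu packaging).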
I hope that helps,
Thanks for the response. I do have the following in the config file for the redis session cache:
save 900 1
save 300 10
save 60 10000
However, the Redis log shows that at the times when the read operation appears to take a long time, the background save still completes in a matter of milliseconds. The Redis log files indicate the server is processing everything quite happily. I wonder, then, if it is actually a bug in the read operation within the Cm_RedisSession module. I might try adding some additional logging into the method to see if I can home in on it. Otherwise I'm going to have to ditch it.
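Before patching the module, it may be worth turning on the module's own logging: if I've read the Cm_RedisSession docs correctly, setting its log_level to 7 makes it log timing detail, including time spent waiting on the session lock. A fragment for app/etc/local.xml (assuming the standard redis_session block):

```xml
<config>
    <global>
        <redis_session>
            <!-- 7 = most verbose; logs session read/write timing -->
            <log_level>7</log_level>
        </redis_session>
    </global>
</config>
```
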
We were also facing a similar issue with Redis sessions. Reads were taking ~30 seconds due to a bug in the module. Updating the module from GitHub fixed it for us, so try that first.
Also be advised that we still hit the timing issue randomly, though no longer 30 seconds; instead we see delays of 2 × break_after_frontend, and to date I am still trying to track down the cause.
Are you able to see if you're hitting locking issues? By default (and for safety) the module will disallow concurrent access to the session so if one of your visitors has hit a slow page and *also* opens a request to another page, then their second request will block within the Redis module.
This is documented in this GitHub issue:
If you're okay with a small risk of session data being overwritten, you could try setting this in your local.xml config in the redis block:
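If I recall the module's options correctly, the relevant knob is disable_locking; a minimal local.xml fragment might look like this (assuming the usual Cm_RedisSession redis_session block):

```xml
<config>
    <global>
        <redis_session>
            <!-- Turns off session locking: concurrent requests from the same
                 visitor may overwrite each other's session writes -->
            <disable_locking>1</disable_locking>
        </redis_session>
    </global>
</config>
```
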
If your slow pages go away afterwards, at least you will know where to look, even if you then put the setting back to its default.
I hope that helps,