Hi
After a lot of reading I created this setup:
Then, after I built this system, I decided to test it, and it "works": the servers are created automatically and traffic is handled fine with thousands of users (I used loader.io to test). But then I realized users can't log in, and anything they add to the cart disappears!
When the architecture scales back down to 1 or 2 servers everything is normal, but under heavy traffic, once 5+ servers become active, no one can log in, not even the admin.
What can I do to keep the same session active? My guess is that users are being moved between instances, but I'm not sure. Help!
How are you storing the sessions?
When you have multiple servers it is best to store the sessions in Redis. You may need a separate instance for Redis that all servers can connect to.
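In Magento 2 that means pointing the session section of app/etc/env.php at the shared Redis host instead of local files. A minimal sketch; the host, database number and other values here are placeholders, not your real settings:

'session' => [
    'save' => 'redis',
    'redis' => [
        'host' => '10.0.1.50', // placeholder: the private address or endpoint of the shared Redis instance
        'port' => '6379',
        'password' => '',
        'timeout' => '2.5',
        'database' => '2'
    ]
],

Every web server in the scaling group needs this same host value, so all of them read and write the same set of sessions.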
I found something about Redis clusters on AWS.
Can you expand on that, or point me to a guide on how to manage sessions with Redis?
Hi, I have been reading all over, but found nothing with details, just generic information.
I set up a load balancer with auto scaling and an Aurora DB. After a couple of servers spun up during the test the site successfully stayed online, however users can't add anything to the cart or even stay logged in. So I created a Redis cluster and updated the env.php like this:
'session' => [
    'save' => 'files',
    'redis' => [
        'host' => 'redis cluster entry point',
        'port' => '6379',
        'database' => '2',
        'password' => '',
        'timeout' => '2.5',
        'persistent_identifier' => '',
        'compression_threshold' => '2048',
        'compression_library' => 'gzip',
        'log_level' => '1',
        'max_concurrency' => '20',
        'break_after_frontend' => '5',
        'break_after_adminhtml' => '30',
        'first_lifetime' => '600',
        'bot_first_lifetime' => '60',
        'bot_lifetime' => '7200',
        'disable_locking' => '0',
        'min_lifetime' => '60',
        'max_lifetime' => '2592000'
    ]
],
'cache' => [
    'frontend' => [
        'default' => [
            'id_prefix' => '40d_',
            'backend' => 'Cm_Cache_Backend_Redis',
            'backend_options' => [
                'server' => '127.0.0.1',
                'database' => '0',
                'port' => '6379',
                'password' => 'xxx',
                'compress_data' => '1',
                'compression_lib' => ''
            ]
        ],
        'page_cache' => [
            'id_prefix' => '40d_',
            'backend' => 'Cm_Cache_Backend_Redis',
            'backend_options' => [
                'server' => '127.0.0.1',
                'database' => '1',
                'port' => '6379',
                'password' => 'xxx',
                'compress_data' => '0',
                'compression_lib' => ''
            ]
        ]
    ]
],
The site kept working without errors, so I recreated this server as the launch template for the scaling group and ran the test with 4 servers, all with the same env config. Same problem: users' carts get emptied, they can't log in, and I can't log in as admin either.
What am I missing? Do we use the cluster entry point as the host, or the node endpoint for each cache? Can I leave the page cache on local Redis, or does it need to point to a cluster, and how do I point it there? All the guides I find use a local server.
Login and Add to Cart are handled by sessions. If they are not working, you have a problem with your sessions.
You need to make sure that all of your servers share the same set of sessions.
If you store sessions in Files, you need to make sure these session files are replicated across all your servers.
If you store sessions in Database, you need to make sure the table storing the sessions is accessible by all servers.
If you store sessions in Redis (the recommended setup), you need to make sure that all of your servers are able to connect to the server running the Redis service.
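To make that concrete, here is a rough sketch of how each option maps to the session save value in app/etc/env.php (the host is a placeholder; only the Redis variant is what I would use with auto scaling):

// local files: only works across servers if var/session sits on shared/replicated storage
'session' => ['save' => 'files'],

// main database: shared by all servers automatically, but adds load to the DB
'session' => ['save' => 'db'],

// Redis (recommended): every server points at the same Redis host
'session' => [
    'save' => 'redis',
    'redis' => ['host' => '10.0.1.50', 'port' => '6379']
],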
----
In your case, just focus on the session storage for now; let's ignore the page cache and default cache.
Make sure the app/etc/env.php file on every server has the session storage host set to the IP of the server that runs the Redis service.
You will also need to test that these servers can actually connect to the Redis server, as Redis blocks external connections out of the box.
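If the phpredis extension is installed on the web servers (Magento's Redis backends can use it), a quick one-off script like this, run from each server, will tell you whether the host is reachable. The IP below is just a placeholder for your Redis endpoint:

<?php
// check_redis.php - run from each web server to verify it can reach the shared Redis host
$redis = new \Redis();
// connect(host, port, timeout in seconds) - replace the host with your Redis endpoint
if ($redis->connect('10.0.1.50', 6379, 2.5)) {
    var_dump($redis->ping()); // true (or "+PONG" on older versions) when Redis answers
} else {
    echo "Could not reach Redis - check security groups / bind address\n";
}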
Hi, I'm having the same trouble. I just enabled the Redis port in the security group; I'm sure that was causing a problem, but it still doesn't work.
3 questions:
In the env.php file, do we point the host to the cluster endpoint: redisxxxxxxxxxxxxxx.amazonaws.com
or directly to a node?
Second: is the rest of that session configuration correct?
Third: can the page cache and default cache be pointed to the same cluster/node, or should they point to a separate Redis?
I'm new to Redis, so I'm not sure if this is enough or too much:
Clustered Redis | 3 shards | 9 nodes | cache.r5.large
Thanks! This would really save my job if I can fix this!
@starlyns I suggest that you start a new topic about your issue, as it is not exactly the same as the one faced by the OP and this thread could go off-topic very quickly.