Hi Everyone,
I'm looking to get some advice around best practices when configuring a Clustered Hosting Environment on AWS for Magento 2. So far, from reading web forums and watching a few presentations / videos, I've gathered the following information:
- Separate the "Admin" host from the front end servers. Each in their own Autoscaling Group, behind a load balancer. Run the Cron jobs on the Admin host. Use a custom URL to direct admin traffic to the admin host.
- Use AWS RDS for the DB - either MySQL natively, or Aurora.
- Use ElastiCache for the three caches (Backend, Session and Full Page).
- Consider using a CDN - CloudFront, Cloudflare, etc.
- Consider using a separate Varnish server in its own Autoscaling Group, behind a load balancer, pointing towards the Front End / Application Servers' load balancer.
All of this makes good sense. But it leads me to a few questions:
- What's recommended for shared storage? I've seen some people placing their pub/media folders on AWS EFS. I've also seen some people putting their app/etc, var/report, var/log and var/backup folders on EFS too. Any advice either way?
- What's the preferred way to place the whole cluster into Maintenance mode? Is it best to share the whole var folder, or does this slow things down too much?
- I'm trying to automate deployments with CodeDeploy via a GitHub hook. This works fine for a single host, but I'm not sure of the best way to do this for a whole cluster. I appreciate that only the "Admin" host should be running the setup:upgrade command - but are there any resources on the order of the upgrades that would be worth looking into?
Appreciate I'm asking quite a lot in this post - hopefully it'll be useful for other folks down the road too.
Thanks,
Rupes
The information you have is correct.
You can either create instances with a shared drive, or create a shared filesystem.
When an instance starts, it will use the configured shared drive or re-sync the shared filesystem.
I STRONGLY advise sharing on NFS/EFS only the files that absolutely have to stay in sync between all the web nodes: pub/static and pub/media.
Don't share the "app" directory, since it will give you a lot of performance problems, even with PHP's opcache enabled.
For every upgrade you will need a script to rsync all the directories (except pub/static and pub/media) between the web nodes. This can be tricky, but you will gain on the performance side.
Hi Rupert_Finnigan,
Will you be kind enough to share those presentations/videos with me?
I am also working on creating a cluster hosting setup and have figured out most of it, including using EFS to share "var" and "pub/media".
I'm just stuck on using Docker to keep the code/environment in sync between all the servers behind the LB and the "Admin" instance. I'm sure I'll figure it out, but I'm interested to know how other fellows are managing their clusters - I might get some other ideas.
Once I finish this, I'll surely create a complete tutorial and share it with you.
Thanks.
Shared Storage: Use AWS EFS for the pub/media folder, but use local storage for app/etc, var/report, var/log, and var/backup.
Maintenance Mode: Avoid sharing the entire var folder. Instead, create a shared maintenance flag file on EFS and have each instance check its existence.
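One way to implement that check is a small script run from cron on every node. The /mnt/efs/flags mount point and the Magento paths below are assumptions; var/.maintenance.flag is Magento's standard local flag file:

```shell
#!/bin/sh
# Sketch: mirror a cluster-wide maintenance flag from EFS onto this node.
# EFS_FLAG_DIR (the shared mount) and MAGENTO_VAR are assumed paths.

sync_maintenance_flag() {
  flag_src="${EFS_FLAG_DIR:-/mnt/efs/flags}/.maintenance.flag"
  flag_dst="${MAGENTO_VAR:-/var/www/magento2/var}/.maintenance.flag"
  if [ -e "$flag_src" ]; then
    # Shared flag present: put this node into maintenance mode.
    touch "$flag_dst"
  else
    # Shared flag gone: bring this node back up.
    rm -f "$flag_dst"
  fi
}

# Run from cron on each node, e.g. every minute:
# * * * * * /usr/local/bin/sync_maintenance_flag.sh
sync_maintenance_flag
```

Touching or deleting one file on EFS then toggles maintenance mode across the whole cluster within one cron interval.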
Deployments and Upgrades: Use a rolling deployment strategy with CodeDeploy to update instances one by one. Run the setup:upgrade command on the admin host first before updating other instances.
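A sketch of just that ordering. ADMIN_HOST, WEB_HOSTS, the pull-latest-code.sh deploy step, and the run_on helper are all hypothetical placeholders (run_on only echoes here; in a real setup it would be ssh, SSM, or a CodeDeploy lifecycle hook):

```shell
#!/bin/sh
# Sketch: the admin node runs setup:upgrade first, then the web nodes
# are updated one at a time. Host names and the deploy step are
# hypothetical placeholders.

run_on() {
  # $1 = host, rest = command. Echo only; swap in ssh/SSM for real use.
  host="$1"; shift
  echo "[$host] $*"
}

deploy_all() {
  run_on "$ADMIN_HOST" php bin/magento maintenance:enable
  run_on "$ADMIN_HOST" php bin/magento setup:upgrade --keep-generated
  for host in $WEB_HOSTS; do
    run_on "$host" /usr/local/bin/pull-latest-code.sh  # hypothetical step
  done
  run_on "$ADMIN_HOST" php bin/magento maintenance:disable
}

# Example:
# ADMIN_HOST=admin-1 WEB_HOSTS="web-1 web-2" deploy_all
```

Keeping the schema upgrade on the admin node, ahead of the rolling code update, means no web node ever serves new code against an old schema.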