I am looking at changing from AWS to another provider. I noticed that some VPSes offer more cores at a lower clock speed, while others offer fewer cores at a higher clock speed - I assume this reflects infrastructure decisions that hosts have made over time.
For this example, I need 16 GB of RAM to handle my concurrent visitors.
4 cores @ 3.2 GHz = 12.8 GHz total clock speed
6 cores @ 2.4 GHz = 14.4 GHz total clock speed
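For what it's worth, summing clock speeds gives a throughput figure, not a latency one. A minimal sketch of the distinction, assuming a hypothetical fixed amount of CPU work per uncached page load (the 3.2 GHz-seconds figure is made up for illustration):

```python
# Sketch: per-request latency vs aggregate throughput for the two
# hypothetical VPS configs. The "work per page" unit is an assumption.

def latency_seconds(work_ghz_seconds, core_ghz):
    # A single Magento page render is mostly single-threaded,
    # so its latency is governed by one core's clock speed.
    return work_ghz_seconds / core_ghz

def throughput_rps(work_ghz_seconds, cores, core_ghz):
    # Aggregate capacity scales with cores * clock (ideal case,
    # ignoring I/O, memory bandwidth, and scheduler overhead).
    return cores * core_ghz / work_ghz_seconds

WORK = 3.2  # assumed GHz-seconds of CPU per uncached page

print(round(latency_seconds(WORK, 3.2), 2))    # 4 x 3.2 GHz -> 1.0 s/page
print(round(latency_seconds(WORK, 2.4), 2))    # 6 x 2.4 GHz -> 1.33 s/page
print(round(throughput_rps(WORK, 4, 3.2), 2))  # -> 4.0 pages/s capacity
print(round(throughput_rps(WORK, 6, 2.4), 2))  # -> 4.5 pages/s capacity
```

So under these (assumed) numbers, the 6-core box serves more pages per second overall, but each individual uncached page load is slower.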
My site is low traffic but has a large catalog (30,000 simple products), so my goal is to improve uncached performance. With a warm Varnish cache this becomes less of a problem, but I want uncached performance to be excellent.
Just wondering if anyone has a view or opinion (or, even better, some stats) on whether more cores or a higher clock speed is better for Magento 2?
In general, for maximum performance you want as many cores as possible at as high a clock speed as you can afford. Obviously this gets expensive, so the question becomes: where is the right trade-off between core count and clock speed?
What matters for single-user page load time is clock speed. Especially to improve uncached performance on a low-traffic site, you should aim for a high clock speed rather than many cores. Multiple cores (>4) are really only beneficial on a high-traffic site where high concurrency is important. As your site grows you will need more cores to support more simultaneous users, but CPUs with a higher core count usually have a lower clock frequency per core, which (at least theoretically) results in slower page loads for each individual user.
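The trade-off above can be roughed out with Little's law. This is only a back-of-the-envelope sketch: the work-per-page and think-time figures are assumptions, and it treats page rendering as purely CPU-bound with one PHP worker per core.

```python
# Sketch: estimate how many concurrent users each config supports,
# assuming CPU-bound rendering and one worker per core.

def max_concurrent_users(cores, core_ghz, work_ghz_seconds, think_time_s):
    service_time = work_ghz_seconds / core_ghz  # seconds of CPU per page
    # At saturation, throughput is capped at cores / service_time.
    max_throughput = cores / service_time
    # Little's law: users = throughput * (service time + think time).
    return max_throughput * (service_time + think_time_s)

# 4 x 3.2 GHz vs 6 x 2.4 GHz; assumed 3.2 GHz-seconds of work per page
# and 10 seconds of "think time" between a user's page views.
print(round(max_concurrent_users(4, 3.2, 3.2, 10), 1))  # -> 44.0
print(round(max_concurrent_users(6, 2.4, 3.2, 10), 1))  # -> 51.0
```

Under these assumed numbers the 6-core box supports more concurrent users, while (per the earlier point) each of those users sees a slower individual page load.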
In my opinion it is better to scale the web servers horizontally. A couple of load-balanced servers with a high clock frequency will perform better than one monster server with many cores. The exception is the database server, which can be difficult to scale horizontally.
I hope this helps.
I realize this is a slow response, but I want you to consider something when picking your cloud vendor. Most VPS or cloud servers are oversubscribed in terms of the host-CPU-to-virtual-CPU ratio. For example:
Provider 1 has a host with 12 cores @ 2.2 GHz each and 64 GB RAM. A conservative allocation would be to carve that up into three 4-core servers, each with about 20 GB RAM. The reality is that these providers oversubscribe their CPU ratio so the same host is carved up into eight 4-core servers, each with 8 GB of RAM.
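The arithmetic in that example works out to an oversubscription ratio like this (numbers taken straight from the scenario above):

```python
# Oversubscription ratio for the hypothetical host described above.
host_cores = 12       # physical cores on the host
vcpus_sold = 8 * 4    # eight 4-core VPS slices = 32 vCPUs

ratio = vcpus_sold / host_cores
print(round(ratio, 2))  # -> 2.67 vCPUs per physical core
```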
This is normally fine, since most people do not utilize their CPUs at 100%, or even past 50%. But if someone with a very active site is consuming one of those eight slices, they steal CPU cycles from the consumers of the other seven slices. This is known in the cloud world as CPU steal, and it's a widely accepted practice in which that one consumer becomes a noisy neighbor.
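One way to check whether you're on the receiving end of this: on Linux, steal time shows up as the `st` column in `top` and as the 8th value on the `cpu` line of `/proc/stat` (field order per the proc(5) man page). A rough sketch of the calculation, using a made-up sample line so it runs anywhere:

```python
# Sketch: compute CPU steal percentage from a /proc/stat "cpu" line.
# Field order per proc(5): user nice system idle iowait irq softirq
# steal guest guest_nice. The sample line below is fabricated.

def steal_percent(proc_stat_cpu_line):
    fields = [int(x) for x in proc_stat_cpu_line.split()[1:]]
    steal = fields[7]        # 8th value is steal time (ticks)
    total = sum(fields[:8])  # ticks up to and including steal
    return 100.0 * steal / total

sample = "cpu  10000 200 3000 80000 500 0 300 6000 0 0"
print(round(steal_percent(sample), 1))  # -> 6.0
```

On a real box you would read the first line of `/proc/stat` twice, a few seconds apart, and compute the percentage from the deltas; a sustained steal figure of more than a few percent suggests a noisy neighbor or heavy oversubscription.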
Cloud Spectator did a great review of the various cloud providers' overall performance in part 2 of their 2015 cloud vendor benchmark. They basically said that DigitalOcean and Rackspace have the highest-performing cloud compute instances (though with fewer features), while CenturyLink, GoGrid, Google, and Internap sat in the middle (all low-variability offerings). AWS had the lowest performance with the most variability across services. Verizon, Azure, HP, Dimension Data, ProfitBricks, and Joyent all performed slightly better than AWS but had low variability in their product offerings.