Disable user-agent normalization, and make sure there are no other cookies set, such as "currency" or "store".
All the answers are in the hash section:
sub vcl_hash {
    hash_data(req.url);
    if (req.http.Host) {
        hash_data(req.http.Host);
    } else {
        hash_data(server.ip);
    }
    # Keep HTTP and HTTPS variants of a page separate
    hash_data(req.http.Ssl-Offloaded);
    # Vary on the normalized user agent and encoding, when present
    if (req.http.X-Normalized-User-Agent) {
        hash_data(req.http.X-Normalized-User-Agent);
    }
    if (req.http.Accept-Encoding) {
        hash_data(req.http.Accept-Encoding);
    }
    # Separate cache entries per store view and currency
    if (req.http.X-Varnish-Store || req.http.X-Varnish-Currency) {
        hash_data("s=" + req.http.X-Varnish-Store + "&c=" + req.http.X-Varnish-Currency);
    }
    # Private ESI blocks are additionally keyed on the "frontend"
    # session cookie, so each visitor gets their own copy
    if (req.http.X-Varnish-Esi-Access == "private" &&
        req.http.Cookie ~ "frontend=") {
        hash_data(regsub(req.http.Cookie, "^.*?frontend=([^;]*);*.*$", "\1"));
    }
    return (hash);
}
And this line was extended:
    if (req.http.Cookie !~ "frontend=" && !req.http.X-Varnish-Esi-Method)
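For context, here is a sketch of how that extended check might sit in vcl_recv. The surrounding logic is an assumption on my part (this style of VCL matches Turpentine-generated configs), so adapt it to whatever your VCL actually does in that branch:
sub vcl_recv {
    # ... normalization, backend selection etc. above ...
    # No Magento "frontend" session cookie and not an internal ESI
    # subrequest: treat this as a brand-new visitor and strip the
    # remaining cookies so the request stays cacheable (the unset
    # here is an assumed action, not from the original answer).
    if (req.http.Cookie !~ "frontend=" && !req.http.X-Varnish-Esi-Method) {
        unset req.http.Cookie;
    }
    # ...
}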
TL;DR - On MageStack we use Varnish, Redis (cache), Redis (sessions) and eAccelerator/Zend OPcache (depending on PHP version).
You've already understood most of it.
The cache backend, session store, opcode cache, full page cache and reverse proxy cache are all completely different.
You can use different technologies for each, and you can use them ALL simultaneously (including Varnish and an FPC).
Cache Backends
- Files (Core) - default
- Memcache (Core)
- APC (Core)
- Redis (for <1.9, module courtesy of Colin Mollenhour)
- MongoDB (module courtesy of Colin Mollenhour)
- Rubic (module courtesy of Daniel Sloof)
You can only use one cache backend.
Contrary to popular belief, using a memory-based cache will not in itself improve performance, but it will overcome some fatal flaws in Magento's default file-based caching (most notably, slow tag-based cache cleaning as the cache grows).
As of writing this message, Redis is my recommendation.
Session Stores
- Files (Core) - default
- Memcache (Core)
- Redis (for <1.9, module courtesy of Colin Mollenhour)
- MongoDB (module courtesy of Colin Mollenhour)
You can only use one session store.
Contrary to popular belief, using a memory-based session store will not in itself improve performance.
As of writing this message, Redis is my recommendation.
OpCode Cache
- APC
- XCache
- eAccelerator (PHP <5.4)
- Zend OPcache (PHP 5.5+)
You can actually install multiple opcode caches, but it's not recommended, nor would I expect to see any gains.
My recommendations are eAccelerator or Zend OPcache, depending on your PHP version (as bracketed above).
No Magento module is required to leverage an opcode cache.
Reverse Proxy Cache
- Varnish
- Nginx
- Apache
- … and many more
You can use multiple reverse proxies, and whilst doing so is complex and prone to cache elongation, it can have merits (e.g. to prevent stampeding during a cache flush).
Use one when necessary (i.e. not to speed up a slow site, but to reduce resource usage on a fast site).
To leverage a reverse proxy, it needs both enabling server-side and a Magento module.
The module exists to control caching logic (i.e. to tell the cache what it should and shouldn't cache) and to manage cache contents (i.e. to trigger purges of the cache), as sketched below.
I don't recommend any unless you totally understand what you are doing. Badly set up reverse proxies can break header information, cause session loss or session sharing, serve stale content, apply additional limits to load time/buffers, consume additional resources, etc.
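To make the purge point concrete: the Magento module typically issues HTTP PURGE requests, which Varnish must be explicitly configured to accept. A minimal sketch in the same Varnish 3 dialect as the VCL above (the ACL name and the purge-by-URL assumption are mine, not from any particular module):
acl purge_allow {
    "localhost";  # only let the application server issue purges
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge_allow) {
            error 405 "Not allowed.";
        }
        # Look the object up so vcl_hit/vcl_miss can purge it
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}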
Full Page Cache
- EE FPC
- … lots of others (via modules)
Use one when necessary (i.e. not to speed up a slow site, but to reduce resource usage on a fast site).
Contrary to popular belief, you can (and should) use an FPC in conjunction with a reverse proxy cache. The two solve different problems and have different capabilities.
FPCs can apply more intelligence, because they have direct access to the user's session and Magento's core, whereas a reverse proxy is not application-aware (it's fairly dumb in the way it works) - so the two complement each other rather than compete, as the ESI sketch below illustrates.
I.e. don't think Varnish or FPC; think Varnish and FPC.
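The usual glue between the two is ESI: Varnish caches the whole page, and the private blocks are hole-punched and filled per visitor, under the application's control. A sketch in Varnish 3 syntax, reusing the X-Varnish-Esi-Method header from the VCL in the first answer (whether your particular module sets that header is an assumption):
sub vcl_fetch {
    # Only run the (comparatively expensive) ESI parser on responses
    # the application has flagged as containing ESI includes.
    if (req.http.X-Varnish-Esi-Method) {
        set beresp.do_esi = true;
    }
}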
Best Answer
You shouldn't believe any claims unless you can independently verify them.
That said, it's easy to decipher why LiteSpeed could be somewhat faster than other setups.
Let's take the most performant open-source stack, which consists of NGINX + Varnish.
NGINX terminates TLS and proxies that over to Varnish, which is the caching layer.
In its turn, Varnish proxies the request to the backend (if it is not satisfiable from the cache at that moment), over to NGINX again.
So it is an NGINX sandwich of sorts. The potential performance offender here is proxy buffering, in both NGINX and Varnish.
What proxy buffering means, in short, is that before giving output to the client, NGINX #1 (TLS) will wait to collect a small amount of response data from Varnish (which is the backend to NGINX #1).
This is the default behavior (proxy buffering is enabled). It is primarily useful for ensuring that a slow client will not "burn down" backend connections (which are typically RAM-hungry):
Say you have one really slow client and NGINX proxy buffering is disabled. The connection is then fully "consumed" by that client while it slowly receives data: NGINX must hold the connection to its backend (normally PHP-FPM in other setups) open while synchronously delivering data to the client.
With proxy buffering enabled, NGINX can close the connection to the backend as soon as it has received the whole response (which is good, especially when the backend is RAM-hungry, as PHP-FPM is), and then release the data to the client at the client's own pace, while the backend is free to do other work.
However, in this NGINX-sandwich kind of stack (the Magento 2 case), it is arguable whether you need proxy buffering all the way through, and you may want to tune it (which should produce better results).
That's because you would then reduce the time it takes for the different components of the stack to "talk" to each other. Again, those are:
NGINX #1 (TLS) -> Varnish -> NGINX #2 -> PHP-FPM.
Keeping buffering in NGINX #1 is not essential, because its backend is a lightweight process (Varnish). The same might seem to apply to the Varnish -> NGINX link, but buffering there is absolutely required, because that is how caching in Varnish works: it has to receive the entire response in order to cache it.
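On the NGINX side, this tuning revolves around the proxy_buffering and proxy_buffers directives. On the Varnish side there is one related knob worth knowing: a response Varnish is not going to cache doesn't have to be buffered in full. A hedged sketch in Varnish 3 syntax (where do_stream is still experimental):
sub vcl_fetch {
    # Cacheable objects must be fetched in full - that is how Varnish
    # caches them - but an uncacheable response can be streamed to the
    # client as it arrives, trimming time to first byte.
    if (beresp.ttl <= 0s) {
        set beresp.do_stream = true;
    }
}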
So, either way, you can see from all of this that LiteSpeed, by combining the TLS termination and caching roles of the above stack in a single process (and not just that), is already at an advantage: it won't incur extra buffering between separate software components.
It really isn't that much faster, though! Numbers where NGINX + Varnish performs at 3K requests per second are good enough that there's little reason to even consider other solutions.
As to why not simply use LiteSpeed: it's commercial, and you'd rarely, if ever, hit the ceiling of the Varnish solution. You would need an Uber-sized (literally) Magento 2 instance to warrant LiteSpeed, and even then I would fine-tune proxy buffering as a first step.
Again, the numbers from LiteSpeed look impressive, and maybe they really are several times better. But this is more of a marketing thing. E.g.: would you really bother so much if you had a trillion dollars which you could never spend in your entire lifetime, and then someone offered you another two trillion for doing some extra work? I'd say "meh, I am fine with my trillion, thank you" :)