Varnish Cache Not Working in Magento 2.3 with Nginx Proxy and Apache2 Backend

Tags: apache2, cache, magento2.3, nginx, varnish

I have a site set up with nginx as a proxy in front of an Apache2 server, and I'm trying to implement Varnish caching.

nginx is listening on 443 and 80
varnish is configured to listen on 8080
apache2 is configured to listen on 7081 and 8081
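
So the intended request flow is:

    browser -> nginx (80/443, terminates SSL) -> varnish (8080) -> apache2 (8081)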

My /etc/nginx/plesk.conf.d/server.conf:

    #ATTENTION!
    #
    #DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
    #SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.

    include "/etc/nginx/plesk.conf.d/ip_default/*.conf";

    server {
            listen 77.68.81.77:443 ssl;
            listen 127.0.0.1:443 ssl;



            ssl_certificate             /opt/psa/var/certificates/cert8arxYuY;
            ssl_certificate_key         /opt/psa/var/certificates/cert8arxYuY;

            location ^~ /plesk-site-preview/ {
                    proxy_pass http://127.0.0.1:8080;
                    proxy_set_header Host               plesk-site-preview.local;
                    proxy_set_header X-Real-IP          $remote_addr;
                    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
                    proxy_set_header X-Forwarded-Proto  $scheme;
                    proxy_cookie_domain plesk-site-preview.local $host;
                    access_log off;
            }

            location / {
                    proxy_pass http://77.68.81.77:8080;
                    proxy_set_header Host $host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
    }

    server {
            listen 77.68.81.77:80;
            listen 127.0.0.1:80;


            location ^~ /plesk-site-preview/ {
                    proxy_pass http://127.0.0.1:8080;
                    proxy_set_header Host               plesk-site-preview.local;
                    proxy_set_header X-Real-IP          $remote_addr;
                    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
                    proxy_set_header X-Forwarded-Proto  $scheme;
                    proxy_cookie_domain plesk-site-preview.local $host;
                    access_log off;
        }

        location / {
                proxy_pass http://77.68.81.77:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}

My /etc/default/varnish:

DAEMON_OPTS="-a :8080 \
  -T localhost:6082 \
  -f /etc/varnish/default.vcl \
  -S /etc/varnish/secret \
  -s malloc,256m"

My /etc/varnish/default.vcl file, the remainder of which is what was generated via the Magento admin (now shown in full):

# VCL version 5.0 is not supported, so this stays at 4.0 even though the Varnish version actually in use is 5
vcl 4.0;

import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'

backend default {
    .host = "localhost";
    .port = "8081";
    .first_byte_timeout = 600s;
    .probe = {
        .url = "/pub/health_check.php";
        .timeout = 2s;
        .interval = 5s;
        .window = 10;
        .threshold = 5;
   }
}

acl purge {
    "localhost";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purge) {
            return (synth(405, "Method not allowed"));
        }
        # To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
        # has been added to the response in your backend server config. This is used, for example, by the
        # capistrano-magento2 gem for purging old content from varnish during its deploy routine.
        if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
            return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
        }
        if (req.http.X-Magento-Tags-Pattern) {
          ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
        }
        if (req.http.X-Pool) {
          ban("obj.http.X-Pool ~ " + req.http.X-Pool);
        }
        return (synth(200, "Purged"));
    }

    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "DELETE") {
          /* Non-RFC2616 or CONNECT which is weird. */
          return (pipe);
    }

    # We only deal with GET and HEAD by default
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Bypass shopping cart, checkout and search requests
    if (req.url ~ "/checkout" || req.url ~ "/catalogsearch") {
        return (pass);
    }

    # Bypass health check requests
    if (req.url ~ "/pub/health_check.php") {
        return (pass);
    }

    # Set initial grace period usage status
    set req.http.grace = "none";

    # normalize url in case of leading HTTP scheme and domain
    set req.url = regsub(req.url, "^http[s]?://", "");

    # collect all cookies
    std.collect(req.http.Cookie);

    # Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
            # No point in compressing these
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            unset req.http.Accept-Encoding;
        }
    }

    # Remove Google gclid parameters to minimize the cache objects
    set req.url = regsuball(req.url,"\?gclid=[^&]+$",""); # strips when QS = "?gclid=AAA"
    set req.url = regsuball(req.url,"\?gclid=[^&]+&","?"); # strips when QS = "?gclid=AAA&foo=bar"
    set req.url = regsuball(req.url,"&gclid=[^&]+",""); # strips when QS = "?foo=bar&gclid=AAA" or QS = "?foo=bar&gclid=AAA&bar=baz"

    # Static files caching
    if (req.url ~ "^/(pub/)?(media|static)/") {
        # Static files should not be cached by default
        return (pass);

        # But if you use a few locales and don't use CDN you can enable caching static files by commenting previous line (#return (pass);) and uncommenting next 3 lines
        #unset req.http.Https;
        #unset req.http.X-Forwarded-Proto;
        #unset req.http.Cookie;
    }

    return (hash);
}

sub vcl_hash {
    if (req.http.cookie ~ "X-Magento-Vary=") {
        hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
    }

    # For multi site configurations to not cache each other's content
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }

    # To make sure http users don't see ssl warning
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }

}

sub vcl_backend_response {

    set beresp.grace = 3d;

    if (beresp.http.content-type ~ "text") {
        set beresp.do_esi = true;
    }

    if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
        set beresp.do_gzip = true;
    }

    if (beresp.http.X-Magento-Debug) {
        set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
    }

    # cache only successful responses and 404s
    if (beresp.status != 200 && beresp.status != 404) {
        set beresp.ttl = 0s;
        set beresp.uncacheable = true;
        return (deliver);
    } elsif (beresp.http.Cache-Control ~ "private") {
        set beresp.uncacheable = true;
        set beresp.ttl = 86400s;
        return (deliver);
    }

    # validate if we need to cache it and prevent from setting cookie
    if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
        unset beresp.http.set-cookie;
    }

   # If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
   if (beresp.ttl <= 0s ||
       beresp.http.Surrogate-control ~ "no-store" ||
       (!beresp.http.Surrogate-Control &&
       beresp.http.Cache-Control ~ "no-cache|no-store") ||
       beresp.http.Vary == "*") {
        # Mark as Hit-For-Pass for the next 2 minutes
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
    }

    return (deliver);
}

sub vcl_deliver {
    if (resp.http.X-Magento-Debug) {
        if (resp.http.x-varnish ~ " ") {
            set resp.http.X-Magento-Cache-Debug = "HIT";
            set resp.http.Grace = req.http.grace;
        } else {
            set resp.http.X-Magento-Cache-Debug = "MISS";
        }
    } else {
        unset resp.http.Age;
    }

    # Not letting browser to cache non-static files.
    if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
        set resp.http.Pragma = "no-cache";
        set resp.http.Expires = "-1";
        set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
    }

    unset resp.http.X-Magento-Debug;
    unset resp.http.X-Magento-Tags;
    unset resp.http.X-Powered-By;
    unset resp.http.Server;
    unset resp.http.X-Varnish;
    unset resp.http.Via;
    unset resp.http.Link;
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Hit within TTL period
        return (deliver);
    }
    if (std.healthy(req.backend_hint)) {
        if (obj.ttl + 300s > 0s) {
            # Hit after TTL expiration, but within grace period
            set req.http.grace = "normal (healthy server)";
            return (deliver);
        } else {
            # Hit after TTL and grace expiration
            return (miss);
        }
    } else {
        # server is not healthy, retrieve from cache
        set req.http.grace = "unlimited (unhealthy server)";
        return (deliver);
    }
}

My /lib/systemd/system/varnish.service

[Unit]
Description=Varnish HTTP accelerator
Documentation=https://www.varnish-cache.org/docs/4.1/ man:varnishd

[Service]
Type=simple
LimitNOFILE=131072
LimitMEMLOCK=82000
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :8080 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,1G
ExecReload=/usr/share/varnish/varnishreload
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
PrivateDevices=true

[Install]
WantedBy=multi-user.target

And the results of netstat -tulpn:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      1528/master         
tcp        0      0 127.0.0.1:953           0.0.0.0:*               LISTEN      950/named           
tcp        0      0 127.0.0.1:12346         0.0.0.0:*               LISTEN      1528/master         
tcp        0      0 77.68.81.77:443         0.0.0.0:*               LISTEN      3616/nginx: master  
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      995/sw-cp-server: m 
tcp        0      0 0.0.0.0:58013           0.0.0.0:*               LISTEN      892/rpc.mountd      
tcp        0      0 0.0.0.0:4190            0.0.0.0:*               LISTEN      942/dovecot         
tcp        0      0 127.0.0.1:12768         0.0.0.0:*               LISTEN      957/psa-pc-remote   
tcp        0      0 0.0.0.0:993             0.0.0.0:*               LISTEN      942/dovecot         
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.1.1:6082          0.0.0.0:*               LISTEN      7220/varnishd       
tcp        0      0 127.0.0.1:6082          0.0.0.0:*               LISTEN      7220/varnishd       
tcp        0      0 0.0.0.0:995             0.0.0.0:*               LISTEN      942/dovecot         
tcp        0      0 0.0.0.0:44997           0.0.0.0:*               LISTEN      892/rpc.mountd      
tcp        0      0 0.0.0.0:53831           0.0.0.0:*               LISTEN      892/rpc.mountd      
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1372/mysqld         
tcp        0      0 0.0.0.0:587             0.0.0.0:*               LISTEN      1528/master         
tcp        0      0 0.0.0.0:110             0.0.0.0:*               LISTEN      942/dovecot         
tcp        0      0 0.0.0.0:143             0.0.0.0:*               LISTEN      942/dovecot         
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      673/rpcbind         
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      7220/varnishd       
tcp        0      0 77.68.81.77:80          0.0.0.0:*               LISTEN      3616/nginx: master  
tcp        0      0 0.0.0.0:8880            0.0.0.0:*               LISTEN      995/sw-cp-server: m 
tcp        0      0 0.0.0.0:465             0.0.0.0:*               LISTEN      1528/master         
tcp        0      0 0.0.0.0:38897           0.0.0.0:*               LISTEN      -                   
tcp        0      0 77.68.81.77:53          0.0.0.0:*               LISTEN      950/named           
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      950/named           
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      786/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1004/sshd           
tcp6       0      0 :::25                   :::*                    LISTEN      1528/master         
tcp6       0      0 :::8443                 :::*                    LISTEN      995/sw-cp-server: m 
tcp6       0      0 :::4190                 :::*                    LISTEN      942/dovecot         
tcp6       0      0 :::993                  :::*                    LISTEN      942/dovecot         
tcp6       0      0 :::2049                 :::*                    LISTEN      -                   
tcp6       0      0 ::1:6082                :::*                    LISTEN      7220/varnishd       
tcp6       0      0 :::995                  :::*                    LISTEN      942/dovecot         
tcp6       0      0 :::36519                :::*                    LISTEN      -                   
tcp6       0      0 :::7081                 :::*                    LISTEN      1193/apache2        
tcp6       0      0 :::37833                :::*                    LISTEN      892/rpc.mountd      
tcp6       0      0 :::106                  :::*                    LISTEN      1066/xinetd         
tcp6       0      0 :::587                  :::*                    LISTEN      1528/master         
tcp6       0      0 :::110                  :::*                    LISTEN      942/dovecot         
tcp6       0      0 :::143                  :::*                    LISTEN      942/dovecot         
tcp6       0      0 :::55215                :::*                    LISTEN      892/rpc.mountd      
tcp6       0      0 :::39791                :::*                    LISTEN      892/rpc.mountd      
tcp6       0      0 :::111                  :::*                    LISTEN      673/rpcbind         
tcp6       0      0 :::8080                 :::*                    LISTEN      7220/varnishd       
tcp6       0      0 :::8880                 :::*                    LISTEN      995/sw-cp-server: m 
tcp6       0      0 :::465                  :::*                    LISTEN      1528/master         
tcp6       0      0 :::8081                 :::*                    LISTEN      1193/apache2        
tcp6       0      0 :::21                   :::*                    LISTEN      1066/xinetd         
tcp6       0      0 :::53                   :::*                    LISTEN      950/named           
tcp6       0      0 :::22                   :::*                    LISTEN      1004/sshd           
udp        0      0 77.68.81.77:53          0.0.0.0:*                           950/named           
udp        0      0 127.0.0.1:53            0.0.0.0:*                           950/named           
udp        0      0 127.0.0.53:53           0.0.0.0:*                           786/systemd-resolve 
udp        0      0 0.0.0.0:68              0.0.0.0:*                           791/dhclient        
udp        0      0 0.0.0.0:111             0.0.0.0:*                           673/rpcbind         
udp        0      0 0.0.0.0:849             0.0.0.0:*                           673/rpcbind         
udp        0      0 0.0.0.0:38403           0.0.0.0:*                           892/rpc.mountd      
udp        0      0 0.0.0.0:34749           0.0.0.0:*                           892/rpc.mountd      
udp        0      0 0.0.0.0:2049            0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:56141           0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:44182           0.0.0.0:*                           892/rpc.mountd      
udp6       0      0 :::53                   :::*                                950/named           
udp6       0      0 :::111                  :::*                                673/rpcbind         
udp6       0      0 fe80::250:56ff:fe07:546 :::*                                836/dhclient        
udp6       0      0 :::849                  :::*                                673/rpcbind         
udp6       0      0 :::58393                :::*                                892/rpc.mountd      
udp6       0      0 :::34424                :::*                                -                   
udp6       0      0 :::2049                 :::*                                -                   
udp6       0      0 :::51490                :::*                                892/rpc.mountd      
udp6       0      0 :::57200                :::*                                892/rpc.mountd

However, when I test the response with curl, I get the following.

The curl request is:

curl -I https://www.bubit.co.uk

The response is:

HTTP/2 404 
server: nginx
date: Thu, 14 May 2020 15:09:10 GMT
content-type: text/html; charset=UTF-8
x-powered-by: PHP/7.2.30
pragma: cache
cache-control: max-age=86400, public, s-maxage=86400
expires: Fri, 15 May 2020 15:09:09 GMT
x-magento-tags: cat_c,cat_c_3,cat_c_13,cat_c_11,cat_c_12,cat_c_14,store,cms_b,cms_p_1
x-magento-debug: 1
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
set-cookie: PHPSESSID=erq8fgjfr2fup10jea83ss5ug4; expires=Thu, 14-May-2020 16:09:09 GMT; Max-Age=3600; path=/; domain=www.bubit.co.uk; HttpOnly
x-ua-compatible: IE=edge

This response isn't showing any Varnish headers.

Running:

curl -k -I https://77.68.81.77

returns:

HTTP/2 302 
server: nginx
date: Fri, 15 May 2020 10:17:19 GMT
content-type: text/html; charset=UTF-8
x-powered-by: PHP/7.2.30
pragma: no-cache
cache-control: max-age=0, must-revalidate, no-cache, no-store
expires: Wed, 15 May 2019 10:17:19 GMT
x-magento-debug: 1
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
set-cookie: PHPSESSID=mq5bjb3nt4rk1qgsnmgkq5tnsd; expires=Fri, 15-May-2020 11:17:19 GMT; Max-Age=3600; path=/; domain=77.68.81.77; HttpOnly
location: https://www.bubit.co.uk/?SID=mq5bjb3nt4rk1qgsnmgkq5tnsd
x-ua-compatible: IE=edge
x-powered-by: PleskLin

Running:

curl -I http://77.68.81.77:80

returns:

HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Fri, 15 May 2020 10:19:26 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: https://77.68.81.77/

And this:

curl -I http://77.68.81.77:8081

returns:

HTTP/1.1 200 OK
Date: Fri, 15 May 2020 10:23:42 GMT
Server: Apache
Last-Modified: Tue, 11 Jun 2019 15:29:12 GMT
ETag: "2aa6-58b0df577dd48"
Accept-Ranges: bytes
Content-Length: 10918
Vary: Accept-Encoding
Content-Type: text/html

The above three responses seem normal.

However, the following:

curl -I http://77.68.81.77:8080

returns:

HTTP/1.1 503 Backend fetch failed
Date: Fri, 15 May 2020 10:21:17 GMT
Content-Type: text/html; charset=utf-8
Retry-After: 5
Pragma: no-cache
Expires: -1
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Connection: keep-alive

Does that mean that the backend link to Apache isn't working, or just that there is no cached copy of the page?
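
In case it's useful for debugging, I assume the backend and probe status can be checked with something like the following (a sketch, assuming varnishadm can reach the management interface with the -T/-S settings shown above; the second command hits the VCL's probe URL directly on Apache):

sudo varnishadm backend.list
curl -I http://localhost:8081/pub/health_check.php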

Any help would be greatly appreciated

Chris

Best Answer

More information needed

The curl request only displays the response. I'd like to see request information as well, to figure out which URL you called.

The VCL file was truncated as well, so I'm not 100% sure whether the Varnish headers are being stripped through VCL.

However, it seems unlikely that these headers are stripped.

Use Varnishlog

You could run the following command to see if there's traffic coming through Varnish:

sudo varnishlog -g request -i requrl

This command will display the URL of each request that is accepted by Varnish.
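
If you also want to know whether anything is actually being cached, the hit and miss counters in varnishstat are worth a look, for example:

sudo varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss -f MAIN.backend_fail

If MAIN.cache_hit stays at zero while requests are coming in, nothing is being served from cache.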

Use systemd instead

Your /etc/default/varnish doesn't really matter, because systemd is responsible for the runtime options of Varnish.

This setting will be the active one:

ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :8081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,1G

You can just edit it using sudo systemctl edit --full varnish.service. After you set -a :8080, you save and exit.

In the next step you run sudo systemctl daemon-reload to reload systemd and sudo systemctl restart varnish to make sure Varnish is restarted. After that it should work.

The -F flag

Another piece of advice is to remove the -F flag, because it runs Varnish in the foreground. You might want to have a look at https://gist.github.com/ThijsFeryn/2c4913b96a428cfbd8cde15c03fa12b3 for a typical systemd service file for Varnish.

You can still set the port and memory settings the way you want, but you can borrow some of the other details from my config.

Socket in use

The error you mentioned is pretty normal:

Error: Could not get socket :80: Address already in use

If you run varnishd -F -f /etc/varnish/default.vcl without any -a setting, Varnish will just accept connections on port 80, and that clashes with Nginx, which already has port 80 bound to its process.
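
To illustrate, if you ever start varnishd by hand for a quick test, pass an explicit listen address so it doesn't try to bind port 80 (adjust paths and ports to your setup):

sudo varnishd -F -f /etc/varnish/default.vcl -a :8080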

What to do next

It seems very weird that a regular request through port 80 or 443 doesn't work. Your process list indicates that all services are running on the desired ports.

What I would do if I were you is the following:

  • Edit the Varnish runtime config via sudo systemctl edit --full varnish
  • Make sure all parameters are tuned correctly
  • sudo systemctl daemon-reload to reload systemd
  • sudo systemctl restart varnish to restart Varnish
  • sudo systemctl restart nginx to restart Nginx
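
As a command sequence, that boils down to:

sudo systemctl edit --full varnish.service   # check the -a, -f, -S and -s parameters
sudo systemctl daemon-reload
sudo systemctl restart varnish
sudo systemctl restart nginx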

The good old "turning it off & on again" trick.

Testing all ports

Another piece of advice I have for you is to test all related ports on the server, by running a bunch of curl commands.

curl -k -I https://localhost
curl -I http://localhost:80
curl -I http://localhost:8080
curl -I http://localhost:8081

This will allow you to see which ports are listening and what headers you're getting back from each endpoint. Maybe that will help you debug.
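
If you want to zoom in on the caching-related headers specifically, something like this filters them out of the response on port 8080 (the header names are the ones used in your VCL; adjust as needed):

curl -s -I http://localhost:8080 | grep -i -E 'x-varnish|x-magento-cache-debug|age|via'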
