Kubernetes install guide

This install guide explains how to deploy Varnish Cache on Kubernetes using the official Helm chart.

Install the Helm chart

You can install the Helm chart on your Kubernetes cluster by running the following helm install command:

helm install varnish oci://docker.io/varnish/varnish-cache

This command will deploy Varnish on the Kubernetes cluster that was configured in the active context of your kubectl client. Run kubectl config current-context to see what that context is.

While the chart ships with a set of default configuration values, it usually makes sense to supply a values.yaml file that overrides the standard configuration:

helm install varnish -f values.yaml oci://docker.io/varnish/varnish-cache

Set backend host and port

Backends are set in your VCL file, but our Helm chart allows you to set the backend with environment variables:

  • VARNISH_BACKEND_HOST: the host of the service that Varnish fetches from
  • VARNISH_BACKEND_PORT: the port number of the service that Varnish fetches from

These environment variables can be set in the values.yaml file with the following settings:

---
server:
  extraEnvs:
    VARNISH_BACKEND_HOST: "example.default.svc.cluster.local"
    VARNISH_BACKEND_PORT: "80"

This configuration will allow Varnish to connect to a Kubernetes service named example that lives within the default namespace of the Kubernetes cluster.
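This setup assumes a Service named example already exists in the default namespace. As a point of reference, a minimal sketch of such a Service could look like the following; the app: example selector label is an assumption for illustration and should match the labels on your backend pods:

```yaml
# Hypothetical backend Service that VARNISH_BACKEND_HOST points at.
# The selector label (app: example) is an assumption for illustration.
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: default
spec:
  selector:
    app: example
  ports:
    - port: 80        # matches VARNISH_BACKEND_PORT
```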

These environment variables can also be set on the command line. Here’s an example of a helm install command containing the extra environment variables:

helm install varnish \
  oci://docker.io/varnish/varnish-cache \
  --set server.extraEnvs.VARNISH_BACKEND_HOST=example.default.svc.cluster.local \
  --set server.extraEnvs.VARNISH_BACKEND_PORT=80

Custom VCL file

VARNISH_BACKEND_HOST and VARNISH_BACKEND_PORT are shortcuts that modify the behavior of Varnish through the VCL file that ships with the chart. For full control, you can define a custom VCL file and tune the behavior of Varnish to your exact needs.

Inline VCL

You can define inline VCL by setting the server.vclConfig setting in values.yaml. Here’s what that looks like:

---
server:
  vclConfig: |
    vcl 4.1;

    backend default {
      .host = "example.default.svc.cluster.local";
      .port = "80";
    }

    sub vcl_recv {
      if(req.url ~ "^/admin(/|$)") {
        return (pass);
      }
    }

    sub vcl_backend_fetch {
      set bereq.http.Host = "www.example.com";
    }

This VCL configuration lets Varnish connect to the example Kubernetes service through example.default.svc.cluster.local on port 80. The configuration also sends a custom Host header to the backend to ensure the www.example.com hostname is matched. Finally, the config bypasses the cache if the URL is /admin or any resource under /admin.
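To see which request paths the ^/admin(/|$) pattern actually catches, here is a quick shell sketch. VCL regular expressions use PCRE, but for this simple pattern grep -E matches identically:

```shell
# Check a few sample URLs against the VCL pattern ^/admin(/|$).
# VCL regexes are PCRE; for this simple pattern, grep -E behaves the same.
for url in /admin /admin/ /admin/login /administrator /blog; do
  if printf '%s\n' "$url" | grep -Eq '^/admin(/|$)'; then
    echo "$url -> pass (bypasses the cache)"
  else
    echo "$url -> normal caching"
  fi
done
```

/admin, /admin/ and /admin/login bypass the cache, while /administrator and /blog are cached normally: the (/|$) group prevents the pattern from matching unrelated paths that merely start with the string /admin.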

Load VCL from a ConfigMap

A cleaner and more flexible way to define a custom VCL file is through a Kubernetes ConfigMap.

Here’s a bash command that creates a default.vcl file:

cat << 'EOF' > default.vcl
vcl 4.1;

backend default {
    .host = "example.default.svc.cluster.local";
    .port = "80";
}

sub vcl_recv {
    if(req.url ~ "^/admin(/|$)") {
        return (pass);
    }
}

sub vcl_backend_fetch {
    set bereq.http.Host = "www.example.com";
}
EOF

Run the following command to store the file as a ConfigMap named external-vcl:

kubectl create configmap external-vcl --from-file=./default.vcl
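If you prefer a declarative workflow, the imperative command above is roughly equivalent to applying a manifest like this, where the data key carries the filename and the value carries the file contents:

```yaml
# Declarative equivalent of the kubectl create configmap command above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: external-vcl
data:
  default.vcl: |
    vcl 4.1;

    backend default {
        .host = "example.default.svc.cluster.local";
        .port = "80";
    }

    sub vcl_recv {
        if(req.url ~ "^/admin(/|$)") {
            return (pass);
        }
    }

    sub vcl_backend_fetch {
        set bereq.http.Host = "www.example.com";
    }
```

Save it as a file and apply it with kubectl apply -f.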

Add the following values.yaml configuration:

---
server:
  vclConfig: ""  # It is necessary to unset this value to override default.vcl

  extraVolumes:
    - name: external
      configMap:
        name: external-vcl

  extraVolumeMounts:
    - name: external
      mountPath: /etc/varnish/default.vcl
      subPath: default.vcl

  • The extraVolumes config creates a storage volume named external and loads the content of the external-vcl ConfigMap
  • The extraVolumeMounts config mounts the external volume into the pods and stores the default.vcl file from the ConfigMap in /etc/varnish/default.vcl

And finally run the following helm install command to deploy Varnish and mount the custom VCL into the pods:

helm install varnish -f values.yaml oci://docker.io/varnish/varnish-cache

Connecting to Varnish

By default the Varnish Cache Helm chart exposes a NodePort service. However, you can also set the service type to ClusterIP or LoadBalancer.

Changing the service type

If you want to change the service type of the Kubernetes deployment, you can either set it in values.yaml or use a --set option on the command line.

Here’s an example of a ClusterIP service:

---
server:
  service:
    type: "ClusterIP"

Here’s the same setting on the command line:

helm install varnish -f values.yaml \
  oci://docker.io/varnish/varnish-cache \
  --set server.service.type=ClusterIP

Ingress

The Varnish Cache Helm chart offers Ingress support. This means you can expose the Varnish service directly to the outside world using the conventional HTTP and HTTPS ports.

The example below shows a values.yaml configuration that uses the nginx ingress controller to expose Varnish on the varnish.example.com domain. The Prefix path type ensures requests for all paths use the ingress:

---
server:
  ingress:
    enabled: true
    ingressClassName: nginx
    pathType: Prefix
    hosts:
      - host: varnish.example.com

Run the helm install command and load values.yaml to enable the ingress configuration:

helm install varnish -f values.yaml oci://docker.io/varnish/varnish-cache

Run the following curl command to test ingress access to Varnish:

curl -H "Host: varnish.example.com" http://localhost