
Varnish and Docker: First Contact


See what you can do with Varnish and Docker containers working together, including tips on setting up your Dockerfile, configuration, and deployment.
Docker has been on my radar for quite a few years now, but I have to admit, as a C developer, I never really cared about it. I run Arch Linux on my computer, so everything I ever needed was packaged, save for a few exceptions where I could just whip up a custom PKGBUILD and install the resulting package. If I needed another OS, I used a VM.
But recently, I forced myself to try that container thingy (much like you’d try again some food you wrongly discarded as uninteresting when you were a child), and I found a few use cases for it, related to Varnish, of course. This post is a report of my exploration, so don’t expect too much new stuff, but since it’s also focused on running Varnish inside a container, there are a few specific tricks and questions to be aware of.
Before we begin, know that the whole code from this blog is collected here.
If you’re new to Docker, be aware that this post won’t cover all your needs. Docker is a very nice solution, but it’s just a tool, and what you make of it, and how, is more important than what it is, same as Git, for example. However, we’ll explore two ways (out of ten bazillion) to build a Varnish+Hitch+Agent image to cache HTTP/HTTPS content and be able to pilot it using a REST API.
Docker is an easy way to produce versioned, all-included system images, but not much more. You’ll still need to care for your machines, configure them and monitor them. In addition, contrary to a VM, which has its own kernel, containers use the host’s, so you can’t really tune the TCP stack (something we do routinely for high-performance servers) directly from the container, for example.
But that can be covered in a later blog post. Let’s get you up and running first.
To build an image, we have to feed Docker a Dockerfile, which is a text file explaining the various steps required to create said image. It will involve (most of the time) choosing a base image, then modifying it according to our needs. Here we’ll use a CentOS image because it’s a solid, widespread distribution and because we have all the right packages available for it via PackageCloud.
One important thing to know is that each action/line of the Dockerfile will generate an intermediary image or "layer", which is saved (as a delta, to save space). This avoids rebuilding the full thing over and over, as long as Docker can find a previous layer matching your instruction exactly. So, for this to be useful, we’ll tend to put the larger, longer, less-likely-to-change operations towards the top of the file, and the quicker, less dramatic ones near the end.
Here’s the Dockerfile I came up with:
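The file itself didn’t survive the trip to this page, so here is a minimal sketch reconstructed from the steps described in the rest of the post (the repo file name, install paths, and package names are assumptions):

```dockerfile
# Sketch only: file names and paths are assumptions, not the original file.
FROM centos:7

# Import the two helper scripts and the PackageCloud repo file
COPY bundle/varnishcache.repo /etc/yum.repos.d/
COPY bundle/hitch_gen_conf.sh bundle/varnish-suite.sh /usr/local/bin/

# Install the EPEL repository, then the trio of programs
RUN yum install -y epel-release && \
    yum install -y varnish hitch varnish-agent

# Bring in the whole configuration tree and generate Hitch's pem-file entries
COPY conf/ /
RUN hitch_gen_conf.sh

# Document the listening ports
EXPOSE 80
EXPOSE 443
EXPOSE 6085

# Default command: start all three daemons
CMD ["/usr/local/bin/varnish-suite.sh"]
```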
Easy enough: we ask for the CentOS image tagged with version 7. The first time we ask for it, Docker will dutifully fetch it from the Docker Hub, and of course, you can choose a different distribution and/or version.
Then there is that bundle/ directory that we copy to import two helper scripts (we’ll use them very soon) and, more importantly for now, the PackageCloud repo file telling yum where to find our packages. The bundle/ tree looks like this:
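The listing isn’t reproduced here; based on the description, the tree presumably looks like this (the repo file name is an assumption):

```
bundle/
├── hitch_gen_conf.sh
├── varnish-suite.sh
└── varnishcache.repo
```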
Once it’s in there, we can use yum to first install the epel repository, then the trio of programs we’ll use.
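In the Dockerfile, that amounts to something like this (package names for the EPEL and PackageCloud repositories are assumed):

```dockerfile
RUN yum install -y epel-release
RUN yum install -y varnish hitch varnish-agent
```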
After the yum commands, we progress to the next lines:
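Those next lines are presumably along these lines (assuming the helper script was copied somewhere on the PATH):

```dockerfile
COPY conf/ /
RUN hitch_gen_conf.sh
```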
Next, we copy the configuration tree. In this setup, the whole configuration tree is copied to the destination image. This allows you to edit the files, rebuild and be ready to go. The minimal tree looks like:
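A plausible reconstruction of that minimal tree, with file names inferred from the rest of the post:

```
conf/
└── etc/
    ├── hitch/
    │   ├── hitch.conf
    │   └── pems/
    └── varnish/
        ├── agent_secret
        └── default.vcl
```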
Then we run hitch_gen_conf.sh. Most of the configuration for Hitch will be specified in hitch.conf, and will be about specifying the right certificates so that your container can authenticate itself as a legitimate server. For each certificate effectively copied to the configuration tree, this also involves adding a few lines like:
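For instance (the certificate file name here is hypothetical):

```
pem-file = "/etc/hitch/pems/example.com.pem"
```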
And since I’m super lazy, I thought we could simply generate the configuration based on what we find in /etc/hitch/pems. To activate it, just place your pem files in conf/etc/hitch/pems and uncomment the relevant line in hitch.conf, and you’ll be good to go.
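The generation script itself isn’t shown in this post; its core logic is presumably a loop along these lines (the function name and default directory are assumptions, not the actual script):

```shell
#!/bin/sh
# Sketch of hitch_gen_conf.sh's core idea: emit one pem-file entry
# per certificate found in the given pems directory.
gen_hitch_pem_entries() {
    pem_dir=$1
    for pem in "$pem_dir"/*.pem; do
        [ -e "$pem" ] || continue   # skip when the glob matches nothing
        printf 'pem-file = "%s"\n' "$pem"
    done
}

# By default, scan Hitch's pems directory
gen_hitch_pem_entries "${PEM_DIR:-/etc/hitch/pems}"
```

Appending the resulting lines to hitch.conf at build time is what lets the container pick up whatever certificates you dropped into the tree.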
Obviously, conf/ also contains the VCL file(s) you want to bring along, but there’s nothing out of the ordinary here.
There are also three lines that may be a bit cryptic, but you probably have an idea of what they are about:
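Reconstructed from the port numbers discussed just after, they are presumably:

```dockerfile
EXPOSE 80
EXPOSE 443
EXPOSE 6085
```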
80 and 443 are familiar as the HTTP and HTTPS ports, and the more experienced Varnish users may have recognized 6085 as the varnish-agent port.
But what does the EXPOSE instruction do? Short answer: for us, here, nothing. These lines simply inform Docker that there are services listening on these ports, which is notably useful when using the -P switch of "docker run", as you’ll see below.
Lastly, we tell Docker what command the container should run by default, and here we finally use varnish-suite.sh:
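That final instruction is presumably a single CMD line like this (the script’s install path is an assumption):

```dockerfile
CMD ["/usr/local/bin/varnish-suite.sh"]
```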
One thing to grasp about Docker is that a container usually runs only one command, and while you can attach another command to a running container, everything should be started from one single command. So, we abide and create a little script that starts all three daemons and then waits forever (note the classic tail trick at the end) to keep the container running.
You can have a look at the script here; it may seem a bit convoluted, but that’s so we can reuse it later.
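In spirit, it boils down to something like this sketch (daemon flags and paths are illustrative, not the actual script):

```shell
#!/bin/sh
# Start the TLS terminator, the cache, and the agent as daemons.
hitch --daemon --config /etc/hitch/hitch.conf
varnishd -a :80 -f /etc/varnish/default.vcl
varnish-agent

# The classic trick: tailing a file that never grows keeps the
# container's main process alive forever.
tail -f /dev/null
```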
If you don’t want to bother with cloning the repository explicitly, building the image is as easy as:
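Docker can build directly from a Git repository URL; using the repository linked above, that is a one-liner (the URL placeholder and the varnish-suite tag are illustrative):

```shell
docker build -t varnish-suite <repository-url>
```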
Or you can get the repository and build from local data, which will allow you to change the files in the conf/ directory before actually building:
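Something like this, with the URL and directory name as placeholders:

```shell
git clone <repository-url>
cd <repository-directory>
# edit the files under conf/ if you like, then:
docker build -t varnish-suite .
```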
Now, running it is a bit more complicated, but just barely.
Because the container is going to be trapped inside its own little operating bubble, we need to create a few contact points. Namely, we need to have Varnish, its agent, and Hitch reachable from the outside world.
To do this we are simply going to route three ports from the host (1080, 1443, and 1085) to ports inside the container (80, 443, and 6085) using the -p switch of "docker run":
Putting all this together, we run:
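Assuming the image was tagged varnish-suite at build time, the command is presumably:

```shell
docker run -d -p 1080:80 -p 1443:443 -p 1085:6085 varnish-suite
```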
If you try to "curl localhost:1080", Varnish should reply with a 503, a sign that it’s running but hasn’t got access to a backend (duh, we never configured one!).
Using "curl https://localhost:1443 -k" should yield a similar result, a sign that Hitch is running too, and you can also check that the agent is up using "curl http://foo:bar@localhost:1085 -k" (authentication is handled via the /etc/varnish/agent_secret file).
In this post, I made the choice to embed the configuration directly into the image, and people may be outraged because "you need to separate data and code", but really, image deltas are cheap, both in time and space.
If you really want to externalize configuration, you can use the "-v" switch to mount the directories to their correct place, for example:
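For instance, mounting local copies of the Varnish and Hitch configuration directories over the ones baked into the image (paths and image tag are illustrative):

```shell
docker run -d -p 1080:80 -p 1443:443 -p 1085:6085 \
    -v "$(pwd)/conf/etc/varnish:/etc/varnish" \
    -v "$(pwd)/conf/etc/hitch:/etc/hitch" \
    varnish-suite
```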
The advantage is that you can change the configuration on your host machine and reload Varnish without needing to rebuild and re-run the container. It works beautifully, but now you need to pull your configuration from *somewhere* and manage dependencies between software versions and configuration versions and… well, that’s also a topic for another post.
