---
layout: post
title: "Docker Quicktip #4 - Remote volumes"
date: 2014-03-29 22:21:40.000000000 +00:00
categories: []
tags: []
status: publish
type: post
published: true
meta:
  dsq_thread_id: '2546228828'
  _edit_last: '2'
author:
  login: cpuguy83
  email: cpuguy83@gmail.com
  display_name: cpuguy83
  first_name: Brian
  last_name: Goff
---

This one builds off the idea of using data-only containers. Let's step into the unknown and add a second host into the mix.

What do you use when you need to share data with containers across hosts? The answer? Well... as you normally would... NFS (or insert your file share service of choice).

First, let's start up an NFS server... it just so happens I created an image for just this purpose. You should check out the github repo if you want the details on how it works... but essentially all you need to do is add each directory you want to share to the end of your run command. ** I should note, this nfs server is not secured or optimized, use at your own risk **

```shell
docker run -d --name nfs --privileged cpuguy83/nfs-server /tmp /home
```

Here, the /tmp folder and the /home folder are being shared over NFS. You can add as many dirs as you want, but they must exist in the server container.
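Since the exported paths are paths inside the server container, if you want to serve data that lives on the host you can bind-mount it in first and export the mount point. A quick sketch (the /srv/data host path here is just an example):

```shell
# Bind-mount a host directory into the container, then export it over NFS.
# /srv/data is a hypothetical host path; /exports is where it lands in the container.
docker run -d --name nfs --privileged -v /srv/data:/exports cpuguy83/nfs-server /exports
```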

Now let's fire up the nfs client:

```shell
docker run -d --link nfs:nfs --privileged -v /mnt cpuguy83/nfs-client /home:/mnt
```

Here, you specify the source mount and the mount point in the container as /path/on/server:/mount/point/in/container. So /home on the nfs-server is mounted to /mnt on the client. We are also linking the containers; what's important is that the alias on the client side is called nfs, since we use the env var generated by this link (NFS_PORT_2049_TCP_ADDR) to get the IP of the nfs server. Now, links don't currently work across docker hosts, so what good does this do? Not much locally (no point in using NFS on a single host)... but for multi-host you can either use the ambassador pattern or manually provide that env var with the IP of the nfs server in the run command.
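If you're curious what the link actually injects, you can list the NFS-prefixed env vars from inside any linked container (the exact set depends on the ports the server image exposes):

```shell
# Show the environment variables Docker generates for the "nfs" link.
# The client image reads NFS_PORT_2049_TCP_ADDR from this set to find the server.
docker run --rm --link nfs:nfs busybox env | grep '^NFS_'
```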

When you combine this with --volumes-from, things begin to get a bit more powerful.

```shell
# NFS Server
docker run -d --name foo -v /tmp ubuntu bash -c "echo foo > /tmp/foo"
docker run -d --name nfs-server --privileged --volumes-from foo cpuguy83/nfs-server /tmp
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nfs-server
10.0.1.100

# Remote NFS Client
docker run -d --name nfs-client --privileged -e NFS_PORT_2049_TCP_ADDR=10.0.1.100 -v /tmp cpuguy83/nfs-client /tmp:/tmp
docker run --rm --volumes-from nfs-client ubuntu cat /tmp/foo
foo
```

You'll notice you must use --privileged for both the nfs-server and the client. In the (near) future Docker will have finer-grained control over the capabilities available to a specific container, and we'll be able to add just the required ones here instead of opening up the full --privileged.
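As a sketch of what that could look like: mounting an NFS share requires the SYS_ADMIN capability, so once capability flags are available the client side might drop --privileged for something like the following (this assumes the image needs nothing beyond mount support; the server may need additional capabilities):

```shell
# Hypothetical: grant only SYS_ADMIN (needed for mount) instead of full --privileged.
docker run -d --name nfs-client --cap-add SYS_ADMIN \
    -e NFS_PORT_2049_TCP_ADDR=10.0.1.100 -v /tmp cpuguy83/nfs-client /tmp:/tmp
```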