Claim your right to free disk space: tame the Docker beast (2017 edition)

This is what usually happens: you're composing some badass unrelated blog post, and then in Slack, Skype or Microsoft Teams someone raises the Docker disk space issue:

  • Why the heck did your friggin' Docker take 50GB of my valuable SSD space again?

It is ALWAYS the case for teams just starting to play with Docker and integrating it into their processes, especially when transitioning from a VM-based (Vagrant) development workflow.

To be honest, I can't really give a precise answer to this question. The reasons can be numerous: running an outdated Docker version, writing data to the union filesystem instead of named volumes, using images that are inefficient by design, and so on. However, I will try to sum up what we have at our disposal today to ease the pain of transitioning to Docker.
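To illustrate the named-volume point: data written to a named volume lives outside the container's copy-on-write layer and survives container removal. A minimal sketch (the volume name app-data and the mysql:5.7 image are just examples, not something your project necessarily uses):

```shell
# Create a named volume and mount it where the service writes its data
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/mysql mysql:5.7

# Without the -v mount, the same writes would bloat the container's
# writable layer instead, and be much harder to track down later.
```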

Another reason for writing this post was the crazy amount of outdated advice floating around on Stack Exchange and elsewhere, suggesting complex bash commands and scripts for very basic routine tasks. Blindly copying and pasting them may, in the best case, simply not work on current versions of Docker and your OS - and in the worst, you have a good chance of screwing up something in your setup.

Please note that I'm omitting the -f or --force flag for all commands below on purpose. If a command cannot be completed because a container or volume is still attached, my best advice is to find the cause and take action accordingly.

Lots of unused images (and containers)

This is the most common case when you start playing around with Docker: a lot of docker pulls have flooded your disk space.
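Before deleting anything, it helps to see where the space actually went. Since Docker 1.13 there is a built-in overview command:

```shell
# Show how much space images, containers and local volumes occupy
docker system df

# Add -v for a per-image / per-container / per-volume breakdown
docker system df -v
```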

Figure out if you have any dangling (untagged) images first:

docker images --filter "dangling=true"

And if you do (and are convinced you don't need them), delete them all at once with:

docker rmi $(docker images -f "dangling=true" -q)
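If you're on Docker 1.13 or newer, the same cleanup is a single built-in command, so the shell substitution above is no longer necessary:

```shell
# Remove all dangling (untagged) images; asks for confirmation first
docker image prune
```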

Review the remaining images:

docker images -a

And remove those you don't need:

docker rmi [IMAGE ID 1] [IMAGE ID 2] ...

I would suggest not following the advice to run "magic" commands that remove all images unused by containers. Keep your house clean by reviewing this list from time to time and getting rid of unused stuff. However, if you feel fancy or have really a LOT of garbage in your images list, remove all of them, first stopping and removing the containers:

docker stop $(docker ps -a -q)

docker rm $(docker ps -a -q)

and finally the images:

docker rmi $(docker images -q)

When you run a container next time, Docker will realise the image it's based on is not there anymore and will pull it again. Nothing to lose but time. Sweet.

Containers should not occupy much space by design, so keeping the list produced by

docker ps -a

clean and organised is not directly related to the disk space problem. But just in case, you can remove those you don't need with:

docker rm [CONTAINER NAME OR ID]
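On Docker 1.13+, there is also a built-in command for sweeping away everything in the "Exited" state at once, if you'd rather not pick containers one by one:

```shell
# Remove all stopped containers; asks for confirmation first
docker container prune
```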

Volumes out of control

Orphaned volumes are another case. Did you work on a project with a few GB of images and files (or, even better, an extensive database), then switch to another one, stopping and removing all the containers and images? Unlucky you - those gigabytes of data are still hanging around somewhere in your system.

And while a named volume containing project files is rather easy to locate, with a database it sometimes gets tricky. So let's find all volumes first:

docker volume ls

I see no point in running docker volume inspect [VOLUME NAME], unless you've completely forgotten what a volume is for (a true story with unnamed volumes).
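If you suspect most of the clutter is volumes no longer attached to anything, you can narrow the list down with the same dangling filter the images command supports:

```shell
# List only volumes not referenced by any container
docker volume ls --filter "dangling=true"
```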

Remove the volumes you don't need:

docker volume rm [VOLUME NAME 1] [VOLUME NAME 2] ...

And finally, consider removing all unused volumes like so:

docker volume prune

Beware of the last command, though - a volume may be detached from all containers, but that doesn't mean it's garbage. Think first about whether you intended to attach it to another container later; otherwise, imminent data loss will be a nice side effect.
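One way to double-check before pruning is to ask Docker which containers (running or stopped) still reference a given volume. The volume name app-data below is just an example:

```shell
# List all containers, including stopped ones, that use the volume
docker ps -a --filter "volume=app-data"
```

If this prints nothing for a volume you recognise as disposable, it's a much safer candidate for removal.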

So again - don't rely on magic commands carelessly; it's always good practice to keep your house clean in an entirely controlled manner.

In case an unnamed volume was created together with a container, it's a good idea to remove them together:

docker rm -v [CONTAINER NAME OR ID]

Extra reasons

Apart from the two primary reasons described above, there are a few extra things to check.

Usage of oversized images

The image may do its job well but be inefficiently designed, abusing the "copy on write" technique used by the Docker filesystem. You can easily spot this by inspecting the image's Dockerfile, but if you run docker images and see something like > 1GB used by a trivial image, that's a good sign to look for an alternative.
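To see which Dockerfile instructions account for the bulk of an image without hunting down its Dockerfile, you can inspect it layer by layer (myimage:latest is a placeholder name):

```shell
# Show the image's layers with the size each instruction added
docker history myimage:latest
```

A handful of huge layers from RUN or COPY steps usually points straight at the waste.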

And pay attention to the "Size" column - it shows the size of the image and all its parents. So if two images from the same repository but with different tags occupy 500MB each, it doesn't mean you have 1GB of space physically wasted; that would only be the case if you exported both of them to the local filesystem. So do not overreact.

Running outdated Docker version

If running docker version gives you something very different from what you see at the top of this page https://docs.docker.com/docsarchive - consider updating. If you're running native Docker on Mac or Windows, do it without even thinking, as those still have a lot of issues, many related to resource usage.

Improper usage of run command

By default, a container's filesystem persists even after the container exits. This means that if you run a short-lived process (think of Drush or Drupal Console running as a container) that writes to the container's filesystem layer, and run it many times, you get one more leftover container filesystem with each run.

The solution is to run such processes with the --rm flag, like so:

docker run --rm -v /foo busybox top

After the container's job (the process) is finished, the container and all unnamed volumes associated with it are automatically destroyed.

If it's too late, however, please refer to the first section of this blog post on how to clean up the containers mess.

That's it for now. Anything I've missed or left unclear? Let me know in the comments!

Igor Kandyba
Minsk, Belarus

I'm building Drop Guard - the ultimate continuous security solution for Drupal