As powerful hardware has become more and more of a commodity, running multiple virtual machines on a single piece of hardware has become an industry norm. From web hosting to cloud computing, many services run in virtualized environments. Alongside desktop virtualization solutions like VirtualBox, there are quick provisioning solutions like Vagrant. The problem with a virtual machine is that it needs to emulate every aspect of the guest computer, including all the system RAM, which is allocated exclusively to the virtual machine. As a result, virtualization can be resource hungry.
At the other end of the spectrum is a “chroot” environment, which changes the underlying root directory for a process so that it runs in its own environment and has no access to other files on the host operating system. However, there is no virtualization at all, just a different directory tree.
The mid-point between these two systems is a container. Containers offer many of the advantages of a virtual machine but without the high resource overhead. Containers are more functional than chroot environments in that there is some virtualization. For example, processes created in a container are assigned IDs (PIDs) separately from those in the host OS. In technical terms, the container has its own PID namespace. In fact, containers also have their own namespaces for the networking subsystem and for inter-process communication (IPC). That means a container can run network services, like an SSH server or a web server.
From the outside, the container looks like a virtual machine with its own IP address and its own networking services, but on the inside the container uses much more of the host operating system than a virtual machine does, more like a “chroot” environment. For example, you can run a CentOS container on an Ubuntu host. The commands and files are all from CentOS, but if you ask the container which kernel it is running, it will report the Ubuntu host's kernel, because the container runs on the kernel of the host OS.
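You can see the shared kernel for yourself by comparing the kernel version reported by the host with the one reported inside a CentOS container (a quick check, assuming docker is installed and the centos image is available):

```shell
# Kernel version as reported by the host
uname -r

# Kernel version as reported inside a CentOS container --
# it is the same, because the container runs on the host's kernel
sudo docker run centos uname -r
```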
Docker is a container-based virtualization framework that lets you create containers holding all the dependencies for an application. Each container is kept isolated from the others; the containers share the host's kernel, but not each other's files or processes.
To install Docker on a 64-bit Ubuntu 14.04 system, run the following commands:
sudo apt-get update
sudo apt-get install docker.io
sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
There is an existing Ubuntu package called docker, which is a system tray for KDE3/GNOME2. To avoid confusion, the container runtime is called docker.io. The last command creates a link from “/usr/local/bin/docker” to “/usr/bin/docker.io”, which lets you type docker on the command line rather than docker.io.
Note: Docker.io is also available for other distros. If you are not using Ubuntu, see the installation instructions for your distribution in Docker's documentation.
To start a container and execute a shell inside it, run:

sudo docker run -i -t ubuntu /bin/bash
The “-i” flag makes the session interactive, and the “-t” flag tells docker to allocate a pseudo-terminal. The “ubuntu” parameter tells docker to run a container based on Ubuntu 14.04, and “/bin/bash” is the command that should be run once the container is up, i.e. run the Bash shell.
When docker runs, it checks whether the desired base image has been previously downloaded. If it hasn't, it will download the image from index.docker.io, which is also the site to visit to see which images are officially supported by docker.
To download other images without starting a container, use the “docker pull” command. For example, to download the CentOS base image use:
sudo docker pull centos
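Once an image has been pulled, it is stored locally. To check which images are available on your machine, you can list them:

```shell
# List locally stored images with their repository names, tags and sizes
sudo docker images
```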
You can also run a single command in a container and then let the container exit. Use the following command to run “ps aux” inside the CentOS container:
sudo docker run centos ps aux
When a container shuts down, all changes are lost. The advantage of this approach is that when a container starts, it is in a known state. This is essential for test environments and for build services, etc. It is also important for running cloud services, as the container can be quickly reset and rebooted into a stable state.
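A quick way to see this ephemerality in action (a sketch, assuming the ubuntu image is already downloaded) is to create a file in one container and then look for it in a fresh one:

```shell
# Create a file inside a container; the container exits when the command finishes
sudo docker run ubuntu touch /tmp/marker

# Start a fresh container from the same image -- /tmp/marker is gone,
# because each run starts from the unmodified image
sudo docker run ubuntu ls /tmp
```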
However, it also means that any configuration performed or any files created in the container will be lost. The solution is to create a new image with all your changes. Try the following commands:
sudo docker run ubuntu apt-get install -y nmap
sudo docker ps -l
The first command starts a container and installs nmap inside it. The second command lists the most recently created container (-l), even if it isn't running.
You can now create a snapshot of that container and save it to a new image:
sudo docker commit 1b498c2d502c ubuntu-with-nmap
“1b498c2d502c” is the ID of the container as listed by the “docker ps -l” command (your container's ID will differ). Now if you start a shell for the ubuntu-with-nmap container, it will have the nmap command pre-installed.
sudo docker run -i -t ubuntu-with-nmap /bin/bash
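To confirm the snapshot worked without opening a shell, you can run nmap directly in the new image (assuming it was committed with the name used above):

```shell
# nmap is baked into the ubuntu-with-nmap image, so it is available immediately
sudo docker run ubuntu-with-nmap nmap --version
```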
There is plenty of information about docker in docker.io's documentation, and there are also several example setups explained, including how to perform common tasks like running a Python web app and running an SSH service.
If you have any questions about docker, there is a strong docker community which should be able to help. You can also ask questions in the comment section below and we will see if we can help.