Docker has been with us for many years now and quickly became a dominant technology.
In this post, we will explain the benefits of using Docker and how it can speed up your development.
This post will be divided into three main topics:
- What is Docker? – a quick introduction to Docker and the problem it solved.
- Why should we use Docker? – concrete reasons why you would benefit from using Docker in your day-to-day work.
- Usage example:
- Setting up a Postgres database in seconds
- Setting up (and accessing) a CentOS machine in seconds
What is Docker?
Docker is a platform that lets you “build, ship, and run any app, anywhere.” It does so by solving one of the most expensive aspects of software: deployment.
Before Docker, a typical development pipeline would have consisted of various technology stacks: VMs, configuration management tools, package management tools, dependency management tools, etc.
The complexity of these systems required specialized teams to build and maintain the pipeline as the involved technology stack was too broad to handle. A typical pipeline would resemble the figure below:
(The above is just an example; you could easily replace Vagrant with VirtualBox or virsh / Virtual Machine Manager, Jenkins with TeamCity / Travis CI, and Ansible with Chef, etc.)
Docker has unified the way we deploy software, turning the above diagram into something more like this:
It has arguably eliminated the most infamous phrase in software development: “It works on my machine…”
Why should we use Docker?
- Less error-prone – since you build once and use that same build across all environments, reproducing a problem becomes trivial because you have eliminated environmental unknowns.
- It lets you experiment with new software easily – think virtual environments. You can quickly set up a new environment without disrupting your current one.
- It supports offline work – since Docker containers have a low footprint on a system (this will be explained in a future post), you can create several containers to emulate a full deployment right on your own laptop and work on the go.
- It saves you money – by replacing VMs, your software footprint becomes extremely small and thus its attached cost goes down. You are able to run much more on your own laptop rather than on dedicated hardware in some lab / cloud.
- It enables continuous delivery (CD) – since you only build once, you have better control of the environment state, and builds become easier to reproduce / replicate than with traditional methods (e.g. using Ansible / Chef). As a result, the whole process becomes significantly more manageable, allowing you to focus on advanced techniques such as blue / green deployments.
Before we dive into some concrete examples, it’s important to discuss some key concepts of Docker, mainly “images” and “containers”.
Docker Image – If you are familiar with object-oriented principles, images are to containers as classes are to objects. An image can be seen as a class definition, where containers are objects of that class. From a single Docker image, we can create several unique containers, such that changing one container has no effect on the others.
Another way to look at it is as a program and a process: when you execute a program, it creates a process. Analogously, a Docker container can be viewed as an execution of a Docker image.
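The analogy above can be sketched with a few commands (the image and container names here are just illustrative examples, not part of any required setup):

```shell
# Pull one image, then start two independent containers from it.
docker pull nginx
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Both containers come from the same image, but changes made inside
# web1 (files written, packages installed) have no effect on web2.
docker ps --filter "name=web"
```

Running `docker ps` should show both containers, each with its own ID, even though they share a single image.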
Now that we have a grasp of what Docker images are, it is time to take that knowledge and put it to the test.
There are two ways we can create an image:
- download a pre-made image (current post)
- build an image locally (future post)
Docker has a big community of developers and maintains a public registry of images called “Docker Hub” to which anyone can contribute.
Since it is a public repository where anyone can upload an image, it is best to use only official images, which are images approved by Docker as authentic images created by the maintainers.
Setting up a Postgres database in seconds
Since we will be using Docker Hub to get our image, let’s look for an official Postgres image:
Note the “official image” label next to the image; always look for an official label when downloading images from Docker Hub, as unofficial images might contain malicious software.
Now, let’s look at the available versions and choose one; for the sake of this example, I will choose 9.6.
Next, let’s download our Postgres 9.6 image.
Docker uses a command called “pull” to download images from a registry. Unless a registry is explicitly provided, Docker will search Docker Hub for the image in question:
When pulling images, Docker expects a “tag” to identify which image to use. The same image can have several tags. For example, if we look at our postgres image on Docker Hub, we will see many versions, some with several sub-versions on the same line; this means that whoever published the image tagged it with several labels. For instance, if we take the line:
- 9.6.11, 9.6, 9 (9.6/Dockerfile)
This tells me that I can run: “docker pull postgres:9.6.11”, or equally “docker pull postgres:9.6” or “docker pull postgres:9”.
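Those three tags translate directly into equivalent pull commands:

```shell
# All three tags point at the same underlying image:
docker pull postgres:9.6.11
docker pull postgres:9.6
docker pull postgres:9

# List the downloaded image(s) to confirm:
docker images postgres
```

Only the first pull actually downloads data; the others resolve to layers already on disk.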
After running the command, you should see something like this:
Note that several “pull”s are completed; these correspond to “layers”, which are what a Docker image is comprised of (the subject of a future blog post).
Now that we have our image, let’s spin up a Postgres database:
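A minimal sketch of the run command (the container name and password below are placeholder values you would choose yourself):

```shell
# -d runs the container in the background, -e sets the superuser
# password, and -p maps the container's port 5432 to localhost:5432.
docker run -d \
  --name my-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -p 5432:5432 \
  postgres:9.6
```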
That’s it! You now have a Postgres server running, and you can access it at localhost:5432.
Note that every official image on Docker Hub has a detailed explanation of how to use that image, so if you are looking for more fine-grained customization, it’s always good to check out the usage instructions posted on the image page:
Setting up a CentOS machine in seconds
Following our last example, let’s spin up a CentOS machine.
As with the Postgres image, we will look for the official CentOS image and run our “docker pull” command to download it to our computer.
Once we finish downloading the image, let’s run the container and access it so we can experiment with installing packages (just like on a VM).
Note that, in order to access the container, we have added a new flag, “-it”, which tells Docker to open an interactive terminal, while the “bash” at the end tells Docker which command to run (which opens up a shell).
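Putting the two steps together, the pull and run commands look like this:

```shell
# Download the official CentOS image from Docker Hub:
docker pull centos

# -i keeps STDIN open and -t allocates a terminal (together: -it);
# "bash" is the command executed inside the container.
docker run -it centos bash
```

After the second command, your prompt changes to a shell running inside the container.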
Once you run this command, you should be inside the container and free to explore the OS.
Feel free to install, delete, or do whatever you want in the container; Docker creates a sandboxed environment for you. Once you are done, you can simply exit the container using either the “exit” command or “Ctrl + D” on your keyboard.
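One thing worth knowing: exiting the shell stops the container, but it still exists on disk. A couple of housekeeping commands (the container name below is a placeholder for whatever name or ID `docker ps -a` shows you):

```shell
# List all containers, including stopped ones:
docker ps -a

# Remove a stopped container by name or ID:
docker rm my-container
```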
This post was an introduction meant to give an overview of Docker and why you should use it in your day-to-day development.
In future posts, we will take a deeper dive into Docker images, volumes, and networks, and give some real-life examples of how to use Docker in production.