Definition from Wikipedia
Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating system–level virtualization on Linux. Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.
What about Docker?
Docker reminds me of older isolation container technologies, such as Solaris Zones, *BSD Jails, or even Linux OpenVZ. In fact, it is the same underlying technology: Docker builds on LXC, which uses Linux kernel containment features similar to those behind OpenVZ.
Docker is newer, better, and simpler than the old alternatives. It provides you with an API, a container repository, and an easy CLI management interface. These points make Docker really powerful and modern.
Docker is very useful in development life cycles: it makes deployments easier and it scales out in distributed computing environments.
Nowadays, Docker only runs Linux applications (on Linux, Windows, or OS X hosts), but in the near future it will run Windows applications on Windows boxes. This article talks about Windows containers. We have to follow this technology very closely, because it is the future of virtualisation and the application life cycle.
Starting with Docker
Ok, let's start with Docker basics. The best way to start is using the DigitalOcean Docker application, where Docker comes preinstalled, so the first step is to create a DigitalOcean account and deploy a Docker application in your preferred location.
The second step is to create a Docker user at http://www.docker.com. You are going to use this user to manage your Docker repos. By default there is one private repo for free, but you can create unlimited public repos; every five additional private repos cost $7.
Be careful, this page doesn't work well with Safari.
Basic commands
pull (retrieve a repo from the internet)
docker pull nginx #it is going to install the Nginx image and its dependencies
images (list the images downloaded or created)
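A minimal sketch of how pull and images fit together on a Docker host (the exact output columns may differ between Docker versions):

```shell
docker pull nginx   # download the Nginx image from the public registry
docker images       # list local images: repository, tag, image ID, created, size
```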
run (execute a container)
docker run -p 9000:9000 -i -v /root/ab:/var/www/default -t pollosp/php-fpm:modificado-ww /bin/bash
- -p publishes a port; in this case we are publishing port 9000 of the container to port 9000 on the host machine.
- -i is interactive mode; usually we use it to configure the container via a bash shell, as in the example, but the regular flag here is -d, or background mode.
- -v mounts the directory /root/ab into /var/www/default. It is very useful for providing dynamic content, such as web files, to the container; it also lets you manage the container's config files, its log files, etc.
- -t allocates a pseudo-TTY for the interactive mode.
- Finally, with /bin/bash we are executing a bash shell to get an interactive session; it is not necessary in -d (background) mode.
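Putting those flags together, here is a simpler hypothetical run against the official nginx image (the host path /root/site and the ports are just example values):

```shell
# publish container port 80 on host port 8080, mount local web files,
# and open an interactive shell inside the container
docker run -p 8080:80 -v /root/site:/usr/share/nginx/html -i -t nginx /bin/bash
```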
kill
It kills the container. Remember that you are going to lose all data in the container.
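A sketch of the kill workflow (the container ID here is illustrative; use the one reported by docker ps):

```shell
docker ps                 # list running containers and their IDs
docker kill d42c13946f3b  # stop the container immediately (SIGKILL)
```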
commit
docker commit d42c13946f3b XXXX/php-fpm:modificadoZZZ
When you kill a container, all the information saved in it disappears; the way to save it is by doing a commit. So if you make any configuration change or install an update in the container, you have to do a commit before stopping it.
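The commit-before-stopping routine described above might look like this (the ID and repo name follow the placeholders used earlier):

```shell
docker ps                                        # find the running container's ID
docker commit d42c13946f3b XXXX/php-fpm:modificadoZZZ  # snapshot its filesystem as a new image
docker stop d42c13946f3b                         # now it is safe to stop the container
```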
push
docker push XXXX/php-fpm:modificadoZZZ
It uploads your committed image to your personal repo on docker.com.
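Before the first push you need to authenticate against docker.com with the user created earlier; a sketch:

```shell
docker login                           # enter your docker.com username and password
docker push XXXX/php-fpm:modificadoZZZ # upload the committed image to your repo
```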
Dockerfile
This file is the way to automate image builds and assign parameters to the container. DigitalOcean has good documentation about this file at https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images
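A minimal hypothetical Dockerfile, just to illustrate the idea (the site directory and image name are example values; the tutorial above covers the full syntax):

```dockerfile
# build an Nginx image containing our own static site
FROM nginx
# copy local files into the image at build time
COPY site /usr/share/nginx/html
# document the port the service listens on
EXPOSE 80
```

You would build it with docker build -t XXXX/mysite . from the directory containing the Dockerfile.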
These commands are just a small sample of everything you can do with Docker; I am just trying to show you how Docker works. Anyway, the documentation provided by Docker is good and there is a great community working very hard behind it, so explore :).