From LXC to docker-machine and cloudery

Attention: this post gives a very quick and simplistic (but functional) overview of what the title promises.

 

In the beginning

Linux is a fantastic OS; it offers more than we imagine and still manages to keep getting better. One of its features is cgroups (control groups):

which provides a mechanism for easily managing and monitoring system resources, by partitioning things like cpu time, system memory, disk and network bandwidth, into groups, then assigning tasks to those groups

Let’s say we create a cgroup with 50% of the CPU, 20% of the memory, 2% of the disk, and a virtual network with 100% of the bandwidth; we can then run our application under that cgroup’s restrictions.
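
To make this concrete, here is a minimal sketch using the cgroups v1 sysfs interface (paths vary by distribution, and the group name myapp is just an illustration):

    # create a cgroup and cap it at 50% of one CPU (quota/period are in microseconds)
    $ sudo mkdir /sys/fs/cgroup/cpu/myapp
    $ echo 50000 | sudo tee /sys/fs/cgroup/cpu/myapp/cpu.cfs_quota_us
    # limit the group to roughly 20% of a 1 GiB machine
    $ sudo mkdir /sys/fs/cgroup/memory/myapp
    $ echo 200M | sudo tee /sys/fs/cgroup/memory/myapp/memory.limit_in_bytes
    # move the current shell into both groups; child processes inherit the limits
    $ echo $$ | sudo tee /sys/fs/cgroup/cpu/myapp/tasks
    $ echo $$ | sudo tee /sys/fs/cgroup/memory/myapp/tasks
    $ ./my-application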

Another cool feature of Linux is LXC (Linux Containers):

which combines the kernel’s cgroups and support for isolated namespaces to provide an isolated environment for applications

Now we’re able to provide a Linux machine capable of running multiple applications in isolation (as if there were a separate OS for each application). This sounds like something we already achieved with virtualization (app-level, OS-level, CPU-level, and so on), but faster, cheaper, and without the overhead of running multiple kernels.

[Figure: containers vs. traditional virtualization]
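
As a quick illustration, this is roughly what working with LXC directly looks like (assuming the lxc userspace tools are installed; the container name and image are placeholders):

    $ sudo lxc-create -n myapp -t download -- -d ubuntu -r trusty -a amd64
    $ sudo lxc-start -n myapp
    $ sudo lxc-attach -n myapp -- ps aux   # only this container's processes are visible
    $ sudo lxc-stop -n myapp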

Docker

Docker is:

an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux

This is what Docker is, but remember: it is not perfect.

That “additional layer of abstraction” is the interesting part: Docker lets you create and deploy your application within a container (an isolated, resource-managed place to run processes) in a standardized way.
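
For instance, a single command is enough to get an isolated shell inside a container (the image tag is just an example):

    $ docker run -it --rm ubuntu:14.04 bash   # pulls the image if needed, removes the container on exit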

Docker machine, compose and so on

Life almost always gets easier with abstractions: we (developers) don’t worry about how disks work (drivers) or even how a packet leaves your PC and reaches another one (we should know how this works :P). Our productivity has increased a lot since we started relying on these abstractions.

And the same goes for the Docker ecosystem: as we use it more often, we create best practices, solve issues with workarounds, and so on, and some of these end up becoming part of the Docker solution itself.

  • docker-machine: an application needs a machine to run on, regardless of whether it’s local, physical, virtual, or in the cloud.
  • docker-compose: an application needs a way to declare its dependencies, whether packages or distinct services such as a datastore.

Step 0: get ready

  1. If you’re on macOS/Windows, you’ll need to install VirtualBox or VMware.
  2. If you’re on macOS/Windows, install the Docker Toolbox; otherwise, apt-get them all (a quick sanity check follows this list).
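
Once everything is installed, these three commands should each print a version:

    $ docker --version
    $ docker-machine --version
    $ docker-compose --version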

Step 1: create the app

Let’s say we’ll create a Rails 4 application with MongoDB.
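
A minimal sketch of generating the app (the app name myapp is a placeholder; we skip ActiveRecord since Mongo would be wired in through a gem such as Mongoid):

    $ gem install rails -v '~> 4.0'
    $ rails new myapp --skip-active-record
    $ cd myapp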

Step 2: declare the app and its dependencies

We declare our dependencies using two files: docker-compose.yaml and a Dockerfile. In the Dockerfile we describe how our application’s image should look (i.e., all the needed packages and such).
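
A minimal Dockerfile sketch for such an app (the base image tag and the /app path are assumptions):

    # Dockerfile
    FROM ruby:2.2
    # nodejs provides a JavaScript runtime for the Rails asset pipeline
    RUN apt-get update && apt-get install -y build-essential nodejs
    WORKDIR /app
    # install gems first so this layer is cached between builds
    COPY Gemfile Gemfile.lock ./
    RUN bundle install
    COPY . .
    CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]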

Then we can move on to its broader service dependencies, like the database or even the web server. We’ll use MongoDB as the datastore and nginx as the web server.
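
And a docker-compose.yaml sketch tying the services together (service names, ports, and images are assumptions; db is the service name reused by the troubleshooting commands at the end of this post):

    # docker-compose.yaml (v1 file format)
    web:
      build: .
      ports:
        - "3000:3000"
      links:
        - db
    db:
      image: mongo
      volumes:
        - /data/db
    nginx:
      image: nginx
      ports:
        - "80:80"
      links:
        - web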

Step 3: deploy it locally

We need to create a machine for the app and then run it there.
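
A sketch of the local workflow (the machine name dev is an assumption):

    $ docker-machine create -d virtualbox dev
    $ eval "$(docker-machine env dev)"   # point the docker client at the new machine
    $ docker-compose build
    $ docker-compose up -d
    $ docker-machine ip dev              # then browse to http://<that ip>:3000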

Step 4: deploy in the cloud

The same way we created a machine to run our app locally, we can create any number of machines to run this application, even in cloud environments such as DigitalOcean, AWS, Azure, Google, and so on.
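
For instance, with the DigitalOcean driver it’s the same workflow, just a different driver (the token variable and the machine name staging are assumptions; staging is the name used in the troubleshooting section below):

    $ docker-machine create -d digitalocean --digitalocean-access-token $DO_TOKEN staging
    $ eval "$(docker-machine env staging)"
    $ docker-compose build
    $ docker-compose up -d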

That’s it 🙂 For a more thoroughly explained Rails app Docker workflow, read this great post, or check out a fresh new example of a docker-compose.yaml.

// TODO: some things

Let’s suppose we just created a staging environment and another developer comes to help us: it seems there is no official way to share our created machine (Amazon, Google App Engine, Azure, DigitalOcean…) with team members. There are some workarounds, but it would be nice to see this become a feature.

Troubleshooting

  • Useful commands for troubleshooting, exploration, and debugging:
    • To enter a machine: $ docker-machine ssh staging (either local or cloud)
    • To enter a container: $ docker-compose run db bash (either local or cloud)
    • To list files within a container: $ docker-compose run db ls -lah data/db
    • To edit/add/remove data on Mongo: $ mongo --host DOCKER_IP
  • If you face an error like E: Failed to fetch … during docker-compose build, try it again.
  • If you face an error like “Error creating machine: Error running provisioning: Unable to verify the Docker daemon is listening: Maximum number of retries (10) exceeded” during any deployment, try downloading the Docker Toolbox again and reinstalling it.

Google is your friend.

