11 Nov

Speeding up your development with docker-compose

Hello, my name is Gloria and I am now part of Sentinel.la as a software engineer. This is my very first post, and I want to share with you how we avoid inconsistencies between environments to speed up our development process.

As a startup, every opportunity to save time is priceless. We faced a challenge: how do we stop wasting time trying to run our app in different environments?

Consider this scenario:

– Guillermo has installed some libraries in our dev environment (a virtual machine), and we are building application-specific functionality that relies on something only available in those libraries.

– Francisco has installed other libraries on our staging machine because he is working on another project with different code. But Guillermo wants Francisco to run his application's code in that different environment, so Francisco has to install the same libraries, or the application will fail.

This scenario disappears with Docker. To run the application, Guillermo creates a Docker container with the application and all the resources needed, and passes that container to Francisco.

Francisco, having Docker installed, can run the application through the container, without having to install anything else. =)

Docker now allows us to focus on developing our code without worrying about whether that code will work on the machine where it will run.

“Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.”

What are the benefits?

– Rapid application deployment.
– Portability.
– Version control for encapsulated application images.
– Easier maintenance.

It's normal for a new developer, when taking on a first task, to have to install all the development tools needed to work on it, and it's easy to waste a lot of time doing so.

In our case, Docker solves the portability problem: we can prepare a working environment quickly, and that is why we use containers, to make our system portable. Our team can work on any OS.

How do we achieve this?

This is a simple example of starting an environment with Flask, InfluxDB, RabbitMQ and Celery, using docker-compose.

Source code for this part can be found here on GitHub.


Docker Compose is a tool for Docker that lets us start multiple containers from a single script in YAML format. In that script you can link containers, execute commands, build images and more, and docker-compose then brings up the complete environment for you.

Requirements
– Docker
– Docker Compose

Directory
This is the structure of my project; everything goes inside the “project-one” directory.

[screenshot: contents of the project-one directory]

entrypoint.sh file
This file runs in the container's context, inside WORKDIR.

In this case it starts the Celery worker for the application in the background and then runs the app.

#!/bin/bash
# Show the working directory and its contents (useful when debugging the image)
echo $PWD
ls
# Start the Celery worker in the background, then launch the Flask app
celery -A celery worker --loglevel=info &
python -u app.py

exec "$@"
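
The post doesn't show the Celery application module that "celery -A celery worker" loads, so here is a minimal hypothetical sketch (not the repo's actual code). The broker host "amq" is the link alias given to the rabbitmq service later in docker-compose.yml, and admin/admin matches the RABBITMQ_DEFAULT_USER/PASS environment variables; the example task is purely illustrative.

# Hypothetical Celery application module -- an assumption, not taken from the repo.
# "amq" is the rabbitmq link alias from docker-compose.yml; admin/admin matches
# RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS.
from celery import Celery

celery = Celery('tasks', broker='amqp://admin:admin@amq:5672//')

@celery.task
def ping():
    # Trivial task so the worker has something to execute; replace with real work.
    return 'pong'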

Create a Dockerfile

FROM python:2.7
MAINTAINER Gloria Palma "gloria@sentinel.la"
ADD . /app
WORKDIR /app/ 
RUN pip install -r requirements.txt
ENTRYPOINT ["./entrypoint.sh"]
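
The Dockerfile installs dependencies from requirements.txt, which isn't listed in the post. For the stack described here (Flask, Celery and the InfluxDB Python client) it would contain something along these lines; the exact packages and versions in the repo may differ:

flask
celery
influxdb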

Create a docker-compose.yml

version: "2"
services:
   db:
     image: postgres
     environment:
       - POSTGRES_USER=admin
       - POSTGRES_PASSWORD=admin
   rabbitmq:
     image: rabbitmq:3.5.3-management
     ports:
       - "8080:15672"  # management port (guest:guest)
       - "5672:5672"   # amqp port
       - "25672:25672" # cluster port
     environment:
       - RABBITMQ_NODENAME=rabbit
       - RABBITMQ_DEFAULT_USER=admin
       - RABBITMQ_DEFAULT_PASS=admin
   influxdb:
     image: tutum/influxdb:latest
     container_name: influxdb
     environment:
       - ADMIN_USER=admin
       - INFLUXDB_INIT_PWD=admin
       - PRE_CREATE_DB=sentinella
     ports:
       - "8083:8083"
       - "8086:8086"
       - "8090:8090"
   api:
     build:
      context: .
      dockerfile: Dockerfile
     ports:
       - "5000:5000"
     depends_on:
       - db
       - rabbitmq
       - influxdb
     links:
       -  db:db
       -  rabbitmq:amq
       -  influxdb:influx
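
The post's app.py isn't shown, so here is a minimal hypothetical sketch that fits this compose file (not the repo's actual code). The hostname "influx" is the influxdb link alias, the credentials and database name come from the environment variables above, and binding to 0.0.0.0:5000 matches the published port of the api service; the route and query are only illustrative.

# Hypothetical app.py -- an assumption, not the repo's actual code.
from flask import Flask, jsonify
from influxdb import InfluxDBClient

app = Flask(__name__)

# "influx" is the link alias for the influxdb service; admin/admin and
# "sentinella" come from ADMIN_USER, INFLUXDB_INIT_PWD and PRE_CREATE_DB.
influx = InfluxDBClient(host='influx', port=8086,
                        username='admin', password='admin',
                        database='sentinella')

@app.route('/')
def index():
    # List the databases just to show the link between containers works.
    return jsonify(databases=influx.get_list_database())

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the app is reachable through the published port 5000.
    app.run(host='0.0.0.0', port=5000)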

Run, build and ship

The Dockerfile is a script for creating a custom image, and the image must be built first. To build it with docker-compose, run this command:

$ docker-compose build

To run the containers:
$ docker-compose up

Use docker inspect to look up the container's IP:

$ docker inspect name-of-container
We can see the IP in the NetworkSettings section of the output.

Open your browser to see your app running; in my case it's a simple Flask app.

Source code for this part can be found here on GitHub.

Conclusion

Containers are a fast way to set up development environments. When you no longer need an environment, you can destroy it and rebuild it later.
