31 Mar 2017

Brand new Sentinel.la plugins to control your entire stack


Sentinel.la is a fast way to manage OpenStack, helping you to reduce OpenStack’s learning curve.

Yes, we love and specialize in OpenStack, but what happens if you need to do more? What if you need help easing your journey as a DevOps engineer, sysadmin, developer, or anyone interested in getting different data about your servers or applications? We kept asking ourselves how we could help you, and that's how the plugins idea was born.

Plugins are Python components (at this moment) that let you drop in a few lines of code to extract server metrics per process or component. A plugin can also monitor something inside a server and reduce it to a single piece of data: the number of volumes in a Docker deployment, a Ceph health check, whether a MySQL server is running, and so on. This information is displayed in the Sentinella App.

This way, users collect and store only the information relevant to their particular needs.
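To make this concrete, here is a minimal sketch of what a plugin's metric collector could look like, using the "is MySQL running?" example above. The function name, the return format, and the use of pgrep are illustrative assumptions; the official plugin template defines the real interface.

import os
import subprocess

# Hypothetical collector: reports whether a MySQL server process is running.
def get_stats():
    with open(os.devnull, 'w') as devnull:
        # pgrep exits with status 0 when at least one matching process exists.
        status = subprocess.call(['pgrep', '-x', 'mysqld'], stdout=devnull)
    # One piece of data, as described above: 1 if mysqld is running, else 0.
    return {'mysql_running': 1 if status == 0 else 0}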


Sentinel.la aims not only to bring you OpenStack, but also to be an efficient tool. We've seen OpenStack deployments that rely on components like PostgreSQL, RabbitMQ, and Apache, all of which also need to be monitored for faster and better troubleshooting.

That's why you can now build your own plugins and share them with the community.


How do plugins work in Sentinel.la?

Sentinel.la takes your metrics and saves this information through our API, all via the Sentinella Agent.

Our components are an abstraction of Sentinel.la's functionality, built so new features can plug in.


A plugin runs as a task in the Sentinella Agent, which pushes the new metrics to the Sentinella API. On arrival, the API validates that they come from a valid plugin. The following diagram shows the internal workflow.

[Diagram: internal plugin workflow]
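For intuition, the agent-side flow might look roughly like the sketch below. Everything in it, the endpoint URL, the payload fields, and the validation via plugin_key, is a hypothetical stand-in rather than the real Sentinella API.

import time

import requests

API_URL = 'https://api.example.com/v1/metrics'  # hypothetical endpoint

def push_metrics(plugin_name, plugin_key, collect):
    # Run the plugin's collector and push its metrics to the API.
    payload = {
        'plugin': plugin_name,
        'plugin_key': plugin_key,  # the API would use this to validate the plugin
        'timestamp': int(time.time()),
        'metrics': collect(),      # e.g. {'mysql_running': 1}
    }
    # The API rejects the payload if it does not come from a valid plugin.
    response = requests.post(API_URL, json=payload)
    response.raise_for_status()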

Sentinel.la has a plugin evaluation process, which starts when you register a plugin release.

We need to make sure every Sentinel.la plugin follows the rules. To keep control and quality, we have an approval process, but it is a simple one:

  1. Register release.
  2. Evaluation.
  3. Approve.

Step 2 consists of a code review to check that the rules have been followed.

What steps do I need to follow to add my plugin into Sentinel.la?

  1. Register your plugin at Sentinella.
  2. Get your plugin_key.
  3. Download the plugin template.
  4. Put your code logic into your plugin, following the specs.
  5. Make a release.
  6. Install the plugin on your server with the Sentinella Agent.

For more information click here.

You can also use plugins built by the Sentinella community.

 

What are the rules?

  1. Plugins must be registered with Sentinel.la.
  2. Follow the documentation to build your Sentinel.la plugin.
  3. Enjoy.

How do I install a plugin?

Piece of cake:

$ sentinella install <plugin_name> <plugin_version>
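For example, using the Docker plugin from the configuration shown below (the version number here is made up; use a real release):

$ sentinella install sentinella-docker 1.0.0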

How do I configure a plugin?

Once the plugin is installed, open /etc/sentinella/sentinella.conf. This file has a configuration section for plugins: a single object called plugins.
In this section, you must add your plugin.

Section example:

"plugins": {
        "sentinella.openstack_logs": [
            "get_openstack_events"
        ],
        "sentinella.metrics": [
            "get_server_usage_stats"
        ],
        "sentinella.test": [
            "get_stats"
        ],
        "sentinella.sentinella-docker": [ <----- Name of package
            "docker_stats" <----- Name method.
        ]
    },

If you have any questions about the package name, class name, etc., you can look in /usr/share/python/sentinella/lib/python2.7/site-packages/sentinella/, where all installed plugins live.
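If you just want a quick look at what is installed there, plain Python will do (this is an ordinary directory listing, not a Sentinella API):

import os

PLUGIN_DIR = ('/usr/share/python/sentinella/lib/python2.7/'
              'site-packages/sentinella/')
# Each entry is an installed plugin package or module.
print(sorted(os.listdir(PLUGIN_DIR)))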

Doubts?

Please contact us 🙂

11 Nov 2016

Speeding up your development with docker-compose

Hello, my name is Gloria and I've just joined Sentinel.la as a software engineer. This is my very first post, and I want to share how we avoid inconsistencies between environments to speed up our development process.

As a startup, every opportunity to save time is priceless. We faced a challenge: how do we stop wasting time trying to run our app in different environments?

Let me describe the scenario:

– Guillermo has installed some libraries in our dev environment (a virtual machine), and we are programming application-specific functionality that depends on something only available in those libraries.

– Francisco has installed different libraries on our staging machine, because he is working on another project with other code. But Guillermo wants Francisco to run his application in that different environment, so Francisco has to install the same libraries, or the application will fail.

This scenario disappears with Docker. To run the application, Guillermo creates a Docker container with the application and all the resources it needs, and passes that container to Francisco.

Francisco, having Docker installed, can run the application through the container, without having to install anything else. =)

Docker now allows us to focus on developing our code without worrying about whether that code will work on the machine it ends up running on.

“Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.”

What are the benefits?

– Rapid application deployment.
– Portability.
– Version control for encapsulated apps.
– Maintenance.

It's normal for a new developer picking up their first task to have to install all the development tools needed to work on it, and it's usual to waste a lot of time doing so.

With Docker we solve portability by preparing a working environment quickly. That's why we use containers: to make our system portable, so our team can work on any OS.

How to achieve it?

This is a simple example that starts an environment with Flask, InfluxDB, RabbitMQ and Celery, using docker-compose.

Source code for this part can be found here on Github.


Docker Compose is a tool for Docker that lets us start multiple containers from a single script in YAML format. You can link containers, execute commands, build images, and so on, and docker-compose will prepare the complete environment for you.

Requirements

– Docker
– Docker Compose

Directory
This is the structure of my project; everything goes inside the “project-one” directory.

[Screenshot: project directory structure]
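Based on the files used in the rest of the post, the layout is presumably something like this (app.py, the Celery module and requirements.txt are inferred from the commands below):

project-one/
├── app.py              # the Flask application
├── celery.py           # module holding the Celery app (see celery -A celery)
├── requirements.txt    # dependencies installed by the Dockerfile
├── entrypoint.sh
├── Dockerfile
└── docker-compose.yml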

entrypoint.sh file
This file executes in the container's context, inside WORKDIR.

In this case it starts a Celery worker for the application in the background, then runs the app in the foreground.

#!/bin/bash
echo $PWD
ls
# Start the Celery worker in the background; without the trailing &,
# the worker would block and app.py would never run.
celery -A celery worker --loglevel=info &
# Run the Flask app in the foreground (-u makes output unbuffered).
python -u app.py

exec "$@"

Create a Dockerfile

# Start from the official Python 2.7 image
FROM python:2.7
MAINTAINER Gloria Palma "gloria@sentinel.la"
# Copy the project into the image and use it as the working directory
ADD . /app
WORKDIR /app/
# Install the Python dependencies
RUN pip install -r requirements.txt
# entrypoint.sh starts the Celery worker and then the Flask app
ENTRYPOINT ["./entrypoint.sh"]
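The requirements.txt file isn't shown here; for the stack described in this post it would presumably contain at least the following (package names are assumptions, and you would normally pin versions):

flask
celery
influxdb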

Create a docker-compose.yml

version: "2"
services:
   db:
     image: postgres
     environment:
       - POSTGRES_USER=admin
       - POSTGRES_PASSWORD=admin
   rabbitmq:
     image: rabbitmq:3.5.3-management
     ports:
       - "8080:15672"  # management port (guest:guest)
       - "5672:5672"   # amqp port
       - "25672:25672" # cluster port
     environment:
       - RABBITMQ_NODENAME=rabbit
       - RABBITMQ_DEFAULT_USER=admin
       - RABBITMQ_DEFAULT_PASS=admin
   influxdb:
     image: tutum/influxdb:latest
     container_name: influxdb
     environment:
       - ADMIN_USER=admin
       - INFLUXDB_INIT_PWD=admin
       - PRE_CREATE_DB=sentinella
     ports:
       - "8083:8083"
       - "8086:8086"
       - "8090:8090"
   api:
     build:
      context: .
      dockerfile: Dockerfile
     ports:
       - "5000:5000"
     depends_on:
       - db
       - rabbitmq
       - influxdb
     links:
        - db:db
        - rabbitmq:amq
        - influxdb:influx
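A note on the api service: depends_on only controls start-up order, while links gives the api container hostname aliases (db, amq and influx) for reaching the other services, so the Flask code can connect to, say, amqp://admin:admin@amq:5672 instead of a hard-coded IP.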

Run, build and ship

A Dockerfile is a script for creating a custom image, so the image must be built first. When using docker-compose, run this command to build it:

$ docker-compose build

To run the containers (add -d to keep them in the background):
$ docker-compose up

Use docker inspect to look up the container's IP:

$ docker inspect name-of-container
The IP appears in the NetworkSettings section of the output.

Open your browser and see your app running; in my case it's a simple Flask app listening on the published port 5000.

Source code for this part can be found here on Github.

Conclusion

Containers are a fast way to stand up development environments. When you no longer need an environment, you can destroy it and rebuild it later.
