17 Feb 2016

OpenStack Survey: The price of not knowing is unpredictable

The price of not knowing is unpredictable. The difference between data and information is that information is useful data. Knowing the air temperature in New York is a fact; knowing what your customers expect from you is information. Your participation as a community in the survey is not only a way for the OpenStack Foundation to gain information about the community and the OpenStack environment, but also a way for us, the members of the community, to share information about our organizations, services and priorities, so that we as a community can define our path, roadmap and strategy.

In short, the OpenStack Foundation is open and listening to learn what the community wants. We will all gain from the results, and the OpenStack Foundation will get helpful information too, to do its job and figure out how to satisfy the community’s priorities.

Do you want the opportunity to influence the OpenStack roadmap? It should take only about 10 minutes to complete the survey at https://www.openstack.org/user-survey/survey-2016-q1/landing

All of the information you provide is confidential to the Foundation (unless you specify otherwise).

15 Feb 2016

Sentinel.la App’s Server View Panel: Get insight into your OpenStack servers.

This is part of a series of posts describing pieces of our amazing app to monitor OpenStack.

The following screenshot shows the server view panel. This panel opens with an overview of the usage and availability of the server’s resources: vital signs, OpenStack services running on it, opened and closed alerts, and important log events collected over the last 24 hours.

[Screenshot: server view panel overview]

The App collects information from logs, processes and the system upon the agent’s installation. This information helps to auto-detect and check the status of the OpenStack services running on the server. Once the info is collected, Sentinel.la classifies services among OpenStack projects: Nova, Neutron, Cinder, Heat, Glance, Keystone and Ceilometer.

The Server View panel shows the OpenStack version running. It shows system information like processor type, memory, kernel version, and storage devices and capacity. You can identify the server by name and see its status (e.g. maintenance). Cloud group and location are displayed under the name of the server.

Note that you still have access to push notifications from all your geographically distributed cloud groups at the top right corner of your console. You also have the option to add more servers by hitting the “+ New” button next to the name of the server.

[Screenshot: server details and push notifications]

You have three buttons to change your server’s status within your overall OpenStack service:

  • Toggle Maintenance Mode: Hit this button if you need to perform important maintenance tasks or changes on your server (e.g. changing the OpenStack version), or before removing it from the App (you will be able to remove the server 10 minutes after the App stops receiving data from it). Your overall uptime will not be affected if the server stops sending data or is removed.
  • Toggle Blackout Mode: Hit this button if you need to make minor changes for troubleshooting on the server. The idea is to stop sending unnecessary notifications while the server is under control and being fixed. The uptime indicator is still affected in this mode, to estimate the impact of the current event being handled.
  • Classify Server: Use this button to regroup the server into another cloud group.

[Screenshot: server status buttons]

This view has other options to get better insight into the services, log events and vital signs. They can be accessed through the menu below the server’s description:

  • Overview: Gets you back to the server’s dashboard.
  • Alerts: Gets you to a panel with alert information from the last 24 hours (the panel shows only the last 5 open alerts). You will be able to see which alerts have been closed and which are still open, in chronological order.
  • Vital Signs: Get details of the server’s vital signs over the last 24 hours.
  • OpenStack Services: Get better insight into the OpenStack services running on the server and their health.
  • OpenStack Logs: Gets you to a panel with all the important events collected over the last 24 hours. Important events are errors, criticals and warnings. This information will help you get a better understanding of any issue and use it for troubleshooting purposes. The panel lists events in chronological order and offers online search options to group events by keyword.

On the right side, you see the number of alerts that are still open, the server’s uptime and the server’s load average over the last 24 hours.

[Screenshot: open alerts, uptime and load average]

A chart showing the number of warnings, errors and critical events over the last 24 hours sits under the menu options. This gives you a sense of how much activity you are having on the server.

Server vital signs are also shown under the log events chart: the average CPU, memory and disk utilization over the last 24 hours, and even the number of alerts that have been closed in that period.

[Screenshot: log events chart and vital signs]

Information about the latest alerts sits next to that panel: a column with the last 5 alerts, with some details about the OpenStack processes involved and the subject of the event that caused each one.

Counters showing the current CPU, memory and disk usage are also displayed. Next to these counters, you find the “OpenStack services status”, giving a quick snapshot of the number of inactive processes for every OpenStack service on the server.

[Screenshot: alerts column and OpenStack services status]

09 Feb 2016

JSON Web Tokens for dummies

Authentication is one of the most important decisions when we are building a web application. Modern web apps are single-page apps, built on top of technologies like AngularJS. In that case, we don’t want to waste time building markup and presentation layers; instead, we build APIs that our front-end consumes. These changes have led to new ways of implementing authentication in modern applications.

There are basically two different ways of implementing server-side authentication for apps that consist of a frontend and an API. The most widely adopted is traditional session-based authentication: the user logs in with their credentials, then the server validates the information and stores it in session objects, either in memory or on disk.

Common session-based authentication problems

Session-based authentication has the following problems:

Overload

When a user is authenticated, the server needs to remember them somehow, usually by keeping the information in memory. When many people are online, server overhead increases.

Scalability

From the moment we keep user information stored in the session, there are scalability issues. Imagine that our application needs to autoscale to meet peak demand, load balancing across multiple servers. While the user lives in one server’s session, any request sent to another node forces the user to log in again. This can be solved with the technique known as sticky sessions, but even that solution doesn’t feel optimal. With token-based authentication, this is solved naturally, as we’ll see.

CORS

Most often we want our data to be consumed from multiple mobile devices (tablets, smartphones, etc.). In this scenario it’s important to worry about what is called CORS: cross-origin resource sharing. When we use AJAX to retrieve data from our application, we can find ourselves getting unauthorized requests, because modern browsers don’t allow this type of behaviour due to security concerns.

JSON Web Tokens

One of the best-known solutions to authentication problems for APIs is JSON Web Tokens, or JWT. Token-based authentication is stateless: you do not save any user information on the server or in the session. This kind of authentication sends a token with each request through the HTTP headers instead of keeping authentication information in sessions or cookies. Thus, no state is saved between different requests to our application, and our data can be consumed from different types of clients.

As tokens are stored on the client side, we don’t have to care about state or session information, so our application becomes completely scalable. We can use the same API for different apps (web, mobile, Android, iOS); we just worry about sending the data in JSON format and generating and validating authentication tokens through a middleware.

From the user’s perspective, there’s no difference between logging in to an application that uses JWT and one that uses traditional authentication. The user enters their credentials, which are checked against the server’s storage or service; but rather than creating a session and returning a cookie, the server returns a JSON object containing the JWT. The JWT then needs to be stored on the client side, which is usually done in local storage, and it must be sent to the server to access protected routes. The token is generally sent via an HTTP Authorization header.
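To make this concrete, here is a minimal sketch in Python using the PyJWT library (the secret, payload fields and lifetime are illustrative only, not a prescription):

import datetime
import jwt  # pip install PyJWT

SECRET = 'keep-this-on-the-server'  # signing key, never shipped to clients

# On login: credentials check out, so the server issues a signed token
token = jwt.encode(
    {'sub': 'user42',  # identity claim
     'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1)},  # expiry
    SECRET, algorithm='HS256')

# On every request: the client sends "Authorization: Bearer <token>" and the
# server verifies signature and expiration with no session lookup at all
claims = jwt.decode(token, SECRET, algorithms=['HS256'])

If the token is expired or has been tampered with, jwt.decode raises an exception and the request is rejected.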

You can think of this token as your hotel room card. When you visit a hotel, you go to the front desk and give your credentials, and they give you a card. In this case, the card is your token, and you’ll be able to access your room with it. We can go into our room, but we can’t use our card to go into any other room. Who has the card? Does the hotel have it? No, we have it, just as we hold the JWT while the server stores no session for us. When we leave, if we don’t return the card, we are left with a useless piece of plastic. That is why tokens have an expiration.

In the next post, we will review the structure of a JSON Web Token and how we can implement it in AngularJS.

21 Jan 2016

OpenStack international growth

The OpenStack Foundation has been on the right path during all these years, but what follows is an international leap, and some international companies adopting OpenStack are helping with that. For example, the eBay website runs on an OpenStack private cloud platform. Four years ago, eBay’s website ran fully on its on-premises datacenter infrastructure. “Today, 95% of eBay marketplace traffic is powered by our OpenStack cloud,” said Suneet Nandwani, Sr. Director, Cloud Engineering at eBay Inc.

But this international leap won’t happen without some challenges. One of OpenStack’s main problems is its steep learning curve: you must first achieve a successful installation, and once you think you are done, things can get ugly. There’s no definitive strategy for operating an OpenStack deployment, but with the right tools and the right information in a timely manner, it can be done without pain.

That’s our commitment at Sentinel.la: “Reduce the operational pain of Cloud Administrator/SysOp Teams providing quick, concise and relevant information to solve the problems related to a real OpenStack deployment”.

The other inflection point needed to achieve a successful international leap is containers, and projects such as Murano and Magnum. “People are really excited to see how frameworks like Docker and Kubernetes enable companies to bring containers in and make use of them with the networking and security frameworks that they already have,” said Jonathan Bryce, OpenStack Foundation Executive Director.

Take a look at theCUBE’s interview with Jonathan Bryce and Lauren Sell at OpenStack Day Seattle 2015 – theCUBE

Read the complete story: “OpenStack Foundation ready for international growth | #OpenStackSeattle” from Silicon Angle

19 Jan 2016

OpenStack services on a Time-Series database

“Measure what is measurable, and make measurable what is not so.”

Galileo Galilei

 

At Sentinel.la, one of the services we provide is the centralization of data and statistics with an OpenStack-centered approach: from OpenStack services (nova-*, neutron-*, keystone and so on…) down to the performance and status of vital server resources. All this information is acquired using a role-independent server architecture (all-in-ones, dedicated controller/compute/storage deployments, converged deployments: we must support and fetch data from all those types of deployments).

Managing all this information requires a very flexible way of organizing and handling it. Our first proof-of-concept attempt was to create an agent that gathers all the server information at the operating-system level, so the basic information was being captured: CPU, disk usage, memory usage and load average. All this information was stored in a relational database.

The problem with relational databases is that they are not optimal for handling large amounts of this kind of data. Instead of unleashing the power of having such great information, you feel like you’re playing Jenga with it: with every new row that is added, you can’t help feeling that you’re losing a little bit of performance and scalability. Imagine having millions of rows of CPU data from thousands of servers… that won’t end well.

[Image: a distorted Jenga tower – “Oh yeah, INSERT INTO measurements…”]

What about using a NoSQL database? Well, standard NoSQL databases help a lot in managing large chunks of document data, but time series are different: imagine that instead of growing in vertical rows, your data grows sideways, and it depends heavily on the time when it was saved. So, if not a standard NoSQL database, what should we use to save our metrics? And what if, instead of just 5 metrics, we want to capture “n” metrics for “n” services on “n” devices?

This is where a time-series database is useful. In this type of database you have a timestamp that is the equivalent of the id, so your values are always associated with it. Those values are organized in series, which are a collection of a measurement (CPU usage, disk usage, etc.) and the tags that you employ to identify that measurement (server name, cloud id, server location, etc.).

Having the data stored in a time-series database lets you think of the information as points, which are easy to identify, search, display and graph. You have many functions to manipulate the data and extract the right information. In our case we realized that we could use some aggregation and transformation functions to get things like behavior over time with great precision and accuracy.

For this purpose we chose InfluxDB as our time-series database, because Monasca uses it, and while we were playing with Monasca we found out that it was perfect for what we do. InfluxDB can also be used “as a service”: the same folks at InfluxData who created the product offer it as a service. This way we can use (and love) InfluxDB’s features with high availability without having to operate it, and we can focus on our core business.

We feel very fortunate that our development coincided with InfluxDB’s lifecycle. We started using it at the very moment the 0.9 version was released. This version was a turning point because it added support for tags. It is also a little different in terms of syntax and other functionality, such as a new thresholding and alerting component (Kapacitor), which was introduced the very same week we were researching and developing our metrics alerting engine!

A whole new world

After solving the database backend and removing the limits on performance and reliability, now comes the sweet part: we can store all the measurements we want. We began getting I/O values from servers, and started with OpenStack service-related information. How much CPU does nova-api use? Is nova-scheduler having memory peaks? What’s the uptime of the nova-compute process? The limit is only our (OpenStack) imagination.
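As an illustration, here is a minimal sketch with the influxdb Python client (the measurement, tag and field names are invented for the example):

from influxdb import InfluxDBClient  # pip install influxdb

client = InfluxDBClient('localhost', 8086, database='metrics')

# A point: one measurement, the tags that identify it, a timestamp and values
client.write_points([{
    'measurement': 'process_cpu',
    'tags': {'server': 'compute-01', 'process': 'nova-api'},
    'time': '2016-01-04T22:41:36Z',
    'fields': {'usage_percent': 12.5},
}])

# Aggregations come almost for free, e.g. the hourly average CPU of nova-api:
result = client.query("SELECT MEAN(usage_percent) FROM process_cpu "
                      "WHERE process = 'nova-api' AND time > now() - 24h "
                      "GROUP BY time(1h)")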

[Chart: nova service metrics]

References:

InfluxData – https://influxdata.com/

14 Jan 2016

Can OpenStack trust OpenStack? (Monasca & Ceilometer)

“It all begins and ends in your mind. What you give power to has power over you”

– Leon Brown

Many users are hanging on your service at this very moment. Users don’t tolerate failures for more than a couple of seconds before they start googling other options. You are willing to invest as much as you need to keep them contributing to your revenue. On the other hand, you have to reduce your operating costs to survive.

OpenStack is an amazing start to gaining agility and savings. Once you have what you need, you have to keep it up, and you can rely on Ceilometer and Monasca for that. Both projects bring important features to get the required insight into your app’s infrastructure: memory/CPU usage for every instance, disk capacity/operations for every volume, or network traffic.

Ceilometer was the first taste

Projects like Heat use Ceilometer to trigger additional instances for your service. It brings sweet ways to auto-scale your service depending on customer demand: stay prepared for the unpredictable (check out this yaml file as a good example). It may wake up your hunger for such use cases. Uses beyond Heat are possible thanks to the Ceilometer API: create scripts to automate your apps on your own. Do it with Python.
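As a rough sketch of what such a script could look like with python-ceilometerclient (the credentials and meter name are placeholders, and exact client signatures vary between releases):

from ceilometerclient import client  # pip install python-ceilometerclient

# Authenticate against Keystone (placeholder credentials)
cclient = client.get_client(2,
                            os_username='admin',
                            os_password='secret',
                            os_tenant_name='admin',
                            os_auth_url='http://controller:5000/v2.0')

# Pull recent CPU utilization samples and react to them on your own terms
for sample in cclient.samples.list(meter_name='cpu_util', limit=10):
    print(sample.resource_id, sample.counter_volume, sample.timestamp)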

An agent, a notification bus, a collector and MongoDB form what we call Ceilometer. The agent brings metrics from a bunch of projects like Nova, Glance, Swift and Cinder. Some projects deliver their own metrics through the notification bus (RabbitMQ); others have to be polled directly.

[Diagram: Ceilometer architecture]

The collector finally takes the data from the agent through the Ceilometer bus. MongoDB stores everything it gets, waiting to be called through the Ceilometer API. This API can be queried directly to get a better understanding of your platform at any given moment.

However, Ceilometer doesn’t scale the way you grow, and the information you can get is still limited. Also, queries take a long time to complete, which can make your service less responsive than you expect.

Monasca arose from higher expectations

Monasca brings a multi-tenant monitoring-as-a-service model based on Keystone authentication (self-service). It is a multi-purpose monitoring project which can watch more than just OpenStack resources. The effort put into the alarm/thresholding engine is notable, and many plugins can be easily deployed. Libvirt is one example, which helps to get better insight into what is happening inside the hypervisor. It is also ready to run Nagios plugins. Active system checks (HTTP, ping, ssh) and response-time measurements are part of its basic features.

[Diagram: Monasca architecture]

An essential element in Monasca is Kafka. Kafka brings a more scalable and faster message queue than RabbitMQ. Monasca uses resources like InfluxDB to efficiently store time series, and it brings data-retention policies for later analysis and real-time anomaly detection.

Ceilometer and Monasca have teamed up in Ceilosca

Ceilosca is a smart combination of the best properties of both projects. Ceilometer is widely used and has made important progress getting metrics from several OpenStack projects. On the other side, Monasca brings a scalable way to collect, process and present metrics.

Cisco and HP have joined forces around a project called Ceilosca (Ceilometer + Monasca). Fabio Giannetti (Cisco) brought it to light at the last summit. He showed how Ceilosca has out-performed Ceilometer by a factor of 2 or 3, and how Ceilometer degrades as the number of tenants grows. Much less data is stored through Ceilosca for the same amount of queries.

[Chart: Ceilosca vs. Ceilometer performance]

Ceilosca keeps Ceilometer to feed metric data to the Monasca API, replacing the Ceilometer bus and its collector; MongoDB disappears. The Monasca API then takes the data to Kafka and so on.

Who looks after those who look after themselves?

A simple but powerful question. Ceilometer and Monasca are amazing tools that go beyond just monitoring your app’s assets. However, who looks after them? Who watches those APIs? Are these OpenStack projects actually being monitored? Who looks after their schedulers, processes, logs, files? Who looks after their availability? Who checks whether they are trustworthy enough to run critical services on top of? Is there any single point of failure? Is there any risk of running out of resources when scaling out further? Are my logs or schedulers about to run out of disk? Are their databases resilient enough?

A monitoring service on top of Monasca could be a good start: define the metrics and the thresholds. The question is, do you really have the experience to do that? Do you have enough insight into every OpenStack project? Do you have the time? Wouldn’t it take you too far from your core responsibility?

Would you really trust your OpenStack configuration? Issues with your app’s resources are easily detected with tools like Ceilometer or Monasca. However, issues inside OpenStack projects could be out of your league, or, as I’ve just said, take you too far from your core business.

I don’t doubt you have the skills (if you are reading this post at this point, I’m sure you do). However, your company needs you to look after its apps and services, not just build and keep OpenStack up and running.

And we are committed to supporting you in this duty, and we are sure you’ll see the sense in delegating this responsibility to us. Also, you’ll have fun. OpenStack is not an out-of-the-box solution: you won’t find the same configuration twice. Our service just brings the building blocks, and as OpenStack does, you can take it all or just the parts that make the most sense to you. One way or the other, you will save a lot of time, and you’ll have fun creating your own stuff to get more advantage from our service.

Why is Sentinel.la not using Monasca or Ceilometer?

As we’ve said in our previous posts, we are committed to bringing a hyper-scalable service. We’ve also adopted the staff-on-demand pillar, like any other ExO.

We’d have liked Monasca to be our platform. However, our core is not operating Monasca. “Our mission is to help any mortal to unleash OpenStack at every corner of the universe, and help him/her do it with confidence.” Just as we expect our customers to use the components they like most from our service, we use the components that get us closer to this mission. Some components, like InfluxDB, are part of our solution; others, like Kafka, aren’t.

Unlike Kafka, InfluxDB can be contracted and used on demand. Operating and maintaining Kafka would take a lot of energy and focus from our development team. Kafka is also developed in Java, which brings another challenge to manage and operate: you can’t be an expert in every tech, and we’ve decided to stay in Python to get more fluency with the OpenStack community. RabbitMQ can be paid as you go and there are many options for hire in the market. Our modular design will let us change any component in the future, even the MQ or the DB, with no disruption.

That being said, the Monasca API will be supported in our service in the mid-term. You will be able to push/pull system data to/from it. You might choose our dashboard/engine to pull some Monasca-monitored resources (through the Monasca API), or just choose Monasca’s alarm/notification engine instead of ours. Part of the benefit of being flexible, don’t you think?

11 Jan 2016

Mastering the OpenStack logs

How fast do you detect a problem in your deployment? No problem is as serious as it seems when talking about OpenStack errors: the secret is in mastering the logs. OpenStack and the components that run on top of it can generate many different types of messages, which are recorded in various log files.

Whenever a problem occurs in an OpenStack deployment, the first place you should look is the logs. By analyzing the information in the logs, you may be able to detect what the problem is and where the error occurred. A lot of the time the user interface only shows “An error occurred”, but all the information about that error resides in the log files. You can use these messages for troubleshooting and for monitoring system events.

OpenStack has several services, and each of them has a log file, so there are a large number of log files. Any good DevOps team managing an OpenStack deployment, no matter the size, needs to locate the logs and learn how to work with them to track the status and health of the deployment.

Where are the OpenStack logs?

OpenStack services use a common location for their logs. In a default configuration of an OpenStack deployment, log files are located in subdirectories of the /var/log directory:

[Table: OpenStack log locations by service]

Table from: http://docs.openstack.org/openstack-ops/content/logging_monitoring.html#openstack-log-locations

OpenStack uses the following logging levels: DEBUG, INFO, AUDIT, WARNING, ERROR, CRITICAL, and TRACE. What does each level mean?

  • Debug: Shows everything and is likely not suitable for normal production operation due to the sheer size of logs generated
  • Info: Usually indicates successful service start/stop, versions and such non-error related data. This should include largely positive units of work that are accomplished (such as starting a compute, creating a user, deleting a volume, etc.)
  • Audit: REMOVE – (all previous Audit messages should be put as INFO)
  • Warning: Indicates that there might be a systemic issue; potential predictive failure notice
  • Error: An error has occurred and an administrator should research the event
  • Critical: An error has occurred and the system might be unstable; immediately get administrator assistance

from: http://stackoverflow.com/questions/2031163/when-to-use-log-level-warn-vs-error/2031209#2031209

 

So messages appear in a log file only if they are more “severe” than the configured log level. For example, with DEBUG set we are letting all log statements through. If you set the DEBUG flag to FALSE, only debug messages are discarded. If you don’t want to see your logs polluted by INFO messages saying “hey, I’m here asking for something!”, you can also set VERBOSE to FALSE, making WARNING the effective level.

*These settings are per service, so you need to change them in the conf file of each service.
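For example, the relevant knobs in a service’s configuration file (e.g. /etc/nova/nova.conf; these are the common oslo.log option names of this era, so double-check your release’s documentation):

[DEFAULT]
# True lets every DEBUG statement through (huge logs; avoid in production)
debug = False
# True keeps INFO messages; False makes WARNING the effective level
verbose = True
# Where this service writes its log files
log_dir = /var/log/nova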

As you may know, there are also logs from non-OpenStack components. OpenStack uses a lot of libraries, which have their own logging definitions. These logs can be wildly different, because each library defines its own (MySQL, SQLAlchemy, KVM, OVS, Ceph, etc.).

What does an OpenStack log record look like?

The following is an example of a DEBUG log:

2016-01-04 22:41:36.297 DEBUG oslo_db.sqlalchemy.engines [req-af32b586-0aab-4846-b097-12604699d5ec None None] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:256  

Managing your logs

It is good practice to centralize events (logs) from our systems on a server that collects, classifies and then stores them. There are two popular log collectors: Fluentd, written in CRuby, used in Kubernetes and maintained by Treasure Data Inc., and Logstash, written in JRuby and maintained by elastic.co. They have similar features. Both collectors have their own transport protocol, failure detection and fallback. Logstash uses the Lumberjack protocol and is active-standby only; Fluentd, on the other hand, uses the forward protocol and can be deployed as an active-active service (load balancing) or active-standby. You can read more about Logstash and Fluentd on their sites.

Whatever your decision is, you will need to parse the OpenStack logs to manipulate them. Yes, regex strikes back: welcome to the regex hell.

[Image: regex meme]

To save you a little time, we want to share the regular expression that the Sentinel.la DevOps team wrote to parse OpenStack logs:

Source: https://github.com/Sentinel-la/OpenstackRegexLog

 

OpenstackRegexLog: a regular expression to parse OpenStack logs

Example

1.- The following DEBUG log:

2016-01-04 22:41:36.297 DEBUG oslo_db.sqlalchemy.engines [req-af32b586-0aab-4846-b097-12604699d5ec None None] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:256

Parsed and stored as JSON:

{
  "time": "2016-01-04 22:41:36.297",
  "description": "[req-af32b586-0aab-4846-b097-12604699d5ec None None] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:256",
  "level": "DEBUG",
  "log_id": null,
  "component": "oslo_db.sqlalchemy.engines"
}

2.- The following WARNING log:

2016-01-04 22:41:35.221 19090 WARNING oslo_config.cfg [-] Option "username" from group "keystone_authtoken" is deprecated. Use option "user-name" from group "keystone_authtoken".

Parsed and stored as JSON:

{
  "time": "2016-01-04 22:41:35.221",
  "description": "[-] Option \"username\" from group \"keystone_authtoken\" is deprecated. Use option \"user-name\" from group \"keystone_authtoken\".",
  "level": "WARNING",
  "log_id": 19090,
  "component": "oslo_config.cfg"
}
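The exact expression lives in the repo above; conceptually it boils down to something like this simplified Python version, which matches both samples:

import re

# Timestamp, optional process id, level, component, then the rest of the line
LOG_RE = re.compile(
    r'^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<log_id>\d+)?\s*'
    r'(?P<level>DEBUG|INFO|AUDIT|WARNING|ERROR|CRITICAL|TRACE)\s+'
    r'(?P<component>\S+)\s+'
    r'(?P<description>.*)$')

line = ('2016-01-04 22:41:35.221 19090 WARNING oslo_config.cfg [-] Option '
        '"username" from group "keystone_authtoken" is deprecated.')
print(LOG_RE.match(line).groupdict())
# {'time': '2016-01-04 22:41:35.221', 'log_id': '19090', 'level': 'WARNING',
#  'component': 'oslo_config.cfg', 'description': '[-] Option "username" ...'}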

 

As you can see, it is very important to understand and correctly manage the log files to run an OpenStack environment. How are you managing your OpenStack logs?

04 Jan 2016

A crowded universe of OpenStack

“Staff on Demand is a necessary characteristic for speed, functionality and flexibility in a fast-changing world. Rather than ‘owning’ employees, ExOs (Exponential Organizations) leverage external people for simple to complex work – even for mission-critical processes.”

– Salim Ismail, Exponential Organizations

Yes, a crowded universe: distributed compute even among planets and space stations. Why not? Getting compute resources is many times easier than it was some years ago. 3D printers will evolve to massively manufacture open compute everywhere. Bandwidth and latency will not be an issue anymore. Energy will be much cheaper – space stations show us that you can use solar energy to keep them working forever.

Along with all this evolution we’ll evolve in other areas. Software is still pulling the innovation strings for hardware, and it seems that software depends on an engaged community to define its best course. OpenStack is the best example of it: millions of lines of code sourced by a community of believers, believers who code from everywhere – “You must feel the Force around you; here, between you, me, the tree, the rock, everywhere” (Yoda).

Anyone should be capable of using OpenStack.

OpenStack will bring you the power to orchestrate whatever you want. Customers love that; people love accuracy and responsiveness. OpenStack is innovating in cloud (orchestration, automation, geographical distribution…) much faster than any of the best-of-breed products. However, “with great power comes great responsibility”.

OpenStack can’t be treated as an out-of-the-box vendor solution. OpenStack changes constantly: more projects are added, others are dropped over time. It needs a dedicated and committed talented team in love with technology and change.

Anyone with a bit of experience installing OpenStack knows it’s not perfect. Just getting insight into projects like Nova and Neutron, you can see they were developed by different teams – functions and objects that do essentially the same things have absolutely different naming conventions in the two projects.

OpenStack is a solid solution. Big use cases at companies like Walmart, PayPal, Bloomberg and BMW are evidence of that. However, not everybody can spend millions to keep a dream team supporting it and even contributing code.

We believe we can take this amazing solution to companies that couldn’t afford it so far. We believe all the interoperability that OpenStack is showing off can be used to increase its adoption among smaller or less specialized users. It doesn’t mean you will be less committed to it; it means you will speed up its implementation and be more confident operating it.

Sentinel.la is a pledge to support people adopting OpenStack.

We love how OpenStack is turning conventional IT operation upside down. Let us help you discover its full potential and live what we love.

Our target is to get you to the most important value of open cloud platforms: a value that could bring unprecedented savings or additional revenue streams; the value of an agile response to unpredicted demand. Boost your apps and IT services to bring what customers deserve.

We’ll bring you the confidence to start using OpenStack right away – just give us some weeks to get our first version online.

Sentinel.la will securely and efficiently gather relevant information from critical OpenStack components, and will bring an online status of every implemented project. What is actually happening inside? Analytics to understand what to do in case of any event or failure.

We’ll help you to get the best OpenStack experience. And in order to do so, we pledge to:

  • Deliver a hyper-scalable service. Every ingredient in our service is based on components with no limit to scale, from the databases to the very front-end.
  • Bring new features and fixes in a timely manner, using on-demand resources as a design pillar, from online public services to open source. The objective is to write as little code as we can. Of course, OpenStack will be one of the core open-source projects in our service.
  • Boost interoperability. We love everything about DevOps. DevOps expect full management of every feature through RESTful APIs; we wouldn’t expect less from any other product. Also, Python will be our first choice for coding.

And because we love the DevOps way of doing things, we pledge to build our project on the following values:

  • Meritocracy. The different roles in our team are based on meritocracy. Performance will be based on actual work and commitment. Rule number one is “have fun and love what you do”.
  • Trustworthiness. Every member of this team is family and will be treated as such. If anyone runs into issues or makes a mistake while trying to do their best, they will primarily find understanding and support from the team to work it out.
  • Obsessive agility. Customers deserve a timely response to any problem or doubt. Every person on the team will have the information required to bring this obsessive agility right away.

 

Finally, our mission is to help any mortal unleash OpenStack at every corner of the universe; to help him/her do it with confidence; to help her/him get the sponsorship of their bosses and companies to show what OpenStack is capable of; to help her/him do what we like most.

01 Jan 2016

Alameda: Our journey has begun, spread the word

“Would you tell me, please, which way I ought to go from here?’
‘That depends a good deal on where you want to get to,’ said the Cat.
‘I don’t much care where -‘ said Alice.
‘Then it doesn’t matter which way you go,’ said the Cat.
― Lewis Carroll, Alice in Wonderland

The first version of our release Alameda is almost cooked. It’s been an interesting journey, from just a piece of paper with some written ideas to a real plan, witnessed by some coffee shops between Coyoacán and Polanco (BTW, those are in Mexico City).

You have the IDEA. What’s next?

Vision is everything. If you don’t know where you want to get to, then the path you take is irrelevant and all the energy spent is wasted. You must define a vision and its supporting values. Thanks to that, our product roadmap was built in a matter of days.

 

Our company’s pillars have been successfully upheld so far.

Product design decisions have been taken to keep our company’s pillars: hyper-scalability, interoperability and a smart use of staff on demand. Avoid any performance bottleneck, even against unpredicted demand for resources. Leverage any need through on-demand resources: in fact, we got our image and logo through designcrowd.com. Check out my previous post for details about our company’s values and foundations.

Our core service (codename: Medusa) has not been developed from scratch. Specific open-source projects and online platforms have been chosen to stick together; we just code the glue between these blocks – Python in the best of cases.

Collecting and managing time-series data is the underlying support for any monitoring service. Creating a core platform to do that effectively and at scale would have taken forever. Influxdata.com seems to have everything we were looking for. It meets the criteria of an on-demand resource – we don’t want to spend resources operating and tuning a database like this; it would draw us away from our core purpose when someone else can do it for us. InfluxDB has Kapacitor, a data processing engine: “Kapacitor lets you define custom logic to process alerts with dynamic thresholds, match metrics for patterns…”. Kapacitor is a perfect match for our alert system needs.

We bring three types of thresholds: BINARY (“up” or “down”), TAILABLE (logging information) and GAUGE (for measurements). Alerts are evaluated and triggered through Kapacitor (InfluxDB). However, we’ve also decided to use capped collections (MongoDB).

MongoDB has been added to the equation. It’s delivered in a monthly subscription model by a good number of providers. MongoDB scales out amazingly, following the not-so-recent trend of NoSQL databases responding to the demands of building new applications. Its capped collection feature makes data “automatically age out”. Capped collections will help us manage tons of information over time in the most simple and efficient way.
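Creating such a collection with pymongo is a one-liner (the database and collection names and the size limits are illustrative):

from pymongo import MongoClient

db = MongoClient()['sentinella']

# A capped collection preserves insertion order and, once the size limit is
# reached, silently drops the oldest documents: data "ages out" by itself
db.create_collection('log_events', capped=True,
                     size=512 * 1024 * 1024,  # maximum bytes
                     max=1000000)             # maximum documents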

Figure: Dashboard’s mockup

Our dashboard is being built with AngularJS (JavaScript), which is maintained by Google. It offers great portability and flow in applications: a “one-page” experience, so the page never needs to reload. We follow the Model-View-Controller (MVC) pattern, which facilitates adding components in the future and also helps maintainability – developers independently manage the core service and the dashboard programming. This is the best example of our flexibility, one of our three pillars: users could even create their own dashboards to interface with our core monitoring system.

Figure: A glimpse of our dashboard

We are working hard to bring lightweight, secure and highly functional agents. They must be installed on-premises on every OpenStack node, and they are the only piece of code to install into your infrastructure. They will remotely reach our online core service, sending data at a configurable interval. Authentication must be strong between the agents and our core solution, and we’ve decided to build this important part of our development on JSON Web Tokens (JWT).

JWT is used to pass the identity of authenticated users between an identity provider and a service provider, which in our case is the web dashboard (Apollo) talking to the RESTful API (Medusa). The browser doesn’t store sessions, making the login functionality fully compatible with mobile devices without any other change or effort – we are preparing to release a Sentinella Android/iOS app in the mid-term. That way you don’t need to manage sensitive API keys on premises or on devices (mobile devices can easily be stolen). API keys don’t expire, and changing them brings a lot of management issues and security concerns; JWTs expire constantly and change transparently to the user.
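Conceptually, an agent check-in then looks something like this sketch built on the requests library (the endpoint, routes and payload are invented for illustration):

import requests

API = 'https://api.sentinel.la'  # illustrative endpoint, not the real one

# 1. Exchange the agent's credentials for a short-lived JWT
token = requests.post(API + '/auth',
                      json={'agent_key': 'xxxx'}).json()['token']

# 2. Ship metrics with the token in the Authorization header; when the token
#    expires the agent simply re-authenticates, so no long-lived secret travels
requests.post(API + '/metrics',
              headers={'Authorization': 'Bearer ' + token},
              json={'host': 'compute-01', 'load_avg': 0.42})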

All these components talk through an external and secure MQ service – BTW, that could also be externalized to a service provider. The idea is to avoid hard dependencies between components. That helps us, for example, to modify the database structures, or even change the DB provider or the DB itself, with no service interruption.

What have we got in Alameda?

Our MVP (minimum viable product) covers monitoring for Nova and Glance so far. Of course, we’ll add more OpenStack projects, like Keystone and Neutron, in the next versions. The way we are managing versioning is shown in the next picture.

Figure: Alameda’s roadmap overview

The first version will manage all the components we’ve just mentioned. We are excited to take this online as soon as possible. We are sure this will be an important contribution to the community.
