04 Nov 2016

How we built Sentinel.la (II) – AngularJS and Netlify

“Apollo” is the codename of our UI. It was built with AngularJS because we wanted a single-page application (SPA). This is how it looks:

Figure: the Sentinel.la dashboard (Apollo)

 

 

AngularJS and Single-Page Apps.

An SPA (Single-Page Application) is a web application (or a website) that fits into a single page in order to give users a more seamless, desktop-like experience. Single-page applications are becoming more popular for a good reason: an SPA can provide an experience that feels almost like a native app on the web. Instead of sending a full page of markup for every interaction, the server sends a payload of data that is turned into markup on the client side.

We decided to build a hybrid SPA. By hybrid I mean that, instead of treating the entire application as a single-page app, we divide it into logical units of work, or paths through the system. You end up with certain areas that trigger a full page refresh, but the key interactions of a module take place in the same page without refreshing. For example, administration might be one “mini” SPA while configuration is another.

Another advantage of AngularJS is that it uses the MVC approach, which is familiar to developers who have used Django or another MVC web framework. Angular implements MVC by asking you to split your app into MVC components; it then manages those components for you and serves as the pipeline that connects them.

Another advantage is that it puts DOM manipulation where it belongs. Traditionally, the view modifies the DOM to present data and manipulates the DOM (or invokes jQuery) to add behavior. With Angular, DOM manipulation code lives inside directives, not in the view. Angular treats the view as just another HTML page with placeholders for data.

Every SPA needs an API to consume. We built ours with Flask; I’ll review it in further posts.
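To make this concrete, here is a minimal sketch of the kind of JSON endpoint such an API exposes to the SPA. The route and payload are invented for illustration; this is not Medusa’s actual API.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical endpoint: the SPA asks for data, not markup,
# and renders it on the client side.
@app.route("/api/v1/servers")
def list_servers():
    # In a real API this data would come from the datastore
    servers = [{"name": "compute-01", "status": "up"}]
    return jsonify(servers=servers)

# Exercise the endpoint without running a server:
client = app.test_client()
print(client.get("/api/v1/servers").get_json())
```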

 

Figure: the AngularJS front end consuming the Flask API

 

 

No web servers to handle: the JAMstack and Netlify.

Once we finished the UI and the API, we launched them on a farm of nginx web servers. We also decided to use a CDN, both to improve the user experience in terms of speed and to help prevent site crashes during traffic peaks: the CDN distributes the load across multiple servers instead of one server handling all the traffic.

While searching the internet for a good and affordable CDN, we discovered Netlify and the JAMstack.

“JAM stands for JavaScript, APIs and Markup. It’s the fastest growing stack for building websites and apps: no more servers, host all your front-end on a CDN and use APIs for any moving parts[…] The JAMstack uses markup languages like HTML, CSS and Markdown to format and style our content, client-side Javascript to make it interactive and engaging and APIs to add persistence, real-time sync, real-world interactions, comments, shopping carts, and so on.“ – https://jamstack.org/

“Netlify is a unified platform that automates your code to create high-performant, easily maintainable sites and web apps” . – https://www.netlify.com/

 


 

So, we decided to use Netlify. It allows us to automate the deployment of our AngularJS app, because the app follows the JAMstack, and it also provides best practices like CDN distribution, caching and continuous deployment with a single click – or command. That means no web servers to handle, and therefore less effort and work. For a startup like us that’s very valuable.

With Netlify we just push our site to their CDN, because it pairs naturally with Git: you only need to pull, change and push the code to manage your site. After a git push, the newest version is immediately available everywhere, and with their integrations you can set up any number of outbound webhooks to receive e-mail or Slack notifications about deploys or build failures.

Netlify has a lot of features like DDoS protection, Snapshots, Versioning & Rollbacks, Instant Cache Invalidation, DNS Hosting, Domain Registration, etc. You can check more Netlify features here: https://www.netlify.com/features/

In the next post I’ll review the API and the AMQP topics about how we built our Software. Stay tuned.

13 Oct 2016

How build a SaaS around OpenStack (I)

How does Sentinelle Labs build apps? What pieces interact in our platform in order to successfully capture and process agents’ data to monitor, backtrace and send notifications if something goes wrong in an OpenStack deployment?

We’ve decided that it’s time to share more details around this topic. In this series, we’ll describe our architecture and the technologies used to go from source code to a deployed service that shows you how your OpenStack deployment is working. You can expect this to be the first of many posts detailing the architecture and the challenges of building and deploying a SaaS to enhance OpenStack skills, reduce the OpenStack learning curve and backtrace problems much faster. Sentinel.la is the fastest way to OpenStack.

 

High level design

The High-level design (HLD) explains the architecture used for developing a software product. The architecture diagram provides an overview of the entire system, identifying the main components that will be developed for the product along with their interfaces.

Figure: Sentinel.la high-level design diagram

 

Decoupled architecture

As you can see, we’ve used a decoupled architecture approach. This is a type of architecture that enables components/layers to execute independently, interacting with each other through well-defined interfaces rather than depending tightly on each other.

 

API

The first step in building a decoupled architecture is to build an API. There’s nothing more important than the application programming interface to link components with each other. Our API is called “Medusa” and is built with Flask. An API is a great way to expose an application’s functionality to external applications in a safe and secure way. In our case that external app is “Apollo”, our UI, which will be reviewed later.

 

MQ

Sentinel.la uses a message queuing system (in this case RabbitMQ). The MQ acts as middleware that lets the different layers/pieces of the software communicate, so the systems can remain completely autonomous and unaware of each other. Instead of building one large application, it’s better practice to decouple the different parts of your application and communicate between them asynchronously with messages.

We use Celery, a batteries-included task queue written in Python, to manage our queue. As mentioned above, our broker is RabbitMQ, and Celery also manages the workers that consume all the tasks/messages.
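The producer/worker pattern that Celery and RabbitMQ give us can be sketched with the standard library alone. This is a conceptual stand-in, not our production code: the point is that the producer only ever talks to the queue, never to the worker, so either side can be scaled or replaced independently.

```python
import queue
import threading

# Conceptual sketch of the decoupling an MQ provides: a producer
# publishes messages to a queue and moves on; a worker consumes
# them asynchronously. Neither side knows about the other.
tasks = queue.Queue()
results = []

def worker():
    while True:
        message = tasks.get()
        if message is None:          # sentinel: shut the worker down
            break
        results.append(message["metric"] * 2)   # pretend processing
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

for value in (1, 2, 3):
    tasks.put({"metric": value})     # producer publishes and moves on

tasks.put(None)
t.join()
print(results)  # [2, 4, 6]
```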

 

UI

Our UI is called “Apollo”. It’s built in AngularJS and is an “API-centric” web application: it executes all of its functionality through API calls. For example, to log in a user we send their credentials to the API, and the API returns a result indicating whether the user provided the correct user-password combination. We also follow the JAMstack conventions. The JAMstack is an ideal way of building static websites; we’ll explain it in later posts, but the basic idea of JAM (JavaScript, APIs and Markup) is to avoid managing servers: host all your front-end on a CDN and use APIs for any moving parts.

 

Datastore

Behind the scenes, all data is stored in 4 different databases. We use InfluxDB, a scalable datastore for metrics, events, and real-time analytics. We also use RethinkDB, the open-source database for the realtime web. One of the components we use also needs MongoDB, an open-source database with a document-oriented data model. Our relational database is PostgreSQL, an open-source relational database management system.
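To give a flavor of the metrics side: points are written to InfluxDB in its line protocol (measurement, tags, fields, timestamp). Below is a simplified stdlib sketch of building such a point. The measurement and tag names are invented for illustration, and real line protocol additionally quotes string fields, suffixes integers with `i`, and escapes special characters.

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    # InfluxDB line protocol shape: measurement,tag=v field=v timestamp
    # (simplified: no escaping or per-type field formatting)
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

line = to_line_protocol(
    "cpu_load", {"host": "compute-01"}, {"value": 0.64}, 1478191561000000000
)
print(line)  # cpu_load,host=compute-01 value=0.64 1478191561000000000
```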

 

Agents

Our platform uses all the information generated on the different OpenStack nodes. To gather it we’ve built an agent 100% written in Python (available at https://github.com/Sentinel-la/sentinella-agent/). To install it, there are .deb packages for Ubuntu/Debian and .rpm packages for RedHat/CentOS. We also have a pip package to install it on SuSE: https://pypi.org/project/sentinella/

 

Data processing engines

To evaluate all the thresholds we developed 2 different daemons, one for InfluxDB (called “Chronos”) and another for RethinkDB (called “Aeolus”). These pieces hold all the rules and logic to raise an alert when something wrong is detected.

 

Alerta.io

Obviously we need a component that manages all the alerts raised by Chronos and Aeolus. We are proudly using Alerta.io to consolidate all the alerts, perform de-duplication and simple correlation, and trigger the notifications.

 

Notifications delivery

We send 3 different types of notifications for an alert. First, we send an email (we use Mandrill, the transactional email-as-a-service from Mailchimp); we’ve decided not to maintain an SMTP server. Second, we send Slack alerts using their webhook integrations. Third, of course, we notify users on the Sentinel.la dashboard by pushing alerts to Apollo. To accomplish that we use Thunderpush, a Tornado- and SockJS-based push service that provides a Beaconpush-inspired HTTP API and client.

So far, these are the main components that work together in order to deliver Sentinel.la service. In further posts we’ll do a deeper review of all of them. Next post will be about Apollo, our UI, and the JAM stack.

Thanks for reading, your comments are very welcome.

23 Apr 2016

Keep Austin Nerd! We’re in, are you? #OpenStackSummit


 

The OpenStack Summit is attended by thousands of people from all over the world. This time, the thirteenth release of OpenStack, codenamed Mitaka, is ready. This version improves user experience, manageability and scalability. To see the whole agenda: https://www.openstack.org/summit/austin-2016/summit-schedule#day=2016-04-25

At the OpenStack Summit you can plan your cloud strategy and hear about market opportunities and the latest products, tools and services from the OpenStack ecosystem, like Sentinel.la. We are ready to learn about operational experience directly from users.

So, our team will be attending the OpenStack Summit with these t-shirts. Do you like them? What about getting one?

Figure: our OpenStack Summit t-shirts

 

Please just send us a tweet at @The_sentinella to get a t-shirt and a sticker and meet us at the event; we’ll be attending sessions and hanging around the Marketplace. See you at the OpenStack Summit!

17 Feb 2016

OpenStack Survey: The price of not knowing is unpredictable

The price of not knowing is unpredictable. The difference between data and information is that information is useful data. Knowing the air temperature in New York is a fact; knowing what your customers expect from you is information. Your participation as a community in the survey is not only a way for the OpenStack Foundation to gain information about the community and the OpenStack environment, but also a way for us, members of the community, to send information about our organizations, services and priorities, so that we (as a community) are able to define our path, roadmap and strategy.

In short, the OpenStack Foundation is open and listening to learn what the community wants. We will all gain from the results, while the OpenStack Foundation gets helpful information too, to do its job and figure out how to satisfy the priorities of the community.

Do you want the opportunity to influence the OpenStack roadmap? It should take only about 10 minutes to complete the survey at https://www.openstack.org/user-survey/survey-2016-q1/landing

All of the information you provide is confidential to the Foundation (unless you specify otherwise).

09 Feb 2016

JSON Web Tokens for dummies

Authentication is one of the most important decisions when we are building a web application. Modern web apps are single-page apps, built on top of technologies like AngularJS. In that case, we don’t want to waste time building markup and presentation layers; instead, we build APIs that our front-end consumes. These changes have led to new ways of implementing authentication in modern applications.

There are basically two ways of implementing server-side authentication for apps that consist of a frontend and an API. The most widely adopted one is traditional session-based authentication: the user logs in with their credentials, then the server validates the information and stores it in session objects, either in memory or on disk.

Common Session based Authentication problems

Session-based authentication has the following problems:

Overload

When a user is authenticated, the server needs to remember that somehow, usually by keeping the information in memory. When many users are online, server overhead increases.

Scalability

From the moment we keep user information in the session, there are scalability issues. Imagine that our application needs to autoscale to meet peak demand, load balancing across multiple servers. While the user’s state lives in one server’s session, a request routed to another node forces the user to log in again. This can be worked around with the technique known as sticky sessions, but even that solution doesn’t feel optimal. With token-based authentication this is solved naturally, as we’ll review.

CORS

Most often we want our data to be consumed from multiple mobile devices (tablets, smartphones, etc.). In this scenario it’s important to think about CORS: cross-origin resource sharing, the sharing of resources across different origins. When we use AJAX to retrieve data from our application, we can end up with unauthorized requests, because modern browsers don’t allow this type of behaviour out of security concerns.

JSON Web Tokens

One of the best-known solutions to authentication problems for APIs is JSON Web Tokens, or JWT. Token-based authentication is stateless: you do not save any user information on the server or in the session. Instead of keeping authentication information in sessions or cookies, a token is sent with each request through the HTTP headers. Thus, no state is saved between different requests of our application, and our data can be consumed from different types of clients.

As tokens are stored on the client side, we don’t have to care about state or session information, so our application becomes completely scalable. We can use the same API for different apps – web, mobile, Android, iOS – and just worry about sending the data in JSON format and generating and verifying authentication tokens through a middleware.

From the user’s perspective, there’s no difference between logging in to an application that uses JWT and one that uses traditional authentication. The user enters their credentials, which are checked against the server’s storage or authentication service; but rather than creating a session and returning a cookie, the server returns a JSON object containing the JWT. The JWT needs to be stored on the client side, usually in local storage, and must be sent to the server to access protected routes. The token is generally sent via an HTTP Authorization header.
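The signing and verification flow can be sketched in a few lines of Python using only the standard library. This is an educational HS256 sketch; in a real application you would use a maintained library such as PyJWT rather than rolling your own.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with the padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_jwt(payload: dict, secret: str) -> str:
    # A JWT is three base64url segments: header.payload.signature
    header = {"alg": "HS256", "typ": "JWT"}
    parts = [
        b64url(json.dumps(header, separators=(",", ":")).encode()),
        b64url(json.dumps(payload, separators=(",", ":")).encode()),
    ]
    sig = hmac.new(secret.encode(), ".".join(parts).encode(), hashlib.sha256).digest()
    return ".".join(parts + [b64url(sig)])

def verify_jwt(token: str, secret: str) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        secret.encode(), f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    if not hmac.compare_digest(b64url(expected), sig_b64):
        raise ValueError("invalid signature")
    payload = json.loads(b64url_decode(payload_b64))
    if payload.get("exp", float("inf")) < time.time():  # tokens expire
        raise ValueError("token expired")
    return payload

token = sign_jwt({"sub": "alice", "exp": int(time.time()) + 3600}, "s3cret")
print(verify_jwt(token, "s3cret")["sub"])  # alice
```

Note that the token is signed, not encrypted: anyone can decode the payload, but only the holder of the secret can produce a valid signature.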

You can think of this token as your hotel room card. When you visit a hotel, you go to the desk and give your credentials, and they give you a card. In this case, the card is your token, and you’ll be able to access your room with it. We can go into our room, but we can’t use our card to get into other rooms. Who has the card? Does the hotel keep it? No, we have it, just as we hold the JWT and the server doesn’t store a session for us. When we leave, if we don’t return the card, we’re left with a useless piece of plastic. That is, tokens have an expiration.

In the next post, we will review the structure of a JSON Web Token and how we can implement it in AngularJS.

21 Jan 2016

OpenStack international growth

The OpenStack Foundation has been on the right track all these years, but what follows is an international leap. Some international companies adopting OpenStack are helping with that. For example, the eBay website runs on an OpenStack private cloud platform. Four years ago, eBay’s website ran fully on its on-premises datacenter infrastructure. “Today, 95% of eBay marketplace traffic is powered by our OpenStack cloud,” said Suneet Nandwani, Sr. Director, Cloud Engineering at eBay Inc.

But this international leap won’t happen without some challenges. One of OpenStack’s main problems is its steep learning curve: you must first achieve a successful installation, and once you think you’re done, things can get ugly. There’s no definitive strategy for operating an OpenStack deployment, but with the right tools and the right information in a timely manner, it can be done without pain.

That’s our commitment at Sentinel.la: “Reduce the operational pain of Cloud Administrator/SysOp Teams providing quick, concise and relevant information to solve the problems related to a real OpenStack deployment”.

The other inflection point for a successful international leap is containers, and projects such as Murano and Magnum. “People are really excited to see how frameworks like Docker and Kubernetes enable companies to bring containers in and make use of them with the networking and security frameworks that they already have,” said Jonathan Bryce, OpenStack Foundation Executive Director.

Take a look at theCUBE’s interview with Jonathan Bryce and Lauren Sell at OpenStack Day Seattle 2015 – theCUBE

Read the complete story: “OpenStack Foundation ready for international growth | #OpenStackSeattle” from Silicon Angle

04 Jan 2016

A crowded universe of OpenStack

“Staff on Demand is a necessary characteristic for speed, functionality and flexibility in a fast-changing world. Rather than ‘owning’ employees, ExOs (Exponential Organizations) leverage external people for simple to complex work – even for mission critical processes”

– Salim Ismail, Exponential Organizations

Yes, a crowded universe. Distributed compute even among planets and space stations. Why not? Getting compute resources is often much easier than it was some years ago. 3D printers will evolve to massively manufacture open compute everywhere. Bandwidth and latency will not be an issue anymore. Energy will be much cheaper – space stations show us that you can use solar energy to keep them working forever.

Along with all this evolution, we’ll evolve in other areas. Software is still pulling the innovation strings for hardware. And it seems that software depends on an engaged community to define its best course. OpenStack is the best example of it: millions of lines of code sourced by a community of believers. Believers who code from everywhere – “You must feel the Force around you; here, between you, me, the tree, the rock, everywhere” (Yoda).

Anyone should be capable of using OpenStack.

OpenStack will bring you the power to orchestrate whatever you want. Customers love that. People love accuracy and responsiveness. OpenStack is innovating in the cloud (orchestration, automation, geographical distribution…) much faster than any of the best-of-breed products. However, “with great power comes great responsibility”.

OpenStack can’t be treated as an out-of-the-box vendor solution. OpenStack changes constantly: more projects are added, others dropped over time. It needs a dedicated, committed and talented team in love with technology and change.

Anyone with a bit of experience installing OpenStack knows it’s not perfect. Just by getting insight into projects like Nova and Neutron you can see they were developed by different teams – functions and objects that do roughly the same things have completely different naming conventions between the two projects.

OpenStack is a solid solution. Big use cases at companies like Walmart, PayPal, Bloomberg and BMW are evidence of that. However, not everybody can spend millions to keep a dream team supporting it and even contributing code.

We believe we can take this amazing solution to companies that couldn’t afford it so far. We believe all the interoperability OpenStack is showing off can be used to increase its adoption among smaller or less specialized users. That doesn’t mean you will be less committed to it; it means you will speed up its implementation and be more confident operating it.

Sentinel.la is a pledge of supporting people to adopt OpenStack.

We love how OpenStack is turning conventional IT operation upside down. Let us help you discover its full potential and live what we love.

Our target is to get you to the most important value of open cloud platforms: a value that can bring unprecedented savings or additional revenue streams, the value of an agile response to unpredicted demand. Boost your apps and IT services to bring what customers deserve.

We’ll bring you the confidence to start using OpenStack right away – just give us some weeks to get our first version online –

Sentinel.la will be securely and efficiently gathering relevant information of critical OpenStack components. Sentinel.la will bring an online status of every implemented project. What is actually happening inside? Analytics to understand what to do in case of any event or failure.

We’ll help you to get the best OpenStack experience. And in order to do so, we pledge to:

  • Deliver a hyper-scalable service. Every ingredient in our service is based on components with no limit to scale, from the databases to the very front-end.
  • Bring new features and fixes in a timely manner, using as a design pillar the use of on-demand resources, which can range from online public services to open source. The objective is to write as little code as we can. Of course, OpenStack will be one of the core open-source projects in our service.
  • Boost interoperability. We love everything about DevOps. DevOps teams expect full management of any feature through RESTful APIs; we wouldn’t expect less than this from any other product. Also, Python will be our first choice for coding.

And because we love the DevOps way of doing things, we pledge to build our project on the following values:

  • Meritocracy. The different roles in our team are based on meritocracy. Performance will be based on actual work and commitment. Rule number one is “Have fun and love what you do”.
  • Trustworthiness. Every member of this team is family and will be treated as such. If anyone runs into issues or makes a mistake trying to do their best, they will primarily find understanding and support from the team to work it out.
  • Obsessive agility. Customers deserve a timely response to any problem or doubt. Every person on the team will have the information required to bring this obsessive agility right away.

 

Finally, our mission is to help any mortal unleash OpenStack at every corner of the universe; to help them do it with confidence; to help them get the sponsorship of their bosses and companies to show what OpenStack is capable of; to help them do what we like most.

01 Jan 2016

Alameda: Our journey has begun, spread the word

“Would you tell me, please, which way I ought to go from here?’
‘That depends a good deal on where you want to get to,’ said the Cat.
‘I don’t much care where -‘ said Alice.
‘Then it doesn’t matter which way you go,’ said the Cat.
― Lewis Carroll, Alice in Wonderland

The first version of our release Alameda is almost cooked. It’s been an interesting journey, from just a piece of paper with some written ideas to a real plan, witnessed by some coffee shops between Coyoacán and Polanco (BTW, those are in Mexico City).

You have the IDEA. What’s next now?

Vision is everything. If you don’t know where you want to get to, then the path you take is irrelevant and all the energy spent is wasted. You must define a vision and its supporting values. Thanks to that, our product roadmap was built in a matter of days.

 

Our company’s pillars have been successfully upheld so far.

Product design decisions have been made to uphold our company’s pillars: hyper-scalability, interoperability and a smart use of staff on demand. Avoid any performance bottleneck, even against unpredicted demand for resources. Leverage any need through on-demand resources. In fact, we got our image and logo through designcrowd.com. Check out my previous post for details about our company’s values and foundations.

Our core service (codename: Medusa) has not been developed from scratch. Specific open-source projects and online platforms have been chosen and stuck together; we just code the glue between these blocks – Python in the best of cases.

Collecting and managing time-series data is the underlying support for any monitoring service. Building a core platform to do that effectively and at scale would have taken forever. Influxdata.com seems to have everything we were looking for. It meets the criteria of an on-demand resource – we don’t want to spend resources operating and tuning a database like this; it would draw us away from our core purpose, and someone else can do it for you. InfluxDB has Kapacitor, a data processing engine: “Kapacitor lets you define custom logic to process alerts with dynamic thresholds, match metrics for patterns…”. Kapacitor is a perfect match for our alert system needs.

We provide three types of thresholds: BINARY (“up” or “down”), TAILABLE (logging information) and GAUGE (for measurements). Alerts are evaluated and triggered through Kapacitor (InfluxDB). However, we’ve also decided to use capped collections (MongoDB).
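As a rough sketch of how the three threshold types could be evaluated (a simplified illustration only, not the actual Kapacitor logic or our daemons’ source):

```python
# Simplified sketch of the three threshold types described above.
def evaluate(threshold_type, value, limit=None, pattern=None):
    if threshold_type == "BINARY":       # service is "up" or "down"
        return value == "down"
    if threshold_type == "TAILABLE":     # scan logging information
        return pattern in value
    if threshold_type == "GAUGE":        # numeric measurement vs. limit
        return value > limit
    raise ValueError(f"unknown threshold type: {threshold_type}")

print(evaluate("GAUGE", 92.5, limit=90))    # True -> raise an alert
print(evaluate("BINARY", "up"))             # False -> all good
print(evaluate("TAILABLE", "ERROR: nova-compute down", pattern="ERROR"))  # True
```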

MongoDB has been added to the equation. It’s delivered in a monthly subscription model by a good number of providers. MongoDB scales out amazingly, following the NoSQL trend of databases built in response to the demands of new applications. Its capped collection feature makes data “automatically age out”, which will help us manage tons of information over time in the simplest and most efficient way.

Figure: Dashboard’s mockup

Our dashboard is being built with AngularJS (JavaScript), which is maintained by Google. It offers great portability and flow in applications: a “one-page” experience, so the page never needs to reload. We follow the Model–view–controller (MVC) pattern, which facilitates the addition of components in the future and also provides maintainability – developers independently manage the core service and dashboard programming. This is the best example of our flexibility, one of our three pillars: users could even create their own dashboards to interface with our core monitoring system.

Figure: A glimpse of our dashboard

We are working hard to deliver lightweight, secure and highly functional agents. They must be installed on-premises on every OpenStack node, and they are the only piece of code to install into your infrastructure. They remotely reach out to our online core service, sending data at a configurable interval. Authentication must be strong between the agents and our core solution, and we’ve decided to base this important part of our development on JSON Web Tokens (JWT).

JWT is used to pass the identity of authenticated users between an identity provider and a service provider – in our case, from the web dashboard (Apollo) to the RESTful API (Medusa). The browser doesn’t store sessions, which makes the login functionality fully compatible with mobile devices without any further change or effort – we are preparing to release a Sentinella Android/iOS app in the mid-term. This way you don’t need to manage sensitive API keys on premises or on devices – mobile devices can easily be stolen. API keys don’t expire, and rotating them brings plenty of management issues and security concerns; JWTs expire constantly and are renewed transparently to the user.

All these components talk through an external and secure MQ service – BTW, that could also be externalized to a service provider. The idea is to avoid hard dependencies between components. That helps us, for example, to modify the database structures, or even change DB provider or the DB itself, with no service interruption.

What have we got in Alameda?

Our MVP (minimum viable product) covers monitoring for Nova and Glance so far. Of course, we’ll add more OpenStack projects, like Keystone and Neutron, in the next versions. The way we manage versioning is shown in the next picture.

Figure: Alameda’s roadmap overview

The first version will include all the components we’ve just mentioned. We are excited to take this online as soon as possible. We are sure this will be an important contribution to the community.


All rights reserved © 2017 Sentinelle Labs. Terms and conditions | Privacy Policy
