At Glassbeam, we have always believed in eating our own dog food, and why not? We have the same use cases for our platform that our customers do. But before I go deep into the internal use cases we use Glassbeam for, let me explain how we collect our infrastructure logs and the types of logs we collect.
The first step of any log analytics project is, obviously, logs! You not only need a mechanism for collecting logs periodically, but also clarity on the content of your logs. Your analytics is only as good as the content of your logs. If you are looking to get started on collecting logs from your infrastructure, our white paper “Best Practices in Call-Home Data” will definitely help you.
Since Glassbeam Analytics is a platform that parses complex, multi-structured logs, either in batches or as streams, our log analytics project focused more on what to log than on how to format our logs. Our log parsing language, SPL (Semiotic Parsing Language), handles diverse machine log formats with ease, whether from servers, software, IoT devices, or complex machines like medical devices and converged infrastructure. That’s the secret sauce of Glassbeam Analytics.
However, we did not get the ‘what to log’ right overnight. It is an iterative process, driven by use cases from our four teams (Glassbeam Support, Engineering, Product Management, and Sales). Every time we had a use case that Glassbeam for Glassbeam could not solve, we logged new information to satisfy it.
The business case to use Glassbeam Analytics internally was very clear from day one:
What Types of Logs Do We Collect from Our Cloud-Based Glassbeam Analytics Infrastructure?
For our analytics, we collect three types of logs from our hosted systems and applications.
What’s the Frequency of Our Log Data Collection?
Event logs are streamed in real time, which helps us react to any issue instantly. Statistical data is captured very frequently (every 10 seconds), and configuration/state information is captured periodically or when specific events occur in our pipeline. SPL allows us to seamlessly integrate and blend data from multiple sources generated at different frequencies.
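To make the blending idea concrete, here is a minimal sketch, not in SPL, of merging three sources captured at different frequencies into one time-ordered stream. All record shapes, field names, and values below are illustrative assumptions, not our actual log formats.

```python
# Illustrative sketch (not SPL): blend three hypothetical log sources,
# each captured at a different frequency, into one time-ordered stream.
# Record shapes and field names are invented for illustration.
from heapq import merge

# Event logs: streamed as they occur (irregular timestamps, in seconds).
events = [
    (2, "event", "parser restarted"),
    (17, "event", "queue depth alert"),
]

# Statistical samples: captured every 10 seconds.
stats = [(t, "stat", {"cpu_pct": 40 + t}) for t in range(0, 30, 10)]

# Configuration/state snapshots: captured periodically (here, every 20s).
config = [(0, "config", {"parsers": 4}), (20, "config", {"parsers": 6})]

def blend(*sources):
    """Merge pre-sorted streams into one timeline ordered by timestamp."""
    return list(merge(*sources, key=lambda rec: rec[0]))

timeline = blend(events, stats, config)
```

The key point the sketch mirrors is that each source keeps its own native cadence; the blending step only orders records on a shared timestamp, which is roughly what lets data at different frequencies be analyzed together.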
Glassbeam Infrastructure Snapshot
The above diagram shows our setup. We collect data and correlate it across multiple components in the pipeline, from log receivers to load balancers to parsers to data stores and our middleware and UI layers. The volume, variety, and velocity of data vary from component to component. Also, from the same servers, we push both real-time streaming data and batch configuration and state data, and easily blend all of them through SPL.
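The cross-component correlation described above can be sketched as grouping records from different pipeline stages by a shared identifier, so one unit of work can be traced end to end. This is a hypothetical illustration only: the component names, the `bundle_id` field, and the record layout are assumptions, not our actual schema.

```python
# Hypothetical sketch of cross-component correlation: group records
# emitted by different pipeline stages on a shared identifier, then
# order each trace by timestamp. Names and fields are invented.
from collections import defaultdict

records = [
    {"component": "log_receiver", "bundle_id": "b-101", "ts": 1},
    {"component": "parser", "bundle_id": "b-101", "ts": 5},
    {"component": "data_store", "bundle_id": "b-101", "ts": 9},
    {"component": "log_receiver", "bundle_id": "b-102", "ts": 2},
]

def correlate(records):
    """Group records by bundle_id; sort each group by timestamp."""
    traces = defaultdict(list)
    for rec in records:
        traces[rec["bundle_id"]].append(rec)
    for trace in traces.values():
        trace.sort(key=lambda r: r["ts"])
    return dict(traces)

traces = correlate(records)
```

Under these assumptions, the trace for `b-101` reads as a journey through the pipeline (receiver, then parser, then data store), which is the kind of end-to-end view correlation across components provides.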
What Do Our Internal Teams Expect from Log Analytics Using Glassbeam?
We use Glassbeam for multiple use cases internally. From helping our support team be proactive, to helping engineering improve reliability, to helping product management make the right decisions about product priorities, to helping our sales team manage their accounts better, Glassbeam is the go-to tool for most of our decisions.
Using the Glassbeam Analytics platform to analyse Glassbeam logs is what we call Glassbeam for Glassbeam (or GB4GB, internally). While Glassbeam for Glassbeam has been in use internally since its inception, I am happy to announce that we are now making our solution available to our on-premises installations and to customers who install our platform on their private cloud.
Let’s Talk Use Cases
In the next few posts, I will cover key use cases that we use Glassbeam for and how Glassbeam for Glassbeam is helping our Support, Engineering, Product Management, and Sales teams.