Why dynamic columns make sense for Glassbeam architecture

ASHOK AGARWAL
Wednesday, August 14, 2013

Glassbeam Engineering is working heads down on our next-gen architecture using Cassandra and related column family structures. There are many reasons for this evolution, but one of the key drivers is a compelling support use case. Here is some background on this topic.

Typical machine data, like logs, varies in format over time (a new product release means changed logs), so many components of a machine data analytics solution require flexibility in schema design. One of the most common causes of support incidents is configuration change, i.e., configuration settings or other aspects of the software or device have changed. Highlighting what has changed, in the context of the issue being reported, has to be an integral part of any machine data analytics solution.

In its simplest form, detecting config change comes down to comparing two sets of data elements. The simplest way to implement config analysis is to ensure the device sends all config data together. In a traditional RDBMS, after parsing, you can then save this data in one (or more) rows. You could even take an MD5 hash of all config attributes and easily compare two snapshots to determine whether something has changed.
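For illustration, here is a minimal Python sketch of that hashing approach. The attribute names and values below are made up for the example, and a real implementation would canonicalize encodings and value types more carefully.

import hashlib

def config_fingerprint(config):
    # Hash the sorted attribute=value pairs so ordering never affects the digest.
    canonical = "\n".join(f"{k}={v}" for k, v in sorted(config.items()))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

yesterday = {"Company": "Vang Systems", "Software": "7.0.2050.18c"}
today = {"Company": "Vang Systems", "Software": "7.0.2060.02a"}

if config_fingerprint(yesterday) != config_fingerprint(today):
    # The digests differ, so drill down to the attributes that actually changed.
    changed = [k for k in sorted(set(yesterday) | set(today))
               if yesterday.get(k) != today.get(k)]
    print("Config changed:", changed)  # -> Config changed: ['Software']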

The problem arises when you cannot guarantee that all configuration attributes are received together. When saving partially received data in an RDBMS, it is difficult to distinguish between an attribute that wasn't received and one that was received but empty: both end up as NULL in a fixed schema. Sure, you can design a workaround, but wouldn't it be nice to eliminate columns in which no data is received?

Databases like Cassandra, however, allow dynamic columns and present an elegant solution: key-value pairs for data that doesn't come through are simply not present. This makes it easy for an application to traverse back up the chain of earlier rows and find the last known value for an attribute, and the need to receive all config data together vanishes.
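To make that idea concrete before the customer example below, here is a minimal sketch of the lookup, modeling sparse rows as plain Python dicts keyed by day. The layout, dates, and last_known helper are illustrative, not our actual schema; in Cassandra itself a similar walk can be pushed server-side, e.g. with a reversed slice over a time-ordered wide row.

from datetime import date

# Sparse rows: each day's dict holds only the key-value pairs that actually
# arrived that day, mirroring dynamic columns in a column family.
rows = {
    date(2013, 7, 1): {"Software": "7.0.2050.18c", "Model": "110"},   # monthly log
    date(2013, 7, 2): {"Uptime (stlscn1prd01)": "30 days 13 hours"},  # daily log
    date(2013, 7, 3): {"Uptime (stlscn1prd01)": "31 days 13 hours"},  # daily log
}

def last_known(attr, as_of):
    # Walk backwards in time until a row contains the attribute.
    for day in sorted(rows, reverse=True):
        if day <= as_of and attr in rows[day]:
            return day, rows[day][attr]
    return None

print(last_known("Software", date(2013, 7, 3)))
# -> (datetime.date(2013, 7, 1), '7.0.2050.18c')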

Let me give an example of this use case:

From the devices of one of our customers, we get data at two separate frequencies. The following data comes every day:

—————– Daily data ——————————————————————————–

Server name stlsclstprd01 stlscn2prd01
Company Vang Systems
Hardware version NAS Platform (M2SEKW1050134)
MAC ID DD-05-EF-6E-1E-8B

Uptime (stlscn1prd01): 30 days 13 hours 56 minutes 33 seconds
Uptime (stlscn2prd01): 33 days 2 hours 19 minutes 56 seconds
Uptime (stlscn3prd01): 7 days 18 hours 55 minutes 11 seconds
Uptime (stlscn4prd01): 103 days 9 hours 11 minutes 3 seconds

————————————————————————————————————–


This other data, meanwhile, comes once a month:

—————— Monthly Data ——————————————————————————

< getmacid for pnode 1 >
MAC ID is DD-05-EF-6E-1E-8B
< cluster-getmac for pnode 1 >
cluster MAC: 8E-D1-B0-E1-F4-9B
< id for pnode 1 >
Server name: stlscn1prd01
Comment: Sys BLI
Company: Vang Systems
Department:
Location: 800 North Lindbergh Ave.
Building J
Saint Louis
Missouri
63167
United States

< ver for pnode 1 >
Model: 110
Software: 7.0.2050.18c (built 2011-04-06 17:31:25+01:00)
Hardware: Mercury (M2SEKW1050134)

——————————————————————————————


There are several configuration elements, like software version and location, which appear only in the monthly data. While location may not relate to a support incident, software version certainly can. Since the configuration is a combination of attributes received across both of these logs, in an RDBMS both logs would have to be parsed together in order to get a full picture of the configuration. But in Cassandra, each log can be parsed individually, storing only those attributes present in that specific log file. This simplifies the parsing process and reduces the storage requirement.
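As a sketch of that per-log parsing, here are two illustrative Python parsers for the sample logs above. The regular expressions are simplified, and multi-line values such as the address are ignored; each parser returns only the attributes it actually saw.

import re

def parse_daily(text):
    # Daily logs use "Label value" lines plus one uptime line per node.
    attrs = {}
    for label in ("Server name", "Company", "Hardware version", "MAC ID"):
        m = re.search(rf"^{re.escape(label)}\s+(.+)$", text, re.MULTILINE)
        if m:
            attrs[label] = m.group(1).strip()
    for m in re.finditer(r"^Uptime \((\S+)\):\s*(.+)$", text, re.MULTILINE):
        attrs[f"Uptime ({m.group(1)})"] = m.group(2).strip()
    return attrs

def parse_monthly(text):
    # Monthly logs use "Label: value" lines.
    attrs = {}
    for m in re.finditer(r"^([A-Za-z ]+):\s*(.*)$", text, re.MULTILINE):
        # An empty value (e.g. "Department:") is kept as "", so "received
        # but blank" stays distinct from "never received" (key absent).
        attrs[m.group(1).strip()] = m.group(2).strip()
    return attrs

Writing the output of parse_daily and parse_monthly under the same row key yields exactly the kind of sparse, per-day attribute maps that the last_known sketch above walks over: only the keys present in a given log become columns for that day.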

To learn more about our new architecture and how we process complex log data, click HERE. You can also download our white paper on multi-structured log data from our RESOURCES page.