Edge analytics in Glassbeam

Ashok Agarwal

So you heard me talk about edge computing. Now let's look at edge analytics. In other words, dynamically created business rules applied at run time? Hmmm, great idea, but difficult to implement. Even more difficult if you also want to simplify the creation of such rules through an intuitive drag-and-drop UI. Analytics is another name for early action, and rules are key to making that happen. Rules allow us to:

  1. Trigger alerts based on dynamically changing business needs.
  2. Encapsulate L3-level knowledge in the form of rules and apply it at the L1 level (thereby lowering MTTR).
  3. Encapsulate machine learning models as predictive scores and algorithms.

Glassbeam implements rules as a DSL (domain-specific language). The platform incorporates a message-bus-based streaming engine that applies the rules as data streams are processed. I am sure you are thinking: wow, that seems ideal for edge computing. You are absolutely right – edge is all about processing streams, and that fits well with Glassbeam's architecture.

Rules are triggered using finite state machines (FSMs) and are evaluated in parallel. Rules use two types of DSLs:

  1. External DSL – this is what the UI shows to a user, and it is the language in which users define rules.
  2. Internal DSL – this is not exposed to end users. The UI translates the rules into this language, which is then interpreted by the platform. The internal DSL is very close to the Scala language and can also be used directly to define expert-level rules.
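To make the two-level split concrete, here is a minimal sketch of a translator that turns a UI-style external rule string into an executable internal predicate. It is illustrative Python, not Glassbeam's actual translator (which targets a Scala-like internal DSL), and the `field OP value` grammar is an assumption for the example:

```python
# Sketch only: translate an external "field OP value" rule string
# into an internal predicate over a parsed row map.
import operator

OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge,
       "<=": operator.le, "==": operator.eq}

def translate(rule: str):
    """Parse 'field OP value' and return a predicate over a row map."""
    field, op, value = rule.split()
    return lambda row: OPS[op](row[field], float(value))

check = translate("cpu_usage > 80")
check({"cpu_usage": 91.5})   # True
```

The point is the separation of concerns: the user only ever sees the readable external form, while the platform interprets the compiled internal form.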

Some representative examples of rules are:

  1. “cpu_usage > 80%”, or
  2. “used space + unallocated space > 0.9 * total space”
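The second example is an arithmetic expression over several fields of a row. A sketch of how such an expression could be checked against a parsed row map (illustrative Python, assuming field names with spaces map to underscore identifiers; `eval` over a restricted namespace is fine for a sketch, not for production):

```python
# Sketch: evaluate an arithmetic rule expression against a row map.
def evaluate(expr: str, row: dict) -> bool:
    # Restrict eval so only the row's fields are visible as names.
    return bool(eval(expr, {"__builtins__": {}}, dict(row)))

row = {"used_space": 700, "unallocated_space": 250, "total_space": 1000}
evaluate("used_space + unallocated_space > 0.9 * total_space", row)  # True
```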

And broadly speaking, complex event processing (CEP) in Glassbeam involves:

  1. A condition which triggers a rule.
  2. Alert text to generate when the rule is triggered.
  3. An action to take when the rule is triggered (such as sending an email, making an API call, pushing to a message bus, etc.).
  4. The scope of the rule – particularly important if the rule has to act on multiple rows in log files.
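The four ingredients above can be sketched as a plain data structure. This is an illustrative Python model, not Glassbeam's API; the field names and the list-append stand-in for the action are assumptions:

```python
# Sketch of the four CEP ingredients: condition, alert text, action, scope.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # triggers the rule
    alert_text: str                     # message emitted on trigger
    action: Callable[[str], None]       # e.g. email, API call, bus publish
    scope: int                          # how many rows the rule spans

alerts = []
rule = Rule(
    condition=lambda row: row["cpu_usage"] > 80,
    alert_text="CPU usage above 80%",
    action=alerts.append,               # stand-in for email/API/bus
    scope=1,
)

for row in [{"cpu_usage": 42}, {"cpu_usage": 93}]:
    if rule.condition(row):
        rule.action(rule.alert_text)

alerts  # ["CPU usage above 80%"]
```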

In the application a rule is visible as Scala code (the internal DSL). Glassbeam's architecture uses an Akka actor model for parallel asynchronous processing. The actor model is a message-based system, and each rule translates to an actor. The system takes template actor code, injects the internal representation of the rule into it, and compiles the actor on the fly.
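The "template plus injected rule, compiled on the fly" idea can be sketched in a few lines. Glassbeam does the equivalent by compiling Scala actor code at run time; this illustrative version splices the rule text into a handler template and compiles it with Python's `exec`:

```python
# Sketch: inject a rule into template handler code and compile it at run time.
ACTOR_TEMPLATE = """
def handle(row):
    if {rule}:
        return "ALERT"
    return None
"""

def compile_actor(rule: str):
    namespace = {}
    exec(ACTOR_TEMPLATE.format(rule=rule), namespace)
    return namespace["handle"]

actor = compile_actor("row['cpu_usage'] > 80")
actor({"cpu_usage": 95})  # "ALERT"
```

Because each rule becomes its own compiled handler (its own actor in the real system), adding or changing a rule never requires redeploying the platform.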

Remember streams and FSMs? That's how we do it:

  1. The compiled actor is an FSM. Based on the rule it implements, the FSM registers an interest in specific row maps.
  2. Each incoming line in the stream is parsed into a structured row map.
  3. The row map is passed on to the actors that have registered an interest in it.
  4. The FSM actor cycles through various states based on incoming row maps.
  5. Once the scope of the rule is exhausted, the actor triggers the action.
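The five steps above can be sketched as a small state machine. This is illustrative Python, not Glassbeam's Akka FSM; the state names, the key-based "registered interest" check, and the row-count scope are all assumptions for the example:

```python
# Sketch of the FSM lifecycle: register interest, collect matching rows,
# fire the action once the rule's scope is exhausted.
class RuleFSM:
    def __init__(self, field, threshold, scope, action):
        self.field = field          # registered interest: rows with this key
        self.threshold = threshold
        self.scope = scope          # number of rows the rule spans
        self.action = action
        self.state = "WAITING"
        self.seen = 0
        self.matches = 0

    def on_row(self, row):
        if self.field not in row:   # not a row map this actor cares about
            return
        self.state = "COLLECTING"
        self.seen += 1
        if row[self.field] > self.threshold:
            self.matches += 1
        if self.seen >= self.scope: # scope exhausted -> trigger the action
            self.state = "DONE"
            self.action(self.matches)

fired = []
fsm = RuleFSM("cpu_usage", 80, scope=3, action=fired.append)
for row in [{"cpu_usage": 85}, {"mem": 1}, {"cpu_usage": 70}, {"cpu_usage": 90}]:
    fsm.on_row(row)

fsm.state, fired  # ("DONE", [2])
```

Note how the second row is ignored entirely: actors only ever see the row maps they registered an interest in, which is what keeps parallel evaluation cheap.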

The logic of a rule can aggregate and filter incoming data, and the action of the rule allows passing the result on to the next node in the hierarchy. For example:

  1. The edge device may emit millions of events per hour, but I am interested only in events of a certain severity. I can write a rule that forwards an event when the desired severity occurs.
  2. The edge device is capturing statistical data every second. I don't want that much granularity in my install-base analytics; hourly granularity is enough. I can write a rule on the edge to compute the average, min, and max for the hour and trigger an alert when the hour is over.
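The second example is a classic edge roll-up. A minimal sketch of the aggregation half (illustrative Python; the function name and the `(epoch_seconds, value)` sample shape are assumptions):

```python
# Sketch: roll per-second samples up into hourly min/max/avg buckets
# before forwarding, so the core only sees one summary per hour.
def hourly_rollup(samples):
    """samples: list of (epoch_seconds, value); returns one summary per hour."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts // 3600, []).append(value)
    return {
        hour: {"min": min(vs), "max": max(vs), "avg": sum(vs) / len(vs)}
        for hour, vs in buckets.items()
    }

samples = [(0, 10.0), (1, 30.0), (3599, 20.0), (3600, 50.0)]
hourly_rollup(samples)
# {0: {'min': 10.0, 'max': 30.0, 'avg': 20.0},
#  1: {'min': 50.0, 'max': 50.0, 'avg': 50.0}}
```

In the real system the "when the hour is over" part is the FSM's scope, and the forward to the next node is the rule's action.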

Next we will see how learnings at the core can be transferred to the edge through the rules engine. Stay tuned …