As the Internet of Things inevitably comes into its own, the origin of data has evolved from people to machines to “things”. Technologies from leaders like Google and Facebook emerged to analyze enormous volumes of data in massive data farms deployed in the cloud. That is all well and good, but the approach requires moving this “ton” of data to a central location and partitioning it across a large number of nodes so that the analysis can be parallelized. Consider that Netflix runs over 1,000 nodes in its cluster. Doable, but at some point the laws of physics start to interfere.
Necessity is the mother of invention. Enter EDGE COMPUTING – bring the computing closer to the devices and operate on data locally. What about install-base analytics, you ask? Sure, you still need that, but moving computation to the edge allows filtering and aggregation, so less data has to be moved to a central location. And when you are talking about “big data”, even a small percentage can amount to gigabytes.
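To make the filtering-and-aggregation idea concrete, here is a minimal sketch of an edge-side reduction step. The record fields, severity levels, and thresholds are invented for illustration and are not Glassbeam's actual schema:

```python
# Hypothetical sketch: filter noisy records at the edge and roll up
# per-device counts, so far less data travels to the central cluster.
from collections import defaultdict

def filter_and_aggregate(records):
    """Keep only warning/error records and aggregate counts per device."""
    counts = defaultdict(int)
    for rec in records:
        if rec["severity"] in ("warning", "error"):  # drop routine noise
            counts[rec["device_id"]] += 1
    return dict(counts)

raw = [
    {"device_id": "d1", "severity": "info"},
    {"device_id": "d1", "severity": "error"},
    {"device_id": "d2", "severity": "warning"},
    {"device_id": "d2", "severity": "error"},
]
print(filter_and_aggregate(raw))  # 4 raw records shrink to 2 summary rows
```

Even in this toy case, four raw records collapse into two summary rows; at log-stream scale, that reduction is what keeps the pipe to the core manageable.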
Secondly, the edge platform must be isolated enough to operate independently and still provide meaningful insights, such as evaluating rules locally. Imagine a rule that says: if Alert A is followed by Alert B within 2 minutes, then the hard disk is likely to fail. I may be oversimplifying, but having such an insight locally is PRICELESS!
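That two-alert sequence rule can be sketched in a few lines. This is a hypothetical illustration of evaluating such a rule locally, not Glassbeam's rules engine; the alert names and the 2-minute window come from the example above:

```python
# Hypothetical local rule: if Alert A is followed by Alert B within
# 2 minutes, flag a likely disk failure.
from datetime import datetime, timedelta

def disk_failure_likely(alerts, window=timedelta(minutes=2)):
    """alerts: list of (timestamp, alert_name) tuples in time order."""
    last_a = None
    for ts, name in alerts:
        if name == "A":
            last_a = ts
        elif name == "B" and last_a is not None and ts - last_a <= window:
            return True  # B arrived within the window after A
    return False

alerts = [
    (datetime(2016, 1, 1, 10, 0, 0), "A"),
    (datetime(2016, 1, 1, 10, 1, 30), "B"),  # 90 s after A -> fires
]
print(disk_failure_likely(alerts))  # True
```

Because the rule only needs the local alert stream and a clock, it keeps working even when the link to the central cluster is down.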
Finally, analytics is all about structure. Whether it happens locally or centrally, any analytics requires structured data. One of the primary challenges of big data is converting unstructured “log data” into structured data for business intelligence. Edge computing distributes the work of creating that structure, passing structured data down the chain.
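A minimal sketch of that parsing step: turning one unstructured log line into a structured record at the edge. The log format and field names here are invented for illustration; real device logs would each need their own grammar:

```python
# Hypothetical parser: extract structured fields from a free-form log line.
import re

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<level>\w+) (?P<component>\w+): (?P<message>.*)"
)

def parse_line(line):
    """Return a dict of named fields, or None if the line doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

record = parse_line("2016-05-01 12:03:44 ERROR disk0: write latency spike")
print(record["level"], record["component"])  # ERROR disk0
```

Once a line is a dict of named fields, everything downstream – aggregation, rules, BI queries – can operate on columns instead of raw text.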
So here is what you should expect from a good edge framework:
- Distributed computing
- Hierarchical deployment
- Local analytics
- Aggregation & forwarding
- Parsing (Structure)
- Small footprint & resource utilization
- Learning from multi-system analytics fed back to edge
- Secure data transfer between the edge and the core
That’s exactly where Glassbeam fits in – a unified platform deployed hierarchically on the edge and in the cloud for back-end analytics. The platform deployed at the edge parses streaming data from edge devices, filters it, performs analysis, and triggers rules. Edge devices push only relevant data to the core.
The diagram below illustrates how we implement edge computing:
The cornerstone of Glassbeam’s edge computing platform is its RULES AND ALERTS ENGINE. But that’s a discussion for another time. Stay tuned for how Glassbeam uses the rules engine and the ThingWorx Machine Learning platform to close the feedback loop.