Glassbeam for Glassbeam (Part 5) – Data Driven Product Management

Pramod Sridharamurthy

Product management is not an easy job. The bulk of a company’s resources and direction is driven by decisions made on the product roadmap. Bad decisions lead to wasted internal effort and, in the best case, to unhappy customers; in the worst case, to lost customers.

A product manager not only has to listen to the needs of customers but, more importantly, infer the requirements the customer is not stating. They have to prioritize between bugs that need to be fixed and features that need to be built, and balance new features against existing features that need enhancement. With limited engineering resources and an unending list of things to do, wrong or misinformed decisions can be costly and frustrating to both internal and external stakeholders.

Being a product manager myself, let me explain in this post how I use the Glassbeam Analytics solution for my own product management decisions.

One of the most important use cases for analysing logs is install base-wide analytics. As a product manager, when I make decisions, I don’t want them to be based on just sporadic requests or trendy user features. I’d like to understand how product feature design impacts the majority of our users.

We humans are perception driven, and what we intuitively feel is right can turn out to be incorrect in the long term. While intuition is good, backing it up with data makes the case for product design features and architecture decisions even more solid. Time to market is everything, so why take a chance on intuition alone?

I use Glassbeam Analytics (GBforGB) for a significant chunk of my product decisions. Since the list of use cases where I apply Glassbeam Analytics is quite long, in this post I will touch on only a few key areas of my product management decision making using the solution. So, let’s get started.

Prioritizing issues to be fixed in the upcoming release
One of the most important aspects of my product management tool kit is prioritizing bugs that need to be fixed in the product.

Here are three key parameters I use to prioritize the bugs:

  1. How widespread the issue is
  2. Whether the bug occurs in a particular scenario or a combination of scenarios
  3. Whether the bug impacts specific customers or is generic

Of course, the nature of an issue primarily decides how critical it is to fix, but not every issue is a P1 that needs to be fixed ASAP. So, when I have a bunch of P2 and P3 issues at hand, it helps to understand how often each issue occurs and how widespread it is. There are two apps from the Glassbeam Analytics suite I use for this:

  1. First, I go to the Glassbeam Explorer app and search for the issue I am interested in. Explorer is a powerful ‘search and interact’ interface on the logs we parse. I quickly learn the number of times the issue has occurred in a given time frame, its frequency, and the systems and customers being impacted.

  2. If the historical data is not significant and I want to keep an eye on the issue going forward, I set up a rule, create a dashboard based on the alert generated by that rule, and e-mail myself a daily report. Here’s what my alerts setup looks like. You will notice from the chart that there are some serious occurrences of LCP: Timed out errors. That obviously requires greater and perhaps immediate attention.
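As an illustration, the spread analysis described above can be sketched in a few lines of Python. The record format here is a hypothetical simplification of parsed log output (it is not the Glassbeam API), and the ranking rule is just one reasonable choice: customers impacted first, then systems, then raw count.

```python
from collections import defaultdict

def prioritize_issues(log_records):
    """Rank issues by how often and how widely they occur.

    Each record is assumed to be a dict like:
      {"issue": "LCP: Timed out", "system": "s1", "customer": "acme"}
    Returns a list of (issue, count, n_systems, n_customers) tuples,
    most widespread issue first.
    """
    stats = defaultdict(lambda: {"count": 0, "systems": set(), "customers": set()})
    for rec in log_records:
        s = stats[rec["issue"]]
        s["count"] += 1
        s["systems"].add(rec["system"])
        s["customers"].add(rec["customer"])

    # Sort by customers impacted, then systems impacted, then frequency.
    ranked = sorted(
        stats.items(),
        key=lambda kv: (len(kv[1]["customers"]), len(kv[1]["systems"]), kv[1]["count"]),
        reverse=True,
    )
    return [(issue, s["count"], len(s["systems"]), len(s["customers"]))
            for issue, s in ranked]
```

A P2 that touches many customers would then naturally float above a P3 seen on a single system, which is exactly the judgment call described above, made mechanical.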

Understanding feature usage

In the Agile world, we iterate over new features. Often, we don’t know exactly which features will work for our users. We also might not want to wait until all the sub-features of a major feature are complete before releasing. Ideally, I’d like to ship when the feature meets the MVP bar, and then iterate based on feedback from our users to make it even better. Once a feature is stable, I still want to keep tracking its usage so that we stay relevant to the customer and their use cases. To achieve this, there are two key things I focus on:

  1. I pick features that have not been used much, and talk to a sample set of users (directly or through sales) to understand whether the feature is unintuitive or whether we didn’t do a good job of documenting it in our release notes
  2. For features that have been used once but not again, I talk to users to understand whether the feature was complex to use or simply does not match their needs

Data like this helps me improve feature usage, improve documentation and release notes, or, in the worst case, EOL the feature so that we don’t carry feature baggage. I look at these metrics both at a per-customer level and across all customers. Here are some of the reports that I look at.
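The two buckets above can be produced by a first-pass classification over usage events. This is a minimal sketch under my own assumptions: usage events have already been extracted from the logs as (feature, timestamp) pairs, and the 30-day "recent" window and bucket names are illustrative choices, not anything from the product.

```python
from datetime import datetime, timedelta

def classify_features(usage_events, all_features, now, recent=timedelta(days=30)):
    """Bucket features for follow-up with users.

    usage_events: list of (feature, timestamp) tuples from parsed logs.
    Returns a dict: feature -> "unused" | "abandoned" | "active".
    """
    counts, last_seen = {}, {}
    for feature, ts in usage_events:
        counts[feature] = counts.get(feature, 0) + 1
        last_seen[feature] = max(last_seen.get(feature, ts), ts)

    buckets = {}
    for f in all_features:
        if f not in counts:
            buckets[f] = "unused"       # candidate for docs/UX review
        elif now - last_seen[f] > recent:
            buckets[f] = "abandoned"    # tried, then dropped: talk to users
        else:
            buckets[f] = "active"
    return buckets
```

"Unused" features feed the first conversation above (discoverability and documentation), while "abandoned" ones feed the second (complexity or fit).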

Understanding user navigation

Typically, a UI is built around an assumed workflow. Wouldn’t it be cool to know whether users actually use the product the way it was imagined? What is the typical workflow? Which button do they click most often? And many more such questions.

There are multiple use cases for this data. Here are a few examples:

  • First, it helps our Quality Assurance team build end-to-end test cases that simulate the majority of our users’ flows
  • If I know from analysing our logs that a user typically goes from Point A in the UI to Point B, I want to understand whether the flow from A to B is intuitive
  • If a user goes from Point A to Point B to Point C and stays at Point B for only a very short period, then very likely Point B was just an extra click on the way to Point C. There is now scope to improve the UI so it is easy to go directly from A to C.

Here is a workflow graph with the time spent in each feature.
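The A-to-B-to-C pattern above can also be detected mechanically. This sketch assumes sessions have already been reconstructed from the logs as ordered (page, dwell-seconds) pairs; the function name and the dwell threshold are my own illustrative choices.

```python
def pass_through_steps(sessions, dwell_threshold=3.0):
    """Find UI steps users merely pass through.

    sessions: list of sessions, each a list of (page, dwell_seconds)
    pairs in visit order. Returns a set of (prev, step, next) triples
    where the middle page was visited only briefly, suggesting a
    possible shortcut from prev directly to next.
    """
    shortcuts = set()
    for session in sessions:
        # Slide a window of three consecutive visits over the session.
        for (a, _), (b, dwell_b), (c, _) in zip(session, session[1:], session[2:]):
            if dwell_b < dwell_threshold:
                shortcuts.add((a, b, c))
    return shortcuts
```

Triples that recur across many sessions are the strongest candidates for a direct A-to-C path in the UI.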

It is fairly easy to infer from the above scenarios that the ability to analyse our platform logs lets me make product decisions like a rock star! I can come up with very interesting feedback to improve our product, prioritize the right features, track product usage, help sales with upsell and cross-sell opportunities, help QA understand real-life use cases, and a lot more. Most importantly, all of this feedback is driven by data, not just intuition.

In my concluding post in the series, I will cover some of the use cases of Glassbeam for Glassbeam that our sales team sees value in.

Refer to other posts in this Glassbeam for Glassbeam series: