At a strategic level, Product Analytics isn’t just about the “How.” That’s important, but it’s tactical. Strategy is about the “What” and the “Why”: are your outcomes driving toward your goals?
We are overwhelmed by information, not because there is too much, but because we haven’t learned how to tame it. Information lies stagnant in rapidly expanding pools as our ability to collect and warehouse it increases, but our ability to make sense of and communicate it remains inert, largely without notice.
Stephen Few, Data Visualization Expert
The What and the Why
What are you trying to do? Probably running and growing your business, within guardrails that include internal financial and external regulatory concerns. Most analytics are historical lagging indicators. You may have items you consider leading indicators: a sales pipeline or scheduling system, possibly equipment failure estimations, and so on. And you may be using predictive models, including AI tools. But mostly, your performance reports will be historical. And why study history? Typically it’s to try to predict or change the future.
So What are you trying to do?
- Achieve target goals.
That’s it. That’s the “What.”
And just Why do we use analytics to help with this? Is it just to see how we’re doing and know where we are? Sure. Partly. But let’s get to a higher level. We’re tracking so we can do at least these two things:
- Make better choices.
- Reduce bias in decision-making with evidence-based opinions.
Our dashboards and spreadsheets are great. We can choose numbers to try to move various needles. But their purpose can be boiled down to two questions implied above:
- Are we making decisions based on things that are true?
- Are we making better choices that will take us towards our goals?
There’s also the reality that data gathering and analysis is usually costly, more so when you do sophisticated experimentation. Beyond basic dashboards, consider how, in “How to Measure Anything,” Douglas Hubbard takes us into Decision Theory and the idea of Expected Opportunity Loss (EOL). Here, we try to discover the ROI of even collecting certain analytics themselves. To do so, we have to consider the Expected Value of Information (EVI). For a lot of our typical “must track” items, we may not have to go so deep. But for some? We might. Because the costs of collecting some information might be non-trivial: tooling, collection, warehousing, data science personnel, and perhaps machine learning skills can add up fast. See The Concept of EVI: Expected Value of Information by Ron Kohavi.
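To make that concrete, here’s a back-of-the-envelope sketch in the spirit of Hubbard’s approach; every number in it is invented for illustration.

```python
# A toy EOL/EVI calculation in the spirit of Hubbard; all numbers invented.

def expected_opportunity_loss(p_wrong: float, cost_if_wrong: float) -> float:
    """EOL: the chance of making the wrong call times the cost of that call."""
    return p_wrong * cost_if_wrong

# Decision: launch a feature or not.
# Before measuring, suppose we estimate a 40% chance that launching is the
# wrong call, and that a wrong call costs roughly $500k.
eol_before = expected_opportunity_loss(0.40, 500_000)  # $200,000

# Suppose the proposed analytics would cut that uncertainty to a 10% chance
# of a wrong call.
eol_after = expected_opportunity_loss(0.10, 500_000)   # $50,000

# EVI is the reduction in expected loss that the information buys us.
evi = eol_before - eol_after                           # $150,000
print(f"EVI: ${evi:,.0f} -- fund the data collection only if it costs less")
```

If the tooling, warehousing, and personnel needed to collect a measure would cost more than its EVI, that measure is a candidate to skip.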
Categories to Consider Tracking
We have books, websites, and more that list favorite metrics and Key Performance Indicators. That’s not what we’re here to go over. If you’re responsible for company operations or high-level product analytics, or are senior in the analytics reporting function itself, it’s useful to consider a larger-scale taxonomy of what your analytics efforts should look like before choosing particular numbers. Doing so can frame your efforts and align them with your high-level goals and objectives.
Company Wide View
At a Company level, there are several perspectives on the numbers with which you should be concerned.
- Strategic vs. Tactical
- Internal vs. External
Our focus here is internal. While an external industry metric might factor into a Key Performance Indicator, demographics, macro issues, and the like won’t be considered further here. We’re going to focus on high-level structural internal analytics, and then consider some frameworks.
An Analytics Taxonomy
A taxonomy is a system of classification that organizes concepts into categories and subcategories based on their relationships and characteristics. We’re going to categorize types of analytics based on their purposes, creating a structured framework for understanding various approaches to data analysis.
Our goal is to structure a ‘right fit’ for decision-making. Over time, your organization will – ideally – climb in capability to more sophisticated levels. Depending on where you are in your analytics capability maturity, you may find it best to exercise your organizational muscles with the basics and add as you go.
Starting with Use Cases
- Business Level: Strategic (business metrics and KPIs)
- Product Level: Tactical drivers, sometimes called signals (often engagement measures)
- System Level: From guardrail numbers to system heartbeats
Within each of these Use Cases, we may have varying Purposes (a toy sketch contrasting the first two follows these lists):
Typical Purposes
- Descriptive (Statistical) – Describe and summarize past and current data to understand what has happened or is happening. (Typical reports.)
- Predictive (Probabilistic) – Predict future outcomes from historical data using statistical and machine learning techniques. (Regression analysis, time series views.)
- Prescriptive (Machine Learning / AI) – Recommend actions, based on predictive analytics and optimization techniques, to achieve desired outcomes. (Optimization, simulation, machine learning.)
- Diagnostic – Diagnose why something happened by exploring data in detail and identifying causes. (Drill-down, correlation analysis.)
- Real Time – Analyze data as it is generated or received, providing immediate insights and enabling quick decision-making. (Fraud detection, approvals.)
Advanced/Emerging Purposes:
- Cognitive – Mimic human thought processes for insights and understanding. (Machine learning for sentiment analysis, insights from unstructured data.)
- Exploratory – Explore data without specific hypotheses to discover patterns and anomalies. (Uncovering insights, market segments.)
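To make the first two purposes concrete, here’s a toy contrast using invented data: a descriptive summary of what happened, then a naive linear-trend projection of what might happen next.

```python
# Descriptive vs. predictive, on the same (invented) series of signups.
import statistics

monthly_signups = [120, 135, 150, 149, 168, 180]  # hypothetical six months

# Descriptive: summarize the past.
print("mean:", statistics.mean(monthly_signups))
print("latest MoM change:", monthly_signups[-1] - monthly_signups[-2])

# Predictive: a naive linear trend (least squares on the month index).
xs = range(len(monthly_signups))
x_mean = statistics.mean(xs)
y_mean = statistics.mean(monthly_signups)
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_signups))
         / sum((x - x_mean) ** 2 for x in xs))
forecast_next = y_mean + slope * (len(monthly_signups) - x_mean)
print(f"projected next month: {forecast_next:.0f}")
```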
Attributes
At the individual data point or KPI level, it’s useful to define the role of a measure. (Several may apply.) A small sketch of how these roles might be modeled follows the list.
- Metric: A simple, quantifiable measure that is used to track and assess the status of a specific process or activity. Metrics provide raw data that can be analyzed to gain insights into performance. E.g., gross sales, or web page views.
- Key Performance Indicator: A specific type of metric that is directly tied to strategic objectives and critical success factors. KPIs are selected to provide a clear picture of performance in areas that are crucial for achieving organizational goals. A simple metric might be a KPI, but often a KPI is a derived value; e.g., percentage increase in sales month over month.
- Signal/Driver: An indicator that provides early warning or insight into future performance. Drivers are leading indicators that influence or predict the outcomes measured by other metrics or KPIs. These might be simple metrics themselves, or key components of KPIs. E.g., new customer inquiries.
- Guardrail: A guardrail is a predefined threshold or limit set to ensure that performance stays within acceptable boundaries. Guardrails are used to prevent negative outcomes and ensure operations stay aligned with objectives. E.g., defect rates, cash reserve levels, churn rate.
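One minimal way to model these roles in code, so reporting can treat them uniformly, is sketched below; the names, figures, and thresholds are all hypothetical.

```python
# A minimal sketch of metric / KPI / guardrail roles; values are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float  # a raw, quantifiable measure, e.g. gross sales

def mom_increase_pct(current: Metric, prior: Metric) -> float:
    """A derived KPI: percentage increase month over month."""
    return (current.value - prior.value) / prior.value * 100

@dataclass
class Guardrail:
    metric: Metric
    limit: float                  # predefined acceptable boundary
    higher_is_worse: bool = True

    def breached(self) -> bool:
        if self.higher_is_worse:
            return self.metric.value > self.limit
        return self.metric.value < self.limit

sales_may = Metric("gross_sales", 980_000.0)
sales_jun = Metric("gross_sales", 1_050_000.0)
print(f"sales MoM KPI: {mom_increase_pct(sales_jun, sales_may):+.1f}%")

churn = Guardrail(Metric("churn_rate", 0.031), limit=0.05)
print("churn guardrail breached:", churn.breached())
```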
What to Track
This article is intended to spark thought at a strategic level, not list the Top 10 Things you should track. But for the sake of completeness, a handful of common frameworks bear mentioning, along with the idea of “North Star Metrics.”
These frameworks get into specific measures and may provide you with ideas as to what you might want to fit into the above taxonomies for the analytics you’d like to track. We’re not going deep here. This is for reference and links to elsewhere only.
AARRR!!!
Here’s a framework that’s both useful and has a great name: Dave McClure’s Startup Metrics for Pirates. The five major categories spell AARRR: Acquisition, Activation, Retention, Referral, and Revenue. This model has aged well since 2007. While originally designed for startups, the framework is useful beyond that business phase. See: AARRR Pirate Metrics Framework from ProductPlan, and Pirate Metrics For Product-Led SaaS, from userpilot, among other sources.
HEART
The HEART framework was developed by Google researchers to help measure and improve user experience (UX). HEART stands for Happiness, Engagement, Adoption, Retention, and Task Success. This framework was introduced in a research paper by Kerry Rodden, Hilary Hutchinson, and Xin Fu titled “Measuring the User Experience on a Large Scale: User-Centered Metrics for Web Applications,” which was presented at the ACM Conference on Human Factors in Computing Systems in 2010.
And then we have the more traditional Balanced Scorecard, Objectives and Key Results, SMART Goals, and others. All of these are useful, but may not perfectly fit your business. They’re better used as guidelines from which you pick and choose the best fit for your situation.
About “North Star Metrics”
This is another concept that seems to have arisen from the technology crowd. When and if these are truly aligned with business goals and are actionable, they may be useful. Otherwise, they’re just more vanity metrics. The idea is that you should (some say must) have one of these. In this article by Amplitude, Every Product Needs a North Star Metric: Here’s How to Find Yours, they talk about how to align this metric with your values. The thought is that having too many metrics/KPIs to look at can become overwhelming. (Which is true if reports and dashboards are structured to see just how many numbers you can fit on a screen.) Some companies famously share their North Stars: Netflix with subscribers’ hours watched per month, which seems sensible enough; Airbnb’s nights booked; Uber’s rides completed; and so on.
The good news is that write-ups on North Star metrics will sensibly say that sometimes you need more than one. (To their credit, Amplitude actually goes deeper into the framework.) However, there are some problems with the North Star Metric idea. Perhaps the most obvious is that it can be overly simplistic, obscuring complexities within a business. It might also – ironically – not align with long-term goals: teams may prioritize immediate increases in the metric without considering longer-term implications, whether by neglecting other areas or ignoring negative consequences. Remember that we typically get what we incentivize. There’s a risk that some will find ways to game the system to improve the North Star Metric without genuinely improving the user experience or business health.
Alternatives or Additions to North Star
Consider: North Star Plus Context
Go through an exercise to consider your North Star metrics. Then add some context around them in your dashboards/reports. Specifically, consider the following enhancements.
- A sparkline-style graph to show change over time, possibly overlaid with month-over-month or year-over-year performance.
- A cohort contributor breakdown to show whether a particular sub-component has a disproportionate impact.
- Or something else…
The point is to add context to a North Star Metric so that you avoid a misleading or overly simplified indicator. A small sketch of the first two enhancements follows.
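Here’s a minimal sketch of what that context might look like; the metric name, history, and cohort figures are all invented for illustration.

```python
# A minimal sketch of "North Star plus context"; all figures are invented.
north_star = "hours_watched"  # hypothetical Netflix-style North Star (millions)

# Trend context: the last six months (source data for a sparkline).
history = [41.2, 42.0, 44.1, 43.8, 45.6, 47.3]
mom_pct = (history[-1] - history[-2]) / history[-2] * 100
print(f"{north_star} MoM: {mom_pct:+.1f}%")

# Cohort contribution: does one segment dominate the headline number?
cohorts = {"new_users": 6.1, "1yr_tenure": 18.4, "2yr_plus": 22.8}
total = sum(cohorts.values())
for name, hours in sorted(cohorts.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {hours / total:.0%} of {north_star}")
```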
Consider: Overall Evaluation Criteria (OEC)
The Overall Evaluation Criteria (OEC) concept is a framework used to assess and guide decision-making, particularly in product development, business strategy, and performance measurement. It’s often used in evaluating experiments: specifically, the metric an experiment is intended to improve. Even if that was its earlier purpose, we can still use it for our own needs. An OEC can serve as a comprehensive set of metrics that collectively evaluate the success and effectiveness of a project, initiative, or strategy, ensuring that multiple critical aspects of performance are considered and aligned with the organization’s goals. The ideas behind OEC originate with Genichi Taguchi and his Taguchi Method, which focuses on improving quality and performance through robust design and experimentation; he introduced the concept as a way to evaluate and optimize processes and products. But we can extend this idea wider.
An OEC can offer a more holistic view than a simpler single metric. It can help inform decision-making as it considers a range of relevant metrics. It should also offer both alignment and focus as well as avoid overemphasis on a single metric or perspective.
How do you craft this? In Trustworthy Online Controlled Experiments, Kohavi, Tang, and Xu offer several examples from experiments, one of which involved the Bing search engine. While Bing sensibly used query share and revenue as key considerations, another value to consider was monthly query share, which gets to the heart of their value. Other numbers may be important and useful, but can also obscure meaningful information or even incentivize bad behavior. (After all, you can easily increase revenue by decreasing quality; you’d get more searches! But of course, over time, you’ll lose customers.) Hence the monthly query share OEC:
users/month × sessions/user × distinct queries/session
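As a toy decomposition of that OEC (all inputs invented):

```python
# A toy decomposition of the monthly-queries OEC above; inputs are invented.
users_per_month = 1_000_000          # hypothetical monthly users
sessions_per_user = 12.5             # average sessions per user per month
distinct_queries_per_session = 2.1   # average distinct queries per session

monthly_queries = (users_per_month
                   * sessions_per_user
                   * distinct_queries_per_session)
print(f"monthly queries: {monthly_queries:,.0f}")

# The factorization is the point: padding sessions with low-quality results
# may inflate queries/session, but it eventually shows up as declining
# sessions/user and users/month rather than hiding inside a single total.
```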
Here’s another view of Bing’s considerations in The Perils of Experimenting with the Wrong Metrics, by Jon Noronha.
The point is this: you can use the OEC concept to craft a value that encompasses multiple criteria, capturing what’s important without oversimplifying into a number that’s interesting but not deeply meaningful. The result might be a value that’s useful as a so-called North Star metric.
Specialty Concerns
Almost every business will have similar core analytics needs for basic operations. Everyone has balance sheets, cash flow, and so on. After that, needs may differ wildly by industry category, market type, distribution methods, and so on. These specialty issues will likely be well known by practitioners in these vertical markets.
There are also, however, some still-nascent technologies that bear special mention. Search, the Internet of Things (IoT), Machine Learning (ML) and Artificial Intelligence (AI), and products with “network effects” all have their own complexities and relationships to your bottom and top line. Getting deeply into these special categories warrants a separate article, or perhaps even a book or two. Suffice it to say that if any, or all, of these are a factor in your business, you’ll need to specify how you’re going to incorporate them into your reporting. Expect, too, that you may have to do a fair bit of experimentation to determine causality between these elements and your main business drivers, as these areas – while not necessarily brand new – are still evolving in terms of how we understand and measure them.
Summing Up: Evaluating Your Metrics
Once you’ve worked through your analytics taxonomies, frameworks, and such, evaluate each metric along the following dimensions. Are they…
- Aligned with vision/mission/goals & Objectives
- Actionable/relevant
- Sensitive (meaning the measure responds quickly and noticeably to changes in the thing it’s intended to capture)
- Tamper/Game Resistant (Ideally less of a factor for internal analytics. Again, we tend to get what we incentivize.)
If you and your analytics team take the time to think these issues through, beyond what comes out of the box with your dashboard tools, you should find you can more effectively gain insights that point you toward moving things in the direction you want.